Search Results

Search found 133 results on 6 pages for 'munin'.

Page 5/6 | < Previous Page | 1 2 3 4 5 6  | Next Page >

  • Using Lighttpd: apache proxy or direct connection?

    - by Halfgaar
    Hi, I'm optimizing a site by serving the static media from lighttpd. A commonly recommended solution is to use an Apache proxy that points to the lighttpd server. But does that use up an Apache thread/process per request? In my setup, I've noticed that all my Apache processes are used up, even though they aren't doing anything CPU-wise. Since configuring lighttpd to take over the static media, Munin shows that the number of Apache processes needed has dropped significantly. However, I've set it up so clients connect directly to lighty, to prevent Apache workers from being occupied by serving static media. My question is: when using an Apache proxy instead, does that also use up a process/worker per request?
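    For reference, the proxy arrangement being discussed usually looks something like the sketch below (the port and path are assumptions). Note that mod_proxy runs inside Apache's normal request cycle, so a proxied request does occupy an Apache worker for its duration:

      # httpd.conf sketch: hand /media/ off to the local lighttpd instance
      ProxyRequests Off
      ProxyPass        /media/ http://127.0.0.1:81/media/
      ProxyPassReverse /media/ http://127.0.0.1:81/media/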

    Read the article

  • Automating and deploying new linux servers

    - by luckytaxi
    I'm in the process of developing a method to automate bringing new virtual machines into my environment. 90% of our machines are virtual, but the process is similar for both physical and VMware-based images. What I do now is use Cobbler to install the base OS. The kickstart script has post hooks that modify the yum repo and install Puppet and Func. Once the servers are running, I manually add them to Nagios and sign their certificates via the puppetmaster. I've since migrated most of the resources to use MySQL as the backend. I wanted to see what others are doing. My goal for 2011 is to have Puppet inventory the hardware into MySQL, and then I'll write a Python script so that Nagios can grab the info and automatically add hosts for monitoring. It's kind of tedious to have to add each new server to Nagios, Puppet's dashboard, Munin, etc...
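    For the Nagios half, the inventory-to-config step can be a few lines of shell against the MySQL backend; a sketch (database, table and column names are assumptions):

      # emit one Nagios host definition per inventoried node
      mysql -N -e 'SELECT hostname, ip FROM puppet_inventory.nodes' |
      while read host ip; do
          printf 'define host {\n  use generic-host\n  host_name %s\n  address %s\n}\n\n' "$host" "$ip"
      done > /etc/nagios/conf.d/auto-hosts.cfg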

    Read the article

  • Reconnoiter - Anyone using it?

    - by Marco Ramos
    Reconnoiter is a new tool in the world of monitoring. It is not only a trending tool but also an alerting/fault-detection one. In my particular case, I reckon it's in its trending capabilities that Reconnoiter has huge potential. One of the premises Reconnoiter is built upon is that large RRDTool installations are very inefficient regarding I/O, and I think this is RRDTool's major problem. One of the things that would keep me from changing from Cacti is, obviously, the cost of change and the learning curve. So, do any of you have experience with Reconnoiter? How's the learning curve? Was it difficult to move from RRDTool frontend applications (Cacti, Munin, Ganglia) to Reconnoiter? I'm looking forward to reading your opinions.

    Read the article

  • vps like [load] graphs

    - by foober
    I investigated a couple of tools, but they were really unpolished and annoying. kSar, for example, is supposed to graph sar output, but it doesn't work. There's a Perl script around (sar2rrd) that's supposed to convert sar output to RRD format and generate graphs. It doesn't work either (at least it doesn't like the output of "atsar" as packaged for Debian/Ubuntu). I tried Munin, but it wants to mess with HTTP servers, and for some reason it didn't really work either: it displayed errors in the web page generated by the server it put on port 4949. So, is there a simple install-and-forget tool to generate daily load/CPU/memory/network graphs? It seems strange to me that this problem hasn't been solved; maybe I'm looking in the wrong places.

    Read the article

  • What can cause peaks in PageTables in /proc/meminfo?

    - by Fuzzy76
    I have a game server running Debian Lenny on a VPS host. Even under fairly low load, the players start experiencing major lag (ping times rise from 50 ms to 150-500 ms) in bursts of 3-10 seconds. I have installed Munin server monitoring, but looking at the graphs, the server seems to have plenty of CPU, RAM and bandwidth available. The only weird thing I noticed is some peaks in the memory graph attributed to "page_tables", which maps to PageTables in /proc/meminfo, but I can't find any good information on what this might mean. Any ideas what might be causing this? If you need any more graphs, just let me know. The interrupts/second count is roughly 400-600 during this period (nearly all from eth0). The drop in committed memory was caused by me trying to lower the allocated memory for the game server from 512MB to 256MB, but that didn't seem to help.

    Read the article

  • Xen domU crashes or unavailability

    - by Rush
    I have a Xen server with 8 domUs. The server is a Xeon E31270 with 16GB RAM, which I think is enough for 8 machines. Sometimes a domU crashes and I can't figure out the reason. After a crash I can connect to the console and see something like this:

    Oct 8 22:20:49 server kernel: [30892.320780] lowmem_reserve[]: 0 0 0 0
    Oct 8 22:20:49 server kernel: [30892.320790] Node 0 DMA: 10*4kB 3*8kB 13*16kB 10*32kB 7*64kB 3*128kB 2*256kB 2*512kB 1*1024kB 2*2048kB 0*4096kB = 8080kB
    Oct 8 22:20:49 server kernel: [30892.320817] Node 0 DMA32: 648*4kB 2*8kB 1*16kB 0*32kB 1*64kB 0*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 0*4096kB = 5760kB
    Oct 8 22:20:49 server kernel: [30892.320842] 1491 total pagecache pages
    Oct 8 22:20:49 server kernel: [30892.320847] 0 pages in swap cache
    Oct 8 22:20:49 server kernel: [30892.320852] Swap cache stats: add 0, delete 0, find 0/0
    Oct 8 22:20:49 server kernel: [30892.320858] Free swap = 0kB
    Oct 8 22:20:49 server kernel: [30892.320862] Total swap = 0kB
    Oct 8 22:20:49 server kernel: [30892.324024] 524288 pages RAM
    Oct 8 22:20:49 server kernel: [30892.324024] 11010 pages reserved
    Oct 8 22:20:49 server kernel: [30892.324024] 424467 pages shared
    Oct 8 22:20:49 server kernel: [30892.324024] 503538 pages non-shared
    Oct 8 22:20:49 server kernel: [30892.330308] apache2 invoked oom-killer: gfp_mask=0x200da, order=0, oom_adj=0
    Oct 8 22:20:49 server kernel: [30892.330322] apache2 cpuset=/ mems_allowed=0
    Oct 8 22:20:49 server kernel: [30892.330330] Pid: 23938, comm: apache2 Not tainted 2.6.32-5-xen-amd64 #1
    Oct 8 22:20:49 server kernel: [30892.330337] Call Trace:
    Oct 8 22:20:49 server kernel: [30892.330349] [<ffffffff810b7180>] ? oom_kill_process+0x7f/0x23f
    Oct 8 22:20:49 server kernel: [30892.330358] [<ffffffff810b76a4>] ? __out_of_memory+0x12a/0x141
    Oct 8 22:20:49 server kernel: [30892.330367] [<ffffffff810b77fb>] ? out_of_memory+0x140/0x172
    Oct 8 22:20:49 server kernel: [30892.330376] [<ffffffff810bb59c>] ? __alloc_pages_nodemask+0x4e5/0x5f5
    Oct 8 22:20:49 server kernel: [30892.330385] [<ffffffff810cc224>] ? do_wp_page+0x386/0x707
    Oct 8 22:20:49 server kernel: [30892.330395] [<ffffffff8100c3a5>] ? __raw_callee_save_xen_pud_val+0x11/0x1e
    Oct 8 22:20:49 server kernel: [30892.330404] [<ffffffff8100c369>] ? __raw_callee_save_xen_pmd_val+0x11/0x1e
    Oct 8 22:20:49 server kernel: [30892.330412] [<ffffffff810cdfc7>] ? handle_mm_fault+0x7aa/0x80f
    Oct 8 22:20:49 server kernel: [30892.330422] [<ffffffff8130f906>] ? do_page_fault+0x2e0/0x2fc
    Oct 8 22:20:49 server kernel: [30892.330433] [<ffffffff8130d7a5>] ? page_fault+0x25/0x30
    Oct 8 22:20:49 server kernel: [30892.330439] Mem-Info:
    Oct 8 22:20:49 server kernel: [30892.330443] Node 0 DMA per-cpu:
    Oct 8 22:20:49 server kernel: [30892.330450] CPU 0: hi: 0, btch: 1 usd: 0
    Oct 8 22:20:49 server kernel: [30892.330463] CPU 1: hi: 0, btch: 1 usd: 0
    Oct 8 22:20:49 server kernel: [30892.330466] Node 0 DMA32 per-cpu:
    Oct 8 22:20:49 server kernel: [30892.330469] CPU 0: hi: 186, btch: 31 usd: 0
    Oct 8 22:20:49 server kernel: [30892.330472] CPU 1: hi: 186, btch: 31 usd: 60
    Oct 8 22:20:49 server kernel: [30892.330476] active_anon:342076 inactive_anon:115398 isolated_anon:0
    Oct 8 22:20:49 server kernel: [30892.330477] active_file:268 inactive_file:481 isolated_file:0
    Oct 8 22:20:49 server kernel: [30892.330477] unevictable:1125 dirty:2 writeback:13 unstable:0
    Oct 8 22:20:49 server kernel: [30892.330478] free:3410 slab_reclaimable:1718 slab_unreclaimable:6946
    Oct 8 22:20:49 server kernel: [30892.330478] mapped:899 shmem:113 pagetables:35697 bounce:0
    Oct 8 22:20:49 server kernel: [30892.330502] Node 0 DMA free:8036kB min:32kB low:40kB high:48kB active_anon:1144kB inactive_anon:1268kB active_file:8kB inactive_file:8kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:11792kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:224kB kernel_stack:16kB pagetables:1228kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
    Oct 8 22:20:49 server kernel: [30892.330518] lowmem_reserve[]: 0 2004 2004 2004
    Oct 8 22:20:49 server kernel: [30892.330523] Node 0 DMA32 free:5604kB min:5708kB low:7132kB high:8560kB active_anon:1367160kB inactive_anon:460324kB active_file:1064kB inactive_file:1916kB unevictable:4500kB isolated(anon):0kB isolated(file):0kB present:2052320kB mlocked:4500kB dirty:8kB writeback:52kB mapped:3600kB shmem:452kB slab_reclaimable:6872kB slab_unreclaimable:27560kB kernel_stack:3528kB pagetables:141560kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:992 all_unreclaimable? no
    Oct 8 22:20:49 server kernel: [30892.330539] lowmem_reserve[]: 0 0 0 0
    Oct 8 22:20:49 server kernel: [30892.330544] Node 0 DMA: 1*4kB 2*8kB 13*16kB 10*32kB 7*64kB 3*128kB 2*256kB 2*512kB 1*1024kB 2*2048kB 0*4096kB = 8036kB
    Oct 8 22:20:49 server kernel: [30892.330579] Node 0 DMA32: 609*4kB 2*8kB 1*16kB 0*32kB 1*64kB 0*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 0*4096kB = 5604kB
    Oct 8 22:20:49 server kernel: [30892.330605] 1522 total pagecache pages
    Oct 8 22:20:49 server kernel: [30892.330610] 0 pages in swap cache
    Oct 8 22:20:49 server kernel: [30892.330615] Swap cache stats: add 0, delete 0, find 0/0
    Oct 8 22:20:49 server kernel: [30892.330621] Free swap = 0kB
    Oct 8 22:20:49 server kernel: [30892.330625] Total swap = 0kB
    Oct 8 22:20:49 server kernel: [30892.333018] 524288 pages RAM
    Oct 8 22:20:49 server kernel: [30892.333018] 11010 pages reserved
    Oct 8 22:20:49 server kernel: [30892.333018] 424367 pages shared
    Oct 8 22:20:49 server kernel: [30892.333018] 503658 pages non-shared

    Seems like there isn't enough memory for this domU. But no memory problems are reported by the munin monitoring: as you can see, the system uses around 0.2G and 1G is available. So my questions are: Is it a Xen-specific problem that the real memory usage and the memory usage munin shows are different (I've never seen such problems on real hardware machines)? Or maybe it is just a monitoring problem, in that munin can't catch the moment when there is an unusually high load and the domU goes down? And how can I defeat this problem? It is really annoying to find out from e-mail messages that a domU went down. Btw, the same situation occurred when the domU had 2GB of memory.
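    When munin inside the guest and the OOM killer disagree like this, it can help to ask the hypervisor what it has actually allocated, independently of the guest's own view; a sketch (assumes the xm toolstack of that Xen generation):

      # memory actually assigned to each domain, as seen from dom0
      xm list
      xm top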

    Read the article

  • CLI-Based monitoring tool for KVM

    - by Pinnacle
    I am developing a scheduler for running VMs on KVM. The scheduler over-commits resources like memory and CPU. For this, I need a CLI-based monitoring tool that keeps giving me information about the resource usage of each VM, because it might be the case that, due to over-provisioning of resources, VMs on a particular host are running very slowly (depending on the benchmarks/programs each VM is running), and then I need to migrate a VM to another host, and so on. I looked into libvirt-based tools like collectd, Munin, Nagios-virt, etc. ( http://libvirt.org/apps.html#monitoring ). I also looked into the Ubuntu utility perf-kvm ( http://manpages.ubuntu.com/manpages/maverick/man1/perf-kvm.1.html ). I want to ask which CLI-based tool the community would recommend, so that I can build an automated scheduler that handles the above situation.
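    Since libvirt ships the virsh CLI, one option is to poll it directly; a minimal sketch (the polling interval and the fields grepped for are choices, not requirements):

      # poll per-domain CPU and memory figures for all running domains
      while true; do
          for vm in $(virsh list --name); do
              echo "== $vm =="
              virsh dominfo "$vm" | grep -E 'CPU time|Used memory|Max memory'
          done
          sleep 5
      done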

    Read the article

  • Determine Location of Inode Usage

    - by Dave Forgac
    I recently installed Munin on a development web server to keep track of system usage. I've noticed that the system's inode usage is climbing by about 7-8% per day, even though disk usage has barely increased at all. I'm guessing something is writing a ton of tiny files, but I can't find what or where. I know how to find disk space usage, but I can't seem to find a way to summarize inode usage. Is there a good way to determine inode usage by directory, so I can locate the source of the usage?
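    One rough approach (a sketch; assumes GNU find, and the starting path is an assumption) is to count files per directory, since file count closely tracks inode count:

      # entries per directory on this filesystem, largest first
      find / -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head -20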

    Read the article

  • Solution to Manage and Monitor (Ubuntu) Machines

    - by Elmar Weber
    I'm looking for a tool like Canonical's Landscape (system management and monitoring for Ubuntu) that is open source and free. The goal is to manage a dozen or so KVM machines for private testing purposes. I know of Puppet and Munin or RHQ as separate tools for management and monitoring, but I'd prefer something integrated. Any tips? Basic requirements would be: system package management and updates (individual selection for each managed node); configuration of basic system services (users, NFS, cron, ideally also Apache); monitoring (charting of system resources: disk, I/O, memory, etc.) and alerting, ideally with a default configuration using sensible values for alerts.

    Read the article

  • Iptables and counters

    - by mehturt
    I'm trying to use iptables counters with Munin to monitor the traffic of hosts on my local subnet. For each host I set up a rule like this: iptables -I OUTPUT -d $ip. This should count the packets going from the firewall to $ip, correct? I found out that it does not seem to count all packets. I start tcpdump on my router (Linux) and I see packets to $ip that are not counted. For example, I check the number of packets for the rule matching my phone's IP. I start tcpdump and refresh Gmail on my phone; I see the packets in tcpdump's output, but the iptables rule counters are not incremented. Then I open a web page on the same phone and the counters are incremented. What could be the reason?
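    One thing worth checking: the OUTPUT chain only sees packets generated by the router itself, while traffic routed through it on behalf of LAN hosts traverses FORWARD instead, which would explain counters that only move for locally generated traffic. A sketch of the counting rule and how to read its counters ($ip stands for the host being measured):

      # count routed traffic to the host, then read the exact per-rule counters
      iptables -I FORWARD -d $ip
      iptables -L FORWARD -v -n -x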

    Read the article

  • What combination of software to select? (your advice/opinion) [on hold]

    - by Flyer
    I'm thinking of upgrading my server software along with the OS. As of now, my VPS is running Debian 6 with nginx (1.2.4) + Apache (2.2.16). The VPS specs are 1GB RAM and 2 cores of an Intel(R) Xeon(R) CPU E5520 @ 2.27GHz. Now, here is the question. Which combo should I run?
    - nginx + Apache (2.4.x) + PHP-FPM 5.5.x
    - nginx + Apache (2.4.x) + mod_php 5.5.x
    - Apache (2.4.x) + mod_php 5.5.x
    - Apache (2.4.x) + PHP-FPM 5.5.x
    - nginx + PHP-FPM 5.5.x
    - nginx + mod_php 5.5.x
    I would really like some advice/opinions from people who are more experienced with these things than I am. It's nothing big: around 100-200k pageviews per month. I can also provide some screenshots of Munin stats if needed.

    Read the article

  • Swap, Swappiness and Standby: swapping starts when waking up

    - by mdo
    I'm running Ubuntu 12.04 on a Lenovo W500 (Core2Duo T9400, 4GB RAM). Current kernel: 3.2.0-32-generic #51-Ubuntu SMP Wed Sep 26 21:33:09 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux -- but the problem has existed for a couple of months, surviving quite a few software (including kernel) updates. I regularly put my machine into suspend-to-RAM (S3), and when the machine comes back up, Ubuntu starts to swap out processes. I was able to observe that the used swap space starts to grow right after the box returns. See the Munin graphs below; the gap (obviously) shows the timeframe in STR. Needless to say, the box becomes unusable while swapping; load goes up beyond 10. What I've done so far: lowered swappiness from the default (60) to 10 (via /etc/sysctl.conf: vm.swappiness=10) -- this improved the situation a lot, but sometimes the problem comes back, and so far I have not found a trigger (like memory usage) for it; then lowered swappiness to 5 -- perhaps this brought another improvement. Before going into STR, the box ran stable without swapping problems for hours. Today, when the issue showed up again, I used this script ( http://stackoverflow.com/questions/479953/how-to-find-out-which-processes-are-swapping-in-linux ) to find which processes use the most swap space. The result after the swap orgy, for all PIDs with more than 10M usage:

    Overall swap used: 2121344 kB
    ========================================
    kB       pid    name
    ========================================
    439520   17491  java
    208148   22719  firefox
    136640   4337   /usr/bin/quodli
    120852   5271   chrome
    81832    5264   chrome
    74284    17003  chrome
    65368    16960  chrome
    57088    3675   chrome
    56184    30923  chrome
    54412    11331  chrome
    54264    3878   chrome
    51508    18382  chrome
    50088    3163   zeitgeist-fts
    49772    15543  chrome
    41344    15355  compiz
    35040    1161   mysqld
    32124    18374  chrome
    30940    11339  chrome
    30044    5752   chrome
    28780    4235   plugin-containe
    24576    31246  empathy-chat
    23840    17703  chrome
    22512    3207   ubuntuone-syncd
    21588    1937   ntop
    18336    2021   asterisk
    17200    3915   chrome
    13964    1935   Xorg
    12036    10679  chrome
    11104    30782  empathy
    11056    2889   python
    10932    16565  knotify4

    The java instance at the top is IntelliJ. IntelliJ, Firefox and Chrome were all in use right before the box was put into STR. So my question is: can I somehow prevent these swapouts, AND why do they happen? Is it perhaps related to some misidentification of idle processes? I'm not looking for resolutions like "turn off swap" or "buy more RAM". Thanks in advance!
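    For reference, a minimal version of that kind of per-process swap report can be read straight out of /proc (a sketch; assumes a kernel recent enough to expose VmSwap, which 3.2 is):

      # list the processes holding the most swap, largest first
      for pid in /proc/[0-9]*; do
          swap=$(awk '/^VmSwap:/ {print $2}' "$pid/status" 2>/dev/null)
          [ -n "$swap" ] && [ "$swap" -gt 0 ] && echo "$swap kB $(cat "$pid/comm") (pid ${pid#/proc/})"
      done | sort -rn | head -20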

    Read the article

  • What can cause an increase in inactive memory and how to reclaim it?

    - by Boaz
    Hi All, I have a heavy application running on a CentOS server and I'm seeing strange memory behavior. Here is a snapshot of a Munin graph: as you can see, the amount of committed memory increases gradually, causing the swap file to be used. What strikes me as odd is that the amount of inactive memory keeps growing as well. It is my understanding that inactive memory is memory that has been freed but not yet cleaned up by the OS and put back into the free memory pool. It seems that running out of memory is actually caused by this lack of cleanup, but I may be wrong. Can you give some tips to find the cause of the problem and/or make CentOS reclaim the inactive memory? Thanks. Some extra info: 1) I have a tmpfs mounted on /tmp and the number of files stored there grows (but it is double the amount of the inactive memory). 2) cat /proc/meminfo (at a later stage than the image) gives:

    MemTotal:     14371428 kB
    MemFree:       1207108 kB
    Buffers:         35440 kB
    Cached:        4276628 kB
    SwapCached:     785316 kB
    Active:        9038924 kB
    Inactive:      3902876 kB
    HighTotal:           0 kB
    HighFree:            0 kB
    LowTotal:     14371428 kB
    LowFree:       1207108 kB
    SwapTotal:    10223608 kB
    SwapFree:      6438320 kB
    Dirty:          627792 kB
    Writeback:           0 kB
    AnonPages:     7844560 kB
    Mapped:          49304 kB
    Slab:           146676 kB
    PageTables:      27480 kB
    NFS_Unstable:        0 kB
    Bounce:              0 kB
    CommitLimit:  17409320 kB
    Committed_AS: 16471488 kB
    VmallocTotal: 34359738367 kB
    VmallocUsed:    275852 kB
    VmallocChunk: 34359462007 kB
    HugePages_Total:     0
    HugePages_Free:      0
    HugePages_Rsvd:      0
    Hugepagesize:     2048 kB

    3) The application is a combination of MySQL, Heritrix ( http://crawler.archive.org/ ) and a Tomcat-based Java servlet to manage things.
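    One detail worth noting: pages backing a tmpfs live in the page cache and are counted as active/inactive, but they cannot be reclaimed except by writing them to swap, so a growing /tmp on tmpfs will pin memory. As a diagnostic (a sketch, not a fix), the reclaimable caches can be dropped to see how much actually comes back:

      # flush dirty data, then ask the kernel to drop clean caches, dentries and inodes
      sync
      echo 3 > /proc/sys/vm/drop_caches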

    Read the article

  • How do you guys handle a custom yum repository?

    - by luckytaxi
    I have a bunch of tools (Nagios, Munin, Puppet, etc.) that get installed on all my servers. I'm in the process of building a local yum repository. I know most folks just dump all the RPMs into a single folder (broken down into the correct paths) and then run createrepo inside the directory. However, what happens when you have to update the RPMs? I ask because I was going to give each piece of software its own folder. Example one, put all packages inside one folder (custom_software):

    /admin/software/custom_software/5.4/i386
    /admin/software/custom_software/5.4/x86_64
    /admin/software/custom_software/4.6/i386
    /admin/software/custom_software/4.6/x86_64

    What I'm thinking of:

    /admin/software/custom_software/nagios/5.4/i386
    /admin/software/custom_software/nagios/5.4/x86_64
    /admin/software/custom_software/nagios/4.6/i386
    /admin/software/custom_software/nagios/4.6/x86_64
    /admin/software/custom_software/puppet/5.4/i386
    /admin/software/custom_software/puppet/5.4/x86_64
    /admin/software/custom_software/puppet/4.6/i386
    /admin/software/custom_software/puppet/4.6/x86_64

    This way, if I had to update to the latest version of Puppet, I could manage the files accordingly. I wouldn't know which RPMs belong to which software if I threw them all into one big folder. Does that make sense?
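    For what it's worth, updating a flat repo in place is usually just a copy plus a metadata refresh, since yum resolves the newest version itself and older RPMs can stay alongside; a sketch (the RPM filename is an assumption, the paths match the first example above):

      # drop the new package in, then refresh only the changed metadata
      cp puppet-2.6.4-1.el5.noarch.rpm /admin/software/custom_software/5.4/x86_64/
      createrepo --update /admin/software/custom_software/5.4/x86_64/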

    Read the article

  • lm-sensors - always returns 32 degrees (celsius) for temperature

    - by mopoke
    On my VIA EPIA motherboard (using the VIA VT8231 ISA bridge), I get strange output from the lm-sensors temperature reading: it always returns 32 degrees Celsius. I previously had correct output for temperature (my Munin graphs show temperatures typically in the range of 50 to 60 degrees). I've tried uninstalling (and purging) the lm-sensors package, have re-run sensors-detect a number of times and rebooted, but nothing seems to change the output. I am running Ubuntu Karmic Koala (9.10). Anyone got any bright ideas on what I might have missed?

    uname -a:
    Linux george 2.6.31-16-386 #53-Ubuntu SMP Tue Dec 8 06:39:34 UTC 2009 i686 GNU/Linux

    cpuinfo:
    processor       : 0
    vendor_id       : CentaurHauls
    cpu family      : 6
    model           : 7
    model name      : VIA Samuel 2
    stepping        : 3
    cpu MHz         : 399.000
    cache size      : 64 KB
    fdiv_bug        : no
    hlt_bug         : no
    f00f_bug        : no
    coma_bug        : no
    fpu             : yes
    fpu_exception   : yes
    cpuid level     : 1
    wp              : yes
    flags           : fpu de tsc msr cx8 mtrr pge mmx 3dnow up
    bogomips        : 800.04
    clflush size    : 32
    power management:

    lspci:
    00:00.0 Host bridge: VIA Technologies, Inc. VT8601 [Apollo ProMedia] (rev 05)
    00:01.0 PCI bridge: VIA Technologies, Inc. VT8601 [Apollo ProMedia AGP]
    00:11.0 ISA bridge: VIA Technologies, Inc. VT8231 [PCI-to-ISA Bridge] (rev 10)
    00:11.1 IDE interface: VIA Technologies, Inc. VT82C586A/B/VT82C686/A/B/VT823x/A/C PIPC Bus Master IDE (rev 06)
    00:11.2 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 1e)
    00:11.3 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 1e)
    00:11.4 Bridge: VIA Technologies, Inc. VT8235 ACPI (rev 10)
    00:11.5 Multimedia audio controller: VIA Technologies, Inc. VT82C686 AC97 Audio Controller (rev 40)
    00:12.0 Ethernet controller: VIA Technologies, Inc. VT6102 [Rhine-II] (rev 51)
    01:00.0 VGA compatible controller: Trident Microsystems CyberBlade/i1 (rev 6a)

    sensors:
    acpitz-virtual-0
    Adapter: Virtual device
    temp1:       +32.0°C  (crit = +60.0°C)

    Read the article

  • apache performance timing out

    - by Mike
    I'm running a web server where I'm hosting about 6-7 websites. Most of these websites get their content from MySQL, which is hosted on the same server. Traffic averages about 500-600 unique visitors per day, about 150K hits per week. But for some reason the websites sometimes return a timeout, OR sometimes don't load all their images. I know that I should perhaps separate static content from dynamic content, but for now that's not a possibility. I would appreciate any suggestions on how I could improve Apache's performance so it doesn't keep timing out. The server is running on a Sempron LE 1300 (2.3GHz, 512K cache) with 2GB RAM and a 10Mbps/1Mbps line. Services: MySQL, ProFTPD, Apache.

    Private  +   Shared  =  RAM used   Program
    ----------------------------------------------------
    1.2 MiB  +  54.0 KiB =   1.2 MiB   proftpd
    4.1 MiB  +  23.0 KiB =   4.1 MiB   munin-node
    20.8 MiB + 120.5 KiB =  20.9 MiB   mysqld
    47.3 MiB +   9.9 MiB =  57.3 MiB   apache2 (22)

    top: Mem: 2075356k total, 1826196k used, 249160k free,

    Timeout 35
    KeepAlive On
    MaxKeepAliveRequests 300
    KeepAliveTimeout 5
    <IfModule mpm_prefork_module>
        StartServers        10
        MinSpareServers     20
        MaxSpareServers     20
        MaxClients          60
        MaxRequestsPerChild 1000
    </IfModule>
    <IfModule mpm_worker_module>
        StartServers         2
        MaxClients           150
        MinSpareThreads      25
        MaxSpareThreads      75
        ThreadsPerChild      25
        MaxRequestsPerChild  0
    </IfModule>
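    When sizing MaxClients against 2GB of RAM, a rough per-child memory figure is the key input; a sketch of how one might measure it (assumes the processes are named apache2, as on Debian):

      # average resident size per Apache child, in MiB
      ps -o rss= -C apache2 | awk '{sum+=$1; n++} END {printf "%.1f MiB average over %d processes\n", sum/n/1024, n}'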

    Read the article

  • Managing an application across multiple servers, or PXE vs cfEngine/Chef/Puppet

    - by matt
    We have an application that is running on a few boxes (5 or so, and it will grow). The hardware is identical in all the machines, and ideally the software would be as well. I have been managing them by hand up until now, and don't want to anymore (static IP addresses, disabling all unnecessary services, installing required packages...). Can anyone weigh the pros and cons of the following options, or suggest something more intelligent? 1: Individually install CentOS on all the boxes and manage the configs with Chef/cfengine/Puppet. This would be good, as I have wanted an excuse to learn to use one of these applications, but I don't know if it's actually the best solution. 2: Make one box perfect and image it. Serve the image over PXE, and whenever I want to make modifications I can just reboot the boxes from a new image. How do cluster guys normally handle things like having MAC addresses in the /etc/sysconfig/network-scripts/ifcfg* files? We use InfiniBand as well, and it also refuses to start if the hwaddr is wrong. Can these be correctly generated at boot? I'm leaning towards the PXE solution, but I think monitoring with Munin or Nagios will be a little more complicated with it. Anyone have experience with this type of problem? All the servers have SSDs in them and are fast and powerful. Thanks, matt.
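    On the "generated at boot" question, a sketch of the kind of first-boot fixup that works with a shared image (run from rc.local or an initscript; eth0 and the file path are assumptions, and the same pattern applies to the InfiniBand interface):

      # rewrite HWADDR in the interface config to match the real NIC
      HWADDR=$(cat /sys/class/net/eth0/address)
      sed -i "s/^HWADDR=.*/HWADDR=$HWADDR/" /etc/sysconfig/network-scripts/ifcfg-eth0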

    Read the article

  • Real benefits of tcp TIME-WAIT and implications in production environment

    - by user64204
    SOME THEORY: I've been doing some reading on TCP TIME-WAIT (here and there), and what I read is that it's a state held for 2 x MSL (maximum segment lifetime) which keeps a connection in the "connection table" for a while, to guarantee that "before you're allowed to create a connection with the same tuple, all the packets belonging to previous incarnations of that tuple will be dead". Since segments received (apart from SYN, under specific circumstances) while a connection is either in TIME-WAIT or no longer existing would be discarded anyway, why not close the connection right away? Q1: Is it because there is less processing involved in dealing with segments from old connections, and less processing to create a new connection on the same tuple, when in TIME-WAIT (i.e. are there performance benefits)? If the above explanation doesn't stand, the only reason I see for TIME-WAIT being useful would be if a client sends a SYN for a new connection before it has sent the remaining segments of an old connection on the same tuple, in which case the receiver would re-open the connection, but then get bad segments and have to terminate it. Q2: Is this analysis correct? Q3: Are there other benefits to using TIME-WAIT? SOME PRACTICE: I've been looking at the Munin graphs on a production server that I administer. Here is one: as you can see, there are more connections in TIME-WAIT than ESTABLISHED, around twice as many most of the time, on some occasions four times as many. Q4: Does this have an impact on performance? Q5: If so, is it wise/recommended to reduce the TIME-WAIT value (and to what)? Q6: Is this ratio of TIME-WAIT to ESTABLISHED connections normal? Could it be related to malicious connection attempts?
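    For Q4-Q6 it helps to look at the raw state counts next to the graph; a sketch (standard Linux netstat; note that on Linux the TIME-WAIT interval itself is a compile-time constant of 60 seconds, not a runtime tunable):

      # count sockets per TCP state
      netstat -ant | awk 'NR>2 {print $6}' | sort | uniq -c | sort -rn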

    Read the article

  • Looking for a host based network monitor solution

    - by Ole Martin Handeland
    Hi all! Problem: my hosting company has a network usage graph for my dedicated server. It seems that one day earlier this month, my network usage suddenly spiked, with several hundred megabytes transferred (usually it's in the tens, not hundreds). It was probably me, but I just can't be sure who or what it was. Question: so my question is, does anyone know of a host-based solution for monitoring network usage that would tell me the client's IP address and the port/service he/she used? What I don't want: I'm guessing that someone will suggest I use Nagios, Munin, Zabbix, Cacti or MRTG -- I've looked at those too, but a graph of network usage will not give me the answers I'm looking for. :-) Almost there: I've already looked at a lot of monitoring solutions, and I've tried ntop ( http://www.ntop.org/ ), darkstat ( http://unix4lyfe.org/darkstat/ ) and others. darkstat just didn't give me the answers: although it lists a lot of statistics and I can list the clients, it doesn't show me the network usage for a particular period. ntop is by far the best I've seen so far, but I think it mostly shows current network usage, not the history. I could run apt-get upgrade and download a whole bunch of software, yet not see it in the log afterwards.
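    One low-tech way to keep per-client, per-period evidence around is a rotating packet capture that can be analyzed after the fact; a sketch (assumes a tcpdump recent enough to support -G, the eth0 interface, and a pre-created /var/log/caps directory):

      # write one capture file per hour; truncate packets to keep files small
      tcpdump -i eth0 -s 96 -G 3600 -w '/var/log/caps/cap-%Y%m%d-%H.pcap'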

    Read the article

  • Debian x86_64 + Nginx + PHP5-FPM optimization

    - by Olal'a
    I used to have a 512MB VPS from Linode, where I ran nginx + php5-fpm (which comes with PHP 5.3.3) on Debian Lenny (i686). The total memory usage was about 90-100MB. Now I have another VPS (different hosting company) and I also run nginx + php5-fpm on Debian Lenny (x86_64). The system is 64-bit, so the memory usage is higher now, about 210-230MB, which I think is too much. Here is my php5-fpm.conf:

    pm = dynamic
    pm.max_children = 5
    pm.start_servers = 2
    pm.min_spare_servers = 2
    pm.max_spare_servers = 5
    pm.max_requests = 300

    That's what the top command tells me:

    top - 15:36:58 up 3 days, 16:05, 1 user, load average: 0.00, 0.00, 0.00
    Tasks: 209 total, 1 running, 208 sleeping, 0 stopped, 0 zombie
    Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 99.9%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
    Mem: 532288k total, 469628k used, 62660k free, 28760k buffers
    Swap: 1048568k total, 408k used, 1048160k free, 210060k cached

      PID  USER     PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
    22806  www-data 20  0  178m  67m  31m S    1 13.1  0:05.02 php5-fpm
     8980  mysql    20  0  241m  55m 7384 S    0 10.6  2:42.42 mysqld
    22807  www-data 20  0  162m  43m  22m S    0  8.3  0:04.84 php5-fpm
    22808  www-data 20  0  160m  41m  23m S    0  8.0  0:04.68 php5-fpm
    25102  www-data 20  0  151m  30m  21m S    0  5.9  0:00.80 php5-fpm
    10849  root     20  0 44100 8352 1808 S    0  1.6  0:03.16 munin-node
    22805  root     20  0  145m 4712 1472 S    0  0.9  0:00.16 php5-fpm
    21859  root     20  0 66168 3248 2540 S    1  0.6  0:00.02 sshd
    21863  root     20  0 66028 3188 2548 S    0  0.6  0:00.06 sshd
     3956  www-data 20  0 31756 3052  928 S    0  0.6  0:06.42 nginx
     3954  www-data 20  0 31712 3036  928 S    0  0.6  0:06.74 nginx
     3951  www-data 20  0 31712 3008  928 S    0  0.6  0:06.42 nginx
     3957  www-data 20  0 31688 2992  928 S    0  0.6  0:06.56 nginx
     3950  www-data 20  0 31676 2980  928 S    0  0.6  0:06.72 nginx
     3955  www-data 20  0 31552 2896  928 S    0  0.5  0:06.56 nginx
     3953  www-data 20  0 31552 2888  928 S    0  0.5  0:06.42 nginx
     3952  www-data 20  0 31544 2880  928 S    0  0.5  0:06.60 nginx

    So, the question is: is there any way to use less memory? Btw, I have 16 cores and it would be nice to make use of them...
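    Worth noting when comparing the two boxes: top's "used" figure includes the page cache (210060k cached here), so the fairer number is the buffers/cache-adjusted line reported by free:

      # memory use excluding buffers and cache, the figure comparable across boxes
      free -m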

    Read the article

  • Is there a monitoring software suite that will alert me if it has received no activity in a time period?

    - by matt b
    This might be a very basic question, but I am not very familiar with the exact features of Nagios versus Munin versus other monitoring tools. Let's say we have a process that needs to run daily for some very important infrastructure reasons. We've had cases where the process did not run, or was otherwise down, for a number of days before anyone noticed. I'd like to set up a system that will let me easily know when the daily run did not take place for some reason. I can set up this process to send an email on every successful run (or every failed run), but I do not trust that the people receiving this email would notice the absence of an "I'm OK" message. What I am envisioning is some type of "tripwire" service which this V.I.P. (very important process) can send a status message to each time it runs, whether successfully or not; if the "tripwire" service has not received any word from the VIP within a configurable amount of time, it can then send an alert to someone. (The difference between what I envision and the first approach I outlined is a service that sends a message only in abnormal conditions, rather than a service that sends messages each day that the status is normal/OK.) Can Nagios be set up to send an alert like this, if it has not heard from a certain service/device/process in N days? Is there another tool out there that has this feature?
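    Nagios does cover this pattern with passive checks plus freshness checking: the job submits a passive result every run, and if no result arrives within freshness_threshold seconds, Nagios runs the check_command, which can be a stub that simply reports CRITICAL. A sketch (host, service and command names, and the plugin path, are assumptions):

      define service {
          use                     generic-service
          host_name               batch01
          service_description     daily-vip-run
          active_checks_enabled   0
          passive_checks_enabled  1
          check_freshness         1
          freshness_threshold     93600    ; 26 hours: one day plus slack
          check_command           report-stale-vip
      }

      define command {
          command_name  report-stale-vip
          command_line  /usr/lib/nagios/plugins/check_dummy 2 "no run reported in 26 hours"
      }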

    Read the article

< Previous Page | 1 2 3 4 5 6  | Next Page >