Search Results

Search found 6053 results on 243 pages for 'usage'.

Page 78/243 | < Previous Page | 74 75 76 77 78 79 80 81 82 83 84 85  | Next Page >

  • Poor SSL performance with vsftpd

    - by petrus
    I'm trying to tweak vsftpd to achieve maximum performance for my usage: I have only one or two clients that connect to the server. File size is between ~15MB and 1GB. A typical transfer batch represents between 1 and 2GB of data. For testing purposes, I'm using a tmpfs on both sides (thus eliminating any disk bottleneck) with a single 1GB file. When SSL is disabled, performance is good, with a transfer rate of ~120MB/s (reaching the limits of gigabit networking). With SSL enabled only for control traffic (and not data traffic), performance drops to about 112MB/s, which is still within acceptable limits. However, when SSL is enabled for data flows, the transfer speed drops dramatically: 6.7MB/s using 3DES & SHA (ssl_ciphers=DES-CBC3-SHA in vsftpd.conf), and 16MB/s using DES & SHA (ssl_ciphers=DES-CBC-SHA). I haven't tested other ciphers, but from what I can see of the CPU usage during the transfer, it seems that vsftpd only uses a single CPU/core per client. While this may be fine for large FTP sites with hundreds of clients, I'd like to avoid this behavior and use more resources on the server. On a side note, if you have any ideas regarding other openssl ciphers...
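    For anyone comparing cipher throughput before touching vsftpd.conf, a quick single-core benchmark with OpenSSL is a reasonable first step (a sketch; it only assumes the openssl binary is installed):

        # raw single-core throughput of candidate ciphers
        openssl speed -evp des-ede3-cbc   # 3DES, the slow case above
        openssl speed -evp des-cbc        # single DES
        openssl speed -evp aes-128-cbc    # usually far faster than either DES variant

    If AES benchmarks well on the box, ssl_ciphers=AES128-SHA in vsftpd.conf would be the corresponding cipher string to try.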

    Read the article

  • Collect temperature and fan speed with munin from Windows 7 PC?

    - by mfn
    Hi, I'm quite fond of munin and I'm also using it at home to monitor my PCs. What was super-duper easy under Linux is pretty much unsolvable for me under Windows: I'd like to monitor CPU and motherboard temperatures as well as fan speed. On Linux I'm using lm-sensors, and the plugin for munin was basically already there. I already access some information from my Windows machine via SNMP (disk space, CPU usage, memory usage); the graphs are simple, as is the information exposed via SNMP, but they do their job. But when it comes to temperature and fan speed I'm running against a wall. My research so far indicates that Windows does not provide any out-of-the-box ability to retrieve temperature/fan-speed data. Third-party applications are necessary, ones which have the know-how to communicate with the motherboard chips. The best I came up with is that SpeedFan exposes a shared memory interface, and there exists a library that hooks into the Windows SNMP facility and bridges over to SpeedFan's shared memory interface; it's called SFSNMP (site currently down). Unfortunately the library doesn't work; there's a bug report open about it at SpeedFan, but it's currently not moving (although the SFSNMP author is active there). So, unless that's going to work anytime soon, are there any alternatives? I'm not fond of buying software to get this feature, since I take it as a given that my system should expose the information needed to properly monitor it, but please don't withhold an answer just because of that.
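    For reference, once any SNMP agent on the Windows box publishes a temperature value, the munin side is only a few lines of shell. A minimal plugin sketch (the host, community string, and OID below are placeholders; whatever agent you end up with will define its own):

        #!/bin/sh
        # munin plugin sketch: read one temperature value over SNMP
        HOST=192.168.0.10
        OID=.1.3.6.1.4.1.99999.1.1        # placeholder OID
        if [ "$1" = "config" ]; then
            echo 'graph_title CPU temperature'
            echo 'graph_vlabel degrees C'
            echo 'temp.label CPU'
            exit 0
        fi
        printf 'temp.value %s\n' "$(snmpget -v1 -c public -Oqv "$HOST" "$OID")"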

    Read the article

  • How To Find Reasons Why a Site Goes Online/Offline

    - by HollerTrain
    Seems today a website I manage has been going online and offline throughout the entire day. I have no idea what is causing the issue, so I am seeking guidance on where to start. It is a WordPress-based site. So here is what I DO know: I use a program that pings the server every minute, and when the server is not responding it emails me, so I know exactly when the site is online and offline. The site went down between 8pm and 12am on 12.28, and again around the 1am hour in the early morning of 12.29 (New York City timezone, and all times below are in the same timezone). At the time of the ups/downs I see a lot of strain on the memory usage. Look at the load average when the site is going online/offline (http://screencast.com/t/BRlfXkqrbJII). Then I ran this command to restart http (http://screencast.com/t/usVtYWZ2Qi) and the memory usage then goes down to this (http://screencast.com/t/VdTIy3bgZiQB). An hour after I restarted http, the site went offline/online again, so restarting http didn't help much. When the site is going offline/online, I ran the top command and got this (http://screencast.com/t/zEwr7YQj3). Here is a top command when the site is at its lowest (http://screencast.com/t/eaMfha9lbT - so this would be dubbed "normal"). Here is a bandwidth report (http://screencast.com/t/AS0h2CH1Gypq). The traffic doesn't seem to be that much (http://screencast.com/t/s7hrWNNic1K), but looking at the times the site is going up/down, this may be one of the reasons? I have the dvp Nitro package at Media Temple (http://mediatemple.net/webhosting/nitro/). So at this point I would request some help in trying to figure out what the cause of this is, and how I can go about pinpointing the issue. ANY HELP is greatly appreciated.
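    When the memory strain shows up again, a quick way to see which processes are actually holding the memory is (a generic sketch; assumes a Linux shell on the host):

        # top 15 processes by resident memory
        ps aux --sort=-%mem | head -n 15
        # and by CPU, for comparison
        ps aux --sort=-%cpu | head -n 15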

    Read the article

  • virtual memory committed

    - by vinu
    After a server bounce, and after a period of around 40-45 days, we receive continuous "Committed Virtual Memory" alerts indicating swap space usage in the magnitude of 4GB. This also causes the application to perform very slowly and experience a number of stalled transactions.
    Server setup: 4 Tomcat servers (version 7.0.22) that are load balanced (not clustered) by 2 Apache servers; the Apache servers themselves serve static content and route requests to the 4 Tomcat servers.
    Java runtime version: java version "1.6.0_30", Java(TM) SE Runtime Environment (build 1.6.0_30-b12), Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode).
    Memory startup parameters: MEMORY_OPTIONS="-Xms1024m -Xmx1024m -Xss192k -XX:MaxGCPauseMillis=500 -XX:+HeapDumpOnOutOfMemoryError -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled"
    Monitoring: Wily monitoring is available on all the production servers; it monitors key server parameters and sends out configurable alert emails based on predefined settings.
    Note: each of the servers also hosts two other separate Tomcat domains that run different applications.
    Investigated areas: There is no heap memory leak, and GC runs fine without any issues over any period of time. The current busy thread count corresponds directly to application usage: weekends and nights have fewer threads than business hours. ThreadLocal uses a WeakReference internally; if the ThreadLocal is not strongly referenced, it will be garbage-collected, even though various threads have values stored via that ThreadLocal. Additionally, ThreadLocal values are actually stored in the Thread; if a thread dies, all of the values associated with that thread through a ThreadLocal are collected. If you have a ThreadLocal as a final class member, that's a strong reference, and it cannot be collected until the class is unloaded. But this is how any class member works, and it isn't considered a memory leak. The cited problem only comes into play when the value stored in a ThreadLocal strongly references that ThreadLocal itself, a sort of circular reference. In this case, the value (a SimpleDateFormat) has no backwards reference to the ThreadLocal, so there's no memory leak in this code. Can anyone please let me know what could be the cause of this and what should be monitored?
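    To see where the ~4GB of swap is actually going, per-process swap usage can be sampled directly (a sketch; assumes the Tomcats run on Linux with a 2.6.34+ kernel, which exposes VmSwap):

        # top swap consumers, process by process
        grep VmSwap /proc/[0-9]*/status 2>/dev/null | sort -k2 -rn | head -15

    If the Tomcat JVMs dominate the list despite healthy heaps, the committed-but-swapped memory is likely outside the heap (thread stacks, PermGen, native buffers), which is where the -Xss and MaxPermSize settings above come into play.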

    Read the article

  • Starting Oracle database automatically

    - by Searock
    I am using Fedora 8 and Oracle 10g Express Edition. Every time I start Fedora I have to click on "Start Database". How can I add startdb.sh to the startup sequence so that it executes automatically when Fedora boots? I have tried adding the path to /etc/rc.d/rc.local, but it still doesn't work:

        ./usr/lib/oracle/xe/app/oracle/product/10.2.0/server/config/scripts/startdb.sh

    I have even tried adding this script as /etc/init.d/oracle:

        #!/bin/bash
        #
        # Run-level Startup script for the Oracle Instance and Listener
        #
        # chkconfig: 345 91 19
        # description: Startup/Shutdown Oracle listener and instance

        ORA_HOME="/u01/app/oracle/product/9.2.0.1.0"
        ORA_OWNR="oracle"

        # if the executables do not exist -- display error
        if [ ! -f $ORA_HOME/bin/dbstart -o ! -d $ORA_HOME ]
        then
            echo "Oracle startup: cannot start"
            exit 1
        fi

        # depending on parameter -- startup, shutdown, restart
        # of the instance and listener or usage display
        case "$1" in
            start)
                # Oracle listener and instance startup
                echo -n "Starting Oracle: "
                su - $ORA_OWNR -c "$ORA_HOME/bin/lsnrctl start"
                su - $ORA_OWNR -c $ORA_HOME/bin/dbstart
                touch /var/lock/subsys/oracle
                echo "OK"
                ;;
            stop)
                # Oracle listener and instance shutdown
                echo -n "Shutdown Oracle: "
                su - $ORA_OWNR -c "$ORA_HOME/bin/lsnrctl stop"
                su - $ORA_OWNR -c $ORA_HOME/bin/dbshut
                rm -f /var/lock/subsys/oracle
                echo "OK"
                ;;
            reload|restart)
                $0 stop
                $0 start
                ;;
            *)
                echo "Usage: $0 start|stop|restart|reload"
                exit 1
        esac
        exit 0

    and even this doesn't work. startdb.sh is located at /usr/lib/oracle/xe/app/oracle/product/10.2.0/server/config/scripts/startdb.sh. Thanks.
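    On a SysV-init system like Fedora 8, an init script also has to be registered before it runs at boot; a short sketch (assuming the script above is saved as /etc/init.d/oracle):

        chmod +x /etc/init.d/oracle
        chkconfig --add oracle      # reads the 'chkconfig: 345 91 19' header
        chkconfig oracle on
        chkconfig --list oracle     # verify runlevels 3, 4 and 5 are on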

    Read the article

  • MySQL query, 2 similar servers, 2 minute difference in execution times

    - by mr12086
    I had a similar question on Stack Overflow, but it seems to be more server/MySQL-setup related than coding. The queries below all execute instantly on our development server, whereas they can take up to 2 minutes 20 seconds on production. The query execution time seems to be affected by how ambiguous the LIKE strings are. If they closely match a country that has few matches it will take less time, and if you use something like 'ge' for Germany it will take longer to execute. But this doesn't always work out like that; at times it's quite erratic. "Sending data" appears to be the culprit, but why, and what does that mean? Also, free memory on production looks to be quite low.

    Production:
        Intel Quad Xeon E3-1220 3.1GHz
        4GB DDR3
        2x 1TB SATA in RAID1
        Network speed 100Mb
        Ubuntu

    Development:
        Intel Core i3-2100, 2C/4T, 3.10GHz
        500 GB SATA - No RAID
        4GB DDR3

    UPDATE 2: mysqltuner output.

    [prod]
        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.61-0ubuntu0.10.04.1
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 103M (Tables: 180)
        [--] Data in InnoDB tables: 491M (Tables: 19)
        [!!] Total fragmented tables: 38
        -------- Security Recommendations -------------------------------------------
        [OK] All database users have passwords assigned
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 77d 4h 6m 1s (53M q [7.968 qps], 14M conn, TX: 87B, RX: 12B)
        [--] Reads / Writes: 98% / 2%
        [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads)
        [OK] Maximum possible memory usage: 463.8M (11% of installed RAM)
        [OK] Slow queries: 0% (12K/53M)
        [OK] Highest usage of available connections: 22% (34/151)
        [OK] Key buffer size / total MyISAM indexes: 16.0M/10.6M
        [OK] Key buffer hit rate: 98.7% (162M cached / 2M reads)
        [OK] Query cache efficiency: 20.7% (7M cached / 36M selects)
        [!!] Query cache prunes per day: 3934
        [OK] Sorts requiring temporary tables: 1% (3K temp sorts / 230K sorts)
        [!!] Joins performed without indexes: 71068
        [OK] Temporary tables created on disk: 24% (3M on disk / 13M total)
        [OK] Thread cache hit rate: 99% (690 created / 14M connections)
        [!!] Table cache hit rate: 0% (64 open / 85M opened)
        [OK] Open file limit used: 12% (128/1K)
        [OK] Table locks acquired immediately: 99% (16M immediate / 16M locks)
        [!!] InnoDB data size / buffer pool: 491.9M/8.0M
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            Enable the slow query log to troubleshoot bad queries
            Adjust your join queries to always utilize indexes
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            query_cache_size (> 16M)
            join_buffer_size (> 128.0K, or always use indexes with joins)
            table_cache (> 64)
            innodb_buffer_pool_size (>= 491M)

    [dev]
        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.62-0ubuntu0.11.10.1
        [!!] Switch to 64-bit OS - MySQL cannot currently use all of your RAM
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 185M (Tables: 632)
        [--] Data in InnoDB tables: 967M (Tables: 38)
        [!!] Total fragmented tables: 73
        -------- Security Recommendations -------------------------------------------
        [OK] All database users have passwords assigned
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 1d 2h 26m 9s (5K q [0.058 qps], 1K conn, TX: 4M, RX: 1M)
        [--] Reads / Writes: 99% / 1%
        [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads)
        [OK] Maximum possible memory usage: 463.8M (11% of installed RAM)
        [OK] Slow queries: 0% (0/5K)
        [OK] Highest usage of available connections: 1% (2/151)
        [OK] Key buffer size / total MyISAM indexes: 16.0M/18.6M
        [OK] Key buffer hit rate: 99.9% (60K cached / 36 reads)
        [OK] Query cache efficiency: 44.5% (1K cached / 2K selects)
        [OK] Query cache prunes per day: 0
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 44 sorts)
        [OK] Temporary tables created on disk: 24% (162 on disk / 666 total)
        [OK] Thread cache hit rate: 99% (2 created / 1K connections)
        [!!] Table cache hit rate: 1% (64 open / 4K opened)
        [OK] Open file limit used: 8% (88/1K)
        [OK] Table locks acquired immediately: 100% (1K immediate / 1K locks)
        [!!] InnoDB data size / buffer pool: 967.7M/8.0M
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            Enable the slow query log to troubleshoot bad queries
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            table_cache (> 64)
            innodb_buffer_pool_size (>= 967M)

    UPDATE 1: When testing the queries listed here there is usually no more than one other query taking place, and usually none. Production is actually handling Apache requests, while development gets very few, as it's only myself and one other person who access it. Could the 4GB of RAM be getting exhausted by using the single machine for both Apache and the MySQL server?

    Production:

        sudo hdparm -tT /dev/sda
        /dev/sda:
         Timing cached reads:   24872 MB in  2.00 seconds = 12450.72 MB/sec
         Timing buffered disk reads:  368 MB in  3.00 seconds = 122.49 MB/sec
        sudo hdparm -tT /dev/sdb
        /dev/sdb:
         Timing cached reads:   24786 MB in  2.00 seconds = 12407.22 MB/sec
         Timing buffered disk reads:  350 MB in  3.00 seconds = 116.53 MB/sec

    Server version (mysql + ubuntu versions): 5.1.61-0ubuntu0.10.04.1

    Development:

        sudo hdparm -tT /dev/sda
        /dev/sda:
         Timing cached reads:   10632 MB in  2.00 seconds = 5319.40 MB/sec
         Timing buffered disk reads:  400 MB in  3.01 seconds = 132.85 MB/sec

    Server version (mysql + ubuntu versions): 5.1.62-0ubuntu0.11.10.1

    ORIGINAL DATA: This query is NOT the query in question but is related, so I'll post it.
        SELECT f.form_question_has_answer_id
        FROM form_question_has_answer f
        INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id
        INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id
        INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id
        INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id
        INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id
        WHERE (f2.form_template_name = 'custom' AND p.project_company_has_user_garbage_collection = 0 AND p.project_company_has_user_project_id = '29')
          AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%')
          AND f.form_question_has_answer_form_id = '174'

    And the explain plan for the above query (run on both dev and production, producing the same plan):

        | id | select_type | table | type   | possible_keys | key | key_len | ref | rows | Extra |
        | 1  | SIMPLE      | p2    | const  | PRIMARY | PRIMARY | 4 | const | 1 | Using index |
        | 1  | SIMPLE      | f     | ref    | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 | const | 796 | Using where |
        | 1  | SIMPLE      | u     | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using index |
        | 1  | SIMPLE      | p     | ref    | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where |
        | 1  | SIMPLE      | f2    | ref    | form_project_id | form_project_id | 4 | const | 15 | Using where |
        | 1  | SIMPLE      | c     | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where |

    This query takes ~2 minutes 20 seconds to execute.
    The query that is ACTUALLY being run on the server is this one:

        SELECT COUNT(*) AS num_results
        FROM (SELECT f.form_question_has_answer_id
              FROM form_question_has_answer f
              INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id
              INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id
              INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id
              INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id
              INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id
              WHERE (f2.form_template_name = 'custom' AND p.project_company_has_user_garbage_collection = 0 AND p.project_company_has_user_project_id = '29')
                AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%')
                AND f.form_question_has_answer_form_id = '174'
              GROUP BY f.form_question_has_answer_id) dctrn_count_query;

    With explain plans (again, the same on dev and production):

        | id | select_type | table | type   | possible_keys | key | key_len | ref | rows | Extra |
        | 1  | PRIMARY     | NULL  | NULL   | NULL | NULL | NULL | NULL | NULL | Select tables optimized away |
        | 2  | DERIVED     | p2    | const  | PRIMARY | PRIMARY | 4 | | 1 | Using index |
        | 2  | DERIVED     | f     | ref    | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 | | 797 | Using where |
        | 2  | DERIVED     | p     | ref    | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id,project_company_has_user_garbage_collection | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where |
        | 2  | DERIVED     | f2    | ref    | form_project_id | form_project_id | 4 | | 15 | Using where |
        | 2  | DERIVED     | c     | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where |
        | 2  | DERIVED     | u     | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_user_id | 1 | Using where; Using index |

    On the production server the information I have is as follows.
    Upon execution:

        +-------------+
        | num_results |
        +-------------+
        |           3 |
        +-------------+
        1 row in set (2 min 14.28 sec)

    Show profile:

        starting                          0.000016
        checking query cache for query    0.000057
        Opening tables                    0.004388
        System lock                       0.000003
        Table lock                        0.000036
        init                              0.000030
        optimizing                        0.000016
        statistics                        0.000111
        preparing                         0.000022
        executing                         0.000004
        Sorting result                    0.000002
        Sending data                      136.213836
        end                               0.000007
        query end                         0.000002
        freeing items                     0.004273
        storing result in query cache     0.000010
        logging slow query                0.000001
        logging slow query                0.000002
        cleaning up                       0.000002

    On development the results are as follows:

        +-------------+
        | num_results |
        +-------------+
        |           3 |
        +-------------+
        1 row in set (0.08 sec)

    Again, the profile for this query:

        starting                          0.000022
        checking query cache for query    0.000148
        Opening tables                    0.000025
        System lock                       0.000008
        Table lock                        0.000101
        optimizing                        0.000035
        statistics                        0.001019
        preparing                         0.000047
        executing                         0.000008
        Sorting result                    0.000005
        Sending data                      0.086565
        init                              0.000015
        optimizing                        0.000006
        executing                         0.000020
        end                               0.000004
        query end                         0.000004
        freeing items                     0.000028
        storing result in query cache     0.000005
        removing tmp table                0.000008
        closing tables                    0.000008
        logging slow query                0.000002
        cleaning up                       0.000005

    If I remove the user and/or project inner joins, the query time is reduced to 30s. Last bit of information I have: the MySQL server and Apache are on the same box; there is only one box for production. Production output from top, before and after:

        top - 15:43:25 up 78 days, 12:11, 4 users, load average: 1.42, 0.99, 0.78
        Tasks: 162 total, 2 running, 160 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.1%us, 50.4%sy, 0.0%ni, 49.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 4037868k total, 3772580k used, 265288k free, 243704k buffers
        Swap: 3905528k total, 265384k used, 3640144k free, 1207944k cached

        top - 15:44:31 up 78 days, 12:13, 4 users, load average: 1.94, 1.23, 0.87
        Tasks: 160 total, 2 running, 157 sleeping, 0 stopped, 1 zombie
        Cpu(s): 0.2%us, 50.6%sy, 0.0%ni, 49.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 4037868k total, 3834300k used, 203568k free, 243736k buffers
        Swap: 3905528k total, 265384k used, 3640144k free, 1207804k cached

    But this isn't a good representation of production's normal status, so here is a grab of it from today, outside of executing the queries:

        top - 11:04:58 up 79 days, 7:33, 4 users, load average: 0.39, 0.58, 0.76
        Tasks: 156 total, 1 running, 155 sleeping, 0 stopped, 0 zombie
        Cpu(s): 3.3%us, 2.8%sy, 0.0%ni, 93.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 4037868k total, 3676136k used, 361732k free, 271480k buffers
        Swap: 3905528k total, 268736k used, 3636792k free, 1063432k cached

    Development (this one doesn't change during or after):

        top - 15:47:07 up 110 days, 22:11, 7 users, load average: 0.17, 0.07, 0.06
        Tasks: 210 total, 2 running, 208 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 4111972k total, 1821100k used, 2290872k free, 238860k buffers
        Swap: 4183036k total, 66472k used, 4116564k free, 921072k cached
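    Worth noting: the mysqltuner output above flags an 8MB InnoDB buffer pool against ~492MB of InnoDB data on production, which forces data to be reread from disk on every query. Its recommendations would translate into something like the following my.cnf changes (a sketch; the values are assumptions to be sized against the 4GB of RAM shared with Apache):

        [mysqld]
        # cache InnoDB data and indexes in RAM instead of rereading from disk
        innodb_buffer_pool_size = 1G
        # stop the table cache from thrashing (hit rate is currently 0%)
        table_cache             = 512
        # reduce query cache prunes
        query_cache_size        = 32M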

    Read the article

  • Fixed Resource Monitor Graph Scale in Windows Server 2008 R2

    - by Clever Human
    In Windows Server 2008 R2's Resource Monitor, is there a way to set the scale of the various graphs to constant values instead of letting it vary based on the data? It seems to me that the point of a graph is to give a quick at-a-glance overview of the values it is showing. So if I look at the CPU graph and the line is up near the top, I know immediately that something is using all my CPU and can go investigate what. I don't really care if the CPU is jumping between .01% and 2%. Or if the network usage monitor is up near the top, I know that all my bandwidth is being used up, and can go figure out what. But the way things are now, the graphs are meaningless because the scales constantly shift. If you look at the network usage graph, one second it might have a scale topping out at 100 Kbps, and the next second a scale based on 1 Mbps! So... is there a registry key or something that will peg the scale of these graphs to logical maximums?

    Read the article

  • recursive grep started at / hangs

    - by Martin
    I have used the following grep search pattern on multiple platforms:

        grep -r -I -D skip 'string_to_match' /

    For example on FreeBSD 8.0, FreeBSD 6.4 and Debian 6.0 (squeeze). The command does a recursive search starting from the root directory, assumes that binary files do not contain the 'string_to_match', and skips devices, sockets and named pipes. FreeBSD 8.0 and FreeBSD 6.4 use GNU grep version 2.5.1, and Debian 6.0 uses GNU grep version 2.6.3. On FreeBSD 6.4, the last information printed to stderr was "grep: /dev/cuad0: Device busy". After this, grep just idles, as according to "top -m io -o total" the I/O usage of grep is nonexistent. The same behavior occurs under FreeBSD 8.0, but the last information sent to stderr is "grep: /tmp/.wine-0: Permission denied" on my installation. In the case of Debian, the last output to stderr is "grep: /proc/sysrq-trigger: Input/output error". If I check the I/O usage of the grep process under Debian, it is the following:

        root@Debian:~# iotop -bp 22439
        Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
          TID  PRIO  USER  DISK READ  DISK WRITE  SWAPIN     IO  COMMAND
        22439  be/4  root   0.00 B/s   0.00 B/s  0.00 %  0.00 %  grep -r -I -D skip 10.10.10.99 /
        Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
          TID  PRIO  USER  DISK READ  DISK WRITE  SWAPIN     IO  COMMAND
        22439  be/4  root   0.00 B/s   0.00 B/s  0.00 %  0.00 %  grep -r -I -D skip 10.10.10.99 /
        Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
          TID  PRIO  USER  DISK READ  DISK WRITE  SWAPIN     IO  COMMAND
        22439  be/4  root   0.00 B/s   0.00 B/s  0.00 %  0.00 %  grep -r -I -D skip 10.10.10.99 /
        ^Croot@Debian:~#

    What might cause this? Is there a way to view which file grep is currently processing in case lsof is not present? I'm able to use lsof under Debian, and it looks like the problematic file name there is "0xc6b2c230 file struct, ty=0, op=0xc0d34120". I'm not sure what this is. I'm not able to use lsof or fstat under FreeBSD. PS: I know I could use the find utility, but this is not the question.
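    On the Debian box at least, procfs can show the current file without lsof; a minimal sketch (PID taken from the iotop output above, fd number an assumption):

        # list the file descriptors the hung grep holds open;
        # the last regular file is usually the one being read
        ls -l /proc/22439/fd
        # read offset of a given descriptor
        cat /proc/22439/fdinfo/3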

    Read the article

  • Very different results from df after a few seconds

    - by tatus2
    When the backup moves files from one server to the other, the results from df on the destination change every few seconds in an impossible manner. The source host is running rsync. On the destination host I'm running the following command every few seconds:

        echo `date` `df|grep md0`

    Results are below:

        Sat Jun 29 23:57:12 CEST 2013 /dev/md0 4326425568 579316100 3527339636 15% /MD0
        Sat Jun 29 23:57:14 CEST 2013 /dev/md0 4326425568 852513700 3254142036 21% /MD0
        Sat Jun 29 23:57:15 CEST 2013 /dev/md0 4326425568 969970340 3136685396 24% /MD0
        Sat Jun 29 23:57:17 CEST 2013 /dev/md0 4326425568 1255222180 2851433556 31% /MD0
        Sat Jun 29 23:57:20 CEST 2013 /dev/md0 4326425568 1276006720 2830649016 32% /MD0
        Sat Jun 29 23:57:24 CEST 2013 /dev/md0 4326425568 1355440016 2751215720 34% /MD0
        Sat Jun 29 23:57:26 CEST 2013 /dev/md0 4326425568 1425090960 2681564776 35% /MD0
        Sat Jun 29 23:57:27 CEST 2013 /dev/md0 4326425568 1474601872 2632053864 36% /MD0
        Sat Jun 29 23:57:28 CEST 2013 /dev/md0 4326425568 1493627384 2613028352 37% /MD0
        Sat Jun 29 23:57:32 CEST 2013 /dev/md0 4326425568 615934400 3490721336 15% /MD0
        Sat Jun 29 23:57:33 CEST 2013 /dev/md0 4326425568 636071360 3470584376 16% /MD0

    As you can see, I start from a usage of 15% and after 15 seconds I'm at 37% (needless to say, the backup cannot copy this huge amount of data in such a short time). After ~20 seconds the cycle closes and I'm again at roughly the same usage as earlier. That value is reasonable: ca. 35 MB were copied. Can somebody explain to me what is going on? Does df only estimate usage instead of reporting the actual used value?
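    One sanity check worth running while this happens is to compare what the filesystem reports with what the directory tree actually holds (a sketch; mount point as above, and lsof is assumed to be installed):

        df -P /MD0         # what the filesystem thinks is used
        du -sx /MD0        # what the files on it add up to
        lsof +L1 /MD0      # open-but-deleted files, which df counts and du doesn't

    A large, fluctuating gap between the two usually points at temporary files being created and deleted (or held open after deletion) during the transfer.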

    Read the article

  • Mysterious Windows 7 slowdown problem

    - by cletus
    I have a fairly beefy machine: Intel Q9450, 8GB DDR2-800 (4x2), Intel X25-M G2 80GB SSD, several other hard drives, Windows 7 Ultimate 64. In the last month I've developed a mysterious slowdown problem. When I start my IDE (IntelliJ IDEA) it usually takes about 20 seconds on the SSD. If my machine has been on for a day or two (as far as I can tell, this is the only pattern) and I try to start the IDE, it brings my machine to a halt. CPU usage goes up to 25% per core (so it's basically 100% usage) and it takes up to 5 minutes to start. Other things I've noticed: iTunes will start to skip and stutter (my music is running off a second hard drive). The only persistent things I'm running are: AVG Anti-Virus, Spybot (the slowdown predates this), Hamachi and Murmur (again, the slowdown predates these), Apple Airport Base Agent, and the HP OfficeJet 8500 driver/manager. The browser I use is Chrome. I can't think why that'd be relevant, but it's always on, so I thought I'd mention it. When this happens I can't see a reason for it in the process list. No CPU hogs. No spikes in I/O activity that I can see. Basically I'm at a loss to explain it and need to reboot, at which point everything returns to normal (for a while). FWIW the Intel SSD is about 75-80% full. I know being too full can really degrade performance, but I don't believe that's the issue here. Does anyone have any ideas on what I can do to fix this, or at least help find what's going wrong? This same machine (sans SSD) could run Win XP and stay up fine for a month or two.

    Read the article

  • How to resolve virtual disk degraded in Windows Server 2012

    - by harrydev
    I am using the new Storage Spaces feature in Windows Server 2012. I have the following disks:

        FriendlyName   CanPool  OperationalStatus  HealthStatus  Usage        Size
        ------------   -------  -----------------  ------------  -----        ----
        PhysicalDisk2  False    OK                 Healthy       Auto-Select  2.73 TB
        PhysicalDisk3  False    OK                 Healthy       Auto-Select  2.73 TB
        PhysicalDisk4  False    OK                 Healthy       Auto-Select  2.73 TB
        PhysicalDisk5  False    OK                 Healthy       Auto-Select  2.73 TB

    There is also a separate OS disk. The above disks are part of a single storage pool:

        FriendlyName  OperationalStatus  HealthStatus  IsPrimordial  IsReadOnly
        ------------  -----------------  ------------  ------------  ----------
        Pool          OK                 Healthy       False         False

    Within this storage pool some virtual disks are defined, see below:

        FriendlyName  ResiliencySettingName  OperationalStatus  HealthStatus  IsManualAttach  Size
        ------------  ---------------------  -----------------  ------------  --------------  ----
        Docs          Mirror                 OK                 Healthy       False           500 GB
        Data          Mirror                 Degraded           Warning       False           500 GB
        Work          Mirror                 Degraded           Warning       False           2 TB

    Now the virtual disks are all running a normal 2-way mirror, but two of the virtual disks are degraded. This is probably because one of the physical disks was offline for a short period of time. However, now the virtual disks cannot be repaired, even though all physical disks are healthy. There is plenty of available space in the storage pool. This I cannot understand, so I was hoping for some help on how to resolve it. Below I have listed the full output from the Get-VirtualDisk cmdlet for the "Work" disk:

        ObjectId                          : {XXXXXXXX}
        PassThroughClass                  :
        PassThroughIds                    :
        PassThroughNamespace              :
        PassThroughServer                 :
        UniqueId                          : XXXXXXXX
        Access                            : Read/Write
        AllocatedSize                     : 412316860416
        DetachedReason                    : None
        FootprintOnPool                   : 824633720832
        FriendlyName                      : Work
        HealthStatus                      : Warning
        Interleave                        : 262144
        IsDeduplicationEnabled            : False
        IsEnclosureAware                  : False
        IsManualAttach                    : False
        IsSnapshot                        : False
        LogicalSectorSize                 : 512
        Name                              :
        NameFormat                        :
        NumberOfAvailableCopies           : 0
        NumberOfColumns                   : 2
        NumberOfDataCopies                : 2
        OperationalStatus                 : Degraded
        OtherOperationalStatusDescription :
        OtherUsageDescription             : Disk for data being worked on (not backed up)
        ParityLayout                      :
        PhysicalDiskRedundancy            : 1
        PhysicalSectorSize                : 4096
        ProvisioningType                  : Thin
        RequestNoSinglePointOfFailure     : True
        ResiliencySettingName             : Mirror
        Size                              : 2199023255552
        UniqueIdFormat                    : Vendor Specific
        UniqueIdFormatDescription         :
        Usage                             : Other
        PSComputerName                    :

    Read the article

  • Debian x86_64 + Nginx + PHP5-FPM optimization

    - by Olal'a
    I used to have a VPS (512MB) from Linode and I was running nginx + php5-fpm (which comes with php 5.3.3) on Debian Lenny (i686). The total memory usage was about 90-100MB. Now I have another VPS (different hosting company) and I also run nginx + php5-fpm on Debian Lenny (x86_64). The system is 64-bit, so the memory usage is higher now, about 210-230MB, which I think is too much. Here is my php5-fpm.conf:

        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 2
        pm.min_spare_servers = 2
        pm.max_spare_servers = 5
        pm.max_requests = 300

    Here is what the top command tells me:

        top - 15:36:58 up 3 days, 16:05, 1 user, load average: 0.00, 0.00, 0.00
        Tasks: 209 total, 1 running, 208 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 99.9%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 532288k total, 469628k used, 62660k free, 28760k buffers
        Swap: 1048568k total, 408k used, 1048160k free, 210060k cached

          PID  USER      PR  NI  VIRT   RES   SHR  S  %CPU  %MEM  TIME+    COMMAND
        22806  www-data  20   0  178m   67m   31m  S     1  13.1  0:05.02  php5-fpm
         8980  mysql     20   0  241m   55m  7384  S     0  10.6  2:42.42  mysqld
        22807  www-data  20   0  162m   43m   22m  S     0   8.3  0:04.84  php5-fpm
        22808  www-data  20   0  160m   41m   23m  S     0   8.0  0:04.68  php5-fpm
        25102  www-data  20   0  151m   30m   21m  S     0   5.9  0:00.80  php5-fpm
        10849  root      20   0  44100  8352  1808  S    0   1.6  0:03.16  munin-node
        22805  root      20   0  145m   4712  1472  S    0   0.9  0:00.16  php5-fpm
        21859  root      20   0  66168  3248  2540  S    1   0.6  0:00.02  sshd
        21863  root      20   0  66028  3188  2548  S    0   0.6  0:00.06  sshd
         3956  www-data  20   0  31756  3052   928  S    0   0.6  0:06.42  nginx
         3954  www-data  20   0  31712  3036   928  S    0   0.6  0:06.74  nginx
         3951  www-data  20   0  31712  3008   928  S    0   0.6  0:06.42  nginx
         3957  www-data  20   0  31688  2992   928  S    0   0.6  0:06.56  nginx
         3950  www-data  20   0  31676  2980   928  S    0   0.6  0:06.72  nginx
         3955  www-data  20   0  31552  2896   928  S    0   0.5  0:06.56  nginx
         3953  www-data  20   0  31552  2888   928  S    0   0.5  0:06.42  nginx
         3952  www-data  20   0  31544  2880   928  S    0   0.5  0:06.60  nginx

    So, the question is: is there any way to use less memory? Btw, I have 16 cores and it would be nice to make use of them...
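    Since the php5-fpm workers above hold 30-67MB resident each, the simplest lever is capping the worker count; a sketch of more conservative pool settings for a 512MB VPS (the values are assumptions, and this stays with pm = dynamic because pm = ondemand only arrived in PHP 5.3.9, after the 5.3.3 build used here):

        pm = dynamic
        pm.max_children = 3
        pm.start_servers = 1
        pm.min_spare_servers = 1
        pm.max_spare_servers = 2
        pm.max_requests = 200

    Note also that much of each worker's footprint is shared; the real per-process cost is closer to RES minus SHR than to VIRT.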

    Read the article

  • How to find out what is causing a slow down of the application on this server?

    - by Jan P.
    This is not the typical serverfault question, but I'm out of ideas and don't know where else to go. If there are better places to ask this, just point me there in the comments. Thanks. Situation: we have a web application that uses Zend Framework, so it runs in PHP on an Apache web server. We use MySQL for data storage and memcached for object caching. The application has a very unique usage and load pattern. It is a mobile web application where, every full hour, a cronjob looks through the database for users that have some information waiting or an action to do, and sends this information to an (external) notification server, which pushes these notifications to them. After the users get these notifications, they go to the app and use it, mostly for a very short time. An hour later, the same thing happens. Problem: in the last few weeks, usage of the application really started to grow. In the last few days we have encountered very high load and a doubling of application response times during and after the sending of these notifications (so basically every hour). The server doesn't crash or stop responding to requests; it just gets slower and slower and often takes 20 minutes to recover, until the same thing starts again at the full hour. We have extensive monitoring in place (New Relic, collectd) but I can't figure out what's wrong; I can't find the bottleneck. That's where you come in: can you help me figure out what's wrong and maybe how to fix it? Additional information: the server is a 16-core Intel Xeon (8 cores with hyperthreading, I think) with 12GB RAM, running Ubuntu 10.04 (Linux 3.2.4-20120307 x86_64). Apache is 2.2.x and PHP is version 5.3.2-1ubuntu4.11. If any configuration information would help analyze the problem, just comment and I will add it. Graphs: info: phpinfo(), apc status, memcache status; collectd: Processes, CPU, Apache, Load, MySQL, Vmem, Disk; New Relic: Application performance, Server overview, Processes, Network, Disks. (Sorry the graphs are gifs and not the same time period, but I think the most important info is in there.)
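    Since the slowdown is predictable, one low-tech option is to let cron capture a system snapshot spanning the top of every hour (a sketch; the path and sampling duration are assumptions):

        # /etc/cron.d/hourly-snapshot -- record 2 minutes of samples starting at :59
        59 * * * * root (date; uptime; vmstat 5 24; ps aux --sort=-%cpu | head -20) >> /var/log/hourly-snapshot.log 2>&1

    Comparing a bad hour's snapshot against a quiet one usually narrows the problem to CPU, I/O wait, or swapping before fancier tooling is needed.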

    Read the article

  • Monitoring AWS Systems Behind ElasticBeanStalk

    - by A. Avadis
    So I'm getting a company set up in the Amazon cloud -- creating IAAS protocols/solutions/standardized implementations, etc., while also being the sysadmin for individual systems, app environments, and day-to-day uptime. One of the biggest issues I'm having is tracking various system/application logs, as well as logging/monitoring/archiving system metrics like memory usage, CPU usage, etc., in a centralized fashion. E.g. Nagios + Urchin. The BIGGEST impediment to my endeavors is the following: the company application is deployed in the form of a Java *.WAR file, uploaded to an Elastic Beanstalk application environment, load balancing and auto-scaling between 3 (min) and 10 (max) servers, and the EC2s that run the application are fired up and disposed of ad hoc. That is to say, I can't monitor the individual EC2s for very long, because so many are terminated and then auto-provisioned/auto-scaled on the fly; I'd constantly be having to "monitor what I'm monitoring" and continuously remove/add EC2 machine addresses to my monitoring lists. Is there some way to use monitoring tools like Zabbix or Nagios to monitor the Elastic Beanstalk environment and have them automatically add new EC2s and remove terminated/failed EC2s from their monitoring lists? Furthermore, is there anything I can do with Graylog to achieve similar results with the aggregation/centralization of my application logs from multiple EC2 instances into ONE consolidated set of logs/events? If not Graylog, is there ANYTHING LIKE Graylog that can automatically detect which EC2 members are being added to/removed from the environment and collect the logs from them automatically? Any and all advice or direction is appreciated. Thanks much, and cheers!!
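    One building block for the "monitor what I'm monitoring" problem is to regenerate the host list from EC2 itself on a schedule; a sketch using the AWS CLI (the environment name is a placeholder, and the tag key relies on Beanstalk tagging its instances with the environment name):

        # list the private IPs of the environment's currently running instances
        aws ec2 describe-instances \
          --filters "Name=tag:elasticbeanstalk:environment-name,Values=my-env" \
                    "Name=instance-state-name,Values=running" \
          --query 'Reservations[].Instances[].PrivateIpAddress' --output text

    Feeding that output into generated Nagios/Zabbix host definitions (and reloading the monitor) every few minutes keeps the list in step with the auto-scaler.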

    Read the article

  • MySQL getting stuck, eating up disk i/o

    - by bonez05
    Hi all, I'm using MySQL 5.0.51 on Solaris. At intermittent times it looks like MySQL is getting 'stuck'. The disk usage on the server spikes to 98% busy from reads. I used dtrace (specifically the DTrace Toolkit's iosnoop) to track down which process was doing all the reads. MySQL was reading tablename.TDM hundreds of times per second. There was no more than average load on the webserver that could account for this. There were no cronjobs running, and no other utilities like mysqldump or anything. It is a master/slave replication setup. As a jury-rigged fix, I altered the MySQL table from 'tablename' to 'tablename2' and then back to 'tablename'. This fixed the problem temporarily and "unstuck" MySQL: the disk usage went back down and dtrace no longer showed hundreds of reads of 'tablename.TDM' per second. A couple of ideas I had are: 1. a MySQL version bug; 2. an infinite loop somewhere in my application (though I'm not sure how likely that is); 3. ?? Has anybody seen this before, or have any insight? Thanks

    Read the article

  • issues with Nginx + Passenger Production setup - Loading time/request time delay

    - by Dani Cela
    I'm having a bit of an issue relating to request time. I have nginx as a proxy server for a Ruby on Rails app running Passenger. I also have a PostgreSQL database server running on its own VM, separate from my nginx/application server. My issue is that when I try to access my products page, which does a lot of database queries, the query takes maybe 3-4 seconds. The second I flood the web server with requests, I choke out the web server and requests take almost 20-30 seconds to process. The Rails server and database server do not crash, and the usage is not that high. Each server has more than enough memory; even CPU usage on the Rails server isn't more than 85% (albeit that's high, it's not maxed out). Is my problem related to my nginx proxy server? I don't really know how to fully explain this, so if you have a question please ask it and I can clarify what I mean. EDIT: to see exactly what I mean relating to the database query, see http://207.245.4.215/products
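    If requests are queueing behind a small Passenger worker pool while the slow query runs, the pool size is worth checking before blaming the proxying itself; a sketch of the relevant nginx-side Passenger directives (the values are assumptions to be sized against available RAM):

        # in the http block of nginx.conf
        passenger_max_pool_size 6;    # max concurrent Rails processes
        passenger_min_instances 2;    # keep warm processes between bursts

    With a small pool, a handful of 3-4 second database-bound requests is enough to make every later request wait its turn, which matches the 20-30 second pile-up described above.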

    Read the article

  • Why is dwm.exe using so much memory?

    - by Leonard Challis
    I've scoured the web, but I'm sick of reading "scan your computer for viruses" and "upgrade your RAM" in answers to questions similar to this one. I understand that dwm.exe (simply put) caches bitmaps for things like Aero Peek and similar, but as far as I have read it shouldn't be using vast amounts of memory. My colleague and I both have 4GB of RAM, Core 2 Duo, blah, blah -- essentially our machines are pretty capable. His dwm.exe is running at around 30MB; mine is currently running at about half a gig, though it does fluctuate quite a lot. This is while running exactly the same applications (currently Zend Studio, Firefox (with Firemin - low memory usage), and Outlook). Every so often I get a notification asking me if I want to switch to Aero Basic because dwm is using too much memory, and sometimes it will just switch itself to Basic and let me know why. I know it's possible to stop it switching, but I want to know why it is using so much memory; otherwise that's just papering over the cracks. One thing to add: this seems to have started after a robbery on Monday, in which two of my monitors were stolen and I had to temporarily use a couple of alternative monitors. I am now using brand-new monitors, but the problem is the same. All drivers are installed and seemingly working fine. Any ideas why the usage is so high? We are using Windows 7 Professional 64-bit.

    Read the article

  • Windows Server 2008 R2 running at a snail's pace

    - by Django Reinhardt
    Really weird problem here. Our main web server has started running at a snail's pace, for absolutely no reason we can discern. Even after restarting the machine, when there's little or no RAM usage and CPU usage is fluctuating between 0 and 30%, simple tasks, like opening Internet Explorer or waiting for My Computer to open, take forever. There are no processes hogging system resources that we can see... the machine itself is just exhibiting extremely slow behaviour. I've never seen a machine do this. A lot of security updates had built up, so we decided to let Windows install them. When we looked through the history upon restarting, though, they had failed with error code 800706BA. I don't know whether this could be related or not. Any help in this matter would be greatly appreciated. As mentioned in the title, we're running a Windows Server 2008 R2 machine. It's also running SQL Server and IIS. It has 16GB of RAM and a decent quad-core processor. It had been fine until now -- and we haven't changed a thing. Thanks for any help.

    Read the article

  • Shell wrong encoding

    - by csch
    Somehow I managed to screw up my shell encoding. An example:

        root§server:ç£ cat --help
        Usage: cat ¡OPTION¿... ¡FILE¿...
        Concatenate FILE(s), or standard input, to standard output.

          -A, --show-all           equivalent to -vET
          -b, --number-nonblank    number nonempty output lines
          -e                       equivalent to -vE
          -E, --show-ends          display $ at end of each line
          -n, --number             number all output lines
          -s, --squeeze-blank      suppress repeated empty output lines
          -t                       equivalent to -vT
          -T, --show-tabs          display TAB characters as ^I
          -u                       (ignored)
          -v, --show-nonprinting   use ^ and M- notation, except for LFD and TAB
              --help     display this help and exit
              --version  output version information and exit

        With no FILE, or when FILE is -, read standard input.

        Examples:
          cat f - g  Output f's contents, then standard input, then g's contents.
          cat        Copy standard input to standard output.

        Report cat bugs to bug-coreutils§gnu.org
        GNU coreutils home page: <http://www.gnu.org/software/coreutils/>
        General help using GNU software: <http://www.gnu.org/gethelp/>
        For complete documentation, run: info coreutils 'cat invocation'
        root§server:ç£

    It should look like:

        root@server:~# cat --help
        Usage: cat [OPTION]... [FILE]...
        Concatenate FILE(s), or standard input, to standard output.

          -A, --show-all           equivalent to -vET
          -b, --number-nonblank    number nonempty output lines
          -e                       equivalent to -vE
          -E, --show-ends          display $ at end of each line
          -n, --number             number all output lines
          -s, --squeeze-blank      suppress repeated empty output lines
          -t                       equivalent to -vT
          -T, --show-tabs          display TAB characters as ^I
          -u                       (ignored)
          -v, --show-nonprinting   use ^ and M- notation, except for LFD and TAB
              --help     display this help and exit
              --version  output version information and exit

        With no FILE, or when FILE is -, read standard input.

        Examples:
          cat f - g  Output f's contents, then standard input, then g's contents.
          cat        Copy standard input to standard output.

        Report cat bugs to bug-coreutils@gnu.org
        GNU coreutils home page: <http://www.gnu.org/software/coreutils/>
        General help using GNU software: <http://www.gnu.org/gethelp/>
        For complete documentation, run: info coreutils 'cat invocation'
        root@server:~#

    I have no clue what went wrong; do you have any ideas?
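    For the record, when this happens, resetting the session locale and the terminal state is the usual first fix; a sketch (the exact locale name is an assumption -- pick one that `locale -a` lists):

        locale                      # inspect what LANG/LC_* are currently set to
        export LANG=en_US.UTF-8
        export LC_ALL=en_US.UTF-8
        reset                       # reinitialize the terminal
        stty sane                   # recover if the tty itself is confused

    If that fixes the session, the same locale lines belong in the shell's startup file so it sticks.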

    Read the article

  • Site Goes Offline Every Day At Midnight - No One Knows Why

    - by HollerTrain
    Seems a website I manage has been going online and offline every day between 12a and 12:25a. I have no idea what is causing the issue, so I am seeking guidance on where to start. It is a WordPress-based site. So here is what I DO know: I have a Pingdom account which alerts me when the site goes offline, so we can see that every day, like clockwork, the site goes on/off. At the time of the ups/downs I see a lot of strain on the memory usage. Look at the load average when the site is going online/offline (http://screencast.com/t/BRlfXkqrbJII). Then I ran this command to restart http (http://screencast.com/t/usVtYWZ2Qi) and the memory usage then goes down to this (http://screencast.com/t/VdTIy3bgZiQB). An hour after I restarted http, the site went offline/online again, so restarting http didn't help much. When the site is going offline/online, I ran the top command and got this (http://screencast.com/t/zEwr7YQj3). Here is a top command when the site is at its lowest (http://screencast.com/t/eaMfha9lbT - so this would be dubbed "normal"). Here is a bandwidth report (http://screencast.com/t/AS0h2CH1Gypq). The traffic doesn't seem to be that much (http://screencast.com/t/s7hrWNNic1K), but looking at the times the site is going up/down, this may be one of the reasons? I have the dvp Nitro package at Media Temple (http://mediatemple.net/webhosting/nitro/). So at this point I would request some help in trying to figure out what the cause of this is, and how I can go about pinpointing the issue. ANY HELP is greatly appreciated.

    Read the article

  • Three server processes consume no more than 50% of Dual Core CPU

    - by thor
    I have three processes running on an Intel Core 2 Duo CPU. From watching the output of 'top' and graphs of CPU load (drawn by MRTG, data collection via SNMP), I can see that CPU load is never more than 50%; for most of the day, when those processes are busy, CPU load has a ceiling at 50%. I mean, CPU load grows to 50% in the morning and stays there until late evening. My first thought was that only one core was being used at 100%, thus giving 50% across both CPUs. But there are three processes running, and from 'top' I can see that both cores are being loaded, so this is not the case. schedtool shows that CPU affinity for those three processes is at the default, 0x03, allowing them to use both cores. If I force one process onto one core (schedtool -a 0x01) and the two others onto the second (schedtool -a 0x02), cumulative usage grows beyond 50%. Why do three processes seem to consume only 50% of two cores? Why does forcing them onto different CPUs allow usage to grow higher? Any hints? P.S. The processes in question are Counter-Strike servers.

    Read the article

  • Performance data collection for short-running, ephemeral servers

    - by ErikA
    We're building a medical image processing software stack, currently hosted on various AWS resources. As part of this application, we have a handful of long-running servers (database, load balancers, web application, etc.). Collecting performance data on those servers is quite simple - my go-to recipe of Nagios (for monitoring/notifications) and Munin (for collecting performance data and displaying trends) will work just fine. However, as part of this application, we are constantly starting up and terminating compute instances on EC2. In typical usage, these compute instances start up, configure themselves, receive a job from a message queue, and then get to work processing that job, which takes anywhere from 15 minutes to over 8 hours. After job completion, these instances get terminated, never to be heard from again. What is a decent strategy for collecting performance data on these short-lived instances? I don't necessarily need monitoring on them - if they fail for whatever reason, our application will detect this and handle restarting the job on another instance, or raise a flag so an administrator can take a look at things. However, it would still be useful to collect information like CPU (user, idle, iowait, etc.), memory usage, network traffic, disk read/write data, etc. In our internal database, we track the instance ID of the machine that runs each job, and it would be quite helpful to be able to look up performance data for a specific instance ID for troubleshooting and profiling. Munin doesn't seem like a great candidate, as it requires maintaining a list of munin nodes in a text file - far from ideal for an environment with a high amount of churn - and for the short time each node will be running, I'd rather keep the full-resolution data indefinitely than have RRD water down the data over time. In the end, my guess is that this will require a monitoring engine that: uses a database (MySQL, SQLite, etc.) for configuration and data storage; exposes an API for adding/removing hosts and services. Are there other things I should be thinking about when evaluating options? Perhaps I'm over-thinking this, though, and just ought to run sar at 1-minute intervals on these short-lived instances and collect the sar db files prior to termination.
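    The sar idea at the end is workable as-is; a sketch of what the instance's startup and teardown hooks might run (the bucket name and paths are assumptions):

        # at boot: sample all counters once a minute until shutdown
        ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        sar -o /tmp/sa_$ID 60 >/dev/null 2>&1 &

        # at job completion, before terminate: ship the binary sar file
        aws s3 cp /tmp/sa_$ID s3://my-perf-bucket/sar/$ID

    The collected files can then be replayed later with sar -f, keyed by the instance ID the internal database already tracks.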

    Read the article

  • Why does my microwave kill the Wi-Fi?

    - by Ohlin
    Every time I start the microwave in the kitchen, our home Wi-Fi stops working and all devices lose their connection to the router! The kitchen and the Wi-Fi router are at opposite ends of the apartment, but devices are being used a little here and there. We've been annoyed by the instability of the Wi-Fi for some time, and it wasn't until recently that we realized it was correlated with microwave usage. After some testing with the microwave on and off, we could narrow the problem down to occurring only when the router is in b/g/n mode and uses a fixed channel. If I change to b/g mode or set the channel to auto, then there is no problem any more... but still! The router is a Zyxel P-661HNU ("802.11n Wireless ADSL2+ 4-port Security Gateway", with the latest firmware) and the microwave is made by Neff, rated at 1000W (if this information is useful to anyone). There is an "internet connection" light on the router, and it doesn't go out when the interruption occurs, so I think this is purely an internal Wi-Fi issue. Now to my questions: What parts of the Wi-Fi can possibly be affected by microwave usage? Frequency? Disturbances in the electrical system? How can setting the channel to Auto make a difference? I thought the different channels were just some kind of separation system within the same frequency spectrum? Could this be a sign that the microwave is malfunctioning and slowly roasting us all at home? Is there any need to be worried? Since we were able to find router settings that cooperate well with our microwave's demand for attention, this question is mainly out of curiosity. But like most people out there... I just can't help the fact that I need to know how it's possible :-)

    Read the article
