Search Results

Search found 5942 results on 238 pages for 'total starnger'.


  • Troubleshooting High Load on Plesk LAMP Dedicated Server

    - by Callmeed
    I have 2 nearly identical dedicated servers with the same provider. They also run a nearly identical software stack: RedHat 5 64-bit, Plesk, PHP, Apache, & MySQL. We use them for hosting custom sites we build. The problem is, while our 1st server has a load average (in top) of around 0.3, the 2nd server consistently has a load average of around 4.0 or higher. Basic functions in Plesk are delayed and there is a bit of latency when executing shell commands. Anyone have ideas why it would be so high? And why it would differ from our other server so much? Here is my current top output (sorted by %MEM) ... Any help is much appreciated ... top - 21:48:04 up 100 days, 4:28, 1 user, load average: 3.74, 4.20, 4.23 Tasks: 336 total, 1 running, 335 sleeping, 0 stopped, 0 zombie Cpu(s): 0.8%us, 0.4%sy, 0.0%ni, 91.3%id, 7.5%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 12290884k total, 11886452k used, 404432k free, 2920212k buffers Swap: 2096472k total, 244k used, 2096228k free, 6560692k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 22536 apache 15 0 860m 547m 6484 S 0.0 4.6 0:10.96 httpd 26467 apache 15 0 859m 546m 6408 S 0.0 4.5 0:07.67 httpd 3620 apache 15 0 859m 545m 5552 S 0.0 4.5 0:06.15 httpd 1895 apache 15 0 858m 544m 6356 S 0.0 4.5 0:08.25 httpd 16933 apache 15 0 858m 544m 5488 S 0.0 4.5 0:01.57 httpd 6431 apache 15 0 856m 542m 6076 S 10.6 4.5 0:05.32 httpd 14417 apache 15 0 856m 542m 5568 S 0.0 4.5 0:03.88 httpd 15403 apache 15 0 855m 541m 5616 S 0.0 4.5 0:03.73 httpd 19165 apache 15 0 853m 539m 6252 S 0.0 4.5 0:12.40 httpd 15898 apache 15 0 852m 539m 5376 S 0.0 4.5 0:02.68 httpd 14401 apache 15 0 851m 538m 5460 S 0.0 4.5 0:02.97 httpd 15393 apache 15 0 851m 538m 5404 S 0.0 4.5 0:03.12 httpd 15427 apache 15 0 851m 538m 5496 S 0.0 4.5 0:02.44 httpd 14412 apache 15 0 851m 538m 5324 S 0.0 4.5 0:02.15 httpd 18330 apache 15 0 851m 537m 5136 S 0.0 4.5 0:01.30 httpd 18303 apache 15 0 848m 535m 5140 S 0.0 4.5 0:00.47 httpd 21190 apache 15 0 845m 533m 3988 S 0.0 4.4 0:00.33 httpd 15923 root 18 0 822m 521m 9928 S 0.0 4.3 10:04.81 httpd 22021 apache 15 0 828m 520m 4964 S 0.0 4.3 0:00.16 httpd 22146 apache 15 0 823m 515m 3016 S 0.0 4.3 0:00.02 httpd 22345 apache 15 0 822m 514m 2408 S 0.0 4.3 0:00.00 httpd 14721 apache 15 0 733m 510m 488 S 0.0 4.3 0:00.00 httpd 5094 root 15 0 1452m 122m 15m S 1.0 1.0 852:24.24 java 4636 mysql 15 0 532m 57m 6440 S 1.0 0.5 488:05.84 mysqld 4799 popuser 15 0 166m 53m 2368 S 0.0 0.4 0:36.64 spamd 16761 popuser 15 0 159m 46m 2312 S 0.0 0.4 0:00.38 spamd 4797 root 15 0 158m 45m 2448 S 0.0 0.4 0:01.27 spamd 5074 root 34 19 255m 20m 2144 S 0.0 0.2 1:37.53 yum-updatesd 9917 named 15 0 366m 9804 1980 S 0.0 0.1 0:10.26 named 4332 sso 18 0 119m 8028 5212 S 0.0 0.1 0:00.06 sw-engine-cgi 4341 sso 18 0 119m 8028 5212 S 0.0 0.1 0:00.07 sw-engine-cgi 4350 sso 18 0 119m 8028 5212 S 0.0 0.1 0:00.09 sw-engine-cgi 4352 sso 18 0 119m 8028 5212 S 0.0 0.1 0:00.11 sw-engine-cgi 4376 ntp 15 0 23388 5020 3896 S 0.0 0.0 0:00.58 ntpd 4331 sw-cp-se 15 0 61336 4572 1480 S 0.0 0.0 5:53.22 sw-cp-serverd 4213 haldaemo 15 0 31252 4460 1684 S 0.0 0.0 0:01.52 hald 4778 postgres 18 0 117m 4164 3484 S 0.0 0.0 0:00.11 postmaster 18555 root 16 0 98.3m 3716 2852 S 0.0 0.0 0:00.01 sshd 4488 sso 18 0 119m 3044 224 S 0.0 0.0 0:00.00 sw-engine-cgi 4489 sso 18 0 119m 3044 224 S 0.0 0.0 0:00.00 sw-engine-cgi 4492 sso 18 0 119m 3044 224 S 0.0 0.0 0:00.00 sw-engine-cgi 4493 sso 18 0 119m 3044 224 S 0.0 0.0 0:00.00 sw-engine-cgi 4490 sso 18 0 119m 3040 220 S 0.0 0.0 0:00.00 sw-engine-cgi
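
    The snapshot itself narrows things down: the CPUs are over 90% idle but 7.5% of the time is iowait, and each httpd child is holding roughly 540 MB resident, so the first suspects are disk wait and Apache memory pressure rather than raw CPU. A minimal diagnostic sketch, assuming the sysstat package and the stock RHEL 5 Apache paths:

      iostat -x 5 3                                # per-device utilization and await (sysstat package)
      vmstat 5 5                                   # the 'b' column counts tasks blocked on I/O
      ps -eo state,pid,user,cmd | awk '$1 == "D"'  # tasks in uninterruptible (disk) sleep
      # Roughly how much RAM Apache is allowed to commit: average httpd RSS x MaxClients
      grep -i '^[[:space:]]*MaxClients' /etc/httpd/conf/httpd.conf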

    Read the article

  • MySQL Memory usage

    - by Rob Stevenson-Leggett
    Our MySQL server seems to be using a lot of memory. I've tried looking for slow queries and queries with no index and have halved the peak CPU usage and Apache memory usage but the MySQL memory stays constantly at 2.2GB (~51% of available memory on the server). Here's the graph from Plesk. Running top in the SSH window shows the same figures. Does anyone have any ideas on why the memory usage is constant like this and not peaks and troughs with usage of the app? Here's the output of the MySQL Tuning Primer script: -- MYSQL PERFORMANCE TUNING PRIMER -- - By: Matthew Montgomery - MySQL Version 5.0.77-log x86_64 Uptime = 1 days 14 hrs 4 min 21 sec Avg. qps = 22 Total Questions = 3059456 Threads Connected = 13 Warning: Server has not been running for at least 48hrs. It may not be safe to use these recommendations To find out more information on how each of these runtime variables effects performance visit: http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html Visit http://www.mysql.com/products/enterprise/advisors.html for info about MySQL's Enterprise Monitoring and Advisory Service SLOW QUERIES The slow query log is enabled. Current long_query_time = 1 sec. You have 6 out of 3059477 that take longer than 1 sec. to complete Your long_query_time seems to be fine BINARY UPDATE LOG The binary update log is NOT enabled. You will not be able to do point in time recovery See http://dev.mysql.com/doc/refman/5.0/en/point-in-time-recovery.html WORKER THREADS Current thread_cache_size = 0 Current threads_cached = 0 Current threads_per_sec = 2 Historic threads_per_sec = 0 Threads created per/sec are overrunning threads cached You should raise thread_cache_size MAX CONNECTIONS Current max_connections = 100 Current threads_connected = 14 Historic max_used_connections = 20 The number of used connections is 20% of the configured maximum. Your max_connections variable seems to be fine. INNODB STATUS Current InnoDB index space = 6 M Current InnoDB data space = 18 M Current InnoDB buffer pool free = 0 % Current innodb_buffer_pool_size = 8 M Depending on how much space your innodb indexes take up it may be safe to increase this value to up to 2 / 3 of total system memory MEMORY USAGE Max Memory Ever Allocated : 2.07 G Configured Max Per-thread Buffers : 274 M Configured Max Global Buffers : 2.01 G Configured Max Memory Limit : 2.28 G Physical Memory : 3.84 G Max memory limit seem to be within acceptable norms KEY BUFFER Current MyISAM index space = 4 M Current key_buffer_size = 7 M Key cache miss rate is 1 : 40 Key buffer free ratio = 81 % Your key_buffer_size seems to be fine QUERY CACHE Query cache is supported but not enabled Perhaps you should set the query_cache_size SORT OPERATIONS Current sort_buffer_size = 2 M Current read_rnd_buffer_size = 256 K Sort buffer seems to be fine JOINS Current join_buffer_size = 132.00 K You have had 16 queries where a join could not use an index properly You should enable "log-queries-not-using-indexes" Then look for non indexed joins in the slow query log. If you are unable to optimize your queries you may want to increase your join_buffer_size to accommodate larger joins in one pass. Note! This script will still suggest raising the join_buffer_size when ANY joins not using indexes are found. OPEN FILES LIMIT Current open_files_limit = 1024 files The open_files_limit should typically be set to at least 2x-3x that of table_cache if you have heavy MyISAM usage. 
Your open_files_limit value seems to be fine TABLE CACHE Current table_cache value = 64 tables You have a total of 426 tables You have 64 open tables. Current table_cache hit rate is 1% , while 100% of your table cache is in use You should probably increase your table_cache TEMP TABLES Current max_heap_table_size = 16 M Current tmp_table_size = 32 M Of 15134 temp tables, 9% were created on disk Effective in-memory tmp_table_size is limited to max_heap_table_size. Created disk tmp tables ratio seems fine TABLE SCANS Current read_buffer_size = 128 K Current table scan ratio = 2915 : 1 read_buffer_size seems to be fine TABLE LOCKING Current Lock Wait ratio = 1 : 142213 Your table locking seems to be fine The app is a facebook game with about 50-100 concurrent users. Thanks, Rob
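
    On the flat memory line: MySQL allocates its global buffers once at startup and does not hand that memory back to the OS while it runs, so a constant figure close to the primer's "Max Memory Ever Allocated : 2.07 G" / "Configured Max Memory Limit : 2.28 G" is expected rather than alarming. If the aim is to act on the primer's suggestions, here is a hedged sketch of the [mysqld] settings it points at; the values are illustrative starting points, not measured recommendations, and mysqld needs a restart after editing /etc/my.cnf:

      # /etc/my.cnf, [mysqld] section -- illustrative values derived from the primer output above
      thread_cache_size       = 8      # threads created per second are overrunning threads cached
      table_cache             = 512    # 426 tables, and the 64-entry cache is 100% in use
      query_cache_size        = 32M    # query cache is supported but not enabled
      innodb_buffer_pool_size = 64M    # InnoDB data + index is only ~24 M here, so a small pool is plenty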

    Read the article

  • Degraded RAID5 and no md superblock on one of remaining drive

    - by ark1214
    This is actually on a QNAP TS-509 NAS. The RAID is basically a Linux RAID. The NAS was configured with RAID 5 with 5 drives (/md0 with /dev/sd[abcde]3). At some point, /dev/sde failed and drive was replaced. While rebuilding (and not completed), the NAS rebooted itself and /dev/sdc dropped out of the array. Now the array can't start because essentially 2 drives have dropped out. I disconnected /dev/sde and hoped that /md0 can resume in degraded mode, but no luck.. Further investigation shows that /dev/sdc3 has no md superblock. The data should be good since the array was unable to assemble after /dev/sdc dropped off. All the searches I done showed how to reassemble the array assuming 1 bad drive. But I think I just need to restore the superblock on /dev/sdc3 and that should bring the array up to a degraded mode which will allow me to backup data and then proceed with rebuilding with adding /dev/sde. Any help would be greatly appreciated. mdstat does not show /dev/md0 # cat /proc/mdstat Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] md5 : active raid1 sdd2[2](S) sdc2[3](S) sdb2[1] sda2[0] 530048 blocks [2/2] [UU] md13 : active raid1 sdd4[3] sdc4[2] sdb4[1] sda4[0] 458880 blocks [5/4] [UUUU_] bitmap: 40/57 pages [160KB], 4KB chunk md9 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0] 530048 blocks [5/4] [UUUU_] bitmap: 33/65 pages [132KB], 4KB chunk mdadm show /dev/md0 is still there # mdadm --examine --scan ARRAY /dev/md9 level=raid1 num-devices=5 UUID=271bf0f7:faf1f2c2:967631a4:3c0fa888 ARRAY /dev/md5 level=raid1 num-devices=2 UUID=0d75de26:0759d153:5524b8ea:86a3ee0d spares=2 ARRAY /dev/md0 level=raid5 num-devices=5 UUID=ce3e369b:4ff9ddd2:3639798a:e3889841 ARRAY /dev/md13 level=raid1 num-devices=5 UUID=7384c159:ea48a152:a1cdc8f2:c8d79a9c With /dev/sde removed, here is the mdadm examine output showing sdc3 has no md superblock # mdadm --examine /dev/sda3 /dev/sda3: Magic : a92b4efc Version : 00.90.00 UUID : ce3e369b:4ff9ddd2:3639798a:e3889841 Creation Time : Sat Dec 8 15:01:19 2012 Raid Level : raid5 Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB) Array Size : 5854278400 (5583.08 GiB 5994.78 GB) Raid Devices : 5 Total Devices : 4 Preferred Minor : 0 Update Time : Sat Dec 8 15:06:17 2012 State : active Active Devices : 4 Working Devices : 4 Failed Devices : 1 Spare Devices : 0 Checksum : d9e9ff0e - correct Events : 0.394 Layout : left-symmetric Chunk Size : 64K Number Major Minor RaidDevice State this 0 8 3 0 active sync /dev/sda3 0 0 8 3 0 active sync /dev/sda3 1 1 8 19 1 active sync /dev/sdb3 2 2 8 35 2 active sync /dev/sdc3 3 3 8 51 3 active sync /dev/sdd3 4 4 0 0 4 faulty removed [~] # mdadm --examine /dev/sdb3 /dev/sdb3: Magic : a92b4efc Version : 00.90.00 UUID : ce3e369b:4ff9ddd2:3639798a:e3889841 Creation Time : Sat Dec 8 15:01:19 2012 Raid Level : raid5 Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB) Array Size : 5854278400 (5583.08 GiB 5994.78 GB) Raid Devices : 5 Total Devices : 4 Preferred Minor : 0 Update Time : Sat Dec 8 15:06:17 2012 State : active Active Devices : 4 Working Devices : 4 Failed Devices : 1 Spare Devices : 0 Checksum : d9e9ff20 - correct Events : 0.394 Layout : left-symmetric Chunk Size : 64K Number Major Minor RaidDevice State this 1 8 19 1 active sync /dev/sdb3 0 0 8 3 0 active sync /dev/sda3 1 1 8 19 1 active sync /dev/sdb3 2 2 8 35 2 active sync /dev/sdc3 3 3 8 51 3 active sync /dev/sdd3 4 4 0 0 4 faulty removed [~] # mdadm --examine /dev/sdc3 mdadm: No md superblock detected on /dev/sdc3. 
[~] # mdadm --examine /dev/sdd3 /dev/sdd3: Magic : a92b4efc Version : 00.90.00 UUID : ce3e369b:4ff9ddd2:3639798a:e3889841 Creation Time : Sat Dec 8 15:01:19 2012 Raid Level : raid5 Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB) Array Size : 5854278400 (5583.08 GiB 5994.78 GB) Raid Devices : 5 Total Devices : 4 Preferred Minor : 0 Update Time : Sat Dec 8 15:06:17 2012 State : active Active Devices : 4 Working Devices : 4 Failed Devices : 1 Spare Devices : 0 Checksum : d9e9ff44 - correct Events : 0.394 Layout : left-symmetric Chunk Size : 64K Number Major Minor RaidDevice State this 3 8 51 3 active sync /dev/sdd3 0 0 8 3 0 active sync /dev/sda3 1 1 8 19 1 active sync /dev/sdb3 2 2 8 35 2 active sync /dev/sdc3 3 3 8 51 3 active sync /dev/sdd3 4 4 0 0 4 faulty removed fdisk output shows /dev/sdc3 partition is still there. [~] # fdisk -l Disk /dev/sdx: 128 MB, 128057344 bytes 8 heads, 32 sectors/track, 977 cylinders Units = cylinders of 256 * 512 = 131072 bytes Device Boot Start End Blocks Id System /dev/sdx1 1 8 1008 83 Linux /dev/sdx2 9 440 55296 83 Linux /dev/sdx3 441 872 55296 83 Linux /dev/sdx4 873 977 13440 5 Extended /dev/sdx5 873 913 5232 83 Linux /dev/sdx6 914 977 8176 83 Linux Disk /dev/sda: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 * 1 66 530113+ 83 Linux /dev/sda2 67 132 530145 82 Linux swap / Solaris /dev/sda3 133 182338 1463569695 83 Linux /dev/sda4 182339 182400 498015 83 Linux Disk /dev/sda4: 469 MB, 469893120 bytes 2 heads, 4 sectors/track, 114720 cylinders Units = cylinders of 8 * 512 = 4096 bytes Disk /dev/sda4 doesn't contain a valid partition table Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdb1 * 1 66 530113+ 83 Linux /dev/sdb2 67 132 530145 82 Linux swap / Solaris /dev/sdb3 133 182338 1463569695 83 Linux /dev/sdb4 182339 182400 498015 83 Linux Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdc1 1 66 530125 83 Linux /dev/sdc2 67 132 530142 83 Linux /dev/sdc3 133 182338 1463569693 83 Linux /dev/sdc4 182339 182400 498012 83 Linux Disk /dev/sdd: 2000.3 GB, 2000398934016 bytes 255 heads, 63 sectors/track, 243201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdd1 1 66 530125 83 Linux /dev/sdd2 67 132 530142 83 Linux /dev/sdd3 133 243138 1951945693 83 Linux /dev/sdd4 243139 243200 498012 83 Linux Disk /dev/md9: 542 MB, 542769152 bytes 2 heads, 4 sectors/track, 132512 cylinders Units = cylinders of 8 * 512 = 4096 bytes Disk /dev/md9 doesn't contain a valid partition table Disk /dev/md5: 542 MB, 542769152 bytes 2 heads, 4 sectors/track, 132512 cylinders Units = cylinders of 8 * 512 = 4096 bytes Disk /dev/md5 doesn't contain a valid partition table
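
    Because the surviving superblocks still list sdc3 as "active sync", the usual way out is to re-create the array over the same partitions with --assume-clean, which rewrites the superblocks without touching the data blocks, then check it read-only. This is only a sketch and it is destructive if any parameter or the device order is wrong: the metadata version, chunk size, layout and member order below are copied from the --examine output above, sde stays out as "missing", and ideally this is done against dd images of the disks rather than the originals.

      mdadm --stop /dev/md0
      mdadm --create /dev/md0 --assume-clean --metadata=0.90 --level=5 \
            --chunk=64 --layout=left-symmetric --raid-devices=5 \
            /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 missing
      mdadm --detail /dev/md0              # verify size, level and member order before going further
      mount -o ro /dev/md0 /mnt/recovery   # read-only first; the mount point is illustrative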

    Read the article

  • Why would VMware go defunct? How to recover from/prevent it?

    - by Josh
    I am running VMWare Server 2.0.2 (Build 203138) on a dual core Intel i5 with Ubuntu Server 10.04 LTS system (kernel 2.6.32-22-server #33-Ubuntu SMP). Disk Subsystem is a software RAID5 array. The system has been set up for a little over a week. For the past 5 days I have been running at leat 3 VMs (Linux and a variety of Windows OSes) with no issues whatsoever. But while I was installing Linux onto a new VM, suddenly all VMs became unresponsive, including the one I was installing to. I could not log in to the VMWare Management Interface, and the system was somewhat unresponsive via SSH. When I looked at top, I saw: top - 16:14:51 up 6 days, 1:49, 8 users, load average: 24.29, 24.33 17.54 Tasks: 203 total, 7 running, 195 sleeping, 0 stopped, 1 zombie Cpu(s): 0.2%us, 25.6%sy, 0.0%ni, 74.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 8056656k total, 5927580k used, 2129076k free, 20320k buffers Swap: 7811064k total, 240216k used, 7570848k free, 5045884k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 21549 root 39 19 0 0 0 Z 100 0.0 15:02.44 [vmware-vmx] <defunct> 2115 root 20 0 0 0 0 S 1 0.0 170:32.08 [vmware-rtc] 2231 root 21 1 1494m 126m 100m S 1 1.6 892:58.05 /usr/lib/vmware/bin/vmware-vmx -# product=2; 2280 jnet 20 0 19320 1164 800 R 0 0.0 30:04.55 top 12236 root 20 0 833m 41m 34m S 0 0.5 88:34.24 /usr/lib/vmware/bin/vmware-vmx -# product=2; 1 root 20 0 23704 1476 920 S 0 0.0 0:00.80 /sbin/init 2 root 20 0 0 0 0 S 0 0.0 0:00.01 [kthreadd] 3 root RT 0 0 0 0 S 0 0.0 0:00.00 [migration/0] 4 root 20 0 0 0 0 S 0 0.0 0:00.84 [ksoftirqd/0] 5 root RT 0 0 0 0 S 0 0.0 0:00.00 [watchdog/0] 6 root RT 0 0 0 0 S 0 0.0 0:00.00 [migration/1] The VMWare process for the virtual machine I was installing into became a zombie. Yet, it was still consuming 100% of the CPU time on one of the cores, and I couldn't reach it or any other virtual machines. (I was logged in to one virtual machine over SSH, another via X11, and a third via VNC. All three connections died). When I ran ps -ef and similar commands, I found that the defunct vmware-vmx process had it's parent PID set to init (1). I also used lsof -p 21549 and found that the defunct process had no open files. Yet it was using 100% of CPU time... I was unable to kill any vmware-vmx processes, including the defunct one, even with kill -9. As a last resort to resolve the situation I tried to reboot the box, however shutdown, halt, reboot, and init 6 all failed to reboot/shutdown, even when given appropriate --force settings. ControlAltDel produced a message about rebooting on the console, but the system would not reboot. I had to hard power-cycle the box to resolve the situation. (See my other question, Should I worry about the integrity of my linux software RAID5 after a crash or kernel panic?) What would cause a scenario like this? What else could I have done to resolve it besides a hard reboot? What can I do to prevent such a situation in the future?
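
    A zombie that still accounts for CPU usually means something belonging to the task never finished on the kernel side (possibly inside VMware's vmmon/vmnet modules). A small sketch of what can be tried before resorting to the power button, assuming magic SysRq is enabled; 21549 is the defunct PID from the top output above:

      # Are any threads of the defunct task still scheduled, and what are they waiting in?
      ps -eLo pid,tid,stat,pcpu,wchan:25,comm | grep 21549
      # Dump blocked-task kernel stacks to dmesg:
      echo 1 > /proc/sys/kernel/sysrq
      echo w > /proc/sysrq-trigger
      dmesg | tail -n 80
      # Emergency path that is still cleaner than pulling the power:
      echo s > /proc/sysrq-trigger   # sync
      echo u > /proc/sysrq-trigger   # remount filesystems read-only
      echo b > /proc/sysrq-trigger   # reboot immediately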

    Read the article

  • Load average has been high for some time

    - by user111196
    We have a dedicated MySQL server and below is the a snapshot of the top. The load average has been staying at nearly 100 for an hour plus ready. top - 20:54:28 up 7:31, 2 users, load average: 83.08, 96.88, 106.23 Tasks: 278 total, 2 running, 274 sleeping, 2 stopped, 0 zombie Cpu0 : 18.8%us, 10.2%sy, 0.0%ni, 70.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu1 : 51.2%us, 4.3%sy, 0.0%ni, 44.2%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st Cpu2 : 9.0%us, 10.3%sy, 0.0%ni, 80.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu3 : 18.8%us, 7.4%sy, 0.0%ni, 73.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu4 : 7.8%us, 8.8%sy, 0.0%ni, 83.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu5 : 10.3%us, 8.4%sy, 0.0%ni, 81.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu6 : 6.2%us, 7.5%sy, 0.0%ni, 86.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu7 : 6.2%us, 6.2%sy, 0.0%ni, 87.3%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st Cpu8 : 8.8%us, 10.4%sy, 0.0%ni, 80.5%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st Cpu9 : 63.7%us, 4.6%sy, 0.0%ni, 12.2%id, 0.0%wa, 4.3%hi, 15.2%si, 0.0%st Cpu10 : 9.2%us, 10.2%sy, 0.0%ni, 80.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu11 : 17.3%us, 5.9%sy, 0.0%ni, 76.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu12 : 8.0%us, 8.7%sy, 0.0%ni, 83.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu13 : 10.9%us, 7.4%sy, 0.0%ni, 81.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu14 : 6.2%us, 6.9%sy, 0.0%ni, 86.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu15 : 4.8%us, 6.1%sy, 0.0%ni, 89.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 33009800k total, 23174396k used, 9835404k free, 120604k buffers Swap: 35061752k total, 0k used, 35061752k free, 16459540k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 3341 mysql 20 0 14.3g 4.6g 4240 S 417.8 14.5 1673:51 mysqld 24406 root 20 0 15008 1292 876 R 0.3 0.0 0:00.19 top 1 root 20 0 4080 852 608 S 0.0 0.0 0:01.92 init 2 root 15 -5 0 0 0 S 0.0 0.0 0:00.00 kthreadd 3 root RT -5 0 0 0 S 0.0 0.0 0:00.32 migration/0 4 root 15 -5 0 0 0 S 0.0 0.0 0:00.29 ksoftirqd/0 5 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/0 6 root RT -5 0 0 0 S 0.0 0.0 0:03.21 migration/1 7 root 15 -5 0 0 0 S 0.0 0.0 0:00.07 ksoftirqd/1 8 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/1 9 root RT -5 0 0 0 S 0.0 0.0 0:00.17 migration/2 10 root 15 -5 0 0 0 S 0.0 0.0 0:00.03 ksoftirqd/2 11 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/2 12 root RT -5 0 0 0 S 0.0 0.0 0:00.32 migration/3 13 root 15 -5 0 0 0 S 0.0 0.0 0:00.02 ksoftirqd/3 14 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/3 15 root RT -5 0 0 0 S 0.0 0.0 0:00.10 migration/4 16 root 15 -5 0 0 0 S 0.0 0.0 0:00.04 ksoftirqd/4 17 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/4 18 root RT -5 0 0 0 S 0.0 0.0 0:00.35 migration/5 We have also tried to run this command. What else command can help us diagnose the exact problem of this high load? netstat -nat |grep 3306 | awk '{print $6}' | sort | uniq -c | sort -n 1 LISTEN 1 SYN_RECV 410 ESTABLISHED 964 TIME_WAIT Output of vmstat 1: ---------------memory--------------- --swap-- --io-- --system-- -----cpu------ r b swpd free buff cache si so bi bo in cs us sy id wa st 2 0 0 12978936 30944 15172360 0 0 259 3 184 265 6 6 77 12 0
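
    With mysqld alone at roughly 418% CPU, no iowait and 410 established connections, the load is being generated inside MySQL rather than by the disks, so the next step is to see what those connections are running. A hedged sketch, assuming shell access to a MySQL account with the PROCESS privilege and the sysstat package for pidstat:

      mysql -e "SHOW FULL PROCESSLIST" > /tmp/processlist.txt
      mysql -e "SHOW ENGINE INNODB STATUS\G" | less
      mysql -e "SHOW GLOBAL STATUS LIKE 'Threads_%'; SHOW GLOBAL STATUS LIKE 'Slow_queries';"
      pidstat -u -p $(pidof mysqld) 5 6          # mysqld CPU sampled over 30 seconds
      # Enable the slow query log (long_query_time = 1 or lower) so the offending statements get captured.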

    Read the article

  • Amazon EC2 Instance - m1.medium Ubuntu 12.04 - Started to crash three days ago

    - by Joy
    The environment: Amazon EC2 Instance - m1.medium Ubuntu 12.04 Apache 2.2.22 - Running a Drupal Site Using MySQL DB Server RAM info: ~$ free -gt total used free shared buffers cached Mem: 3 1 2 0 0 0 -/+ buffers/cache: 0 2 Swap: 0 0 0 Total: 3 1 2 Hard drive info: Filesystem Size Used Avail Use% Mounted on /dev/xvda1 7.9G 4.7G 2.9G 62% / udev 1.9G 8.0K 1.9G 1% /dev tmpfs 751M 180K 750M 1% /run none 5.0M 0 5.0M 0% /run/lock none 1.9G 0 1.9G 0% /run/shm /dev/xvdb 394G 199M 374G 1% /mnt The problem About two days ago the site started failing becaue the MySQL server was shut down by Apache with the following message: kernel: [2963685.664359] [31716] 106 31716 226946 22748 0 0 0 mysqld kernel: [2963685.664730] Out of memory: Kill process 31716 (mysqld) score 23 or sacrifice child kernel: [2963685.664764] Killed process 31716 (mysqld) total-vm:907784kB, anon-rss:90992kB, file-rss:0kB kernel: [2963686.153608] init: mysql main process (31716) killed by KILL signal kernel: [2963686.169294] init: mysql main process ended, respawning That states that the VM was occupying 0.9GB, but my Ram has 2GB free, so 1GB was still left free. I understand that in Linux applications can allocate more memory than physically available. I don't know if this is the problme, it's the first time that it has started to happen. Obviously, the MySQL server tries to restart, but there's no memory for it apparently and it won't restart. Here is its error log: Plugin 'FEDERATED' is disabled. The InnoDB memory heap is disabled Mutexes and rw_locks use GCC atomic builtins Compressed tables use zlib 1.2.3.4 Initializing buffer pool, size = 128.0M InnoDB: mmap(137363456 bytes) failed; errno 12 Completed initialization of buffer pool Fatal error: cannot allocate memory for the buffer pool Plugin 'InnoDB' init function returned error. Plugin 'InnoDB' registration as a STORAGE ENGINE failed. Unknown/unsupported storage engine: InnoDB [ERROR] Aborting [Note] /usr/sbin/mysqld: Shutdown complete I simply restarted the Mysql service. About two hours later it happened again. I restarted it. Then it happened again 9 hours later. So then I thought of the MaxClients parameter of apache.conf, so I went to check it out. It was set at 150. I decided to drop it down to 60. As so: <IfModule mpm_prefork_module> ... MaxClients 60 </IfModule> <IfModule mpm_worker_module> ... MaxClients 60 </IfModule> <IfModule mpm_event_module> ... MaxClients 60 </IfModule> Once I did that, I had the apache2 service restart and it all went smoothly for 3/4 of a day. Since at night the MySQL service shut down once again, but this time it wasn't killed by the Apache2 service. Instead it called the OOM-Killer with the following message: kernel: [3104680.005312] mysqld invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0 kernel: [3104680.005351] [<ffffffff81119795>] oom_kill_process+0x85/0xb0 kernel: [3104680.548860] init: mysql main process (30821) killed by KILL signal Now I'm out of ideas. Some articles state that the ideal thing to do is change the kernel behaviour with the following (include it to the file /etc/sysctl.conf ) vm.overcommit_memory = 2 vm.overcommit_ratio = 80 So no overcommits will take place. I'm wondering if this is the way to go? Keep in mind I'm no server administrator, I have basic knowldege. Thanks a bunch in advance.
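
    The two sysctl lines do stop the kernel from overcommitting, but on a 3.7 GB instance with no swap they can simply turn OOM kills into outright allocation failures, so they are usually paired with (or replaced by) a swap file plus a smaller Apache/MySQL footprint. A sketch, assuming root on the instance; the swap file path is illustrative, and anything placed on /mnt (instance store) does not survive a stop/start:

      # Option A: the overcommit settings mentioned above
      printf 'vm.overcommit_memory = 2\nvm.overcommit_ratio = 80\n' >> /etc/sysctl.conf
      sysctl -p

      # Option B: add swap so a short memory spike no longer kills mysqld
      dd if=/dev/zero of=/var/swapfile bs=1M count=2048
      chmod 600 /var/swapfile
      mkswap /var/swapfile
      swapon /var/swapfile
      echo '/var/swapfile none swap sw 0 0' >> /etc/fstab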

    Read the article

  • After each command tmux prints: ps1_update: command not found

    - by B.I.
    On Linux Ubuntu 11.04, after each command (cd, ls, vim...) successful or not, tmux prints out as a last line ps1_update: command not found. Is there any config option I am missing? Thank you very much! tmux.conf # http://lukaszwrobel.pl/blog/tmux-tutorial-split-terminal-windows-easily # just remember that after every modification, tmux must be refreshed # to take new settings into account. # This can be achieved either by restarting it or by typing in: # tmux source-file .tmux.conf # Here is a list of a few basic tmux commands: # Ctrl+b " - split pane horizontally. # Ctrl+b % - split pane vertically. # Ctrl+b arrow key - switch pane. # Hold Ctrl+b, don't release it and hold one of the arrow keys - resize pane. # !Ctrl+b c - (c)reate a new window. # !Ctrl+b n - move to the (n)ext window. # Ctrl+b p - move to the (p)revious window. # Shift+LMB - select text. # ALT+Arrows to move among panes. # rebind default prefix to C-a unbind C-b set -g prefix C-a # use ALT+Arrows to move around panes bind -n M-Left select-pane -L bind -n M-Right select-pane -R bind -n M-Up select-pane -U bind -n M-Down select-pane -D # activity monitoring setw -g monitor-activity on set -g visual-activity on # highlight current pane set-window-option -g window-status-current-bg yellow # enable pane switching with mouse set-option -g mouse-select-pane on # read bashrc source ~/.bashrc # Sane scrolling set -g terminal-overrides 'xterm*:smcup@:rmcup@' commandline print out ($(cat)user@tiki:~/.vim$ ls autoload bash_profile bashrc bundle README.md tmux.conf vimrc xmonad xmonad-ubuntu-conf xsessionrc ps1_update: command not found ($(cat)user@tiki:~/.vim$ ll total 56 drwxrwxr-x 2 user user 4096 Mar 17 10:20 autoload/ -rw-rw-r-- 1 user user 170 Mar 17 10:20 bash_profile -rw-rw-r-- 1 user user 4004 Apr 2 11:37 bashrc drwxrwxr-x 20 user user 4096 Aug 20 10:55 bundle/ -rw-rw-r-- 1 user user 11170 Aug 20 11:24 README.md -rw-rw-r-- 1 user user 1243 Mar 17 10:20 tmux.conf ps1_update: command not found ($(cat)user@tiki:~/.vim$ And the following is plain terminal output, without tmux running user@tiki:~$ ls backup_list.md Documents Dropbox examples.desktop hakers_and_painters.md~ hyundai Music projects ror Ubuntu One Videos windows.sh Desktop Downloads elif.txt hakers_and_painters.md help.txt maqola.txt Pictures Public tmp update_background.sh VirtualBox VMs user@tiki:~$ ll total 116 -rw-rw-r-- 1 user user 380 Aug 9 17:34 backup_list.md drwxr-xr-x 6 user user 4096 Jul 15 09:26 Desktop/ drwxr-xr-x 2 user user 4096 Jul 7 11:26 Documents/ drwxr-xr-x 11 user user 20480 Aug 20 13:53 Downloads/ -rwx------ 1 user user 729 May 7 14:45 update_background.sh* drwxr-xr-x 2 user user 4096 Dec 10 2013 Videos/ drwxrwxr-x 4 user user 4096 Sep 10 2013 VirtualBox VMs/ -rwxrwxr-x 1 user user 36 Jan 11 2014 windows.sh* user@tiki:~$ cd Desktop/ user@tiki:~/Desktop$ ll total 36 -rw-rw-r-- 1 user user 3388 Jul 14 17:10 daily--report.md -rw-rw-r-- 1 user user 71 Jan 28 2014 fernandez readme.md -rw-rw-r-- 1 user user 23 Jan 28 2014 fernandez readme.md~ drwx------ 4 user user 4096 Mar 23 14:02 my_docs/ drwx------ 2 user user 4096 Feb 3 2014 Origami/ drwx------ 7 user user 4096 Feb 1 2013 Plants_vs._Zombies_v1.2.0.1065/ -rwxr-xr-x 1 user user 301 Apr 15 11:28 Sky Fight.desktop* drwx------ 2 user user 4096 Feb 11 2014 webdesign/ -rwxrwxr-x 1 user user 26 Jan 11 2014 windows.sh~* user@tiki:~/Desktop$
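
    ps1_update is not a tmux or Ubuntu command, so it is almost certainly a shell function referenced from PROMPT_COMMAND or PS1 in one of the dotfiles (the odd "($(cat)" prefix in the prompt points the same way) and never defined in the shells tmux starts. Note also that "source ~/.bashrc" inside tmux.conf does not do what it appears to: tmux's source-file expects tmux commands, not shell, so that line can simply be removed. A hedged sketch for tracking the function down and guarding the call; ps1_update below stands for whatever definition the grep turns up:

      # Where is ps1_update referenced or defined? (bashrc/bash_profile under ~/.vim are the ones listed above)
      grep -n ps1_update ~/.bashrc ~/.bash_profile ~/.profile ~/.vim/bashrc ~/.vim/bash_profile 2>/dev/null
      echo "$PROMPT_COMMAND"

      # In the file that sets it, only call the function if it is actually defined:
      if declare -F ps1_update >/dev/null; then
          PROMPT_COMMAND=ps1_update
      fi

      # If the definition lives in a login-only file, have tmux start login shells (tmux.conf):
      set -g default-command "bash -l"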

    Read the article

  • Where is my VMware Workstation FreeNAS CIFS (ZFS) bottleneck?

    - by maka
    Background: I'm building a quiet HTPC + NAS that is also supposed to be used for general computer usage. I'm so far generally happy with things, it was just that I was expecting a little better IO performance. I have no clue if my expectations are unreal. The NAS is there as a general purpose file storage and as a media server for XBMC and other devices. ZFS is a requirement. Question: Where is my bottle-neck, and is there anything I can do config wise, to improve my performance? I'm thinking VM-disk settings could be something but I really have no idea where to go since I'm neither experienced with FreeNAS nor VMware-WS. Tests: When I'm on the host OS and copy files (from the SSD) to the CIFS share, I get around 30 Mbytes/sec read and write. When I'm on my laptop laptop, wired to the network, I get about the same specs. The test I've done are with a 16 GB ISO, and with about 200 MB of RARs and I've tried avoiding the RAM-cache by reading different files than the ones I'm writing ( 10 GB). It feels like having less CPU cores is a lot more efficient, since the resource manager in Windows reports less CPU-usage. With 4 cores in VMware, CPU usage was 50-80%, with 1 core it was 25-60%. EDIT: HD ActiveTime was quite high on SSD so I moved the page file, disabled hibernate and enabled Win DiskCache both on SSD and RAID. This resulted in no real performance difference for one file, but if i transferred 2 files the total speed went up to 50 Mbytes/s vs ~40. The ActiveTime avg also went down a lot (to ~20%) but has now higher bursts. DiskIO is on ~ 30-35 Mbytes/s avgs, with ~100Mb bursts. Network is on 200-250Mbits/s with ~45 active TCP connections. Hardware Asus F2A85-M Pro A10-5700 16GB DDR3 1600 OCZ Vertex 2 128GB SSD 2x Generic 1tb 7200 RPM drives as RAID0 (in win7) Intel Gigabit Desktop CT Software Host OS: Win7 (SSD) VMware Worksation 9 (SSD) FreeNAS 8.3 VM (20GB VDisk on SSD) CPU: I've tried 1, 2 and 4 cores. Virtualisation engine, Preferred mode: Automatic 10,24Gb ram 50Gb SCSI VDisk on the RAID0, VDisk is formatted as ZFS and exposed through CIFS through FreeNAS. NIC Bridge, Replicate physical network state Below are two typical process print-outs while I'm transfering one file to the CIFS share. last pid: 2707; load averages: 0.60, 0.43, 0.24 up 0+00:07:05 00:34:26 32 processes: 2 running, 30 sleeping Mem: 101M Active, 53M Inact, 1620M Wired, 2188K Cache, 149M Buf, 8117M Free Swap: 4096M Total, 4096M Free PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND 2640 root 1 102 0 50164K 10364K RUN 0:25 25.98% smbd 1897 root 6 44 0 168M 74808K uwait 0:02 0.00% python last pid: 2746; load averages: 0.93, 0.60, 0.33 up 0+00:08:53 00:36:14 33 processes: 2 running, 31 sleeping Mem: 101M Active, 53M Inact, 4722M Wired, 2188K Cache, 152M Buf, 5015M Free Swap: 4096M Total, 4096M Free PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND 2640 root 1 76 0 50164K 10364K RUN 0:52 16.99% smbd 1897 root 6 44 0 168M 74816K uwait 0:02 0.00% python I'm sorry if my question isn't phrased right, I'm really bad at these kind of things, and it is the first time I post here at SU. I also appreciate any other suggestions to something, I could have missed.
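
    At roughly 30 Mbytes/s from both the host and a wired laptop, it is worth measuring each layer on its own before blaming CIFS: the ZFS pool inside the VM with no network involved, and the network path with no disks involved. A rough sketch, assuming shell access to the FreeNAS VM and that iperf is available there (it ships with FreeNAS 8.x as far as I recall); the dataset path is illustrative, and the test file should be larger than the ARC so caching does not flatter the numbers:

      # Inside the FreeNAS VM: raw pool throughput with CIFS out of the picture (FreeBSD dd syntax;
      # use real data rather than /dev/zero if compression is enabled on the dataset)
      dd if=/dev/zero of=/mnt/tank/ddtest bs=1m count=16384
      dd if=/mnt/tank/ddtest of=/dev/null bs=1m

      # Network path only: iperf server on the VM, client on the host or laptop
      iperf -s                           # on FreeNAS
      iperf -c <freenas-ip> -t 30 -P 4   # on the client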

    Read the article

  • What's going on with my server? High load, lots of idle CPU time, low disk utilization

    - by Jonathan
    I run a web site and send a legitimate opt-in, daily email newsletter to subscribers. Both the web hosting and email sending are done by the same machine. I have about 100,000 subscribers who have opted in to my daily email newsletter. My PHP script did a pretty good job sending mail to all of them until fairly recently, but as the list has grown I can't keep up. When I run top, I have very high load--usually at least 6 or 7, sometimes as high as 15--even though I only have two CPUs. However, when I run sar, my CPU is idle an average of about 30% of the time. So, it seems I'm not CPU bound. When I run iostat, it seems as though I'm not disk bound because my %util for each device is very low (no more than 5%). Given that I don't seem to be CPU bound or disk bound, why is top reporting such high load? Additionally, since I don't seem to be CPU bound or disk bound, why is my email sending script not able to keep up? Here's what I see when running top: top - 11:33:28 up 74 days, 18:49, 2 users, load average: 7.65, 8.79, 8.28 Tasks: 168 total, 5 running, 162 sleeping, 0 stopped, 1 zombie Cpu(s): 38.9%us, 58.6%sy, 0.8%ni, 0.0%id, 0.7%wa, 0.2%hi, 0.8%si, 0.0%st Mem: 3083012k total, 2144436k used, 938576k free, 281136k buffers Swap: 2048248k total, 39164k used, 2009084k free, 1470412k cached Here's what I see when running iostat -mx: avg-cpu: %user %nice %system %iowait %steal %idle 34.80 1.20 55.24 0.37 0.00 8.38 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util sda 0.19 71.70 1.59 29.45 0.02 0.07 5.90 0.55 17.82 1.16 3.59 sda1 0.00 0.00 0.00 0.00 0.00 0.00 7.10 0.00 13.80 13.72 0.00 sda2 0.05 50.45 1.13 24.57 0.01 0.29 24.25 0.35 13.43 1.15 2.97 sda3 0.05 10.17 0.20 2.33 0.01 0.05 43.75 0.05 20.96 2.45 0.62 sda4 0.00 0.00 0.00 0.00 0.00 0.00 2.00 0.00 70.50 70.50 0.00 sda5 0.07 0.22 0.03 0.07 0.00 0.00 32.84 0.08 856.19 8.03 0.08 sda6 0.02 5.45 0.03 0.72 0.00 0.02 67.55 0.02 26.72 5.26 0.39 sda7 0.00 1.56 0.00 0.42 0.00 0.01 38.04 0.00 8.88 5.84 0.24 sda8 0.01 3.84 0.20 1.35 0.00 0.02 28.55 0.05 31.90 4.08 0.63 Here's what I see when running sar: 09:40:02 AM CPU %user %nice %system %iowait %steal %idle 09:50:01 AM all 30.59 1.01 49.80 0.23 0.00 18.37 10:00:08 AM all 31.73 0.92 51.66 0.13 0.00 15.55 10:10:06 AM all 30.43 0.99 48.94 0.26 0.00 19.38 10:20:01 AM all 29.58 1.00 47.76 0.25 0.00 21.42 10:30:01 AM all 29.37 1.02 47.30 0.18 0.00 22.13 10:40:06 AM all 32.50 1.01 52.94 0.16 0.00 13.39 10:50:01 AM all 30.49 1.00 49.59 0.15 0.00 18.77 11:00:01 AM all 29.43 0.99 47.71 0.17 0.00 21.71 11:10:07 AM all 30.26 0.93 49.48 0.83 0.00 18.50 11:20:02 AM all 29.83 0.81 48.51 1.32 0.00 19.52 11:30:06 AM all 31.18 0.88 51.33 1.15 0.00 15.47 Average: all 26.21 1.15 42.62 0.48 0.00 29.54 Here are the top handful of processes listed at the particular time I happened to run top -c: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 8180 mysql 16 0 57448 19m 2948 S 26.6 0.7 4702:26 /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --pid-file=/var/lib/mysql/bristno.pid --skip-external-locking 26956 brristno 17 0 0 0 0 Z 8.0 0.0 0:00.24 [php] <defunct> 26958 brristno 17 0 94408 43m 37m R 5.0 1.4 0:00.15 /usr/bin/php /home/brristno/public_html/dbv.php 22852 nobody 16 0 9628 2900 1524 S 0.7 0.1 0:00.17 /usr/local/apache/bin/httpd -k start -DSSL 8591 brristno 34 19 96896 13m 6652 S 0.3 0.4 0:29.82 /usr/local/bin/php /home/brristno/bin/mailer.php 1qwqyb6 i0gbor 24469 nobody 16 0 9628 2880 1508 S 0.3 0.1 0:00.08 /usr/local/apache/bin/httpd -k start -DSSL 25495 
nobody 15 0 9628 2876 1500 S 0.3 0.1 0:00.06 /usr/local/apache/bin/httpd -k start -DSSL 26149 nobody 15 0 9628 2864 1504 S 0.3 0.1 0:00.04 /usr/local/apache/bin/httpd -k start -DSSL
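
    Worth remembering here: the Linux load average counts tasks in uninterruptible sleep (state D) as well as runnable ones, and 58% system CPU time is itself a red flag that usually points at heavy forking or context switching rather than user-space work. A small sketch for splitting the load into its parts and seeing what the mailer is blocked on; 8591 is the mailer.php PID from the top output, and strace is attached only briefly because it adds overhead:

      vmstat 5 5                                     # r = runnable tasks, b = uninterruptible sleep
      ps -eo state,pid,pcpu,wchan:25,cmd | awk '$1 == "R" || $1 == "D"'
      # What is the newsletter script actually waiting on?
      strace -c -f -p 8591 -o /tmp/mailer-summary.txt &
      sleep 30; kill %1                              # detach; the syscall summary lands in the file
      # Does each message fork its own sendmail? That alone can explain the fork-heavy system time.
      pgrep -c sendmail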

    Read the article

  • 1600+ 'postfix-queue' processes - OK to have this many?

    - by atomicguava
    I have a Plesk 9.5.4 CentOS server running Postfix. I had been having massive problems with the mailq being full of 'double-bounce' email messages containing errors relating to 'Queue File Write Error', but I believe these are now fixed thanks to this thread. My new problem is that when I run top, I can see lots of processes called 'postfix-queue' and have fairly high load: top - 13:59:44 up 6 days, 21:14, 1 user, load average: 2.33, 2.19, 1.96 Tasks: 1743 total, 1 running, 1742 sleeping, 0 stopped, 0 zombie Cpu(s): 5.1%us, 8.8%sy, 0.0%ni, 85.3%id, 0.8%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 3145728k total, 1950640k used, 1195088k free, 0k buffers Swap: 0k total, 0k used, 0k free, 0k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1324 apache 16 0 344m 33m 5664 S 21.7 1.1 0:03.17 httpd 32443 apache 15 0 350m 36m 6864 S 14.4 1.2 0:13.83 httpd 1678 root 15 0 13948 2568 952 R 2.0 0.1 0:00.37 top 1890 mysql 15 0 689m 318m 7600 S 1.0 10.4 219:45.23 mysqld 1394 apache 15 0 352m 41m 5972 S 0.7 1.3 0:03.91 httpd 1369 apache 15 0 344m 33m 5444 S 0.3 1.1 0:02.03 httpd 1592 apache 15 0 349m 37m 5912 S 0.3 1.2 0:02.52 httpd 1633 apache 15 0 336m 20m 1828 S 0.3 0.7 0:00.01 httpd 1952 root 19 0 335m 28m 10m S 0.3 0.9 1:35.41 httpd 1 root 15 0 10304 732 612 S 0.0 0.0 0:04.41 init 1034 mhandler 15 0 11520 1160 884 S 0.0 0.0 0:00.00 postfix-queue 1036 mhandler 15 0 11516 1120 860 S 0.0 0.0 0:00.00 postfix-queue 1041 mhandler 17 0 11516 1156 884 S 0.0 0.0 0:00.00 postfix-queue 1043 mhandler 15 0 11512 1116 860 S 0.0 0.0 0:00.00 postfix-queue 1063 mhandler 16 0 11516 1160 884 S 0.0 0.0 0:00.00 postfix-queue 1068 mhandler 15 0 11516 1128 860 S 0.0 0.0 0:00.00 postfix-queue 1071 mhandler 17 0 11512 1152 884 S 0.0 0.0 0:00.00 postfix-queue 1072 mhandler 15 0 11512 1116 860 S 0.0 0.0 0:00.00 postfix-queue 1081 mhandler 16 0 11516 1156 884 S 0.0 0.0 0:00.00 postfix-queue 1082 mhandler 15 0 11512 1120 860 S 0.0 0.0 0:00.00 postfix-queue 1089 popuser 15 0 33892 1972 1200 S 0.0 0.1 0:00.02 pop3d 1116 mhandler 16 0 11516 1164 884 S 0.0 0.0 0:00.00 postfix-queue 1117 mhandler 15 0 11516 1124 860 S 0.0 0.0 0:00.00 postfix-queue 1120 mhandler 16 0 11516 1160 884 S 0.0 0.0 0:00.00 postfix-queue 1121 mhandler 15 0 11512 1120 860 S 0.0 0.0 0:00.00 postfix-queue 1130 mhandler 17 0 11516 1160 884 S 0.0 0.0 0:00.00 postfix-queue 1131 mhandler 15 0 11516 1120 860 S 0.0 0.0 0:00.00 postfix-queue 1149 root 17 -4 12572 680 356 S 0.0 0.0 0:00.00 udevd 1181 mhandler 16 0 11516 1160 884 S 0.0 0.0 0:00.00 postfix-queue 1183 mhandler 15 0 11512 1116 860 S 0.0 0.0 0:00.00 postfix-queue 1224 mhandler 16 0 11516 1160 884 S 0.0 0.0 0:00.00 postfix-queue 1225 mhandler 15 0 11516 1120 860 S 0.0 0.0 0:00.00 postfix-queue 1228 apache 15 0 345m 34m 5472 S 0.0 1.1 0:04.64 httpd 1241 mhandler 16 0 11516 1156 884 S 0.0 0.0 0:00.00 postfix-queue 1242 mhandler 15 0 11512 1120 860 S 0.0 0.0 0:00.00 postfix-queue 1251 mhandler 17 0 11516 1156 884 S 0.0 0.0 0:00.00 postfix-queue 1252 mhandler 15 0 11516 1120 860 S 0.0 0.0 0:00.00 postfix-queue 1258 apache 15 0 349m 37m 5444 S 0.0 1.2 0:01.28 httpd When I run ps -Al | grep -c postfix-queue it returns 1618! My question is this: is this normal or is there something else going wrong with Postfix? Right now, if I run mailq it is empty, and qshape deferred / qshape active are empty too. Thanks in advance for your help.
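
    As far as I know, postfix-queue here is Plesk's mail-handler wrapper (the binary Plesk hooks into Postfix so spam/antivirus handlers run before a message is queued), so 1,600+ of them sitting at 0% CPU looks more like handler invocations that started and never finished than real queued mail, which would fit the earlier queue-file trouble. A hedged sketch for checking whether they are old and stuck or just churning normally:

      ps -C postfix-queue --no-headers | wc -l                              # current count
      ps -o pid,ppid,etime,stat,wchan:20,cmd -C postfix-queue | head -n 15  # how old, and waiting in what?
      watch -n 60 'ps -C postfix-queue --no-headers | wc -l'                # growing steadily, or stable?
      tail -n 200 /var/log/maillog | grep -vi 'status=sent'                 # handler errors/timeouts usually surface here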

    Read the article

  • RedHat 5.5 server does not show per processor memory utilization

    - by Mike S
    I have been searching all over internet but not finding any leads. I have a system with a memory leak that I am trying to troubleshoot. Unfortunately I am not able to see per processor memory utilization. Here are the outputs of TOP and PS commands. Linux SERVER_NAME 2.6.18-194.8.1.el5 #1 SMP Wed Jun 23 10:52:51 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux top - 09:17:13 up 18:43, 3 users, load average: 0.00, 0.00, 0.00 Tasks: 375 total, 1 running, 373 sleeping, 0 stopped, 1 zombie Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 32922828k total, 32776712k used, 146116k free, 267128k buffers Swap: 5245212k total, 0k used, 5245212k free, 32141044k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1 root 15 0 10348 744 620 S 0.0 0.0 0:05.65 init 2 root RT -5 0 0 0 S 0.0 0.0 0:00.05 migration/0 3 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0 4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/0 5 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/1 6 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/1 7 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/1 8 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/2 9 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/2 10 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/2 11 root RT -5 0 0 0 S 0.0 0.0 0:00.01 migration/3 12 root 34 19 0 0 0 S 0.0 0.0 0:00.01 ksoftirqd/3 13 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/3 14 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/4 15 root 34 19 0 0 0 S 0.0 0.0 0:00.01 ksoftirqd/4 16 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/4 17 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/5 18 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/5 19 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/5 20 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/6 % ps -auxf | sort -nr -k 4 | head -10 Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.7/FAQ xfs 6205 0.0 0.0 23316 3892 ? Ss Aug19 0:00 xfs -droppriv -daemon uuidd 6101 0.0 0.0 60976 224 ? Ss Aug19 0:00 /usr/sbin/uuidd USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND smmsp 6130 0.0 0.0 57900 1784 ? Ss Aug19 0:00 sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue rpc 5126 0.0 0.0 8052 632 ? Ss Aug19 0:00 portmap root 99 0.0 0.0 0 0 ? S< Aug19 0:00 [events/1] root 98 0.0 0.0 0 0 ? S< Aug19 0:00 [events/0] root 97 0.0 0.0 0 0 ? S< Aug19 0:00 [watchdog/31] root 96 0.0 0.0 0 0 ? SN Aug19 0:00 [ksoftirqd/31] root 95 0.0 0.0 0 0 ? S< Aug19 0:00 [migration/31] Any help with this is appretiate.
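
    Assuming "per processor" really means per-process memory here, the ps warning is only the mixed BSD/System V option syntax (-auxf); ps can do the memory sorting itself, and so can top. A small sketch -- note also that in this snapshot nearly all of the 32 GB "used" is page cache (32,141,044k cached, 0k swap in use), which the kernel releases on demand, so the leak has to show up in per-process RSS rather than in the Mem: summary line:

      ps aux --sort=-rss | head -n 15                                  # BSD syntax takes no leading dash
      ps -eo pid,user,rss,vsz,pmem,etime,cmd --sort=-rss | head -n 15
      # In top, press Shift+M to order the task list by resident memory.
      # For a slow leak, snapshot RSS periodically (e.g. from cron) and compare:
      ps -eo pid,rss,cmd --sort=-rss | head -n 15 > /tmp/mem.$(date +%H%M)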

    Read the article

  • Parallelism in .NET – Part 5, Partitioning of Work

    - by Reed
    When parallelizing any routine, we start by decomposing the problem.  Once the problem is understood, we need to break our work into separate tasks, so each task can be run on a different processing element.  This process is called partitioning. Partitioning our tasks is a challenging feat.  There are opposing forces at work here: too many partitions adds overhead, too few partitions leaves processors idle.  Trying to work the perfect balance between the two extremes is the goal for which we should aim.  Luckily, the Task Parallel Library automatically handles much of this process.  However, there are situations where the default partitioning may not be appropriate, and knowledge of our routines may allow us to guide the framework to making better decisions. First off, I’d like to say that this is a more advanced topic.  It is perfectly acceptable to use the parallel constructs in the framework without considering the partitioning taking place.  The default behavior in the Task Parallel Library is very well-behaved, even for unusual work loads, and should rarely be adjusted.  I have found few situations where the default partitioning behavior in the TPL is not as good or better than my own hand-written partitioning routines, and recommend using the defaults unless there is a strong, measured, and profiled reason to avoid using them.  However, understanding partitioning, and how the TPL partitions your data, helps in understanding the proper usage of the TPL. I indirectly mentioned partitioning while discussing aggregation.  Typically, our systems will have a limited number of Processing Elements (PE), which is the terminology used for hardware capable of processing a stream of instructions.  For example, in a standard Intel i7 system, there are four processor cores, each of which has two potential hardware threads due to Hyperthreading.  This gives us a total of 8 PEs – theoretically, we can have up to eight operations occurring concurrently within our system. In order to fully exploit this power, we need to partition our work into Tasks.  A task is a simple set of instructions that can be run on a PE.  Ideally, we want to have at least one task per PE in the system, since fewer tasks means that some of our processing power will be sitting idle.  A naive implementation would be to just take our data, and partition it with one element in our collection being treated as one task.  When we loop through our collection in parallel, using this approach, we’d just process one item at a time, then reuse that thread to process the next, etc.  There’s a flaw in this approach, however.  It will tend to be slower than necessary, often slower than processing the data serially. The problem is that there is overhead associated with each task.  When we take a simple foreach loop body and implement it using the TPL, we add overhead.  First, we change the body from a simple statement to a delegate, which must be invoked.  In order to invoke the delegate on a separate thread, the delegate gets added to the ThreadPool’s current work queue, and the ThreadPool must pull this off the queue, assign it to a free thread, then execute it.  If our collection had one million elements, the overhead of trying to spawn one million tasks would destroy our performance. The answer, here, is to partition our collection into groups, and have each group of elements treated as a single task.  
By adding a partitioning step, we can break our total work into small enough tasks to keep our processors busy, but large enough tasks to avoid overburdening the ThreadPool.  There are two clear, opposing goals here: Always try to keep each processor working, but also try to keep the individual partitions as large as possible. When using Parallel.For, the partitioning is always handled automatically.  At first, partitioning here seems simple.  A naive implementation would merely split the total element count up by the number of PEs in the system, and assign a chunk of data to each processor.  Many hand-written partitioning schemes work in this exactly manner.  This perfectly balanced, static partitioning scheme works very well if the amount of work is constant for each element.  However, this is rarely the case.  Often, the length of time required to process an element grows as we progress through the collection, especially if we’re doing numerical computations.  In this case, the first PEs will finish early, and sit idle waiting on the last chunks to finish.  Sometimes, work can decrease as we progress, since previous computations may be used to speed up later computations.  In this situation, the first chunks will be working far longer than the last chunks.  In order to balance the workload, many implementations create many small chunks, and reuse threads.  This adds overhead, but does provide better load balancing, which in turn improves performance. The Task Parallel Library handles this more elaborately.  Chunks are determined at runtime, and start small.  They grow slowly over time, getting larger and larger.  This tends to lead to a near optimum load balancing, even in odd cases such as increasing or decreasing workloads.  Parallel.ForEach is a bit more complicated, however. When working with a generic IEnumerable<T>, the number of items required for processing is not known in advance, and must be discovered at runtime.  In addition, since we don’t have direct access to each element, the scheduler must enumerate the collection to process it.  Since IEnumerable<T> is not thread safe, it must lock on elements as it enumerates, create temporary collections for each chunk to process, and schedule this out.  By default, it uses a partitioning method similar to the one described above.  We can see this directly by looking at the Visual Partitioning sample shipped by the Task Parallel Library team, and available as part of the Samples for Parallel Programming.  When we run the sample, with four cores and the default, Load Balancing partitioning scheme, we see this: The colored bands represent each processing core.  You can see that, when we started (at the top), we begin with very small bands of color.  As the routine progresses through the Parallel.ForEach, the chunks get larger and larger (seen by larger and larger stripes). Most of the time, this is fantastic behavior, and most likely will out perform any custom written partitioning.  However, if your routine is not scaling well, it may be due to a failure in the default partitioning to handle your specific case.  With prior knowledge about your work, it may be possible to partition data more meaningfully than the default Partitioner. There is the option to use an overload of Parallel.ForEach which takes a Partitioner<T> instance.  The Partitioner<T> class is an abstract class which allows for both static and dynamic partitioning.  
By overriding Partitioner<T>.SupportsDynamicPartitions, you can specify whether a dynamic approach is available.  If not, your custom Partitioner<T> subclass would override GetPartitions(int), which returns a list of IEnumerator<T> instances.  These are then used by the Parallel class to split work up amongst processors.  When dynamic partitioning is available, GetDynamicPartitions() is used, which returns an IEnumerable<T> for each partition.  If you do decide to implement your own Partitioner<T>, keep in mind the goals and tradeoffs of different partitioning strategies, and design appropriately. The Samples for Parallel Programming project includes a ChunkPartitioner class in the ParallelExtensionsExtras project.  This provides example code for implementing your own, custom allocation strategies, including a static allocator of a given chunk size.  Although implementing your own Partitioner<T> is possible, as I mentioned above, this is rarely required or useful in practice.  The default behavior of the TPL is very good, often better than any hand written partitioning strategy.

    Read the article

  • What’s New in The Second Edition of Regular Expressions Cookbook

    - by Jan Goyvaerts
    %COOKBOOKFRAME% The second edition of Regular Expressions Cookbook is a completely revised edition, not just a minor update. All of the content from the first edition has been updated for the latest versions of the regular expression flavors and programming languages we discuss. We’ve corrected all errors that we could find and rewritten many sections that were either unclear or lacking in detail. And lack of detail was not something the first edition was accused of. Expect the second edition to really dot all i’s and cross all t’s. A few sections were removed. In particular, we removed much talk about browser inconsistencies as modern browsers are much more compatible with the official JavaScript standard. There is plenty of new content. The second edition has 101 more pages, bringing the total to 612. It’s almost 20% bigger than the first edition. We’ve added XRegExp as an additional regex flavor to all recipes throughout the book where XRegExp provides a better solution than standard JavaScript. We did keep the standard JavaScript solutions, so you can decide which is better for your needs. The new edition adds 21 recipes, bringing the total to 146. 14 of the new recipes are in the new Source Code and Log Files chapter. These recipes demonstrate techniques that are very useful for manipulating source code in a text editor and for dealing with log files using a grep tool. Chapter 3 which has recipes for programming with regular expressions gets only one new recipe, but it’s a doozy. If anyone has ever flamed you for using a regular expression instead of a parser, you’ll now be able to tell them how you can create your own parser by mixing regular expressions with procedural code. Combined with the recipes from the new Source Code and Log Files chapter, you can create parsers for whatever custom language or file format you like. If you have any interest in regular expressions at all, whether you’re a beginner or already consider yourself an expert, you definitely need a copy of the second edition of Regular Expressions Cookbook if you didn’t already buy the first. If you did buy the first edition, and you often find yourself referring back to it, then the second edition is a very worthwhile upgrade. You can buy the second edition of Regular Expressions Cookbook from Amazon or wherever technical books are sold. Ask for ISBN 1449319432.

    Read the article

  • Speaking at Microsoft's Dutch DevDays

    - by gsusx
    Last week I had the pleasure of presenting two sessions at Microsoft's Dutch DevDays in The Hague. On Tuesday I presented a session about how to implement real-world RESTful service patterns using WCF, WCF Data Services and ASP.NET MVC2. During that session I showed a total of 15 small demos that highlighted how to implement key aspects of RESTful solutions such as Security, LowREST clients, URI modeling, Validation, Error Handling, etc. As part of those demos I used the OAuth implementation created...(read more)

    Read the article

  • Redirect network logs from syslog to another file

    - by w0rldart
    I keep logging way to much info (not needed, for now) in my syslog, and not daily or hourly... but instant. If I want to watch for something in my syslog I just can't because the network log keeps interfering. So, how can I redirect network logs to another file and/or stop logging it? Dec 10 17:01:33 user kernel: [ 8716.000587] MediaState is connected Dec 10 17:01:33 user kernel: [ 8716.000599] ==>rt_ioctl_giwmode(mode=2) Dec 10 17:01:33 user kernel: [ 8716.000601] ==>rt_ioctl_giwfreq 11 Dec 10 17:01:33 user kernel: [ 8716.000612] rt28xx_get_wireless_stats ---> Dec 10 17:01:33 user kernel: [ 8716.000615] <--- rt28xx_get_wireless_stats Dec 10 17:01:39 user kernel: [ 8722.000714] MediaState is connected Dec 10 17:01:39 user kernel: [ 8722.000729] ==>rt_ioctl_giwmode(mode=2) Dec 10 17:01:39 user kernel: [ 8722.000732] ==>rt_ioctl_giwfreq 11 Dec 10 17:01:39 user kernel: [ 8722.000747] rt28xx_get_wireless_stats ---> Dec 10 17:01:39 user kernel: [ 8722.000751] <--- rt28xx_get_wireless_stats Dec 10 17:01:44 user kernel: [ 8726.904025] QuickDRS: TxTotalCnt <= 15, train back to original rate Dec 10 17:01:45 user kernel: [ 8728.003138] MediaState is connected Dec 10 17:01:45 user kernel: [ 8728.003153] ==>rt_ioctl_giwmode(mode=2) Dec 10 17:01:45 user kernel: [ 8728.003157] ==>rt_ioctl_giwfreq 11 Dec 10 17:01:45 user kernel: [ 8728.003171] rt28xx_get_wireless_stats ---> Dec 10 17:01:45 user kernel: [ 8728.003175] <--- rt28xx_get_wireless_stats Dec 10 17:01:51 user kernel: [ 8734.004066] MediaState is connected Dec 10 17:01:51 user kernel: [ 8734.004079] ==>rt_ioctl_giwmode(mode=2) Dec 10 17:01:51 user kernel: [ 8734.004082] ==>rt_ioctl_giwfreq 11 Dec 10 17:01:51 user kernel: [ 8734.004096] rt28xx_get_wireless_stats ---> Dec 10 17:01:51 user kernel: [ 8734.004099] <--- rt28xx_get_wireless_stats Dec 10 17:01:57 user kernel: [ 8740.004108] MediaState is connected Dec 10 17:01:57 user kernel: [ 8740.004119] ==>rt_ioctl_giwmode(mode=2) Dec 10 17:01:57 user kernel: [ 8740.004121] ==>rt_ioctl_giwfreq 11 Dec 10 17:01:57 user kernel: [ 8740.004132] rt28xx_get_wireless_stats ---> Dec 10 17:01:57 user kernel: [ 8740.004135] <--- rt28xx_get_wireless_stats Dec 10 17:01:57 user kernel: [ 8740.436021] QuickDRS: TxTotalCnt <= 15, train back to original rate Dec 10 17:02:03 user kernel: [ 8746.005280] MediaState is connected Dec 10 17:02:03 user kernel: [ 8746.005294] ==>rt_ioctl_giwmode(mode=2) Dec 10 17:02:03 user kernel: [ 8746.005298] ==>rt_ioctl_giwfreq 11 Dec 10 17:02:03 user kernel: [ 8746.005312] rt28xx_get_wireless_stats ---> Dec 10 17:02:03 user kernel: [ 8746.005315] <--- rt28xx_get_wireless_stats Dec 10 17:02:09 user kernel: [ 8752.004790] MediaState is connected Dec 10 17:02:09 user kernel: [ 8752.004804] ==>rt_ioctl_giwmode(mode=2) Dec 10 17:02:09 user kernel: [ 8752.004808] ==>rt_ioctl_giwfreq 11 Dec 10 17:02:09 user kernel: [ 8752.004821] rt28xx_get_wireless_stats ---> Dec 10 17:02:09 user kernel: [ 8752.004825] <--- rt28xx_get_wireless_stats Dec 10 17:02:15 user kernel: [ 8757.984031] QuickDRS: TxTotalCnt <= 15, train back to original rate Dec 10 17:02:15 user kernel: [ 8758.004078] MediaState is connected Dec 10 17:02:15 user kernel: [ 8758.004094] ==>rt_ioctl_giwmode(mode=2) Dec 10 17:02:15 user kernel: [ 8758.004097] ==>rt_ioctl_giwfreq 11 Dec 10 17:02:15 user kernel: [ 8758.004112] rt28xx_get_wireless_stats ---> Dec 10 17:02:15 user kernel: [ 8758.004116] <--- rt28xx_get_wireless_stats Dec 10 17:02:16 user kernel: [ 8759.492017] QuickDRS: TxTotalCnt <= 15, train back to original rate Dec 10 
17:02:19 user kernel: [ 8762.002179] SCANNING, suspend MSDU transmission ... Dec 10 17:02:19 user kernel: [ 8762.004291] MlmeScanReqAction -- Send PSM Data frame for off channel RM, SCAN_IN_PROGRESS=1! Dec 10 17:02:19 user kernel: [ 8762.025055] SYNC - BBP R4 to 20MHz.l Dec 10 17:02:19 user kernel: [ 8762.027249] RT35xx: SwitchChannel#1(RF=8, Pwr0=30, Pwr1=25, 2T), N=0xF1, K=0x02, R=0x02 Dec 10 17:02:19 user kernel: [ 8762.170206] RT35xx: SwitchChannel#2(RF=8, Pwr0=30, Pwr1=25, 2T), N=0xF1, K=0x07, R=0x02 Dec 10 17:02:19 user kernel: [ 8762.318211] RT35xx: SwitchChannel#3(RF=8, Pwr0=30, Pwr1=25, 2T), N=0xF2, K=0x02, R=0x02 Dec 10 17:02:19 user kernel: [ 8762.462269] RT35xx: SwitchChannel#4(RF=8, Pwr0=30, Pwr1=25, 2T), N=0xF2, K=0x07, R=0x02 Dec 10 17:02:19 user kernel: [ 8762.606229] RT35xx: SwitchChannel#5(RF=8, Pwr0=30, Pwr1=25, 2T), N=0xF3, K=0x02, R=0x02 Dec 10 17:02:19 user kernel: [ 8762.750202] RT35xx: SwitchChannel#6(RF=8, Pwr0=30, Pwr1=25, 2T), N=0xF3, K=0x07, R=0x02 Dec 10 17:02:20 user kernel: [ 8762.894217] RT35xx: SwitchChannel#7(RF=8, Pwr0=29, Pwr1=26, 2T), N=0xF4, K=0x02, R=0x02 Dec 10 17:02:20 user kernel: [ 8763.038202] RT35xx: SwitchChannel#11(RF=8, Pwr0=29, Pwr1=26, 2T), N=0xF6, K=0x02, R=0x02 Dec 10 17:02:20 user kernel: [ 8763.040194] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:20 user kernel: [ 8763.040199] BAR(1) : Tid (0) - 03a3:037e Dec 10 17:02:20 user kernel: [ 8763.040387] SYNC - End of SCAN, restore to channel 11, Total BSS[03] Dec 10 17:02:20 user kernel: [ 8763.040400] ScanNextChannel -- Send PSM Data frame Dec 10 17:02:20 user kernel: [ 8763.040402] bFastRoamingScan ~~~~~~~~~~~~~ Get back to send data ~~~~~~~~~~~~~ Dec 10 17:02:20 user kernel: [ 8763.040405] SCAN done, resume MSDU transmission ... Dec 10 17:02:20 user kernel: [ 8763.047022] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:20 user kernel: [ 8763.047026] BAR(1) : Tid (0) - 03a3:03a5 Dec 10 17:02:21 user kernel: [ 8763.898130] bImprovedScan ............. Resume for bImprovedScan, SCAN_PENDING .............. Dec 10 17:02:21 user kernel: [ 8763.898143] SCANNING, suspend MSDU transmission ... Dec 10 17:02:21 user kernel: [ 8763.900245] MlmeScanReqAction -- Send PSM Data frame for off channel RM, SCAN_IN_PROGRESS=1! 
Dec 10 17:02:21 user kernel: [ 8763.921144] SYNC - BBP R4 to 20MHz.l Dec 10 17:02:21 user kernel: [ 8763.923339] RT35xx: SwitchChannel#8(RF=8, Pwr0=29, Pwr1=26, 2T), N=0xF4, K=0x07, R=0x02 Dec 10 17:02:21 user kernel: [ 8763.996019] QuickDRS: TxTotalCnt <= 15, train back to original rate Dec 10 17:02:21 user kernel: [ 8764.066221] RT35xx: SwitchChannel#9(RF=8, Pwr0=29, Pwr1=26, 2T), N=0xF5, K=0x02, R=0x02 Dec 10 17:02:21 user kernel: [ 8764.210212] RT35xx: SwitchChannel#10(RF=8, Pwr0=29, Pwr1=26, 2T), N=0xF5, K=0x07, R=0x02 Dec 10 17:02:21 user kernel: [ 8764.215536] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:21 user kernel: [ 8764.215542] BAR(1) : Tid (0) - 0457:0452 Dec 10 17:02:21 user kernel: [ 8764.244000] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:21 user kernel: [ 8764.244004] BAR(1) : Tid (0) - 0459:0456 Dec 10 17:02:21 user kernel: [ 8764.253019] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:21 user kernel: [ 8764.253023] BAR(1) : Tid (0) - 045c:0458 Dec 10 17:02:21 user kernel: [ 8764.256677] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:21 user kernel: [ 8764.256681] BAR(1) : Tid (0) - 045c:045b Dec 10 17:02:21 user kernel: [ 8764.259785] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:21 user kernel: [ 8764.259788] BAR(1) : Tid (0) - 045d:045b Dec 10 17:02:21 user kernel: [ 8764.280467] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:21 user kernel: [ 8764.280471] BAR(1) : Tid (0) - 045f:045c Dec 10 17:02:21 user kernel: [ 8764.282189] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:21 user kernel: [ 8764.282192] BAR(1) : Tid (0) - 045f:045e Dec 10 17:02:21 user kernel: [ 8764.354204] RT35xx: SwitchChannel#11(RF=8, Pwr0=29, Pwr1=26, 2T), N=0xF6, K=0x02, R=0x02 Dec 10 17:02:21 user kernel: [ 8764.356408] ScanNextChannel():Send PWA NullData frame to notify the associated AP! Dec 10 17:02:21 user kernel: [ 8764.498202] RT35xx: SwitchChannel#12(RF=8, Pwr0=29, Pwr1=26, 2T), N=0xF6, K=0x07, R=0x02 Dec 10 17:02:21 user kernel: [ 8764.642210] RT35xx: SwitchChannel#13(RF=8, Pwr0=30, Pwr1=28, 2T), N=0xF7, K=0x02, R=0x02 Dec 10 17:02:22 user kernel: [ 8764.790229] RT35xx: SwitchChannel#14(RF=8, Pwr0=30, Pwr1=28, 2T), N=0xF8, K=0x04, R=0x02 Dec 10 17:02:22 user kernel: [ 8764.934238] RT35xx: SwitchChannel#11(RF=8, Pwr0=29, Pwr1=26, 2T), N=0xF6, K=0x02, R=0x02 Dec 10 17:02:22 user kernel: [ 8764.935243] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:22 user kernel: [ 8764.935249] BAR(1) : Tid (0) - 048e:0485 Dec 10 17:02:22 user kernel: [ 8764.936423] SYNC - End of SCAN, restore to channel 11, Total BSS[05] Dec 10 17:02:22 user kernel: [ 8764.936436] ScanNextChannel -- Send PSM Data frame Dec 10 17:02:22 user kernel: [ 8764.936440] SCAN done, resume MSDU transmission ... Dec 10 17:02:22 user kernel: [ 8764.940529] RT35xx: SwitchChannel#11(RF=8, Pwr0=29, Pwr1=26, 2T), N=0xF6, K=0x02, R=0x02 Dec 10 17:02:22 user kernel: [ 8764.942178] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:22 user kernel: [ 8764.942182] BAR(1) : Tid (0) - 0493:048e Dec 10 17:02:22 user kernel: [ 8764.942715] CNTL - All roaming failed, restore to channel 11, Total BSS[05] Dec 10 17:02:22 user kernel: [ 8764.948016] MMCHK - No BEACON. restore R66 to the low bound(56) Dec 10 17:02:22 user kernel: [ 8764.948307] ===>rt_ioctl_giwscan. 
5(5) BSS returned, data->length = 1111 Dec 10 17:02:23 user kernel: [ 8766.048073] QuickDRS: TxTotalCnt <= 15, train back to original rate Dec 10 17:02:23 user kernel: [ 8766.552034] QuickDRS: TxTotalCnt <= 15, train back to original rate Dec 10 17:02:27 user kernel: [ 8770.001180] MediaState is connected Dec 10 17:02:27 user kernel: [ 8770.001197] ==>rt_ioctl_giwmode(mode=2) Dec 10 17:02:27 user kernel: [ 8770.001201] ==>rt_ioctl_giwfreq 11 Dec 10 17:02:27 user kernel: [ 8770.001219] rt28xx_get_wireless_stats ---> Dec 10 17:02:27 user kernel: [ 8770.001223] <--- rt28xx_get_wireless_stats Dec 10 17:02:28 user kernel: [ 8771.564020] QuickDRS: TxTotalCnt <= 15, train back to original rate Dec 10 17:02:29 user kernel: [ 8772.064031] QuickDRS: TxTotalCnt <= 15, train back to original rate
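For reference, one way to tame this, assuming the machine uses rsyslog as its syslog daemon (the default on many distributions), is a property-based filter that matches the Ralink driver strings and diverts them to a separate file. The file name and match strings below are only examples; put the rules in something like /etc/rsyslog.d/10-ralink.conf and restart rsyslog afterwards:
:msg, contains, "rt28xx_get_wireless_stats" -/var/log/wireless-debug.log
& ~
:msg, contains, "rt_ioctl_giw" -/var/log/wireless-debug.log
& ~
:msg, contains, "QuickDRS" -/var/log/wireless-debug.log
& ~
The "& ~" line tells rsyslog to discard each matched message after writing it, so it no longer reaches the main syslog; on newer rsyslog versions use "& stop" instead of "& ~". To drop the messages entirely rather than keep them in their own file, omit the file action and use the discard action alone. Note that this only moves the noise out of the way; the driver itself keeps generating the messages.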

    Read the article

  • Monitor your Hard Drive’s Health with Acronis Drive Monitor

    - by Matthew Guay
Are you worried that your computer’s hard drive could die without any warning?  Here’s how you can keep tabs on it and get the first warning signs of potential problems before you actually lose your critical data. Hard drive failures are one of the most common ways people lose important data from their computers.  As more of our memories and important documents are stored digitally, a hard drive failure can mean the loss of years of work.  Acronis Drive Monitor helps you avert these disasters by warning you at the first signs your hard drive may be having trouble.  It monitors many indicators, including heat, read/write errors, total lifespan, and more, and then notifies you via a taskbar popup or email that problems have been detected.  This early warning lets you know ahead of time that you may need to purchase a new hard drive and migrate your data before it’s too late.
Getting Started
Head over to the Acronis site to download Drive Monitor (link below).  You’ll need to enter your name and email, and then you can download this free tool.  Also, note that the download page may ask if you want to include a trial of their for-pay backup program; if you wish to install only the Drive Monitor utility, click Continue without adding.  Run the installer when the download is finished, and follow the prompts to install as normal.  Once it’s installed, you can quickly get an overview of your hard drives’ health.  Note that it shows three categories: Disk problems, Acronis backup, and Critical events.  On our computer, we had Seagate DiskWizard, an image backup utility based on Acronis Backup, installed, and Acronis detected it.  Drive Monitor stays running in your tray even when the application window is closed; it will keep monitoring your hard drives and will alert you if there’s a problem.
Find Detailed Information About Your Hard Drives
Acronis’ simple interface lets you quickly see an overview of how the drives on your computer are performing.  If you’d like more information, click the link under the description.  Here we see that one of our drives has overheated, so click Show disks to get more information.  Now you can select each of your drives and see more information about them.  From the Disk overview tab that opens by default, we see that our drive is being monitored, has been running for a total of 368 days, and that its health is good.  However, it is running at 113F, which is over the recommended max of 107F.  The S.M.A.R.T. parameters tab gives us more detailed information about our drive.  Most users wouldn’t know what an accepted value would be, so it also shows the status: if the value is within the accepted parameters it will report OK; otherwise, it will show that the drive has a problem in this area.  One very interesting piece of information we can see is the total number of Power-On Hours, Start/Stop Count, and Power Cycle Count.  These could be useful indicators to check if you’re considering purchasing a second-hand computer; simply load this program, and you’ll get a better view of how long the drive has been in use.  Finally, the Events tab shows each time the program gave a warning.  We can see that our drive, which had already been acting flaky, is routinely overheating even while our other hard drive stays within normal temperature ranges.
Monitor Acronis Backups And Critical Errors
In addition to monitoring critical stats of your hard drives, Acronis Drive Monitor also keeps up with the status of your backup software and critical events reported by Windows.  
You can access these from the front page, or via the links on the left-hand sidebar.  If you have any edition of any Acronis Backup product installed, it will show that it was detected.  Note that it can only monitor the backup status of the newest versions of Acronis Backup and True Image.  If no Acronis backup software is installed, it will show a warning that the drive may be unprotected and will give you a link to download Acronis backup software.  If you have another backup utility installed that you wish to monitor yourself, click Configure backup monitoring, and then disable monitoring on the drives you’re monitoring yourself.  Finally, you can view any detected critical events from the Critical events tab on the left.
Get Emailed When There’s a Problem
One of Drive Monitor’s best features is the ability to send you an email whenever there’s a problem.  Since this program can run on any version of Windows, including the Server and Home Server editions, you can use this feature to stay on top of your hard drives’ health even when you’re not nearby.  To set this up, click Options in the top left corner, select Alerts on the left, and then click the Change settings link to set up your email account.  Enter the email address at which you wish to receive alerts, and a name for the program.  Then enter the outgoing mail server settings for your email.  If you have a Gmail account, enter the following information:
Outgoing mail server (SMTP): smtp.gmail.com
Port: 587
Username and Password: your Gmail address and password
Check the Use encryption box, and then select TLS from the encryption options.  It will now send a test message to your email account, so check and make sure it arrived OK (if you’d like to verify these settings outside the program, see the short sketch at the end of this article).  Now you can choose to have the program automatically email you when warnings and critical alerts appear, and also to have it send regular disk status reports.
Conclusion
Whether you’ve got a brand new hard drive or one that’s seen better days, knowing its real health is one of the best ways to be prepared before disaster strikes.  It’s no substitute for regular backups, but it can help you avert problems.  Acronis Drive Monitor does this well, although we wish it weren’t so centered around Acronis’ backup offerings.
Link
Download Acronis Drive Monitor (registration required)
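Coming back to the email alert settings above: if you want to sanity-check those SMTP details outside of Drive Monitor, a minimal Python sketch using the same values the article lists (smtp.gmail.com, port 587, TLS) would look like the following. The address and password are placeholders, and Gmail may require an application-specific password for this kind of login.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Drive Monitor SMTP test"
msg["From"] = "you@gmail.com"      # placeholder address
msg["To"] = "you@gmail.com"
msg.set_content("Test message sent with the same SMTP settings Drive Monitor will use.")

with smtplib.SMTP("smtp.gmail.com", 587) as server:
    server.starttls()              # TLS, matching the "Use encryption" option
    server.login("you@gmail.com", "app-password")   # placeholder credentials
    server.send_message(msg)

If the script delivers the message, the same host, port, and credentials should work in Drive Monitor's alert settings.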

    Read the article

  • Tom Cruise: Meet Fusion Apps UX and Feel the Speed

    - by ultan o'broin
Unfortunately, I am old enough to remember, and now to admit that I really loved, the movie Top Gun. You know the one - Tom Cruise, US Navy F-14 ace pilot, Mr Maverick, crisis of confidence, meets woman, etc., etc. Anyway, one of the more memorable lines (there were a few) was: "I feel the need, the need for speed." I was reminded of Tom Cruise recently. Paraphrasing a certain Senior Vice President talking about Oracle Fusion Applications and user experience at an all-hands meeting, I heard that: Applications can never be too easy to use. Performance can never be too fast. Developers, assume that your code is always "on". Perfect. You cannot overstate the importance of application speed, or at least the perception of speed, to the user experience. We all want that super speed of execution and performance, and increasingly so as enterprise users bring the expectations of consumer IT into the work environment. Sten Vesterli (@stenvesterli), an Oracle Fusion Applications User Experience Advocate, also addressed the speed point artfully at an Oracle Usability Advisory Board meeting in Geneva. Sten asked us, the next time we Googled something, to think about the message telling us that Google has found hundreds of thousands or millions of results for us in a split second (for example, About 8,340,000 results (0.23 seconds)). Now, how many results can we see, and how many can we use immediately? Yet this simple message communicating the total results available to us works a special magic about speed, delight, and excitement that Google has made its own in the search space. And, guess what? The Oracle Application Development Framework table component relies on a similar "virtual performance boost", says Sten, when it displays the first 50 records in a table and uses a scrollbar indicating the total size of the data record set. The user scrolls and the application automatically retrieves more records as needed (a rough sketch of this pattern follows below). Application speed, and its perception by users, is worth bearing in mind the next time you're at a customer site and the IT department demands that you retrieve every record from the database. Just think of... Dave Ensor: I'll give you all the rows you ask for in one second. If you promise to use them. (Again, hat tip to Sten.) And then maybe think of... Tom Cruise. And if you want to read about the speed of Oracle Fusion Applications, and what that really means in terms of user productivity for your entire business, then check out the Oracle Applications User Experience Oracle Fusion Applications white papers on the usable apps website.
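To make that "first 50 rows, fetch more on demand" idea concrete, here is a rough, generic sketch of the lazy-fetch pattern in Python. It is not the ADF API; fetch_page is a hypothetical callback standing in for whatever actually queries the database.

PAGE_SIZE = 50

def lazy_records(fetch_page):
    """Yield records one page at a time, querying only when the consumer asks for more."""
    offset = 0
    while True:
        page = fetch_page(offset, PAGE_SIZE)   # e.g. SELECT ... OFFSET :offset LIMIT :limit
        if not page:
            return
        for record in page:
            yield record
        offset += PAGE_SIZE

# The table shows a scrollbar sized to the total row count, but rows are only
# pulled from the database as the user scrolls:
#   for row in lazy_records(query_orders):
#       render(row)

The user sees a complete-looking table immediately, while the expensive work is deferred until, and unless, it is actually needed.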

    Read the article

  • Enterprise Cloud Computing: Risk and Economics

    Cloud computing can help optimize a company's capital investments by reducing its costs for hardware, software and real estate, resulting in a much lower total cost of ownership and, ultimately, a whole new way of looking at the economics of operational IT.

    Read the article

  • Robotic Arm – Hardware

    - by Szymon Kobalczyk
This is the first in a series of articles about a project I've been building in my spare time since last summer. It all began when I was researching the topic of modeling human motion kinematics in order to create a gesture recognition library for Kinect. This ties heavily into the motion theory of robotic manipulators, so I also glanced at some designs of robotic arms. Somehow I stumbled upon this cool looking open source robotic arm: It was featured on Thingiverse and published by user jjshortcut (Jan-Jaap). Since I had been hooked for some time on toying with microcontrollers, robots and other electronics, I decided to give it a try and build it myself. In this post I will describe the hardware build of the arm, and in later posts I will be writing about the software to control it. Another reason to build the arm myself was the cost factor. Even small commercial robotic arms are quite expensive: products from Lynxmotion and Dagu look great, but both cost around USD $300 (there is actually one cheap arm available, but it looks more like a toy to me). In comparison this design is quite cheap. It uses seven hobby-grade servos, and even the cheapest ones should work fine. The structure is built from a set of laser-cut parts connected with a few metal spacers (15mm and 47mm) and lots of M3 screws. Other than that you’d only need a microcontroller board to drive the servos. So in total it comes out a lot cheaper to build it yourself than to buy an off-the-shelf robotic arm. Oh, and if you don’t like this one, there are a few more robotic arm projects at Thingiverse (including one by oomlout).
Laser cut parts
Some time ago I built another robot using laser-cut parts, so I knew the process already. You can grab the design files in both DXF and EPS format from Thingiverse, and there are also 3D models of each part in STL. The design is actually split into a second project for the mini servo gripper (there is also a standard-servo version available, but it won’t fit this arm).  I wanted to make some small adjustments to the layout and add measurements to the parts before sending them for cutting. I looked at some free 2D CAD programs and finally did all this work using QCad 3 Beta, which worked great for me (I also tried LibreCAD but it didn’t work that well). All parts are cut from 4 mm thick material. Because I was worried that acrylic is too fragile and might break, I also ordered another set cut from plywood. In the end I built it from plywood because it was easier to glue (I was told acrylic requires a special glue). Btw. I found a great laser cutter service in Kraków and highly recommend it (www.ebbox.com.pl). It cost me only USD $26 for both sets ($16 acrylic + $10 plywood).
Metal parts
I bought all the M3 screws and nuts at a local hardware store. Make sure to look for nylon lock (nyloc) nuts for the gripper, because otherwise it unscrews and comes apart quickly. I couldn’t find a local store with metal spacers and had to order them online (you’d need 11 x 47mm and 3 x 15mm). I think I paid less than USD $10 for all metal parts.
Servos
This arm uses five standard-size servos to drive the arm itself, and two micro servos on the gripper. The author of the project used the Modelcraft RS-2 Servo and Modelcraft ES-05 HT Servo. I had two Futaba S3001 servos lying around, and ordered additional TowerPro SG-5010 standard-size servos and TowerPro SG90 micro servos. However, it turned out that the SG90 wouldn’t fit in the gripper, so I had to replace it with a slightly smaller E-Sky EK2-0508 micro servo. 
Later it also turned out that the Futaba servos made some strange noise while working, so I swapped one for a TowerPro SG-5010, which has higher torque (8 kg/cm). I also bought three servo extension cables. All servos cost me USD $45.
Assembly
The build process is not difficult, but you need to think carefully about the order of assembly. You can do the base and upper arm first. Because the two servos in the base are close together, you need to install the first one, with one piece of the lower arm already attached, before you put in the second servo. Then you connect the upper arm and finally add the second piece of the lower arm to hold it together. The gripper and base require some gluing, so think that through too. Make sure to look closely at all the photos on Thingiverse (including other people's copies) and read the additional posts on jjshortcut’s blog: My mini servo grippers and completed robotic arm, and Multiply the robotic arm and electronics. Here is also Rob’s copy cut from aluminum. My assembled arm looks like this – I think it turned out really nice:
Servo controller board
The last piece of hardware I needed was an electronic board that would take commands from the PC and drive all seven servos. I could probably have used an Arduino for this task, and in fact there are several Arduino servo shields available (for example from Adafruit or Renbotics).  However, one problem is that most support only up to six servos, and another is that their accuracy is limited by the Arduino’s timer frequency. So instead I looked for a dedicated servo controller and found the series of Maestro boards from Pololu. I picked the Pololu Mini Maestro 12-Channel USB Servo Controller. It has many nice features, including a native USB connection, high-resolution pulses (0.25µs) with no jitter, built-in speed and acceleration control, and even scripting capability. Another cool feature is that besides servo control, each channel can be configured as either a general input or output. So far I’m using seven channels, so I still have five available to connect some sensors (for example, a distance sensor mounted on the gripper might be useful). And a last but important factor was that they have an SDK in .NET – what more could I wish for! The board itself is very small – half the size of a Tic-Tac box. I picked one up for about USD $35 in this store. Perhaps another good alternative would be the Phidgets Advanced Servo 8-Motor – but it is significantly more expensive at USD $87.30. The Maestro Controller Driver and Software package includes the Maestro Control Center program, which lets you immediately configure the board. For each servo I first figured out its move range and set the min/max limits. I played with the speed and acceleration values as well. A big issue for me was that there are two servos that control the position of the lower arm (shoulder joint), and both have to be moved at the same time. This is where the scripting feature of the Pololu board turned out very helpful. I wrote a script that synchronizes the position of the second servo with the first one, so now I only need to move one servo and the other will follow automatically. This turned out to be tricky because I couldn’t find a simple offset mapping of the move range for each servo – I had to divide it into several sub-ranges and map each individually (a rough sketch of this idea follows below). The scripting language is a bit assembler-like but gets the job done, and there is even runtime debugging and a stack view available. Altogether I’m very happy with the Pololu Mini Maestro Servo Controller, and with this final piece I completed the build and was able to move my arm from the Maestro Control Center program.   
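As an aside on that sub-range mapping, here is a rough Python sketch of the idea. It is not the actual Maestro script; the servo positions are made-up placeholder numbers, and the real values would come from the min/max limits set in Maestro Control Center.

# Each tuple: (master_start, master_end, follower_start, follower_end) in controller units.
# Placeholder values only; calibrate against the real servo limits.
SUB_RANGES = [
    (4000, 5200, 7900, 6800),
    (5200, 6400, 6800, 5500),
    (6400, 8000, 5500, 4100),
]

def follower_target(master_pos):
    """Map the first shoulder servo's position to a target for the second one,
    piecewise, because a single fixed offset did not fit the whole range."""
    for m0, m1, f0, f1 in SUB_RANGES:
        if m0 <= master_pos <= m1:
            t = (master_pos - m0) / (m1 - m0)   # fraction of the way through this sub-range
            return f0 + t * (f1 - f0)           # linear interpolation on the follower range
    raise ValueError("master position outside the calibrated range")

The same piecewise table translates directly into a controller script: test which sub-range the first channel's position falls into, then compute and set the second channel's target.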
The total cost of my robotic arm was:
$10 laser cut parts
$10 metal parts
$45 servos
$35 servo controller
-----------------------
$100 total
So here you have all the information about the hardware. In the next post I’ll start talking about the software that I wrote in Microsoft Robotics Developer Studio 4. Stay tuned!

    Read the article

  • Roughly, what percentage of “business” users have .NET 2.0, 3.0, 3.5, 4.0 installed?

    - by Dan W
Home use has already (somewhat) been established, but I'm curious about business users. Approximately what percentage of business users worldwide have the .NET 3.5 runtime installed? The Client Profile will do, since that's what I compile my app against (though others may be interested in the full framework, so maybe answer that too). I'm only looking for rough estimates, but I'd like to hear separate percentage figures for: 2.0, 3.0, 3.5, 3.5 CP, 4.0, 4.0 CP, 4.5 (note: percentages won't total 100%, since many users can have two or more .NET versions installed simultaneously).

    Read the article

  • Mix metrics for March 22, 2010

    - by tim.bonnemann
Mix hit another major milestone this past week, surpassing 60,000 registered members.
Registered Mix users (weekly growth): 60,662 (+0.8%)
Active users (percent of total): last 30 days: 4,571 (7.5%); last 60 days: 8,945 (14.7%); last 90 days: 11,479 (18.9%)
Traffic (30-day): visits: 12,371; page views: 70,896
Twitter: followers: 3,117; list mentions: 146
User-generated content (30-day): new ideas: 32; new questions: 74; new comments: 378
Groups: there are currently 1,394 Mix groups (requires login).

    Read the article

  • Mix metrics for April 5, 2010

    - by tim.bonnemann
Our latest numbers...
Registered Mix users (weekly growth): 61,374 (+0.6%)
Active users (percent of total): last 30 days: 4,317 (7.0%); last 60 days: 8,638 (14.1%); last 90 days: 12,481 (20.3%)
Traffic (30-day): visits: 11,893; page views: 65,880
Twitter: followers: 3,169; list mentions: 146
User-generated content (30-day): new ideas: 36; new questions: 57; new comments: 394
Groups: there are currently 1,402 Mix groups (requires login).

    Read the article

  • DundeeWealth Selects Oracle CRM On Demand as Core Platform

    - by andrea.mulder
"Oracle CRM On Demand enhances our existing Oracle platform, providing an integrated solution with incredible flexibility, mobility, agility and lowered total cost of ownership," said To Anh Tran, Senior Vice President of Business Transformation and Technology at DundeeWealth Inc. "Using Oracle as a partner in the expansion of DundeeWealth's CRM processes reinforces our client-centric approach to customer service and we believe it gives us a competitive advantage. As we begin our deployment, we are confident that Oracle is with us every step of the way." Click here to read more about DundeeWealth's plans.

    Read the article

< Previous Page | 50 51 52 53 54 55 56 57 58 59 60 61  | Next Page >