Search Results

Search found 10536 results on 422 pages for 'cpu usage'.

  • Opera 10.5 RAM usage and Google Reader?

    - by David
    Hi all, today I upgraded to Opera 10.5 from Google Chrome and I have two really important questions about it. 1) Is it normal for it to use SO MUCH RAM? Closing tabs doesn't help, and opening new ones adds to the usage. With just 4 tabs open it goes up to the 300MB mark, and I only have 1.5GB in my laptop, 596MB of which is used by the graphics card, so this is really unacceptable. Is there a way to fix it? 2) Why does Google Reader feel so slow and unresponsive in it? It lags badly when I just try scrolling through the page, even though Opera is known for being really smooth while scrolling. There's also a white bar at the bottom of the page that I can't get rid of; it blocks the "Next" and "Previous" buttons. The text between articles also sort of intersects, which looks completely unattractive and isn't something I'm used to with any web browser. I realize there's a built-in RSS reader, but it doesn't sync across multiple computers and is very late at updating. Here are my specs: Windows 7 Ultimate (x86), Intel Pentium M 1.86 GHz, 1.5GB RAM, ATI Mobility Radeon X600 (64MB dedicated, 596MB shared)

    Read the article

  • Replacing stock Core 2 Duo heatsink fan (just the fan really) with a Dell CPU fan

    - by user647345
    My old heatsink fan broke and I'm trying to connect a replacement to its plug. My Dell CPU fan has a custom Dell connector, so I snipped the old fan's wire in half and kept the motherboard plug on the end of it; I want to splice the Dell fan's wires onto that plug. The motherboard is a P5Q-E. The stock Core 2 Duo fan was 0.20A and the Dell is 0.70A; is that going to matter? Both the fan lead and the plug lead have four wires, and they share three of the four colors: red, black, and blue. Dell's fourth wire is white, while the plug's fourth wire is yellow. Is it safe to assume that I just connect the yellow and the white wires together and match the rest up? I don't want to take any risk of damaging anything. It runs fine passively without a fan, but I have SpeedStep on, so I would like to use this fan and just fasten it to the heatsink with some twist ties and paperclips and call it a day.

    Read the article

  • NFS high CPU usage

    - by user269836
    Hello, I have a very strange issue. Here is the server:

        Intel(R) Xeon(TM) MP CPU 3.16GHz

        cat /proc/cpuinfo | grep proce | wc -l
        8

        free -m
                     total       used       free     shared    buffers     cached
        Mem:         28203      27606        596          0      10789       9714
        -/+ buffers/cache:       7103      21100
        Swap:        24695          0      24695

    RAID card:

        *-storage
             description: RAID bus controller
             product: MegaRAID
             vendor: LSI Logic / Symbios Logic
             physical id: 7
             bus info: pci@0000:13:07.0
             logical name: scsi2
             version: 01
             width: 32 bits
             clock: 66MHz
             capabilities: storage pm bus_master cap_list rom
             configuration: driver=megaraid latency=32
             resources: irq:134 memory:d8ff0000-d8ffffff(prefetchable) memory:df600000-df60ffff(prefetchable)

    HDD: 10x148GB SCSI U320 15k in RAID5:

        /dev/sdb1 807G 674G 93G 88% /storage
        /dev/sdb1 /storage ext4 defaults,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,noatime,nodiratime,noacl,errors=remount-ro 0 1

    Network cards:

        ethtool -i eth0
        driver: tg3
        version: 3.116
        firmware-version: 5704-v3.36, ASFIPMIc v2.36
        bus-info: 0000:10:02.0

        ethtool -i eth1
        driver: tg3
        version: 3.116
        firmware-version: 5704-v3.36, ASFIPMIc v2.36
        bus-info: 0000:10:02.0

        ifconfig bond0
        bond0   Link encap:Ethernet  HWaddr 00:0f:1f:ff:d6:4d
                inet addr:192.168.15.71  Bcast:192.168.15.255  Mask:255.255.255.0
                inet6 addr: fe80::20f:1fff:feff:d64d/64 Scope:Link
                UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
                RX packets:1062818202 errors:0 dropped:3918 overruns:0 frame:0
                TX packets:1041317321 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:10000
                RX bytes:258867684559 (241.0 GiB)  TX bytes:396569192650 (369.3 GiB)

    This server runs only nfs-kernel-server:

        uname -a
        Linux nas2-backup 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux

    Debian 6. Here is what happens: once every day or two the load average climbs, sometimes reaching around 40, but if I restart nfs-kernel-server everything is OK, until the next day or a little later when it climbs again. The servers are connected to a D-Link DGS-1016D with 24 gigabit ports. I have tried everything to find out what the problem is and why it's happening, but I still cannot resolve this issue. Any ideas on what is happening here?
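
    A hedged first diagnostic pass, assuming Debian's stock nfs-kernel-server: check whether all nfsd threads are saturated before blaming the hardware or network. The thread-count value below is illustrative, not from this question.

        # Server-side RPC statistics; a flood of a single operation can
        # point at one misbehaving client.
        nfsstat -s

        # On 2.6.32, the "th" line's first field is the nfsd thread count
        # and the trailing histogram shows how often all threads were busy.
        cat /proc/net/rpc/nfsd

        # If the threads are saturated, raise RPCNFSDCOUNT (8 is the
        # Debian default; 32 here is just an example) and restart.
        sed -i 's/^RPCNFSDCOUNT=.*/RPCNFSDCOUNT=32/' /etc/default/nfs-kernel-server
        service nfs-kernel-server restart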

    Read the article

  • ntop to analyse bandwidth usage on multiple ASA 5505

    - by dunxd
    I have set up a NetFlow server at our data centre, which is connected via VPN to ~40 remote offices using Cisco ASA 5505s. The aim is to analyse usage data and find out exactly how the remote connections are being used. I followed http://techowto.files.wordpress.com/2008/09/ntop-guide.pdf to set up ntop and https://supportforums.cisco.com/docs/DOC-6114 to set up the ASAs. I can see from the Plugin NetFlow Statistics page that NetFlow packets from my ASAs are being received; the counter is increasing. However, I am not seeing any breakdown on the Global Traffic Statistics page after switching to the NetFlow interface; I'm just seeing a pie chart showing 100% traffic for eth0. The interface and documentation are a little hard to follow, so I am not sure I have got things configured correctly. When setting up my NetFlow-device.2 I can specify a "Virtual NetFlow Interface Network Address"; the web UI says "This value is in the form of a network address and mask on the network where the actual NetFlow probe is located." Is this a network address (e.g. 192.168.0.0/24) or an actual host IP address (e.g. 192.168.0.1/24)? If it should be a network address, is this the network one of my ASAs is on, or the network my ntop server is on? If a host IP address, is this the IP address used by eth0 on my ntop server, the IP address of an ASA, or something else? Do I need a separate virtual interface for each ASA I am collecting NetFlow data from? Any guidance would be greatly welcome.
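
    One sanity check worth running on the collector before digging further into the ntop configuration: confirm that flows from each ASA actually arrive on the expected UDP port (2055 below is an assumption; substitute whatever export port the ASAs were configured with).

        # Each remote office's ASA should appear as a packet source here.
        tcpdump -n -i eth0 udp port 2055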

    Read the article

  • dhclient requests filling memory?

    - by shanethehat
    Dammit Jim, I'm a web developer, not a sysadmin. With that out of the way: my client has a CentOS server (6.2) that is only serving a single Magento site (and the associated MySQL server), and it is frequently running out of memory, despite the site currently being open to only 5 users. I'm investigating the logs to try to figure out why the memory usage is so high, but I don't really know what I'm looking at. There are a lot of entries in /var/log/messages concerning DHCP requests, approximately one every 15 seconds, that look like this:

        Apr 7 14:23:06 s15940039 dhclient[815]: DHCPREQUEST on eth0 to 172.30.102.85 port 67 (xid=0x6b5cd2a7)

    Is this normal? I don't see anything else in here that I don't recognise, but then I'm not sure I'd know the problem if I did see it. 4 days ago the server ran out of memory completely and locked up, requiring a restart. The DHCP messages did not start up again for 23 hours, but then carried on as before. I have read this question which describes the same issue, but in my case a fresh DHCP lease never seems to be issued. Is this something I should push back to the hosting provider, or have I not yet found the source of the memory problem?
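
    A quick way to separate the two concerns, assuming a stock CentOS userland: dhclient chatter every 15 seconds usually just reflects a short lease time, so it helps to list the real memory consumers first.

        # Top 10 processes by resident memory; on a single-Magento box the
        # usual suspects are Apache/PHP children and mysqld.
        ps aux --sort=-rss | head -n 11

        # The current lease, including its duration, is recorded here
        # (path assumes CentOS's default dhclient setup for eth0).
        cat /var/lib/dhclient/dhclient-eth0.leases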

    Read the article

  • CentOS inode usage

    - by MSTF
    We are using a CentOS & cPanel server, but we have an important problem with inode usage. "df -i" shows the / filesystem using 6 million inodes, yet when I count the files under / directly there are only a few thousand.

        df -i
        Filesystem     Inodes   IUsed    IFree IUse% Mounted on
        /dev/sda4     6578176 6567525    10651  100% /
        tmpfs         8238094       1  8238093    1% /dev/shm
        /dev/sdi1    61054976     169 61054807    1% /backup
        /dev/sda1       51296      38    51258    1% /boot
        /dev/sda2           0       0        0     - /boot/efi
        /dev/sdc1     7290880    1252  7289628    1% /database
        /dev/sdb2     4096000   53258  4042742    2% /home
        /dev/sdd1     7290880    3500  7287380    1% /home2
        /dev/sde1     7290880   68909  7221971    1% /home3
        /dev/sdg1     7290880   68812  7222068    1% /home5
        /dev/sdh1     7290880  695076  6595804   10% /home6
        /dev/sdf1     7290880   58658  7232222    1% /tmp

        df -h
        Filesystem   Size  Used Avail Use% Mounted on
        /dev/sda4     99G   30G   65G  32% /
        tmpfs         32G     0   32G   0% /dev/shm
        /dev/sdi1    917G  270G  601G  32% /backup
        /dev/sda1    788M   80M  669M  11% /boot
        /dev/sda2    400M  296K  400M   1% /boot/efi
        /dev/sdc1    110G  1.5G  103G   2% /database
        /dev/sdb2     62G  1.1G   58G   2% /home
        /dev/sdd1    110G   79G   26G  76% /home2
        /dev/sde1    110G  3.9G  101G   4% /home3
        /dev/sdg1    110G   51G   54G  49% /home5
        /dev/sdh1    110G   64G   41G  62% /home6
        /dev/sdf1    110G  611M  104G   1% /tmp

    The sda disk holds just the operating system and cPanel. There are no accounts, databases, or tmp data on it. Why is sda using so many inodes? Note: all disks are 120GB SSDs. Thanks.
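
    A hedged way to locate the inode hogs on / without crossing into the other mounts (-xdev): count files per directory and rank. On cPanel boxes the mail queue, PHP session directory, or a cache under /var are common culprits, though that's an assumption rather than something visible in the df output above.

        # Tally directory entries per parent directory on the root
        # filesystem only, then show the ten biggest offenders.
        find / -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head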

    Read the article

  • High load average due to high system cpu load (%sys)

    - by Nick
    We have a server with a high-traffic website. Recently we moved from a 2 x 4 core server (8 cores in /proc/cpuinfo), 32 GB RAM, running CentOS 5.x, to a 2 x 4 core server (16 cores in /proc/cpuinfo), 32 GB RAM, running CentOS 6.3. The server runs nginx as a proxy, a MySQL server, and sphinx-search. Traffic is high, but the MySQL and sphinx-search databases are relatively small, and usually everything works blazing fast. Today the server experienced a load average of 100++. Looking at top and sar, we noticed that system CPU time (%sys) is very high, 50 to 70%. Disk utilization was less than 1%. We tried to reboot, but the problem persisted afterwards. At any moment the server had at least 3-4 GB of free RAM. The only message shown by dmesg was "possible SYN flooding on port 80. Sending cookies.". Here is a snippet of sar:

        11:00:01  CPU  %user  %nice  %system  %iowait  %steal  %idle
        11:10:01  all  21.60   0.00    66.38     0.03    0.00  11.99

    We know that this is a traffic issue, but we do not know how to proceed further or where to look for a solution. Is there a way to find out where exactly those "66.38%" are being spent? Any suggestions would be appreciated.
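
    A hedged sketch for attributing %sys on CentOS 6, assuming installing the stock perf package is acceptable: perf shows which kernel symbols the cycles go to (spinlocks or conntrack functions in the network stack would fit the SYN-flood message).

        # CentOS 6 ships the profiler in the 'perf' package.
        yum install -y perf

        # Live view of the hottest kernel/user symbols; run during a spike.
        perf top

        # Or record 30 seconds system-wide with call graphs, inspect later.
        perf record -a -g -- sleep 30
        perf report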

    Read the article

  • Bash Parallelization of CPU-intensive processes

    - by ehsanul
    tee forwards its stdin to every file specified, while pee does the same for pipes. These programs send every single line of their stdin to each and every file/pipe specified. However, I was looking for a way to "load balance" stdin across different pipes, so one line is sent to the first pipe, another line to the second, etc. It would also be nice if the stdout of the pipes were collected into one stream as well. The use case is simple parallelization of CPU-intensive processes that work on a line-by-line basis. I was doing a sed on a 14GB file, and it could have run much faster if I could use multiple sed processes. The command was like this:

        pv infile | sed 's/something//' > outfile

    To parallelize, the best would be if GNU parallel supported this functionality, like so (I made up the --demux-stdin option):

        pv infile | parallel -u -j4 --demux-stdin "sed 's/something//'" > outfile

    However, there's no option like this, and parallel always uses its stdin as arguments for the command it invokes, like xargs. So I tried this, but it's hopelessly slow, and it's clear why:

        pv infile | parallel -u -j4 "echo {} | sed 's/something//'" > outfile

    I just wanted to know if there's any other way to do this (short of coding it up myself). If there were a "load-balancing" tee (let's call it lee), I could do this:

        pv infile | lee >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile)

    Not pretty, so I'd definitely prefer something like the made-up parallel version, but this would work too.
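
    For what it's worth, a hedged note: recent GNU parallel versions do ship a stdin-demultiplexing mode, --pipe (formerly --spreadstdin), which chops stdin into line-aligned chunks, feeds each chunk to a separate job, and concatenates the jobs' stdout. Assuming a version with that option, the made-up --demux-stdin example becomes roughly:

        # Split stdin into ~1MB line-aligned chunks across 4 sed processes.
        pv infile | parallel --pipe -j4 "sed 's/something//'" > outfile

        # --block raises the chunk size if 1MB chunks add too much overhead.
        pv infile | parallel --pipe --block 10M -j4 "sed 's/something//'" > outfile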

    Read the article

  • What can cause kernel out_of_memory error?

    - by nbolton
    I'm running Debian GNU/Linux 5.0 and I'm experiencing intermittent out_of_memory errors coming from the kernel. The server stops responding to all but pings, and I have to reboot it.

        # uname -a
        Linux xxx 2.6.18-164.9.1.el5xen #1 SMP Tue Dec 15 21:31:37 EST 2009 x86_64 GNU/Linux

    This seems to be the important bit from /var/log/messages:

        Dec 28 20:16:25 slarti kernel: Call Trace:
        Dec 28 20:16:25 slarti kernel: [<ffffffff802bedff>] out_of_memory+0x8b/0x203
        Dec 28 20:16:25 slarti kernel: [<ffffffff8020f825>] __alloc_pages+0x245/0x2ce
        Dec 28 20:16:25 slarti kernel: [<ffffffff8021377f>] __do_page_cache_readahead+0xc6/0x1ab
        Dec 28 20:16:25 slarti kernel: [<ffffffff80214015>] filemap_nopage+0x14c/0x360
        Dec 28 20:16:25 slarti kernel: [<ffffffff80208ebc>] __handle_mm_fault+0x443/0x1337
        Dec 28 20:16:25 slarti kernel: [<ffffffff8026766a>] do_page_fault+0xf7b/0x12e0
        Dec 28 20:16:25 slarti kernel: [<ffffffff8026ef17>] monotonic_clock+0x35/0x7b
        Dec 28 20:16:25 slarti kernel: [<ffffffff80262da3>] thread_return+0x6c/0x113
        Dec 28 20:16:25 slarti kernel: [<ffffffff8021afef>] remove_vma+0x4c/0x53
        Dec 28 20:16:25 slarti kernel: [<ffffffff80264901>] _spin_lock_irqsave+0x9/0x14
        Dec 28 20:16:25 slarti kernel: [<ffffffff8026082b>] error_exit+0x0/0x6e

    Full snippet here: http://pastebin.com/a7eWf7VZ. I thought that perhaps the server was actually running out of memory (it has 1GB physical memory), but my Cacti memory graph looks OK to me. Strangely, though, the load graph goes through the roof shortly before the kernel crashes. What logs can I look at for more info? Update: maybe noteworthy - the CPU percentage and network traffic graphs were both normal at the time of the crash. The only abnormality was the load average graph.
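
    A hedged way to catch the lead-up to the next lockup, using nothing beyond the shell: snapshot memory state every minute so there is something to read afterwards, and pull the kernel's own per-process dump out of the log after a crash. The log path is illustrative.

        # Append a timestamped memory snapshot every 60 seconds;
        # /proc/meminfo is the raw data that free(1) summarises.
        while true; do (date; cat /proc/meminfo) >> /var/log/meminfo.log; sleep 60; done &

        # After a crash: the OOM killer logs a per-process memory table
        # around each out_of_memory event.
        grep -B5 -A30 out_of_memory /var/log/messages | less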

    Read the article

  • At what point does the performance gap between GPU & CPU become so great that the CPU is holding back a system?

    - by Matthew Galloway
    I know that, generally speaking, the GPU is the primary factor that limits gaming performance, with everything else (RAM, motherboard, PSU, CPU) being secondary in importance to the graphics card. But at some point the other components ARE going to be significant in holding back the whole system! For instance, nobody would be silly enough to play modern games with 512MB RAM and the very latest graphics card (such as an HD 7970), as I bet the performance increase over the same 512MB system with only a mid-range card would be non-existent! Thus it would be a "waste" for such a person to buy any high-end graphics card without first resolving the system's other problems. The same point applies to other components: a current high-end graphics card would be wasted on a machine with only a Pentium II! So my core question is: how do you determine at what point spending on extra GPU power is completely "wasted" for your system? (A slightly more nuanced version is trying to work out at what point the extra graphics power might not be "wasted" but would be "sub-optimal" value for money, when the expenditure should instead be split between the graphics card and other components. Obviously a gamer shouldn't always just upgrade the graphics card, but needs to balance it out.)

    Read the article

  • Passenger and Apache memory usage

    - by Brent Faulkner
    On a "CentOS release 6.2 (Final)" server (with Ruby 1.9.3 and Rails 3.2), we're using more memory than expected. Looking at passenger-memory-stats I see a couple of HUGE httpd processes. Any thoughts on how I can figure out what's going on and reduce the memory usage? Stats are included here; thanks!

        ---------- Apache processes -----------
        PID    PPID  VMSize     Private   Name
        ---------------------------------------
        1371   1     202.1 MB   0.1 MB    /usr/sbin/httpd
        4573   1371  210.2 MB   5.0 MB    /usr/sbin/httpd
        4778   1371  202.5 MB   0.6 MB    /usr/sbin/httpd
        4780   1371  217.6 MB   9.4 MB    /usr/sbin/httpd
        4781   1371  217.1 MB   9.1 MB    /usr/sbin/httpd
        4856   1371  202.4 MB   0.5 MB    /usr/sbin/httpd
        4863   1371  204.1 MB   2.1 MB    /usr/sbin/httpd
        5027   1371  202.4 MB   0.5 MB    /usr/sbin/httpd
        5043   1371  202.4 MB   0.4 MB    /usr/sbin/httpd
        5044   1371  205.5 MB   2.7 MB    /usr/sbin/httpd
        5072   1371  202.4 MB   0.5 MB    /usr/sbin/httpd
        5084   1371  202.4 MB   0.5 MB    /usr/sbin/httpd
        32111  1371  1297.0 MB  246.5 MB  /usr/sbin/httpd
        32579  1371  1914.3 MB  215.5 MB  /usr/sbin/httpd
        ### Processes: 14
        ### Total private dirty RSS: 493.42 MB

        -------- Nginx processes --------
        ### Processes: 0
        ### Total private dirty RSS: 0.00 MB

        ----- Passenger processes -----
        PID    VMSize    Private  Name
        -------------------------------
        4180   280.5 MB  24.4 MB  Passenger ApplicationSpawner: /var/www/apps/people/current
        4345   309.5 MB  53.4 MB  Rack: /var/www/apps/people/current
        4800   300.2 MB  55.2 MB  Rack: /var/www/apps/people/current
        4808   297.8 MB  52.5 MB  Rack: /var/www/apps/people/current
        4815   297.4 MB  52.4 MB  Rack: /var/www/apps/people/current
        4822   302.7 MB  55.6 MB  Rack: /var/www/apps/people/current
        22780  209.0 MB  0.0 MB   PassengerWatchdog
        22783  991.5 MB  1.3 MB   PassengerHelperAgent
        22785  113.4 MB  1.1 MB   Passenger spawn server
        22788  144.6 MB  0.0 MB   PassengerLoggingAgent
        22911  310.4 MB  64.0 MB  Rack: /var/www/apps/people/current
        22939  311.6 MB  53.5 MB  Rack: /var/www/apps/people/current
        26175  304.1 MB  55.8 MB  Rack: /var/www/apps/people/current
        26182  310.4 MB  44.0 MB  Rack: /var/www/apps/people/current
        ### Processes: 14
        ### Total private dirty RSS: 513.24 MB
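
    A hedged sketch of the usual containment knobs, assuming Passenger under Apache with default settings; the values are illustrative, not tuned for this box. Recycling workers after a fixed number of requests bounds the slow growth visible in the two bloated httpd processes.

        # In the Apache config (e.g. /etc/httpd/conf.d/passenger.conf):
        PassengerMaxPoolSize 4       # cap concurrent application processes
        PassengerPoolIdleTime 300    # reap processes idle for 5 minutes
        PassengerMaxRequests 500     # restart each process after 500 requests
                                     # to contain gradual memory growth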

    Read the article

  • Mysterious swap usage on EC2

    - by rusty
    We're in the middle of a project to move our infrastructure from a co-lo situation into Amazon EC2, and we've noticed some weird memory characteristics of the processes in our setup. Without going into too much detail about the specifics of our processes: on our EC2 instances "top" will show processes using a lot of swap space, in fact much more than the amount of available swap or (if you add it all up) more than the available disk. Here's a sample top output:

        Mem:   7136868k total,  5272300k used,  1864568k free,   256876k buffers
        Swap:  1048572k total,        0k used,  1048572k free,  2526504k cached

          PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  SWAP COMMAND
         4121 jboss  20   0 5913m 603m  14m S  0.7  8.7 3:59.90  5.2g java
        22730 root   20   0 2394m 4012 1976 S  2.0  0.1 4:20.57  2.3g PassengerHelper
        20564 rails  20   0 2539m 220m 9828 S  0.3  3.2 0:23.58  2.3g java
         1423 nscd   20   0  877m 1464  972 S  0.0  0.0 0:03.89  876m nscd

    You can see, for instance, that jboss is reportedly using 5.2 gigs of swap space, which is definitely impossible since there's only 1G allocated and none is being used (probably because there's still 1.8G of RAM free). And here are the results of uname -a:

        Linux xxx.yyy.zzz 2.6.35.14-106.53.amzn1.x86_64 #1 SMP Fri Jan 6 16:20:10 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    We're running an AMI based off the default Amazon Linux AMI (Amazon Linux AMI release 2011.09, so some RHEL 5 and RHEL 6) with not too many customizations and definitely no kernel-level customizations. Something here tells me that on this particular kernel/distribution, the reporting of swap, or maybe even total memory usage, isn't what it appears to be. Any help would be appreciated!
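
    One point worth hedging in: top's SWAP column is not measured, it is computed as VIRT minus RES, so a JVM with a large reserved-but-untouched virtual address space will "use" more SWAP than the machine has. Kernels from 2.6.34 on (this AMI's 2.6.35 qualifies) expose the real figure as VmSwap:

        # Actual swap usage per process, largest first; with 0k of swap
        # in use, every line should read "0 kB".
        grep VmSwap /proc/[0-9]*/status | sort -k2 -rn | head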

    Read the article

  • Remote Desktop svchost (networkservice) & lsa.exe high cpu usage, hangs on welcome screen

    - by Rohan1
    We have deployed an RDS farm with 12 virtual RDS servers using Hyper-V. Currently some users are not able to log on: after passing credentials to the connection broker, the session hangs on the "Welcome" screen. Using Resource Monitor we've seen that svchost (hosting the "networkservice" service group) has a CPU usage of 50%; viewing the wait chain on the process shows that it's waiting for an lsa.exe instance to finish. We can't kill any of the user's processes, even with taskkill /f. Suspending lsa.exe did work but had no effect. The network service also couldn't be restarted. Also, when this happens, the users logged on to the RDS server can't be listed: Task Manager crashes when viewing the users, RDS service manager crashes when viewing the users (even remotely), and the cmd command "query session" doesn't work. No antivirus is installed on the RDS server. The only thing we can do is reboot the server, which is not an option given that other users are in active sessions. Does anyone have ANY idea what's going on? We didn't encounter this in our pre-production setup.

    Read the article

  • How to reduce memory consumption on an AWS EC2 t1.micro instance (free tier, Ubuntu Server 14.04 LTS, EBS)

    - by CMPSoares
    Hi, I'm working on my bachelor thesis and for that I need to host a Node.js web application on AWS. In order to avoid costs I'm using a t1.micro instance with 30GB disk space (from what I know, the maximum in the free tier), which is barely used. Instead I have problems with memory consumption: it's using all of it. I tried the approach of creating a virtual swap area, as mentioned at "Why don't EC2 ubuntu images have swap?", with these commands:

        sudo dd if=/dev/zero of=/var/swapfile bs=1M count=2048 &&
        sudo chmod 600 /var/swapfile &&
        sudo mkswap /var/swapfile &&
        echo /var/swapfile none swap defaults 0 0 | sudo tee -a /etc/fstab &&
        sudo swapon -a

    But this swap area somehow isn't used. Is something missing in this approach, or is there another way of reducing the memory consumption on this type of AWS instance? Bottom line: this causes server freezes and crashes, and that's what I want to stop, either by using the swap, reducing memory usage, or both.
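
    A hedged checklist, assuming Ubuntu 14.04 defaults: first confirm the swap file is actually active, and remember the kernel only swaps under memory pressure, so an idle swap file is not necessarily broken.

        # Is the swap file registered and sized as expected?
        sudo swapon -s
        free -m

        # vm.swappiness (default 60) controls how eagerly the kernel swaps;
        # a higher value makes swap kick in sooner on a starved t1.micro.
        sudo sysctl vm.swappiness=80

        # Capping V8's old-space heap is a complementary hedge against the
        # Node process itself growing unbounded (256MB is illustrative).
        node --max-old-space-size=256 app.js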

    Read the article

  • 2008 R2 Standard, Hyper-V, and RAM usage (usable vs available)

    - by Mark
    A new server was purchased for our development team to start utilizing the full feature set of TFS, namely Lab Management. Because of the need for Lab Management we bought a fairly beefy machine to handle this task and to also act as a build machine. I have been tasked with setting up additional TFS features on this machine, starting with a build controller and eventually moving towards a full Lab Management setup using Hyper-V. My question: upon initially logging in I noticed that Windows registers 64GB of RAM but only 32GB usable. I know this is a licensing limitation, since only Standard Edition is installed. Since Hyper-V is another layer that handles the virtualization of guest OSes, is Hyper-V able to access this memory? Or is Hyper-V memory usage also limited by 2008 R2 Standard? If Hyper-V can somehow access this memory, is this how it should be set up? Or should the host 2008 R2 Standard be upgraded to Enterprise so the host can utilize the full 64GB? Before I go hog wild with TFS I wanted to ask some experts so I don't need to reinstall the OS down the road to utilize the additional 32GB. Thanks for any help or links you can share.

    Read the article

  • NonStop ODBC: how are the connections (ODBC servers) assigned to CPUs?

    - by Vladimir Dyuzhev
    We have an ODBC pool running on a NonStop server. The pool is connected to SQL/MX. This pool is used by a few external Java applications, each of which has a JDBC pool connected to the ODBC pool (e.g. 14 connections per application). Over time (after a few application recycles) we see an imbalance between CPUs: some have 8 ODBC processes running, some only 5. That leads to CPU time imbalance too. Up to this point we assumed that CPUs are assigned to ODBC processes in round-robin fashion, which would keep the number of ODBC processes more or less equally distributed. That's not the case, though. Is there any information on how the ODBC pool decides which CPU to choose for each newly allocated process? Does it look at CPU load? Available memory? Something else? Sadly, even HP's own people (available to us, that is) couldn't answer those questions with certainty. :-(

    Read the article

  • How powerful of a PC do you need to edit HD videos?

    - by Xeoncross
    I have a Core 2 Quad Q8200 (2.3GHz) with 4GB of RAM, a 512MB PCIe video card, and a SATA-2 HD. Yet it still isn't fast enough to edit 720i/p video in Sony Vegas or Adobe Premiere/After Effects. My RAM usage never peaks over 1.6GB, but my CPU cores hit 95% quickly! Right now the preview panes in all these programs lag too badly to actually work on the videos; I get to see 1-3 frames every second or two! So how fast do I have to go? At what point will my CPU be fast enough to actually edit these videos? I have to assume that regular people and their regular sub-$2k computers can actually work with this footage. Another way to answer this: how fast is the PC you use to edit videos? Update: it's worth noting that now that I have Adobe Premiere/After Effects CS4, I am more interested in getting those working than my older Vegas 6. If you didn't have to re-run the RAM preview every single time you made one change, that would be my answer. But since I like to test many filters and effects before choosing one, I have to re-render a 1-sec section of footage over and over, and it drives me nuts waiting. Perhaps a motherboard with dual Xeon chips or something would be able to handle this. It would probably cost as much as a dual-CrossFire setup and would also speed up other applications.

    Read the article

  • Load average in htop: how to decide if the server is still doing OK?

    - by Joe Huang
    I use 'htop' to monitor my web server. It's recently been quite loaded, and the load average shows something like this:

        Load average: 3.10 2.56 1.63

    I searched the web about these numbers and found an article about them: http://blog.scoutapp.com/articles/2009/07/31/understanding-load-averages. The article says that with 2 CPUs, 2.0 means 100% CPU utilization, and my VPS has two CPUs. So what does 3.1 mean? How can it exceed 100% CPU utilization? And from these numbers, should I be wary about the load now? The performance seems totally fine, and this is a managed VPS; the hosting company has not sent me any warning about it. During the daytime the load average always shows these high numbers. Here is another snapshot taken while writing:

        Load average: 3.03 2.77 1.97
        Load average: 0.41 1.29 1.60   <---- 5 minutes later

    So I am wondering how much room is left for this site to grow in the current configuration? What kind of proactive actions should I take in advance? I don't want to wait until the server bursts. Thanks.
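
    A hedged rule of thumb behind those numbers: Linux load average counts runnable tasks (plus tasks in uninterruptible disk wait), not a CPU percentage, so it is compared against the core count rather than against 100. On 2 CPUs, 3.1 means roughly one process-worth of work queued beyond what the cores can serve at that moment.

        # 1-minute load per core; sustained values above ~1.0 mean queueing.
        awk -v cores="$(nproc)" '{printf "%.2f\n", $1 / cores}' /proc/loadavg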

    Read the article

  • My processor is running slower than it should

    - by Soham
    I have a Core 2 Duo E7400 2.80GHz processor on my Intel D945GCNL mobo. From CPU-Z I've learned that my processor is running at 1596MHz, with a x6 multiplier and a 266MHz bus speed, on each core. Why is my processor operating at 1596MHz rather than 2.80GHz? From my side, I've tried disabling SpeedStep in the BIOS by setting EIST to 'Disabled', tried changing the power option to 'High Performance' in Windows 7, and also done what was suggested in this question: http://superuser.com/questions/119176/processor-not-running-at-max-speed. But it gained me nothing. I've also tried running a few heavy applications together to check whether the speed increases under load, but it stays the same. Do I have to raise my multiplier or overclock to regain that lost speed? Should I check my power supply for problems? Or is it something else? Please help me with this. And yes, it's a desktop computer, so there's no problem being caused by a battery. Here's my CPU-Z screenshot: http://i56.tinypic.com/2lk4mqc.jpg

    Read the article

  • Gateway laptop module bay light repeating 12 flashes - what error is that?

    - by Simurr
    I have a Gateway M465-E laptop, currently running fine with a T2300E Core Duo installed, that I wanted to upgrade to a Core 2 Duo. My brother has the same model laptop and it took a Core 2 Duo (T7200) just fine, so I picked up a T7200 on eBay and installed it. Normally when booting, all the indicator lights flash once and the fan spins up before the machine actually starts to POST. With the T7200 installed, all the lights flash and the fan spins up, but the module bay activity light flashes 12 times repeatedly. I'm assuming this is an error code, but I can find no information about it. There are no beep codes. I've removed the RAM, HD, and bay module, with no change. Switching back to the T2300E, everything works fine. Does anyone know what that error code is? The motherboard was actually manufactured by Foxconn, if that helps. Update 1: I returned the CPU as defective. I tested it in 3 M465-Es and all of them did exactly the same thing. I still have no idea what the error code is, and I'd still like to know for future reference. Perhaps I should try removing the CPU from one of them and see what happens.

    Read the article

  • My Samsung R480 laptop freezes for short and inconsistent periods of time.

    - by anonymous
    I have a Samsung R480 laptop running Windows 7. It's great, I love it to death, but every so often it'll start having, for lack of a better term, "hiccups". It freezes completely, emits a loud BZZZZZZT from the speakers (think of when a video freezes and the sound gets stuck), then returns to working order, all in a span of 0.5~2 seconds. It has happened while I was playing Mass Effect, while I was watching both HD and non-HD videos, and while I was surfing the internet (which I've noticed while watching YouTube videos and playing Entanglement, but also while using Facebook, minus the buzzing sound). My initial hypothesis was that my GPU or CPU was overheating, as it would shut down while playing Mass Effect, but when I turned off the WiFi card hardware using [Fn + F9], the problem was resolved. As I currently see it, it could be a problem with my CPU, my WiFi card, or my sound card. It could also be a software issue, possibly an ill-behaved process, or something Chrome-related; a memory leak from Chrome, maybe? Does anyone know how to resolve this?

    Read the article

  • Computer will freeze/ lock up after doing relatively stressful things

    - by GrowingCode247
    I'll first start off by saying that the issue generally doesn't occur unless I'm doing something remotely stressful for my computer. It used to occur whenever it felt it was necessary, but it has not occurred completely at random for a while now (thankfully). My computer's specs:

        CPU: AMD Phenom II X4 960T
        GPU: GeForce GTX 760
        Memory: 16 GB RAM
        Resolution used: 1680x1050, 59Hz (strange number for a refresh rate?); this is the monitor's maximum
        Nvidia driver version: 331.65
        OS: Microsoft Windows 7 Ultimate (64-bit)

    Sometimes I can go 2-3 games (about an hour, depending) and sometimes maybe one game (20-30 minutes), and then my computer will run sluggishly and leave me unable to do much of anything. I can sometimes interact with programs at a very basic level (maximizing, minimizing), and I usually cannot close them in any way, not even through Task Manager. The highest temperature my GPU reaches is 76C, with the average being around 73C. While the temperature is around 73C, my GPU's RAM usage is anywhere between 1250-1300 (out of 2GB). My CPU's temperature never goes over 60C, thankfully. The PSU should be fine. It's very mildly dusty, but I feel as though that would not be causing this problem; I will clean it out as soon as everything else has been ruled out. Honestly I have no clue how to test the PSU for problems, and the same goes for my motherboard. I cannot really think of what else could be causing these freezes. Event Viewer details:

        EventID: 1 - VDS Basic Provider (I've no clue what this is)
        EventID: 3 - Kernel-EventTracing (again, lost)
        EventID: 8003 - bowser (this seems fishy)

    and the one critical event that I know others have been dealing with, from browsing other responses on the web:

        EventID: 41 - Kernel-Power

    Any help in solving this problem would be GREATLY appreciated.

    Read the article

  • PHP hits 100% CPU and eats RAM at the same time Monday to Friday

    - by Daniel Samuels
    We run a learning platform for primary schools here in the UK and it's all been running extremely well. However, at around 4PM Monday to Friday we see the same issue arise: 1-2 PHP threads will spike to 100% CPU and gradually start eating up RAM until the server(s) fall over. 98%+ of our requests are HTTPS; these come into our Layer 7 load balancer, which decrypts the SSL data, adds the X-HTTP-Forwarded-For header, and forwards the data onto an application server (we have 2 of those at the moment) on port 80. Our application servers have Varnish on port 80, which takes in the request from the load balancer and passes it through to Nginx on port 81. Nginx then works out which 'vhost' it needs to use and passes any PHP processing through to PHP-CGI, which is listening on a socket (managed through spawn-fcgi). There's an instance of Memcached running too; MySQL runs on a separate server / slave setup. Throughout the day the load will typically go no higher than 0.8 on either of the application servers, but at around 4PM our problem arises. I've managed to run strace on a few of the affected threads when they cause the problem, and I always see the same thing:

        stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0
        stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0

    This repeats infinitely and never stops until you SIGKILL the process or the OOM killer kills it. There are no cron jobs scheduled to run at that time, and I don't have any way of seeing exactly which Nginx request is associated with the PHP process that is running. We are running PHP 5.3.14, which we upgraded to from 5.3.8 last week to rule out the older version being the problem. This issue has been going on for a few months now and we have no idea what is causing it. We deploy our software very frequently, so it's difficult to track down a specific release which may have started the problem, especially as we do not know the date of the first occurrence of this issue. Varnish is version 3.0.1, Nginx is 1.0.6 (which I understand is about a year old now), and our servers run CentOS release 5.7 (Final) with Intel i3 540s at 3.07GHz and 8GB of RAM. There's a discussion on the Debian mailing list about something very similar; you can find it here. Has anyone seen anything like this in the past? Does anyone have any ideas or suggestions? Is there a way of linking an Nginx request directly to a PHP thread? Is there a better way of seeing what the PHP process is doing? (I've seen GDB mentioned, though I'll have to recompile PHP.) Thanks!
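
    Given the strace loop over /usr/share/zoneinfo, a hedged next step that the question already gestures at: a native backtrace of a spinning php-cgi process shows which extension or function keeps re-reading the timezone file, and attaching gdb to a running process needs no recompile (debug symbols help but are optional for a rough stack). The PID below is hypothetical.

        # Dump the C stack of one spinning php-cgi process and detach.
        gdb -p 12345 -batch -ex 'bt' -ex 'detach'

        # Or timestamp just the offending syscalls to measure the loop rate.
        strace -tt -e trace=stat -p 12345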

    Read the article

  • Server-to-server replication, high CPU, and 32K/corrupt docs

    - by nick wall
    Summary: if a database contains a doc with the 32K issue, or a corrupt doc, server-to-server replication causes a marked CPU increase in the nserver.exe task, which effectively causes our server(s) to slow right down. We have a 5-server cluster (1 "hub" and 4 HTTP servers accessed via reverse proxy and SSO for load balancing and redundancy). All are physically located next to each other on the network; they don't have dedicated network ports for cluster or replication traffic. I realise IBM's recommendation is a dedicated port for the cluster. Cluster queues are in tolerance, and under heavy application user load, i.e. the maximum number of documents being created, edited, and deleted, the replication times between servers are negligible. Normally, all is well. Of the servers in the cluster, 1 is considered the "hub" and initiates a PUSH-PULL replication with its cluster mates every 60 minutes, so that the replication load is taken by the hub and not the cluster mates. The problem we have: every now and then we get a slow replication from the hub to a cluster mate, sometimes taking up to 30 minutes. This maxes out the nserver.exe task on the cluster mate, which causes it to respond to HTTP requests very slowly. In the past we have found that a corrupt document in the DB can have this effect, but on those occasions the server log shows the corrupt doc's noteID; we run fixup, and all is well. But we are not now seeing any record of corrupt docs. What we have noticed is that if a doc with the 32K issue is present, the same thing can happen. Our only solution in that case is to run a "fixup mydb.nsf -V", which shows it purging a 32K doc. Luckily we run a reverse proxy, so we can shut HTTP servers down without users noticing, but users do notice when a server has the problem! Has anyone else seen this occur? I have set up DDM event handlers for many of the replication events, and I have set the replication timeout limit to 5 minutes (the max we usually see under full user load is 0.1 min) to prevent it replicating for 30 minutes as before. This is a temporary workaround. Does anyone know of a DDM event to trap the 32K issue? We could at least then send an alert. Regarding the 32K issue: this probably needs another thread, but we are finding it relatively hard to track down the source, as the 32K event is fairly rare. Our app is fairly complex, interacting with various external web services with two-way data transfer. But if we do encounter a 32K doc, we can't look at its field properties, so we can't work out which field has the issue, which would give us a clue as to which process is the culprit. As above, we run a fixup -V. Any help or comments on this would be gratefully received.

    Read the article

  • Large Scale VHDL modularization techniques

    - by oxinabox.ucc.asn.au
    I'm thinking about implementing a 16-bit CPU in VHDL. A simple-ish CPU: ADD, MULS, NEG, bit shift, JUMP, relative jump, BREQ, relative BREQ... I don't know, something along those lines, probably all working only with 16-bit operands. I might even cut it down and use only a single operand and an accumulator, with some status registers: Carry, Zero, Neg (unless I use an accumulator). I know how to design all the parts from logic gates and plan to build them up from first principles. So for my ALU I'll need to 'build' an adder, probably a carry-lookahead group adder; this adder is itself made up of a couple of parts, which are themselves made up of a couple of parts. Anyway, my problem is not the CPU design, or the VHDL (I know the language, more or less). It's how I should keep things organised. How should I use packages? How should I name my processes and port maps? (I've never seen the benefit of naming port maps or processes.)

    Read the article
