Search Results

Search found 2042 results on 82 pages for 'average'.

Page 36/82 | < Previous Page | 32 33 34 35 36 37 38 39 40 41 42 43  | Next Page >

  • Spikes of 99% disk activity in Windows 8 Task Manager

    - by Jonathan Chan
    For some reason, Windows 8's Task Manager reports spikes of 99% disk activity for hours at a time. Looking at the entries in that column, however, data doesn't seem to be getting written any more quickly than when the disk activity is around 25-50% (which it seems to idle at most of the time). Furthermore, when these 99% disk activity spikes are happening, the average response time reported in the Performance tab becomes 4000-6000ms. Is there a good way to find out what is causing the disk activity? I've tried using Process Explorer, but as I said above, the rate at which data is reportedly being written doesn't seem to correspond (Dropbox and Google Chrome are constantly the top two, but the spikes are not dependent on their being open). Thanks in advance for any help. It gets very annoying when the computer stutters to a halt.
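
    A minimal sketch (assuming the cross-platform psutil package is installed) of one way to answer "what is causing the disk activity": sample each process's I/O counters twice and rank processes by how many bytes they actually read or wrote during the window, independently of Task Manager's percentage column.

      import time
      import psutil

      def snapshot():
          """Map pid -> (name, read_bytes, write_bytes) for every visible process."""
          counters = {}
          for proc in psutil.process_iter(["name"]):
              try:
                  io = proc.io_counters()
                  counters[proc.pid] = (proc.info["name"], io.read_bytes, io.write_bytes)
              except (psutil.AccessDenied, psutil.NoSuchProcess):
                  pass
          return counters

      before = snapshot()
      time.sleep(10)                      # sampling window in seconds
      after = snapshot()

      deltas = []
      for pid, (name, reads, writes) in after.items():
          if pid in before:
              _, reads0, writes0 = before[pid]
              deltas.append((reads - reads0 + writes - writes0, name, pid))

      for total, name, pid in sorted(deltas, reverse=True)[:10]:
          print(f"{name} (pid {pid}): {total / 1024 / 1024:.1f} MB of disk I/O in 10 s")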

    Read the article

  • IE 8 plays sound, Ulead pop-up message appears, crash

    - by benzado
    I'm experiencing a problem on a new PC using Outlook Web Access in Internet Explorer 8. When OWA plays a sound, a message box appears: the about box for Ulead MP3 codec. When I click OK to dismiss the box, I get a message that IE has stopped responding and Windows eventually has to force the browser window closed. This is apparently not an isolated incident, occurring on computers from different manufacturers and on other websites that play sound (such as AOL's Webmail). The only "fix" I've found on discussion boards is to prevent the website from playing sound in the first place. That's not a fix, that's just avoiding the trigger. I'd like to know what's causing this and uninstall it or repair it, so the computer can work like it's supposed to. Since Super User users are smarter than the average bear, I thought I'd have better luck here.

    Read the article

  • Apache load balancer limits with Tomcat over AJP

    - by PAS
    Hi all, I have Apache acting as a load balancer in front of 3 Tomcat servers. Occasionally, Apache returns 503 responses, which I would like to eliminate completely. None of the 4 servers is under significant load in terms of CPU, memory, or disk, so I am a little unsure what is reaching its limits or why. 503s are returned when all workers are in error state - whatever that means. Here are the details:

    Apache config:

      <IfModule mpm_prefork_module>
          StartServers          30
          MinSpareServers       30
          MaxSpareServers       60
          MaxClients           200
          MaxRequestsPerChild 1000
      </IfModule>
      ...
      <Proxy *>
          AddDefaultCharset Off
          Order deny,allow
          Allow from all
      </Proxy>
      # Tomcat HA cluster
      <Proxy balancer://mycluster>
          BalancerMember ajp://10.176.201.9:8009 keepalive=On retry=1 timeout=1 ping=1
          BalancerMember ajp://10.176.201.10:8009 keepalive=On retry=1 timeout=1 ping=1
          BalancerMember ajp://10.176.219.168:8009 keepalive=On retry=1 timeout=1 ping=1
      </Proxy>
      # Passes thru track. or api.
      ProxyPreserveHost On
      ProxyStatus On
      # Original tracker
      ProxyPass /m balancer://mycluster/m
      ProxyPassReverse /m balancer://mycluster/m

    Tomcat config:

      <Server port="8005" shutdown="SHUTDOWN">
        <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
        <Listener className="org.apache.catalina.core.JasperListener" />
        <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
        <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
        <Service name="Catalina">
          <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
          <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
          <Engine name="Catalina" defaultHost="localhost">
            <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false">
          </Engine>
        </Service>
      </Server>

    Apache error log:

      [Mon Mar 22 18:39:47 2010] [error] (70007)The timeout specified has expired: proxy: AJP: attempt to connect to 10.176.201.10:8009 (10.176.201.10) failed
      [Mon Mar 22 18:39:47 2010] [error] ap_proxy_connect_backend disabling worker for (10.176.201.10)
      [Mon Mar 22 18:39:47 2010] [error] proxy: AJP: failed to make connection to backend: 10.176.201.10
      [Mon Mar 22 18:39:47 2010] [error] (70007)The timeout specified has expired: proxy: AJP: attempt to connect to 10.176.201.9:8009 (10.176.201.9) failed
      [Mon Mar 22 18:39:47 2010] [error] ap_proxy_connect_backend disabling worker for (10.176.201.9)
      [Mon Mar 22 18:39:47 2010] [error] proxy: AJP: failed to make connection to backend: 10.176.201.9
      [Mon Mar 22 18:39:47 2010] [error] (70007)The timeout specified has expired: proxy: AJP: attempt to connect to 10.176.219.168:8009 (10.176.219.168) failed
      [Mon Mar 22 18:39:47 2010] [error] ap_proxy_connect_backend disabling worker for (10.176.219.168)
      [Mon Mar 22 18:39:47 2010] [error] proxy: AJP: failed to make connection to backend: 10.176.219.168
      [Mon Mar 22 18:39:47 2010] [error] proxy: BALANCER: (balancer://mycluster). All workers are in error state
      [Mon Mar 22 18:39:47 2010] [error] proxy: BALANCER: (balancer://mycluster). All workers are in error state
      [Mon Mar 22 18:39:47 2010] [error] proxy: BALANCER: (balancer://mycluster). All workers are in error state
      [Mon Mar 22 18:39:47 2010] [error] proxy: BALANCER: (balancer://mycluster). All workers are in error state
      [Mon Mar 22 18:39:47 2010] [error] proxy: BALANCER: (balancer://mycluster). All workers are in error state
      [Mon Mar 22 18:39:47 2010] [error] proxy: BALANCER: (balancer://mycluster). All workers are in error state

    Load balancer top info:

      top - 23:44:11 up 210 days, 4:32, 1 user, load average: 0.10, 0.11, 0.09
      Tasks: 135 total, 2 running, 133 sleeping, 0 stopped, 0 zombie
      Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.2%id, 0.1%wa, 0.0%hi, 0.1%si, 0.3%st
      Mem:    524508k total,   517132k used,     7376k free,     9124k buffers
      Swap:  1048568k total,      352k used,  1048216k free,   334720k cached

    Tomcat top info:

      top - 23:47:12 up 210 days, 3:07, 1 user, load average: 0.02, 0.04, 0.00
      Tasks: 63 total, 1 running, 62 sleeping, 0 stopped, 0 zombie
      Cpu(s): 0.2%us, 0.0%sy, 0.0%ni, 99.8%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
      Mem:   2097372k total,  2080888k used,    16484k free,    21464k buffers
      Swap:  4194296k total,      380k used,  4193916k free,  1520912k cached

    Catalina.out does not have any error messages in it. According to Apache's server status, it seems to be maxing out at 143 requests/sec. I believe the servers can handle substantially more load than that, so any hints about low default limits or other reasons why this setup would be maxing out would be greatly appreciated.

    Read the article

  • Log analyzer that calculates "time on page"?

    - by netvope
    I need to get an idea of the average "time on page" or "page view duration" for each page on my websites without client-side scripting (such as using an onunload event handler). Are any of the free log analyzers capable of doing this? I looked at Webalizer, AWStats and Analog, but they don't seem to have such a function. The closest thing is "visits duration" in AWStats, but I'd like to see "page view duration" instead. I know that visitor tracking is inaccurate without client-side scripting, but I can live with that. Google Analytics seems to provide a "time on page" metric without hooking the onunload event (but correct me if I'm wrong), so I believe this is possible.
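
    None of those analyzers advertises this out of the box, but as a rough illustration of what such a report has to do, here is a minimal sketch that approximates per-page "time on page" from a combined-format access log. The file name access.log and the IP-plus-user-agent visitor key are assumptions (and crude ones); the last page of each visit necessarily gets no duration, which is the same limitation Google Analytics works around.

      import re
      from collections import defaultdict
      from datetime import datetime

      LINE = re.compile(r'(\S+) \S+ \S+ \[([^\]]+)\] "(?:GET|POST) (\S+)[^"]*" \d+ \S+ "[^"]*" "([^"]*)"')

      def time_on_page(log_path):
          visits = defaultdict(list)                 # (ip, agent) -> [(time, url), ...]
          with open(log_path) as f:
              for line in f:
                  m = LINE.match(line)
                  if not m:
                      continue
                  ip, ts, url, agent = m.groups()
                  when = datetime.strptime(ts.split()[0], "%d/%b/%Y:%H:%M:%S")
                  visits[(ip, agent)].append((when, url))

          durations = defaultdict(list)              # url -> [seconds spent, ...]
          for hits in visits.values():
              hits.sort()
              # time on a page = gap until the same visitor's next request
              for (t0, url), (t1, _) in zip(hits, hits[1:]):
                  durations[url].append((t1 - t0).total_seconds())
          return {url: sum(v) / len(v) for url, v in durations.items()}

      for url, avg in sorted(time_on_page("access.log").items(), key=lambda x: -x[1])[:20]:
          print(f"{avg:7.1f} s  {url}")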

    Read the article

  • How is htop "Swp" calculated?

    - by Thomas
    When I run htop (on OS X 10.6.8), I see something like this:

      1  [|||||||                     20.0%]     Tasks: 70 total, 0 running
      2  [|||                          7.2%]     Load average: 1.11 0.79 0.64
      3  [||||||||||||||||||||||||||| 81.3%]     Uptime: 00:30:42
      4  [||                           5.8%]
      Mem[|||||||||||||||||||||  3872/4096MB]
      Swp[                             0/0MB]

        PID USER PRI NI  VIRT   RES SHR S CPU% MEM%   TIME+  Command
        284 501   57  0 15.3G 1064M   0 S  0.0  6.5  0:01.26 /Applications/Firefox.app/Contents/MacOS/firefox -psn_0_90134
        437 501   57  0 14.8G  785M   0 S  0.0  4.8  0:00.18 /Applications/Thunderbird.app/Contents/MacOS/thunderbird -psn_0_114716
        428 501   63  0 12.8G  351M   0 S  1.0  2.1  0:00.51 /Applications/Firefox.app/Contents/MacOS/plugin-container.app/Contents/MacOS/
        696 501   63  0 11.7G  175M   0 S  0.0  1.1  0:00.02 /System/Library/Frameworks/QuickLook.framework/Resources/quicklookd.app/Conte
         38 0     33  0 11.1G  422M   0 S  0.0  2.6  0:00.59 /System/Library/Frameworks/CoreServices.framework/Frameworks/Metadata.framewo
        183 501   48  0 10.9G  137M   0 S  0.0  0.8  0:00.03 /System/Library/CoreServices/Finder.app/Contents/MacOS/Finder

    How can I have processes using gigabytes of VIRT memory and still 0MB of swap used?

    Read the article

  • Packet loss rate with iperf and tcpdump

    - by stefita
    I tested a line for its link quality with iperf. The measured speed (UDP, port 9005) was 96 Mbps, which is fine, because both servers are connected to the internet at 100 Mbps. On the other hand, the datagram loss rate was shown to be 3.3-3.7%, which I found a little too high. Using a high-speed transfer protocol, I recorded the packets on both sides with tcpdump. Then I calculated the packet loss myself - an average of 0.25%. Does anyone have an explanation for where this big difference may be coming from? And what is an acceptable packet loss rate in your opinion?
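
    For comparison purposes, a minimal sketch of the loss-rate arithmetic: iperf reports lost UDP datagrams out of datagrams sent, while a tcpdump-based check compares packets seen leaving the sender with packets seen arriving at the receiver. The counts below are placeholders, not the asker's actual capture figures.

      sent_datagrams = 893_250        # e.g. counted in the sender-side capture
      received_datagrams = 891_017    # e.g. counted in the receiver-side capture

      loss = (sent_datagrams - received_datagrams) / sent_datagrams
      print(f"loss rate: {loss:.2%}")   # -> 0.25% for these example figures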

    Read the article

  • Determine nginx reverse-proxy load limits

    - by Aaron
    Hi all: I have an nginx server (CentOS 5.3, Linux) that I'm using as a reverse-proxy load balancer in front of 8 Ruby on Rails application servers. As our load on these servers increases, I'm beginning to wonder at what point the nginx server will become a bottleneck. The CPUs are hardly used, but that's to be expected. The memory seems to be fine. No IO to speak of. So is my only limitation bandwidth on the NICs? Currently, according to some cacti graphs, the server is hitting around 700 Kbps (5 min average) on each NIC during high load. I would think this is still pretty low. Or will the limit be in sockets or some other resource in the operating system? Thanks for any thoughts and insights. Aaron

    Read the article

  • I want to move columns in a gradebook based on the column header title to another gradebook

    - by Pat
    I have to average grades based on each objective for a new report card we have to complete this year. For example, column one has students' names, and each additional column has the objective associated with that assignment. I would like to move the entire column to another sheet for each objective. Is there a formula or macro that will do that? For example, objective 3.1A is in columns 2, 5, and 7; objective 3.2B is located in columns 1, 4, 10, and 12; objective 3.4C is in columns 3, 6, 9, and 11. I would like to have a spreadsheet for each objective.
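
    A minimal sketch of the macro idea, written in Python with openpyxl (assumed installed). It assumes the objective codes sit in row 1 of each grade column and that the file and sheet names below are placeholders for the real workbook; it copies the student-name column plus every column belonging to a given objective onto that objective's own sheet.

      from openpyxl import load_workbook, Workbook

      src_wb = load_workbook("gradebook.xlsx")
      src = src_wb["Grades"]

      # Group column indexes by the objective code in the header row.
      by_objective = {}
      for col in range(2, src.max_column + 1):
          objective = src.cell(row=1, column=col).value
          if objective is not None:
              by_objective.setdefault(str(objective), []).append(col)

      out_wb = Workbook()
      out_wb.remove(out_wb.active)  # drop the default empty sheet

      for objective, cols in by_objective.items():
          sheet = out_wb.create_sheet(title=objective)
          for row in range(1, src.max_row + 1):
              # Column A: student name, then one column per matching assignment.
              sheet.cell(row=row, column=1,
                         value=src.cell(row=row, column=1).value)
              for i, col in enumerate(cols, start=2):
                  sheet.cell(row=row, column=i,
                             value=src.cell(row=row, column=col).value)

      out_wb.save("gradebook_by_objective.xlsx")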

    Read the article

  • NFS and KVM. Slow Speed

    - by Javier Martinez
    I have a KVM virtualization setup on Debian with 2 guests (Debian and Windows 2008). I want a shared 'mount point' that can be accessed by all 3 systems (the host and the 2 guests) at the same time, and the only thing I found was NFS/SMB network storage. I picked NFS. Due to my Ethernet network (10/100), the average speed I get when transferring files between the 3 systems is always 8~10 MB/s. The question is whether there is any way to get a faster setup for sharing files between the 3 systems (at the same time) without wasting the speed of my SATA disks - I mean, without the ~10 MB/s Ethernet limitation.

    Read the article

  • JMeter Stress testing

    - by mcondiff
    MAMP server hosting a Joomla instance. I'd like to hear the community's thoughts on the best way to stress test the server and find its breaking point in terms of concurrent users etc. Currently I have set up a test plan which goes to the home page, grabbing index.php, the CSS, JS and all images, and I have run tests on 1 to 100 users with a varying number of loops. What I'd like to know is: what number of concurrent or looping requests is a good way to gauge whether my server can handle the proposed increase in traffic? What is a good KB/sec, Throughput, Average, Max, or Min in the Aggregate Report, and at what number of threads/loops etc.? I have googled and not found immediate answers to these questions, so I thought I'd come here. More or less I have just used http://jakarta.apache.org/jmeter/usermanual/jmeter_proxy_step_by_step.pdf to guide me and then I have been winging it in terms of thread and loop numbers. Any light shed on this subject would be much appreciated.
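
    One practical way to put a number on the breaking point is to post-process the results instead of eyeballing a listener. A minimal sketch (the file names are hypothetical, and it assumes the results were saved as CSV .jtl files with the default header row) that reports sample count, average response time, and error rate per run, so you can look for the thread count where latency or errors jump past your budget:

      import csv

      def summarize(jtl_path):
          elapsed, errors, total = [], 0, 0
          with open(jtl_path, newline="") as f:
              for row in csv.DictReader(f):
                  total += 1
                  elapsed.append(int(row["elapsed"]))      # milliseconds per sample
                  if row["success"].lower() != "true":     # failed samples
                      errors += 1
          avg_ms = sum(elapsed) / total if total else 0
          err_pct = errors / total * 100 if total else 0
          return total, avg_ms, err_pct

      for threads, path in [(50, "run-50.jtl"), (100, "run-100.jtl")]:  # hypothetical result files
          total, avg_ms, err_pct = summarize(path)
          print(f"{threads} threads: {total} samples, avg {avg_ms:.0f} ms, {err_pct:.1f}% errors")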

    Read the article

  • Server performance worsened after a hardware upgrade: how should I reconfigure the server?

    - by twick
    I'm running a site on an Ubuntu/Apache/Django/PostgreSQL stack. We upgraded our server recently from 1 processor with 2 Gb total RAM (with 0.5 Gb of that RAM assigned to memcached) to a new server that has 2 processors with 4 Gb total RAM (with 2 Gb of that RAM assigned to memcached). However, when I looked at Google Webmaster Tools, I found out that the average page speed has worsened from 5 seconds to 15 seconds. Why would performance get worse with a hardware upgrade? What should I check and tune? Is this more likely to be a problem with memcached, Apache, Django, or PostgreSQL?

    Read the article

  • How do you persuade users to abandon their personal folders?

    - by thing2k
    Towards the end of last year we started using Mimecast services, in particular their cloud-based e-mail archiving. Since then we've been rolling out the Mimecast Services for Outlook (MSO) add-in. We've informed the users that we will be giving them training in the next few months and that we do not require them to use it yet, but my boss stated that we are getting rid of Personal Folders (pst files) by putting them into Mimecast. Unsurprisingly, this did cause something of a backlash - though really, who likes change? I know the IT reasons for getting rid of Personal Folders (inefficient, unreliable, single access, etc.), but from an average user's perspective, unless they have had one fail on them, they see them as the simple and only way to archive e-mail when their 200Mb mailbox is full. So what can I say to the users to get them to understand why Personal Folders are not the best solution?

    Read the article

  • Wiki & issue-tracking in one system?

    - by torbengb
    I'm looking for an integrated solution that combines documentation of a software system with tracking of bugs, change requests and feature requests. Requirements: Documentation using a wiki would be nice, preferably one supporting CamelCase or other automatic linking. Issue tracking must allow a customizable workflow and optional e-mail notifications. Known alternatives: FogBugz is an awesome issue tracker, but the wiki appears to be somewhat awkward. Trac's wiki is average (though not as nice as Foswiki.org) but I don't know how good the integrated issue tracker is. What would you recommend? What systems offer the best combination of documentation and issue tracking?

    Read the article

  • Server Administration

    - by Kassem
    Hi everyone, my client asked me for a job description for a system administrator, because I might be assigned this position along with the other guy I'm working with. To be honest, I do not know much about a system administrator's job, but I'm willing to learn. Questions: What are the security requirements of a server?* What are the key responsibilities in a system admin's job description? What are some of the day-to-day tasks of a system admin? What is the average monthly salary of a system admin? Note: I will be working inside a Windows environment, but your replies do not necessarily need to be restricted to a Windows environment. (*) Other software I know will be required: Windows Server 2008, IIS 7.0, MS SQL Server, .NET 4.0 Runtime. Let me know if there are other things I should be aware of as well. Thanks!

    Read the article

  • New host, high load?

    - by dotancohen
    A few minutes ago I signed up at a new webhost. I have yet to move my sites over. Upon initial SSH connection, I checked the load and memory usage; they do seem rather higher than I would like:

      # uptime
       12:06:51 up 71 days, 23:23,  1 user,  load average: 9.02, 9.49, 9.45
      # free
                   total       used       free     shared    buffers     cached
      Mem:      33014800   31927192    1087608          0    2384812   17729816
      -/+ buffers/cache:   11812564   21202236
      Swap:     16787916       8584   16779332

    Is that a bit too packed? I'm only paying about $5 USD per month, so I don't expect <0.1 loads, but ~10 is worrisome. Is it not? Also, there is no /etc/issue file, so I tried other methods to guess the OS:

      # uname -a
      Linux box358.bluehost.com 2.6.32-20120131.55.1.bh6.x86_64 #1 SMP Tue Jan 31 15:43:27 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
      # which yum
      /usr/bin/yum
      # which apt-get
      #

    That looks like CentOS / RHEL 6.2, possibly?
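
    Whether a load average of ~9 is alarming depends mostly on how many cores the box has, so it helps to look at load per core. A minimal sketch (run on the host itself; os.getloadavg() is Unix-only) that puts the 1/5/15-minute figures next to the core count:

      import os

      cores = os.cpu_count() or 1
      one, five, fifteen = os.getloadavg()
      for label, value in (("1 min", one), ("5 min", five), ("15 min", fifteen)):
          print(f"{label}: load {value:.2f} -> {value / cores:.2f} per core")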

    Read the article

  • apache performance improvements and maxclients

    - by updog
    I know this has been asked a few (thousand) times around the internet, but I was hoping someone who's in the know might be able to comment on my particular setup. I have a web server hosting one site (PHP/CodeIgniter) with a WordPress blog in a subdirectory. The server has 2GB RAM and a 3GHz CPU, and I have offloaded the static assets to CloudFlare, which has reduced bandwidth to the actual server by almost 75%. The problem I have is that when an email campaign is sent out that links to the site or blog, it slows down. Below are my settings in apache2.conf. The average Apache process size is 80M and there is 1.5GB available for Apache.

      <IfModule mpm_prefork_module>
          StartServers           8
          MinSpareServers        5
          MaxSpareServers       20
          MaxClients            20
          MaxRequestsPerChild 2000
      </IfModule>

    I have already set up and installed APC and built some caching into the site, and used W3 Total Cache on the blog. The number of concurrent users is around 2-300 when there is a campaign; are there any further optimisations before
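
    A minimal sketch of the usual prefork sizing rule of thumb, using only the figures quoted above (1.5GB set aside for Apache, ~80M per process): it suggests a ceiling of about 19 workers, i.e. the existing MaxClients of 20 already consumes essentially all of the RAM reserved for Apache, so raising it further would need more memory or smaller processes.

      ram_for_apache_mb = 1.5 * 1024   # 1.5 GB reported as available for Apache
      avg_process_mb = 80              # average Apache process size from the question

      max_clients = int(ram_for_apache_mb // avg_process_mb)
      print(f"Suggested MaxClients ceiling: {max_clients}")   # -> 19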

    Read the article

  • Internet speed showing high value but never felt that speed [closed]

    - by Kiran D Pillai
    I use Reliance Netconnect+ on my Toshiba Satellite laptop. My OS is Windows 7 Ultimate. I've seen speeds of up to 1500 kbps when travelling, and an average speed of 400 kbps at my room. I'm sorry, but I've never felt that much speed while browsing or downloading. Sometimes I think some other programs are also sharing the same network: the meter shows 200 or 500 kbps, but it feels more like 25 or 50. Is there any way to prevent that? I use Google Chrome downloaded from Google's site. Please suggest some solutions.
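
    Part of the gap is usually just units: connection meters report kilobits per second (kbps), while download dialogs report kilobytes per second (KB/s), and there are 8 bits in a byte. A minimal sketch of the conversion for the speeds mentioned above:

      def kbps_to_kBps(kbps):
          return kbps / 8

      for kbps in (1500, 400, 200):
          print(f"{kbps} kbps is roughly {kbps_to_kBps(kbps):.0f} KB/s of actual transfer")
      # 400 kbps -> 50 KB/s, so a download that "feels like" 50 KB/s matches the link speed.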

    Read the article

  • Server crashes when too much memory is allocated

    - by lindenb
    Hi all, my server crashes whenever one of my users runs an R script (this script requires a large amount of memory). Below is the last top I saw:

      top - 11:32:39 up 20 min,  4 users,  load average: 1.08, 0.85, 0.46
      Tasks: 336 total,   2 running, 334 sleeping,   0 stopped,   0 zombie
      Cpu(s):  6.1%us,  0.2%sy,  0.0%ni, 93.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
      Mem:  65939968k total,  5131440k used, 60808528k free,    88256k buffers
      Swap: 68124664k total,        0k used, 68124664k free,  1077612k cached

        PID USER   PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+ COMMAND
      10392 cdina  25   0 3702m 3.5g 2428 R 100.0  5.6  7:51.82 R
      10430 root   15   0 12872 1272  804 R   0.7  0.0  0:02.42 top
          1 root   15   0 10348  704  592 S   0.0  0.0  0:02.95 init
          2 root   RT  -5     0    0    0 S   0.0  0.0  0:00.00 migration/0

    Is there a way to prevent my server from crashing ("don't run that script" is not an option :-) )? Something like fixing a 'quota' for the memory allowed?
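
    A minimal sketch of what a per-process memory 'quota' looks like, using Python's resource module only to demonstrate the mechanism (the 4 GB cap is an arbitrary example value). In practice the same RLIMIT_AS cap is usually applied to the user's shell with ulimit -v, /etc/security/limits.conf, or cgroups, so the R process gets an allocation error instead of taking the whole box down:

      import resource

      limit_bytes = 4 * 1024**3            # 4 GB cap, an arbitrary example value
      resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, limit_bytes))

      try:
          hog = bytearray(8 * 1024**3)     # try to allocate 8 GB
      except MemoryError:
          print("Allocation refused by the quota instead of swamping the server")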

    Read the article

  • how to escape “@” in the username when logging in through FTPES with curl?

    - by user62367
    $ curl -T "index.html" -k --ftp-ssl -u "[email protected]" MYDOMAIN.COM
      Enter host password for user '[email protected]':
        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                       Dload  Upload   Total   Spent    Left  Speed
        0 57173    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0
      <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
      <html><head>
      <title>405 Method Not Allowed</title>
      </head><body>
      <h1>Method Not Allowed</h1>
      <p>The requested method PUT is not allowed for the URL /index.html.</p>
      <hr>
      <address>Apache/2.2.16 Server at MYDOMAIN.COM Port 80</address>
      </body></html>
      100 57480  100   307  100 57173    284  52902  0:00:01  0:00:01 --:--:-- 53633

    Can someone help me? Also posted on Stack Overflow.
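
    If the goal is to embed the user name in the URL itself (curl also accepts credentials there), the '@' has to be percent-encoded as %40 so it is not mistaken for the user/host separator. A minimal sketch of that encoding in Python; the user name below is a placeholder, not the real account:

      from urllib.parse import quote

      username = "user@example.com"        # hypothetical FTP user name containing "@"
      encoded = quote(username, safe="")   # -> "user%40example.com"
      url = f"ftp://{encoded}@MYDOMAIN.COM/index.html"
      print(url)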

    Read the article

  • Autoscale Rackspace Cloud, Scalr or DIY?

    - by Andre Jay Marcelo-Tanner
    I'm looking into creating a setup on Rackspace Cloud that will allow me to autoscale my webservers (no db) on demand. Preferably using something like response time. I've read into configuration tools like Puppet/Chef, but I'm thinking I can just launch from prepared server images that are ready to go. Is there any tool out there already that can monitor my existing node response times and then launch or scale up new ones based upon certain variables like average X load over Y time? I see there are commercial offerings like Scalr, Rightscale, but how would I do this myself?

    Read the article

  • TIME_WAIT connections not being cleaned up after timeout period expires

    - by Mark Dawson
    I am stress testing one of my servers by hitting it with a constant stream of new network connections. The tcp_fin_timeout is set to 60, so if I send a constant stream of something like 100 requests per second, I would expect to see a rolling average of 6000 (60 * 100) connections in a TIME_WAIT state. This is happening, but looking in netstat (using -o) to see the timers, I see connections like:

      TIME_WAIT timewait (0.00/0/0)

    where the timeout has expired but the connection is still hanging around, and I then eventually run out of connections. Anyone know why these connections don't get cleaned up? If I stop creating new connections they do eventually disappear, but while I am constantly creating new connections they don't - it seems like the kernel isn't getting a chance to clean them up? Are there some other config options I need to set to remove the connections as soon as they have expired? The server is running Ubuntu and my web server is nginx. It also has iptables with connection tracking; I'm not sure if that would cause these TIME_WAIT connections to live on. Thanks, Mark.
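
    A minimal sketch (assuming the psutil package is installed, and run with enough privileges to see all sockets) that samples the number of TCP connections in each state once per second; it is handy for watching whether the TIME_WAIT total really plateaus near tcp_fin_timeout times the request rate (60 * 100 = 6000 here) or keeps growing:

      import time
      from collections import Counter
      import psutil

      while True:  # stop with Ctrl-C
          states = Counter(conn.status for conn in psutil.net_connections(kind="tcp"))
          print(f"TIME_WAIT={states.get('TIME_WAIT', 0):6d}  total={sum(states.values()):6d}")
          time.sleep(1)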

    Read the article

  • Postfix performance

    - by Brian G
    Running Postfix on Ubuntu, sending a lot of mail (~1 million messages per day). Loads are extremely high, but not much of it shows up as CPU or memory load. Is anyone in a similar situation who knows how to remove the bottleneck? All mail on this server is outbound. I would have to assume the bottleneck is disk. Just an update, here is what iostat looks like:

      avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                 0.00    0.00    0.12   99.88    0.00    0.00

      Device:  rrqm/s  wrqm/s    r/s    w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
      sda        0.00   12.38   0.00   2.48     0.00   118.81    48.00     0.00    0.00   0.00   0.00
      sdb        1.49   22.28  72.28  42.57   629.70  1041.58    14.55   135.56  834.31   8.71 100.00

    Are these numbers in line with the performance you would expect from a single disk? sdb is dedicated to Postfix. I think it is queue shuffling, from incoming to active to deferred.

    More details from questions: the server is a quad-core Xeon(R) CPU E5405 @ 2.00GHz with 4 GB RAM. The load average is 464.88, 489.11, 483.91 on 4 cores, but memory utilization and CPU usage are minimal. Postfix instances: between 16 and 32.

    Read the article

  • HIGH CPU usage by PHP on a VPS Magento Server

    - by Anil
    My server running Magento has 4 GB RAM and a 4-core CPU, but I am still struggling with high CPU usage. I only have 10 visitors at any given point in time, and I am not sure PHP should be taking this high a percentage of CPU. Attached is the top result:

      top - 09:18:32 up 2 days, 15:44,  1 user,  load average: 1.16, 2.02, 1.99
      Tasks: 179 total,   2 running, 177 sleeping,   0 stopped,   0 zombie
      Cpu(s): 46.7%us,  3.9%sy,  0.1%ni, 46.9%id,  1.0%wa,  0.0%hi,  0.0%si,  1.4%st
      Mem:   3919972k total,  3164968k used,   755004k free,   530820k buffers
      Swap:  1048568k total,   379352k used,   669216k free,  1536388k cached

        PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
      15897 vpsadmin 20   0  431m 168m  54m R 91.7  4.4   2:16.16 php-cgi
      12308 vpsadmin 20   0  404m 163m  73m S 29.3  4.3  15:15.90 php-cgi
       3644 mysql    20   0 1528m  80m 4944 S  9.8  2.1   1899:58 mysqld
       4969 apache   20   0  471m 6228 2824 S  2.0  0.2   0:18.53 httpd
      16148 root     20   0 15024 1220  864 R  2.0  0.0   0:00.01 top
          1 root     20   0 19364 1064  844 S  0.0  0.0   0:02.50 init

    Read the article

  • Windows and Linux applications to show cumulative uploads/downloads for each app, at a glance

    - by jontyc
    I've read quite a few other threads on SU, but they have been focused on instantaneous/average bandwidth (B/sec) rather than cumulative download/upload totals for a period, or they don't drill down to the application level. Resource Monitor in Windows 7 only shows bandwidth. I've just been trying NetLimiter, and whereas it can show total uploaded/downloaded, it's a case of having one stats window open per application, as opposed to a table showing all applications at once. I'm looking for applications for both Windows and Linux (Ubuntu), but they don't need to be the same.

    Read the article

  • Postfix SMTP server down on Ubuntu

    - by Paddington
    I have a Plesk server running Postfix on Ubuntu 10.04, and the SMTP service on port 25 is down. When I stop and then start Postfix, the service comes up for only a minute and then goes down again. I have checked the load on the server and it is low, as shown:

      top - 04:29:33 up 19 days,  3:25,  4 users,  load average: 1.47, 1.78, 2.34
      Tasks: 936 total,   1 running, 935 sleeping,   0 stopped,   0 zombie
      Cpu(s):  0.7%us,  0.3%sy,  0.0%ni, 86.6%id, 11.7%wa,  0.6%hi,  0.1%si,  0.0%st
      Mem:   6110496k total,  6072988k used,    37508k free,   251244k buffers
      Swap: 12000544k total,    95264k used, 11905280k free,  4370432k cached

    IMAP clients are not experiencing a problem and there are no issues with receiving emails over either POP or IMAP. Only SMTP (port 25) is a problem. If I ask clients to use the submission port (587), messages are delivered. netstat -lnt shows the following, so it's not a port issue:

      tcp        0      0 0.0.0.0:25      0.0.0.0:*       LISTEN
      tcp        0      0 0.0.0.0:8443    0.0.0.0:*       LISTEN

    Read the article

< Previous Page | 32 33 34 35 36 37 38 39 40 41 42 43  | Next Page >