Search Results

Search found 2042 results on 82 pages for 'average'.

Page 40 of 82

  • httpd service keeps restarting after 15-20 minutes

    - by niraj
    I have recently purchased a dedicated server with 16 GB of RAM and a 1 TB hard disk. It has cPanel, and CSF is installed as the firewall. I am mainly going to use it for a file hosting service. Since the day I moved in, the httpd service has kept restarting every 15-20 minutes; it becomes unresponsive after that, so I have to restart it manually. My httpd settings are: Start Servers = 5, Minimum Spare Servers = 5, Maximum Spare Servers = 10, Server Limit = 20000, Max Clients = 10000, Max Requests Per Child = 10000, Keep-Alive = On, Keep-Alive Timeout = 5, Max Keep-Alive Requests = Unlimited, Timeout = 300. top shows: top - 14:53:41 up 1 day, 23:39, 2 users, load average: 0.10, 0.14, 0.09 Tasks: 1563 total, 1 running, 1562 sleeping, 0 stopped, 0 zombie Cpu(s): 0.7%us, 0.6%sy, 0.0%ni, 98.1%id, 0.2%wa, 0.0%hi, 0.5%si, 0.0%st Mem: 16303780k total, 16142048k used, 161732k free, 135264k buffers Swap: 8224760k total, 868k used, 8223892k free, 14136616k cached Please help me with this; it keeps happening.
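
    A note on the settings quoted above: with Apache's prefork MPM every client slot is a separate process, and each child commonly needs tens of megabytes once PHP or cPanel modules are loaded, so a Server Limit of 20000 and Max Clients of 10000 can exhaust even 16 GB of RAM the moment traffic spikes, which matches the "unresponsive until restarted" symptom. The snippet below is only an illustrative, conservative prefork block (the exact values are assumptions and would need tuning for this workload), not a drop-in fix:

        <IfModule prefork.c>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            # A hard cap of a few hundred children; 20000 is far more than 16 GB can hold
            ServerLimit         256
            # At roughly 30 MB per child this is about 8 GB, leaving room for the OS and page cache
            MaxClients          256
            # Recycle children periodically to contain memory leaks
            MaxRequestsPerChild 10000
        </IfModule>
        KeepAlive            On
        KeepAliveTimeout     5
        # "Unlimited" keeps slots busy longer than necessary; a finite value frees them sooner
        MaxKeepAliveRequests 100
        Timeout              60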

    Read the article

  • How to calculate bandwidth limits per user on WiFi network

    - by Lars
    A typical 802.11g access point can provide around 25 Mbps of bandwidth. How is that bandwidth shared among the users? Furthermore, how many users can a single 802.11g access point serve in an environment with low interference and average web activity from the users? The goal is to use bandwidth limits to avoid starving some users when others start downloading a file, streaming HD video, or doing some other bandwidth-intensive activity. Can someone break down the math on this?
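
    A rough back-of-the-envelope calculation (the numbers are illustrative assumptions): 802.11g shares airtime, so the ~25 Mbps of usable throughput is split among whoever is transmitting at that moment. With 25 simultaneously active users that is about 1 Mbps each; with 50 users, about 0.5 Mbps each. Average web browsing is bursty, perhaps a few hundred kbps for a second or two followed by long idle periods, so 30-50 such users per access point is usually comfortable. A single HD video stream at 5-8 Mbps, however, consumes a quarter to a third of the whole cell, which is why capping each client at, say, 1-2 Mbps keeps one downloader or streamer from starving everyone else.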

    Read the article

  • apache and ajp performance

    - by user12145
    I have an Apache server sitting in front of two Tomcat app servers (one on the same physical server, the other on a different one) that do time-consuming work (0.5 to 10 seconds per request). The Apache HTTP server is getting killed by an average of 1 to 2 concurrent requests per second. Both servers have about 2 GB of RAM. Is there a way to optimize Apache to handle the load? Any advice is welcome. BalancerMember ajp://localhost:8009/xxxxxx BalancerMember ajp://XXX.XX.XXX.XX:8009/xxxxxx I keep getting the following in the Apache 2.2 log: [Mon Dec 28 00:31:02 2009] [error] ajp_read_header: ajp_ilink_receive failed [Mon Dec 28 00:31:02 2009] [error] (120006)APR does not understand this error code: proxy: read response failed from 127.0.0.1:8009 (localhost)
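
    The ajp_ilink_receive / "read response failed" pair typically means the AJP connection was closed or failed before Tomcat returned a response, which is easy to hit when single requests take up to 10 seconds and the connector runs out of threads. A minimal sketch of the direction to tune in is below; the balancer name, timeout values and Tomcat connector settings are assumptions to adapt, not a verified fix:

        <Proxy balancer://appcluster>
            # Give slow requests more time; mark a failed backend dead for 30 s instead of hammering it
            BalancerMember ajp://localhost:8009/xxxxxx timeout=30 retry=30
            BalancerMember ajp://XXX.XX.XXX.XX:8009/xxxxxx timeout=30 retry=30
        </Proxy>

    and, on the Tomcat side (server.xml), make sure the AJP connector has enough threads and a generous timeout:

        <Connector port="8009" protocol="AJP/1.3" maxThreads="200" connectionTimeout="600000" />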

    Read the article

  • Download rates of torrents through µTorrent and FlashGet are suddenly limited

    - by el
    Both µTorrent and FlashGet are being throttled and I don't know why or how. I use both, and they usually report a download speed of around 70 kbps for torrents. FlashGet would report a speed of 70 kbps, and the actual download speed matched it: a consistent 70 kbps. Now, all of a sudden, my torrent downloads only reach about 20 kbps. FlashGet still shows a reading of 70 kbps, maybe even 80 kbps, but the actual average download speed is really only 20 kbps. I am using Windows 7 on my laptop, and no matter where the internet connection comes from, it's the same thing. I have no idea what could be limiting my download rates.

    Read the article

  • How to put 1000 lightweight server applications in the cloud

    - by Dan Bird
    The company I work for sells a commercial desktop/server app that runs on any non-dedicated Windows PC or server and uses Tomcat for all interactions with the application. Customers are asking that we host their instance of the application so they don't have to run it locally on their own servers. The app is lightweight, and an average server, in theory, could handle 25-50 instances before users would notice a slowdown. However, only one instance can run per Windows instance (because the application writes to a common registry branch), so we'd need something like VMware to create 25-50 Windows instances. We know we eventually need to reprogram it to make it truly cloud-worthy, but what would you recommend for a server farm or similar for this? We don't have the setup to purchase our own servers, so we must use a third party. We have budgeted $500-$1000 per year per customer for this service. Thanks in advance for your suggestions, experiences and guidance.
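
    Back-of-the-envelope arithmetic on that budget (the per-host prices are assumptions): at 25-50 customer instances per host and $500-$1,000 per customer per year, each fully loaded host carries roughly $12,500-$50,000 of annual budget, i.e. about $1,000-$4,000 per month, which is comfortably more than a large dedicated server or VPS from a typical provider costs. The economics work even before the app is made multi-tenant; the bigger constraints are the one-Windows-instance-per-customer rule and the licensing and management overhead of running 25-50 VMs per box.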

    Read the article

  • Linux QoS: bulk data transmission during idle times

    - by syneticon-dj
    How would I do a QoS setup where a certain low-priority data stream gets up to X Mbps of bandwidth, but only if the current total bandwidth (of all streams/classes) on this interface does not exceed X? At the same time, other data streams/classes must not be limited to X. The use case is an ISP billing the traffic by calculating the bandwidth average over 5-minute intervals and billing the maximum. I would like to keep the maximum usage to a minimum (i.e. quench the bulk transfer while the interface is busy) but get the data through during idle/low-traffic times. Looking at the frequently used classful schedulers CBQ, HTB and HFSC, I cannot see a straightforward way to accomplish this.
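
    One common approximation with HTB (not a strict guarantee; the interface name, rates and firewall mark below are assumptions): give the bulk class a near-zero guaranteed rate, a ceiling of X, and the lowest priority, so it only borrows bandwidth the other classes are not using, while the other classes keep an unrestricted ceiling:

        # Assumptions: eth0, 100 Mbit uplink, bulk cap X = 50 Mbit, bulk traffic carries fwmark 6
        tc qdisc add dev eth0 root handle 1: htb default 10
        tc class add dev eth0 parent 1:  classid 1:1  htb rate 100mbit ceil 100mbit
        # Normal traffic: effectively unrestricted, first in line when borrowing spare bandwidth
        tc class add dev eth0 parent 1:1 classid 1:10 htb rate 95mbit ceil 100mbit prio 0
        # Bulk traffic: tiny guarantee, may borrow idle bandwidth only, never above X
        tc class add dev eth0 parent 1:1 classid 1:20 htb rate 1mbit ceil 50mbit prio 7
        tc filter add dev eth0 parent 1: protocol ip handle 6 fw flowid 1:20

    Because HTB priorities only decide how spare bandwidth is shared, the bulk class can still push the 5-minute average slightly above what the other classes alone would have used; this quenches bulk traffic under load rather than enforcing "only while total < X" exactly.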

    Read the article

  • When to use Nginx PHP FastCGI with a TCP socket instead of a UNIX socket?

    - by user64204
    I've followed this guide to set up PHP in FastCGI mode with Nginx. The guide describes two ways of doing it: a TCP socket and a UNIX socket. I ran some Apache Benchmark tests on my local machine, each repeated multiple times to get better average statistics, and here are the results: $ ab -c 200 -n 100000 http://.... APACHE: 1800 req/sec NGINX (TCP socket): 2500 req/sec NGINX (UNIX socket): 15000 req/sec As far as I understand, there is overhead in using a TCP socket rather than a UNIX socket, hence the better performance with the latter. However, I was not expecting such a performance difference given that the TCP socket is on localhost, and therefore I would like to ask the following question: Q: Given the huge performance gain with using a UNIX socket, what are the configuration scenarios where it would make sense to use a TCP socket instead?
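
    For reference, the two variants differ only in the fastcgi_pass line; a minimal sketch (the port and socket path are assumptions that must match PHP-FPM's "listen" setting):

        # TCP socket variant
        location ~ \.php$ {
            include       fastcgi_params;
            fastcgi_pass  127.0.0.1:9000;
        }

        # UNIX socket variant
        location ~ \.php$ {
            include       fastcgi_params;
            fastcgi_pass  unix:/var/run/php-fpm.sock;
        }

    Broadly, the TCP socket earns its overhead when PHP-FPM runs on a different host than Nginx, or when you want to spread requests across several FastCGI backends; on a single machine the UNIX socket skips the TCP/IP stack entirely, which is where the gap in the benchmark comes from.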

    Read the article

  • linux migration/N high cpu consumption

    - by Alexander
    On my Linux appliance based on the 3.0.0-14 kernel I get: RPN:/tmp# ps axuf | grep migration root 6 92.9 0.0 0 0 ? S Apr23 2788:33 \_ [migration/0] root 7 99.7 0.0 0 0 ? S Apr23 2993:20 \_ [migration/1] My top output is: RPN:/tmp# top -b -n1 top - 12:03:41 up 2 days, 2:18, 5 users, load average: 25.76, 25.26, 24.73 Tasks: 171 total, 1 running, 168 sleeping, 0 stopped, 2 zombie Cpu(s): 14.0%us, 12.6%sy, 0.8%ni, 72.0%id, 0.3%wa, 0.0%hi, 0.3%si, 0.0%st Mem: 1543032k total, 1264728k used, 278304k free, 25308k buffers Swap: 0k total, 0k used, 0k free, 183168k cached My question: why do the "migration/N" processes take so much CPU?

    Read the article

  • Sudden loss of Wi-Fi connectivity on OS X

    - by GJ.
    Occasionally while I work, without any particular provocation, I lose connectivity over Wi-Fi. Other devices connected to the same Wi-Fi network have no interruption, and the problem gets resolved once I reboot my MacBook Air, so it's definitely a local problem. Observations: the Wi-Fi symbol in the menu bar indicates that I'm still connected, but apps can't actually connect to the Internet or to other devices on the LAN. I can't connect to an alternate Wi-Fi network (e.g. Wi-Fi tethering via iPhone). I can connect to the Internet via iPhone USB tethering, but this seems to work only some of the time. Only a reboot solves the problem, and a regular restart gets stuck on a grey screen with a spinning wheel (after all applications have closed), so I have to do a hard reset. How should I go about troubleshooting this? It used to happen very rarely but is now becoming more frequent (approaching once every 2-3 days on average).

    Read the article

  • Likelihood of obtaining the same IP address after restarting a router

    - by ?affael
    My actual objective is to simulate the logged IPs of website users who are all assumed to use dynamically assigned IPs. There will be two kinds of users: good users, who only change IP when the ISP assigns a new one, and bad users, who will restart their router to obtain a new IP. So what I would like to understand is what assignment mechanics are usually at work here: from what pool of IPs is an address chosen, and is the probability uniformly distributed? I know there is no definite and global answer, as this process can be adjusted by the ISP, but maybe there is something like a technological frame and common process that allows some plausible assumptions. UPDATE: A bad user will restart the router as often as necessary. So here the central question is how many IP changes, on average, are necessary to end up with a previously used IP.
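
    Under the simplest assumption, a uniformly random assignment from a pool of N addresses, the math is straightforward: the chance that a restart returns one specific previously held address is 1/N, so the expected number of restarts to get that exact address back is N; if the user has already been seen with k different addresses, the expected number of restarts until any of them repeats is roughly N/k; and starting from scratch, the first repeat of any kind appears after about 1.25*sqrt(N) assignments on average (the birthday problem). Real DHCP/PPP pools are rarely uniform, though: many ISPs prefer to re-issue the lease the customer just had, which makes repeats far more likely than these figures suggest.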

    Read the article

  • What software allows editing text with furigana professionally?

    - by Julian
    I'm studying Japanese and need to write a lot of text with furigana. I've been using Word so far, but my main concern is that entering furigana is not only quite clumsy (no hotkey) but, more importantly, once entered you can't globally change either its font or its size; you need to change them one by one. This is a deal-breaker for me since my average text contains hundreds of entries. There is a hack you can do, as pointed out by another guy on SU, but I found that by using it I could (and did) break my document easily. My question is: is there software that is specifically designed to work with Japanese text and also has its UI in English? As stated above, I need something that has furigana editing as a first-class citizen.

    Read the article

  • Random shutdown? Asus N53J laptop

    - by Mr.Y
    My laptop is pretty new, and today it randomly shut down on its own. I was surfing the net and the computer shut down (like losing power, without any warning). It was the second time; the first time was 3 weeks ago. By the way, it happens during cold days. I remember one time I was unable to turn on my laptop at all, so I'm pretty sure it's not related to a heat issue... could it be the laptop is too "cold"? My room temperature is around 17-18 degrees on average :P Edit: I use Windows 7 Professional. I ran memtest86 once and it passed. I run my laptop WITHOUT the battery all the time.

    Read the article

  • How to have a soft-real-time process in the presence of a heavily swapping, IO-intensive background load?

    - by Vi
    schedtool: PID 32301: PRIO 4, POLICY R: SCHED_RR , NICE -20, AFFINITY 0xf ionice: realtime: prio 4 But the music stutters anyway. The background load is low priority (SCHED_IDLEPRIO, idle ionice), but it uses a lot of memory (more than is physically available) and does a lot of IO and calculations. Latencytop shows about 1500 ms for "Following symlink", "Writing buffer to disk (sync)", "Page fault" and "Writing a page to disk", both for the background load and for unrelated processes. Load average is 10 and counting. Why can't it allocate, for example, 200 MHz of one of the cores, 32 MB of memory, and at least one IO opportunity per second to mplayer to keep it happy, while continuing the calculations in the background? Or: why can't it leave the background task and the swap to each other while keeping the rest of the system as if there were no background load? How can I have RT processes AND a heavy background load simultaneously (without virtual machines)?
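
    Scheduling and ionice priorities do not help here because the player stalls on page faults and writeback, not on CPU time or queued IO. Two directions that can work: pin the player's working set in RAM (an mlockall() wrapper), or cap how much memory the background job may keep resident so it cannot evict everything else. A minimal sketch of the latter with the cgroup v1 memory controller (the mount point, group name and 512 MB limit are assumptions):

        # Create a memory-limited group for the background job
        mkdir /sys/fs/cgroup/memory/bulk
        # Cap its resident memory at 512 MB; beyond that, its own pages get reclaimed first
        echo $((512*1024*1024)) > /sys/fs/cgroup/memory/bulk/memory.limit_in_bytes
        # Move the already-running background process into the group
        echo "$BG_PID" > /sys/fs/cgroup/memory/bulk/tasks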

    Read the article

  • Is it possible to sync specific router settings across multiple routers?

    - by Betard Fooser
    I recently purchased a second Linksys wireless router, set up on the other side of my home, and I am wondering if it is possible to somehow sync particular settings between the two. For instance, what I am really after is the MAC filter list: I would like to be able to maintain the list on both routers without having to manually type in the field values. Maybe this isn't possible, or has an easy answer, but hopefully those of you who know will cut me a bit of slack. I tried to google the answer to this, of course, but it seems any search with the words "sync" and "router" and/or "wifi" results in pages of people having issues with syncing their iOS devices over Wi-Fi. I would say I have a decent amount of networking knowledge with regard to average home networks, and I imagine larger businesses and corporations must have a simpler way of maintaining things like this. Any insight to point me in the right direction will be much appreciated.

    Read the article

  • How to calculate required switch speed based on network usage?

    - by tobefound
    I have a 48-port HP ProCurve Switch 2610 (J9088A) that can handle 13.0 million PPS (packets per second) and offers wire-speed switching capacity of 17.6 Gbps. First off, what does that REALLY mean? Where do I start when trying to figure out whether my office (with 70 employees) will be well served by this switch? How do I calculate throughput based on an average user load of X MB per day? 90% of the folks will only be sending email, accessing random websites, etc.; the other 10% will be doing heavier tasks like moving image files (10 MB) across network shares, running constant external FTP streams through the switch to a server, etc. Is this switch good enough?
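
    A rough reading of those spec-sheet numbers (the port mix is inferred, so treat it as an assumption): they are consistent with 48 10/100 ports plus four gigabit-capable uplinks all running full duplex at line rate, since 48 x 0.2 Gbps + 4 x 2 Gbps = 17.6 Gbps, and at worst-case 64-byte frames that works out to roughly 13 million packets per second. In other words, the switching fabric is non-blocking and will not be the bottleneck. For sizing: 70 users doing mail and web generate on the order of tens of kbps sustained each, and even ten people copying 10 MB files at once only fill their own 100 Mbps ports (10 MB is about 80 Mb, roughly a second of transfer). The places to watch are the single port, ideally a gigabit one, where the file server and FTP target hang off the switch, and the WAN uplink, neither of which this switch itself limits.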

    Read the article

  • How do I lower idle cpu usage in ubuntu linux? Gnome or KDE Variants

    - by Jasen
    My question comes from a KDE desktop currently, but it also happens with the GNOME instance. When the machine is just sitting there, with only the CPU monitor widget running (no open windows, no background processes other than the desktop), my CPU is at ~20%. I want to know how to fix this, and possibly get better performance out of it. When running my Windows side, the CPU sits at zero, and I generally load new programs about 400 ms faster. With Windows 7 being as slow as it is, this is not acceptable. The widget is only set to check every 500 ms, so I'm almost completely sure it's not the widget. My system is a Gateway NV53: AMD Turion 2.0 GHz with 4 GB of installed RAM and a 500 GB HDD. Both Linux and Windows are 64-bit. Average RAM use on either system is about 1.4 GB for just the OS.

    Read the article

  • Why is Firefox so slow and heavy?

    - by Tony
    For some reason, when I follow links the pages load slowly and heavily. There is also a lot of lag between page loads: the page basically seems to freeze and then load all at once. I'm currently using Firefox 25, but when I use Chrome on the same machine, page loading is very fast and smooth. On average Firefox uses about 400,000 K of memory. Extensions: iMacros, Leethax, Adblock Plus 2.4, Adblock Plus Pop-up Addon 0.9.1. Computer stats: 6 GB RAM, Windows 7, Acer Aspire laptop, 500 GB HDD, Intel Core i3-2370M. How do I make Firefox load pages like Google Chrome, without so much freezing?

    Read the article

  • free -m output: should I be concerned about this server's low memory?

    - by Michael
    This is the output of free -m on a production database machine (MySQL). 83 MB of free memory looks pretty bad, but I assume the buffers/cache will be used instead of swap? [admin@db1 www]$ free -m total used free shared buffers cached Mem: 16053 15970 83 0 122 5343 -/+ buffers/cache: 10504 5549 Swap: 2047 0 2047 top output sorted by memory: top - 10:51:35 up 140 days, 7:58, 1 user, load average: 2.01, 1.47, 1.23 Tasks: 129 total, 1 running, 128 sleeping, 0 stopped, 0 zombie Cpu(s): 6.5%us, 1.2%sy, 0.0%ni, 60.2%id, 31.5%wa, 0.2%hi, 0.5%si, 0.0%st Mem: 16439060k total, 16353940k used, 85120k free, 122056k buffers Swap: 2096472k total, 104k used, 2096368k free, 5461160k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 20757 mysql 15 0 10.2g 9.7g 5440 S 29.0 61.6 28588:24 mysqld 16610 root 15 0 184m 18m 4340 S 0.0 0.1 0:32.89 sysshepd 9394 root 15 0 154m 8336 4244 S 0.0 0.1 0:12.20 snmpd 17481 ntp 15 0 23416 5044 3916 S 0.0 0.0 0:02.32 ntpd 2000 root 5 -10 12652 4464 3184 S 0.0 0.0 0:00.00 iscsid 8768 root 15 0 90164 3376 2644 S 0.0 0.0 0:00.01 sshd
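
    The line to read is "-/+ buffers/cache": 10504 MB genuinely used, 5549 MB available. The headline 83 MB "free" counts the 5343 MB of page cache and 122 MB of buffers as used, but the kernel hands those back on demand (83 + 122 + 5343 = 5548 MB actually available), and swap is essentially untouched, so the box is not under memory pressure. The one thing worth checking is that mysqld's 9.7 GB resident set matches the configured buffer pool and leaves headroom for per-connection memory.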

    Read the article

  • What could be wrong with my CD drive and how do you fix it?

    - by Jamie
    My CD drive hasn't worked in about 8 months. I was burning CDs in a hurry and pulled the disc out because it was taking too long (probably an average amount of time). I'm running a laptop (Toshiba Satellite A505-6965) that uses a slot-loading drive. Now my computer doesn't accept CDs. It used to pull them in, but it doesn't anymore. I've heard it make odd noises a few times, but that's it. The slot has these spongey things, and I've managed to peek into the drive with a flashlight and there's nothing in the way, really. Could someone explain the mechanics of what happened and whether it would be possible to fix it? If it'd be possible to fix it through Linux that would be great, since I keep getting the BSOD (0x0000007B) and am going to try reinstalling Windows 7. But I can't really do that since I don't have a USB drive larger than 2 GB (Windows is about 4 GB), so I'm relying on Linux ATM.

    Read the article

  • Why does HDTune report better performing drives 2 months after installing them?

    - by Rolnik
    OK, so this is really weird. I ran HDTune on a newly set-up home-built computer and got the following readings from my drives in mid-November, in MB/s: SSD 154, RAID1 87, RAID0 198 (software installs), RAID0 98 (swap drive). Today, in January, I run HDTune (same version) and get these results, in MB/s: SSD 186, RAID1 98, RAID0 241, RAID0 98 (swap drive). Here are more details that HDTune reports on the SSD drive: HD Tune: OCZ-VERTEX Benchmark: Transfer Rate Minimum: 135.4 MB/sec, Transfer Rate Maximum: 219.4 MB/sec, Transfer Rate Average: 185.7 MB/sec, Access Time: 0.1 ms, Burst Rate: 187.3 MB/sec, CPU Usage: -1.0%. To get to my question: why are my hard drives improving in performance? Most of my logical drives are in some form of RAID, except for the SSD. Will this performance ever deteriorate? Note: none of my drives is a hybrid drive that uses some form of SSD to enhance reads/writes on actual platters.

    Read the article

  • Linux script that indicates time the server was offline?

    - by RD
    Below is data taken from my dedicated server: root@namhost [~]# last root pts/0 XXX Tue May 18 09:46 still logged in root pts/0 XXX Mon May 17 08:51 - 12:18 (03:26) reboot system boot XXX Mon May 17 08:49 (1+00:59) root pts/0 XXX Sun May 16 11:50 - 13:15 (01:25) root@namhost [~]# last | grep "system boot" reboot system boot 2.6.18-164.15.1. Mon May 17 08:49 (1+01:02) reboot system boot 2.6.18-164.el5 Tue May 11 04:20 (7+05:31) reboot system boot 2.6.18-164.el5 Tue May 11 03:53 (7+05:58) reboot system boot 2.6.18-128.el5 Mon Oct 5 22:40 (-3:-50) .... I need a script that I can run on an hourly basis that will: 1. calculate the total downtime since the first date, 2. calculate the overall downtime percentage, and 3. store this data in a file at /home/bla/file.txt in the following format: TotalDowntime=03:02:02 Average=0.01% How do I go about doing this?
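
    Parsing last/wtmp for this is fragile (the log rotates and the duration column changes format), so one simpler alternative, sketched below under the assumption that cron can run every minute, is to record a heartbeat and count missing minutes as downtime from now on. The log and output paths are placeholders:

        #!/bin/bash
        # Heartbeat recorder -- add to cron as:  * * * * * /root/bin/heartbeat.sh
        date +%s >> /var/log/heartbeat.log

        #!/bin/bash
        # Hourly reporter -- any gap longer than 90 s between heartbeats counts as downtime
        LOG=/var/log/heartbeat.log
        OUT=/home/bla/file.txt
        awk -v now="$(date +%s)" '
            NR == 1 { first = $1; prev = $1; next }
            {
                gap = $1 - prev
                if (gap > 90) down += gap
                prev = $1
            }
            END {
                total = now - first
                printf "TotalDowntime=%02d:%02d:%02d\n", down/3600, (down%3600)/60, down%60
                printf "Average=%.2f%%\n", (down/total)*100
            }
        ' "$LOG" > "$OUT"

    This only measures availability going forward; reconstructing historical downtime would still mean correlating the reboot lines from last against shutdown times, which wtmp does not always record.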

    Read the article

  • SQLIO help decipher output

    - by SQL Learner
    When load testing a SQL Server box with the following (the test file is 25 GB): sqlio -kW -t8 -s360 -o8 -frandom -b8 -BH -LS g:\testfile.dat > result.txt sqlio -kW -t8 -s360 -o8 -frandom -b64 -BH -LS g:\testfile.dat >> result.txt sqlio -kW -t8 -s360 -o8 -frandom -b128 -BH -LS g:\testfile.dat >> result.txt sqlio -kW -t8 -s360 -o8 -frandom -b256 -BH -LS g:\testfile.dat >> result.txt Can anyone help me decipher the output? I do not understand the min and average latency. What do these numbers mean? IOs/sec: 10968.80 MBs/sec: 685.55 latency metrics: Min_Latency(ms): 1 Avg_Latency(ms): 5 Max_Latency(ms): 21

    Read the article

  • Process of carrying out a BER test

    - by data
    I am subscribed to an ISP supplying a 3 Mbps ADSL line. Lately (for the last 4 weeks) speeds have dropped from the usual average downstream speed of ~250 kbps to just 0.14 Mbps (according to speedtest.net), and employees are complaining about lack of access to the server. I have been calling customer support and logging calls for the last 3 weeks, but they have been unable to determine the source of the problem other than carrying out a few bitstream tests and checking the DHCP renewal times. I am going to call back and suggest carrying out a BER (bit error rate) test. What type of equipment is needed to carry out this test? I have access to a wide range of Cisco networking equipment. Other: we don't need a leased line, as there are fewer than ten employees.

    Read the article

  • Swap 95%+ used, but a lot of free RAM

    - by Paolo_NL_FR
    I am running CentOS 5.8 with cPanel. Lately I am getting reports that my swap is full, but there is a lot of free memory to use. top - 10:33:43 up 133 days, 17:00, 1 user, load average: 0.05, 0.03, 0.05 Tasks: 170 total, 1 running, 169 sleeping, 0 stopped, 0 zombie Cpu(s): 2.1%us, 0.5%sy, 0.0%ni, 97.2%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st Mem: 24726100k total, 8255368k used, 16470732k free, 599560k buffers Swap: 1046520k total, 984740k used, 61780k free, 3641828k cached How do I solve this? The unused RAM should be used instead of swap. Or should I increase the swap (and how do I do that)? Thanks
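
    Linux does not proactively move pages back out of swap; the ~960 MB was most likely swapped out during an earlier memory spike and simply stays there even though plenty of RAM is now free. If nothing is actually slow this is mostly cosmetic, but you can clear it and make the kernel less eager to swap in future. A minimal sketch (the swappiness value is illustrative):

        # Push everything out of swap and back into RAM (needs enough free memory, which this box has)
        swapoff -a && swapon -a

        # Make the kernel less inclined to swap out anonymous pages (default is 60)
        sysctl -w vm.swappiness=10
        echo "vm.swappiness = 10" >> /etc/sysctl.conf

    With 24 GB of RAM and 1 GB of swap there is no pressing need to grow the swap space; if you still want to, a swap file created with dd, mkswap and swapon is the usual route on a running system.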

    Read the article
