Search Results

Search found 6355 results on 255 pages for 'slow downs'.


  • Web browsing through SSH tunnel gets stuck/clogged

    - by endolith
    I use tools like Tunnelier to log into my home Tomato router through SSH, and then use it as a proxy for web browsing, a tunnel for Remote Desktop/VNC, etc. Most days it works great, but some days every page I try to view gets stuck, as if the tunnel were clogged. I load a web page and it seems to be loading, then stops, with the little loading icon spinning and nothing happening. I've refreshed the page, rebooted the router, rebooted the other computers on my home network, turned off any bandwidth-hogging services on them, and turned on QoS on the router to prioritize SSH, but I don't understand what's getting stuck. Rebooting or disconnecting/reconnecting the SSH tunnel improves responsiveness for a minute, but then it gets clogged again. It also seems to help if I don't use the tunnel for a few minutes: it will be responsive for a bit and then clog again. Trying to open a terminal console from Tunnelier is also unresponsive, so it's not just a web browsing problem. Likewise, connecting to http://192.168.1.1 in the browser (the router's web config, through its own tunnel) is also slow/laggy/halting. The real-time bandwidth reported by the router is nowhere near my DSL connection's limits, though it does show big spikes during the laggy times, and the connection is responsive when it shows low bandwidth. How do I troubleshoot something like this?
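    One way to separate "the tunnel is dead" from "the tunnel is slow" is to run the client with verbose output and keepalives, then time a raw fetch through the proxy while browsing feels clogged. A sketch using the OpenSSH client (Tunnelier exposes equivalent keepalive settings in its GUI; the port, host, and URL are placeholders):

        # Verbose SOCKS tunnel with keepalives; stalls show up in the debug output
        ssh -v -D 1080 -o ServerAliveInterval=10 -o ServerAliveCountMax=3 root@router

        # In another window, measure download speed through the proxy
        curl --socks5 localhost:1080 -o /dev/null -w '%{speed_download}\n' http://example.com/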

    Read the article

  • Random server lag, no CPU/mem/pagefile usage

    - by Kev
    We have a fairly new server running Windows 2003 SP2, and for the past few days we've noticed random slowdowns. When I'm logged into the server over Remote Desktop while this is happening, or physically sitting at it, suddenly everything becomes extremely laggy: any UI element I try to interact with takes upwards of ten seconds to react, and then responds very slowly. A minute later everything is quite snappy again. During this I have Task Manager minimized to the tray, and it shows no CPU usage; opening it right afterwards shows very little CPU usage on the graph, and no memory or pagefile usage above normal (normal being 1.5 GB of memory free). That's what I see logged into the server, and then users start calling to say that anything to do with our server is slow, timing out, and failing. There are no events in the Event Viewer around the times this happens. The context I'm working in (last thing I clicked, etc.) seems different every time: different programs active, different combinations of programs open, never anything particularly stressful (like adding an event entry to a Cobian Backup configuration, or editing text in TextPad, which has been exceptionally stable in my extensive usage of it). I would've thought it was just the server, but a family member's home PC (entirely separate) running Windows XP SP3 had the same thing happen a few times last night. Is this some new behaviour introduced by the latest Windows Updates? Either way, where do I even start to look when nothing seems to be chewing up resources?
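    When the CPU is idle but the UI hangs, disk latency is a common suspect that the 2003-era Task Manager doesn't surface. A sketch using the built-in typeperf to log disk counters around the slowdowns (counter names as they appear on Windows Server 2003; sustained queue lengths or transfer times in the tenths of a second point at the disk or its driver):

        typeperf "\PhysicalDisk(_Total)\Avg. Disk Queue Length" "\PhysicalDisk(_Total)\Avg. Disk sec/Transfer" -si 1 -o disklog.csv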

    Read the article

  • moving files and directories between two machines, via a third, preserving permissions and usernames

    - by Jarmund
    The situation is as follows:

      - Machine A has a file repository accessible via rsync.
      - Machine B needs the above-mentioned files with all permissions and ownerships intact (including groups etc.).
      - Machine C has access to both A and B, but has a completely different set of users.

    Normally I would just rsync everything over directly between A and B, but due to severely limited bandwidth at the moment I need something different, as rsync times out after building the list of the 430 files (49 MB uncompressed; can be compressed down to ~7 MB). What I've tried so far: rsync everything over from A to C, tar it, copy the tarball over, and then untar it; however, this messes up the ownership and/or the permissions. To rsync from A to C, I run this command:

        rsync --numeric-ids --password-file=/root/rsync_pwd_file -oaPvu rsync://[email protected]/portal_2/ ./portal_2/

    From the looks of things, the files do end up on C with the correct ownerships/permissions/flags/everything (not 100% sure, though; are there any more switches I can throw in there? Did I miss something?). Copying the tarball over is simple enough (slow as a one-legged turtle due to the bandwidth, but it checksums out all right). What I'm unsure of is the flags and switches for creating and extracting the tarball, so could someone please provide the full commands for creating a tarball from /root/portal_2 on machine C (with everything intact) and extracting it into /var/ex/portal_2 on machine B? Also, are there any other approaches worth mentioning that would let me do this? I have root access to A and C, whereas I only have rsync access to B. PS: I'm running rsync v2.6.9 on machine B, and unfortunately I do not have the opportunity to upgrade to v3.
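    Since the user sets on B and C differ, the key is to keep everything numeric. A sketch with GNU tar (paths as given in the question; ownership is only preserved on extraction when running as root):

        # On machine C: pack /root/portal_2 preserving permissions, storing numeric UIDs/GIDs
        tar -C /root --numeric-owner -czpf portal_2.tar.gz portal_2

        # On machine B, as root: unpack into /var/ex, keeping the numeric owners
        tar -C /var/ex --numeric-owner -xzpf portal_2.tar.gz

    Without --numeric-owner, tar maps the stored user/group names to machine C's (different) accounts, which matches the corruption described above.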

    Read the article

  • Why is my browser using so much memory?

    - by Steve
    Hi. I've recently had problems with Firefox running very slowly when I have many tabs open, say 20 tabs; my whole system would slow down. I decided to give Google Chrome a try, and it started out fine, but lately it too slows down my whole system. Looking at Task Manager, chrome.exe is using about 250 MB of memory across about six different entries; however, when I shut Chrome down, memory usage drops by about 600 MB. How can this be? (A screenshot shows the drop in memory usage after ending Chrome.) When my system locks up with Chrome having many tabs open, it takes 10 seconds to load the Start Menu, 10 seconds to expand All Programs and each folder and subfolder, 30 seconds for a program to be highlighted under my mouse, and 10 seconds to switch to Notepad. Why does Chrome appear to use so much more memory than Task Manager indicates? Why is my pagefile being used when I have around 1.1 GB of memory? Can I set Chrome to run in RAM and not in the pagefile? How can 20 tabs use 600 MB? That's 30 MB per tab. Thanks for your help.
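    Part of the gap is that the XP-era Task Manager's default "Mem Usage" column is each process's working set, which both omits memory that has been paged out and counts shared pages in every process that maps them, so neither the per-process figures nor their sum equals what is freed on exit. To list all Chrome processes in one place for totalling (built-in Windows tool):

        tasklist /FI "IMAGENAME eq chrome.exe" /FO CSV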

    Read the article

  • My system is always disk-bound (the disk light is always on). Why is this?

    - by Scoobie
    I have been given a laptop by the good folks at my company on which to do my work (Java development); I usually use Eclipse as my primary development platform. The laptop is a Dell D830 running Windows 7 32-bit; although the processor supports a 64-bit instruction set, licensing limits me to the 32-bit OS. The HDD is a WD1600BEVT (Western Digital). I have noticed that my disk is always very slow. Windows boots to the Ctrl+Alt+Del prompt quickly, but once I log in, the disk light stays on and it takes about 4 minutes before the laptop is usable. Questions: Is this expected behavior? What can I do to examine the disk and determine the cause of the problem? What can I do to improve my disk's performance? Any optimizations you may be able to suggest? Other questions: Some have suggested running Process Monitor (from Sysinternals), but how would I get a log covering startup? And instead of trying to fix this myself, should I simply push this onto the system administrator? Thanks all.
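    Process Monitor can capture activity from before logon: enable Options > Enable Boot Logging in its menu, reboot, then run Procmon again and it offers to save the boot log. For a quick disk baseline, Windows 7 also ships an assessment tool, and Resource Monitor breaks down per-process disk activity while the light is on:

        winsat disk -drive c
        resmon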

    Read the article

  • Ubuntu in VirtualBox File Modified Time in Future and PHP slow file operations

    - by user1750
    For some reason, some of my files have a last-modified date in the future, and file operations in PHP are SUPER slow. For example, rebuilding the Symfony2 cache can take over 40 seconds (it takes 1-2 on my MacBook Pro). Notice the time for ListingsCRUDController.php: the listing just says "2012". To see the date more clearly I ran ls --time-style="full-iso" -l, and it shows that this file's last-modified date is ~5 hours in the future relative to the system time. To make things more confusing, the system will intermittently speed up: suddenly my app starts serving requests in 1-2 seconds (down from 40) for no apparent reason; I don't do anything to my code or system config, it just changes. Also, during a slow PHP request, the php5-fpm process (nginx) uses 100% of the CPU for the duration of the request. This is the second VM this has happened on, and I need to know why it's doing this; it has become unusable.

    Information about my setup:

      - VirtualBox 4.2.0
      - Host: MacBook Pro
      - Guest: Ubuntu Server 12.04
      - Package dkms is installed
      - Timezones match for Ubuntu and PHP

    Things I've tried:

      - Both Apache and Nginx
      - APC enabled and disabled
      - Xdebug enabled and disabled
      - 1 processor up to 4 processors
      - 1 GB memory up to 4 GB memory
      - Installing Ubuntu with both the regular kernel and the VM kernel
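    Mtimes hours in the future usually mean the guest clock itself drifted ahead (files get stamped by the fast clock, then the clock is yanked back by a sync). A quick way to check and correct the skew from inside the guest, assuming ntpdate is installed:

        date -u                          # guest clock
        sudo ntpdate -q pool.ntp.org     # query only: prints the offset without setting anything
        sudo ntpdate pool.ntp.org        # one-off resync if the offset is large

    With the VirtualBox Guest Additions installed, the host-to-guest time sync service should keep the clock from drifting again.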

    Read the article

  • SQL server queries are really slow only on first run

    - by JoelFan
    Somewhat strange problem: when I start my .NET app for the first time after rebooting my machine, the SQL Server queries are really slow. When I pause the debugger, I notice it's hanging on getting the response to the query. This only happens when connecting to a remote SQL Server (2008); if I connect to one on my local machine, it's fine. Also, if I restart the app it's fast, even against the remote SQL Server, and subsequent runs are also fine; the only problem is connecting to a remote SQL Server for the first time after rebooting my machine. What's more, I have noticed this same exact behavior with a 3rd-party app (also .NET) that also connects to a remote SQL Server. Another piece of info: this has only started happening since I upgraded my machine from XP to Win7 (64-bit), and other developers on my team who upgraded to Win7 are seeing the same behavior (both with the app we're developing and the 3rd-party .NET app). (Copied from http://stackoverflow.com/questions/2014814/sql-server-queries-are-really-slow-only-on-first-run )
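    Since this appeared with the XP-to-Win7 move, one suspect worth ruling out is TCP receive-window auto-tuning, which XP did not have and which misbehaves with some older routers and server stacks. A reversible experiment from an elevated prompt (this is a diagnostic, not a recommended permanent setting):

        netsh interface tcp show global
        netsh interface tcp set global autotuninglevel=disabled
        rem retest the first-run query, then restore the default:
        netsh interface tcp set global autotuninglevel=normal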

    Read the article

  • How do I troubleshoot a slow hard drive?

    - by Bruce Connor
    My computer is suffering slow-downs, and I'm not surprised (it's around 6 years old). Here's what I've verified:

      - They are not very frequent (only a couple of times a day).
      - When they happen, a single application will hang for 10-60 seconds, while the rest don't hang but also get slow.
      - Even as it is happening, CPU usage stays low.
      - It happens to applications such as a text editor, Firefox, and Skype.
      - It never happens to some applications (such as games) which I use for hours under heavy CPU load.

    Also of note:

      - The graphics card and PSU are new (around a year old).
      - Though I have a decent amount of software installed right now, this was happening even right after I reinstalled Windows.
      - This HDD has been through many partitioning schemes and a few heavy operations (such as moving around 200 GB of data).

    Because of the above, I am already 70% sure the problem is with the hard drive. Before I replace it, however, I want to rule out other, less likely possibilities (such as RAM, software, or the PSU). I don't have the money to replace the entire box right now, but I can easily replace one of the components. I've read several questions (such as this one) which give general guidance on troubleshooting an unknown issue; that is not what I'm looking for here. My main question is: what tests or benchmarks can I run to verify I have a problematic hard drive? I don't need to solve this problem; I am content with just making sure it's the hard drive. I could borrow a newer hard drive from a friend and see if it gets better: a positive result would rule out all other components, but it wouldn't rule out a software issue (since the new drive won't have any of the software I use daily). Running on Windows/Linux.
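    Two quick checks from the Linux side fit this exactly (smartmontools and hdparm; /dev/sda is an assumption, substitute your drive). Intermittent multi-second hangs with low CPU match a drive retrying bad sectors, and reallocated or pending sectors in the SMART output are the classic confirmation:

        sudo smartctl -a /dev/sda        # health, error log; watch Reallocated_Sector_Ct, Current_Pending_Sector
        sudo smartctl -t long /dev/sda   # extended self-test; read the result later with -a
        sudo hdparm -tT /dev/sda         # cached vs. buffered sequential read speed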

    Read the article

  • Slow Citrix connection related to mapped network drives

    - by George
    I have this weird issue with Citrix being slow, and maybe users are just being a little dramatic, but I am curious as to why it happens. A little background: Citrix runs off a Windows 2003 server, and the TS profiles and file server were located on that same server until recently, when we moved our file server to a new server with tons of space. So now we have Citrix on one server, TS profiles on another, and the file server on a third, and we use logon scripts to map home drives, a shared drive, and so on. Up until we made the file server move, the logon process took several seconds, and most users couldn't even notice the logon script being executed as they logged on. Now it takes upwards of several minutes, and users can see the logon script executing at a slow pace, one line at a time. The only new variable in this whole scenario is the new file server. All the servers are physically located in the same location and on the same subnet. So I guess my question is: can anyone explain the sudden sluggishness, and what tools can I use to troubleshoot the issue? Thanks!
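    One low-tech way to find the slow line is to timestamp the logon script itself. A sketch in batch (the server, share, and drive names are placeholders for whatever the script actually maps):

        echo %TIME% start    >> \\fileserver\logs$\%USERNAME%.log
        net use H: \\fileserver\home\%USERNAME%
        echo %TIME% mapped H >> \\fileserver\logs$\%USERNAME%.log
        net use S: \\fileserver\shared
        echo %TIME% mapped S >> \\fileserver\logs$\%USERNAME%.log

    If each net use is slow, the usual suspects are name resolution (DNS/WINS entries still pointing at the old server) rather than Citrix itself.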

    Read the article

  • BackupExec 12 + RALUS - VERY slow backups

    - by LVDave
    We use Backup Exec 12 and the Remote Agent for Linux/Unix Servers (RALUS) to back up a large RHEL5 system. For various reasons we need to do a daily working-set job, and these working-set jobs run abysmally slowly. The link between the target machine and the BE server is gigabit, and any other type of job runs at 1-3 GB/min; the working-set jobs start out at perhaps 40 MB/min and, over the course of the backup, slowly drop so low that the BE job-rate display in "current jobs" goes blank. Since we usually only capture changed files for one day, the job is small, finishes overnight, and we don't worry about the slowness; but we had some issues with the backup server and missed about 6 days of fairly heavy work on the Linux box, so this working-set job will be a doozy. We have support with Symantec, and I've pestered them a lot about this; they had me run RALUS in debug mode, I sent them that log and a VXgather from the BE host, and they had no fix or workaround. To give an idea: the working-set job in question has been running for the last 3.5 hours and has backed up just under 10 megabytes. I'm posting this here to see if anybody in the real world has seen this and/or has any ideas what might be causing these abysmally slow jobs, since Symantec seems to be clueless.
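    To rule the network in or out before blaming RALUS, it's worth comparing its rate against a plain bulk copy from the same box over the same link (the target host is a placeholder; any other Linux machine on the gigabit segment will do):

        dd if=/dev/zero of=/tmp/big.bin bs=1M count=1024   # 1 GB test file
        scp /tmp/big.bin user@otherhost:/dev/null          # note the rate scp reports

    If the raw copy runs near wire speed, the bottleneck is in how the working-set job selects and enumerates files (per-file overhead) rather than in the pipe.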

    Read the article

  • Why is my connection slow?

    - by Jay R.
    I have a Dell Precision T5400 with a Broadcom 1 Gb onboard NIC. For some strange reason, when I access machines on our local network, the best I can get is around 125 KB/s download speed; my laptop, with an onboard 10/100 Mb NIC, usually gets 300 KB/s or better from the same network resource. Both machines are plugged into the same 1 Gb switch, which connects to our local network wall jack at 100 Mb half duplex; a printer is plugged into the same switch at 100 Mb full. The resource I'm using for the test is a 30 MB zip file copied from a Jetty web server that runs as part of a CruiseControl installation. The CruiseControl machine runs Windows XP with full real-time antivirus plus Altiris patch management and inventory running, and that on its own eats some of the download speed. I've seen the laptop reach multiple MB/s download speeds before, but the desktop never seems to get past 125-130 KB/s. On Windows XP, before I upgraded the driver on the desktop, it was that slow; in Fedora it is still slow, even though it appears to use the same driver version as the upgraded Windows driver. The upgraded Windows driver is faster, but still not nearly as fast as the laptop. What gives? Any insight to improve the situation would be appreciated. Could it be that the Broadcom board just isn't that good, or that the Linux driver is not as good as the Windows one?
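    A 100 Mb half-duplex wall jack upstream of a gigabit switch is a classic setup for a duplex mismatch, which produces exactly this pattern: low throughput with no outright failure. From the Fedora side it's quick to check what the NIC actually negotiated and whether errors are accumulating (eth0 is an assumption):

        ethtool eth0             # look at Speed, Duplex, and Auto-negotiation
        ip -s link show eth0     # RX/TX error and collision counters; rising counts suggest a mismatch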

    Read the article

  • How do I fix a super slow MacBook?

    - by MakingScienceFictionFact
    I'm running a black MacBook 4.1: Intel Core 2 Duo @ 2.4 GHz, 2 GB RAM, 250 GB hard disk drive, 800 MHz bus. It's about three years old and in excellent shape externally; I treat this thing like a baby. It used to run awesomely, but now it's super slow at everything, and I get the spinning pizza of death constantly. It takes a long time to boot up or load any program, even Safari and iTunes; iPhoto is terribly slow; the Internet doesn't work properly, and it reminds me of a buggy PC. I've formatted it and reinstalled Mac OS X 10.6 (with all updates), and I've done the disk repair process. As an iOS developer this is driving me crazy, though luckily I have an iMac to work on during the day, which is fast. I'm ready to format it again, but that didn't work last time. After the last format I copied files back from an external drive, so maybe the offending files were hidden in there somewhere. Here are the hard disk drive and RAM specifications (the machine is upgradeable to 4 GB of RAM). Hard disk drive: the Fujitsu Mobile MHY2250BH is a standard 250 GB, 5400 RPM drive with a 150 MB/s burst transfer rate and an 8 MB buffer. RAM: two sticks of 1 GB DDR2 SDRAM at 667 MHz.
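    Constant beachballing across every app, surviving a clean reinstall, points more at the hardware (usually the disk) than at copied-back files. Two built-in checks from Terminal while it is being slow (command names as on Snow Leopard):

        top -o cpu                  # is anything actually burning CPU?
        sudo fs_usage -f filesys    # live per-process file I/O; look for one process thrashing the disk

    Disk Utility's S.M.A.R.T. status line on the drive ("Verified" vs. "Failing") is also worth checking before another reformat.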

    Read the article

  • Find slow network nodes between two data centers

    - by 2called-chaos
    I've got a problem with syncing a big amount of data between two data centers. Both machines have a gigabit connection and are not fully occupied, but the fastest I am able to get is something between 6 and 10 Mbit, which is not acceptable! Yesterday I ran some traceroutes which indicated huge load on a LEVEL3 router, but the problem has existed for weeks now and the high response time is gone (20 ms instead of 300 ms). How can I trace this to find the actual slow node? I thought about a traceroute with bigger packets, but will this work? In addition, this problem might not be related to one of our servers, as there are much higher transmission rates to other servers or clients; actually, office-to-server is faster than server-to-server! Any idea is appreciated ;)

    Update: We actually use rsync over ssh to copy the files. As encryption tends to have more bottlenecks, I tried a plain HTTP request, but unfortunately it is just as slow. We have an SLA with one of the data centers. They said they already tried to change the routing, because they say the problem is related to a cheap network that the traffic gets routed through. It is true that it routes through a "cheapnet", but only in the other direction: our direction goes through LEVEL3 and the other way goes through lambdanet (which they say is not a good network). If I got it right (I'm a network intermediate), they simulated a longer path to force routing through LEVEL3, and they announce LEVEL3 in the AS path. I basically want to know if they're right or just trying to abdicate their responsibility. The thing is that the problem exists in both directions (over different routes), so I think it is the responsibility of our hoster. And honestly, I don't believe that there is a DC-to-DC connection which can only handle 600 KB/s to 1.5 MB/s for weeks! The question is how to detect WHERE this bottleneck is.
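    Two tools give more signal than one-off traceroutes here: mtr, which keeps running per-hop loss/latency statistics, and iperf, which measures raw TCP throughput with no disk, ssh, or rsync in the way (hostnames are placeholders; run from both ends, since the routes differ per direction):

        mtr --report --report-cycles 100 other-dc-host

        iperf -s                            # on one data center
        iperf -c other-dc-host -t 30 -P 4   # on the other: 30 s test, 4 parallel streams

    If a single stream is slow but several in parallel fill the pipe, the path is dropping or delaying just enough packets to keep one TCP window small, which fits a congested transit hop rather than a host problem.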

    Read the article

  • MS Word custom dictionary making spellcheck slow - ideas?

    - by ezuk
    I have a user who edits technical materials. She uses MS Word's custom dictionary all the time for spelling; it has grown very large and is now making spellcheck very slow. All of the advice I've read online says to disable the custom dictionary. This is an easy solution, but it is not workable for the user, because she actually needs this dictionary. So: is there any way to optimize the custom dictionary and/or Word itself, so that a large dictionary file doesn't slow things down quite so badly? Many thanks.

    Update after suggestions: I ran contig on the file, and it reports just 1 fragment, so I don't think that's the issue. The file is 9.95 KB: 1,117 lines, each consisting of just a single word. I viewed the file in Notepad and none of the lines seems corrupted, strange, or overly long (no line seems to be over 10 characters or so). Both of your suggestions were helpful, so I will upvote both; any further tips would be most welcome.
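    The dictionary is a plain one-word-per-line text file, so it can at least be sorted and de-duplicated safely, which is about all the "optimizing" the file itself allows. A PowerShell sketch (the path is the usual Office 2007+ location and an assumption; back the file up first, and note that newer Word versions expect the file in Unicode/UTF-16):

        $dic = "$env:APPDATA\Microsoft\UProof\CUSTOM.DIC"
        Copy-Item $dic "$dic.bak"
        Get-Content $dic | Sort-Object -Unique | Set-Content $dic -Encoding Unicode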

    Read the article

  • Microsoft Mouse and Keyboard Center - Slow response for App-specific shortcuts

    - by Darrel Hoffman
    So a few months ago, I bought a new MS mouse, and was surprised that they'd discontinued Intellipoint in favor of this Microsoft Mouse and Keyboard Center. It seems to have the same functionality underneath all the bloat, but there's a very serious drawback - when I set up application-specific functions for the extra buttons on the mouse, they work, but sometimes with a very long delay, like up to a minute or more. For example, I often set up the left side button as an "Undo" in various programs for convenience. But sometimes, when I try to use that Undo button, nothing happens, so I'm forced to use the standard Ctrl-Z or whatever. But then, a minute or so later, it suddenly remembers that I hit that button a while back, and calls the Undo unexpectedly on something entirely different. It's infuriating. No modern computer function should be this slow. It's not the software or the computer itself, because doing an Undo via Ctrl-Z or the menu still works instantly. It's very definitely a side-effect of delayed response to the mouse button. Usually after it delays the first time, it'll work quickly after that, but if you haven't used a given shortcut in several minutes, it "forgets" again and you get another inexplicably long delay. Intellipoint never had this problem, but it's not supported any more, and not compatible with the newer mice. Has anyone else noticed slow-downs with MS M&K C and app-specific shortcuts? Any ideas how to get around this? I use these shortcuts extensively in my workflow and it's just entirely unacceptable to have such a long delay in what should be a pretty basic feature.

    Read the article

  • Ruby installed on Ubuntu 10.10 slow on one machine but not the other

    - by Aaron Jensen
    I have a machine that was provisioned several months ago. RVM was used to install Ruby 1.9.3-p125 as well as 1.9.3-p125-perf. When I compared raw Ruby performance against another, identical machine, the older machine smoked the new one. For example:

        ================================================================================
        With in-block needle calculation
        ================================================================================
        Rehearsal ----------------------------------------------
        detect     3.790000   0.000000   3.790000 (  3.800895)
        each       2.410000   0.000000   2.410000 (  2.420860)
        any        3.960000   0.000000   3.960000 (  3.972099)
        include    1.440000   0.000000   1.440000 (  1.442862)
        ------------------------------------ total: 11.600000sec

    versus:

        ================================================================================
        With in-block needle calculation
        ================================================================================
        Rehearsal ----------------------------------------------
        detect    10.740000   0.000000  10.740000 ( 10.769366)
        each       6.080000   0.010000   6.090000 (  6.106323)
        any       10.600000   0.000000  10.600000 ( 10.641606)
        include    4.160000   0.000000   4.160000 (  4.171530)
        ------------------------------------ total: 31.590000sec

    I attempted to reinstall 1.9.3-p125 with RVM on the fast machine, and that Ruby is now slow too. It's as if something changed in RVM, or I installed some package that makes compiled versions of Ruby perform significantly worse. I know this is a tough question to answer, but what should I look into in order to track down why the performance has suffered so much?

    Edit: I just attempted an install with ruby-build, and the version it produced was fast. Something RVM does when building in my environment is slow.
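    Since a ruby-build install is fast and an RVM build is slow, comparing how each binary was compiled is a reasonable first step. The flags a Ruby was built with are recorded in RbConfig, so running something like this under each install (via rvm use, etc.) would expose a difference such as a missing -O2 or stray debug flags:

        ruby -rrbconfig -e 'puts RbConfig::CONFIG["CFLAGS"]'
        ruby -rrbconfig -e 'puts RbConfig::CONFIG["configure_args"]'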

    Read the article

  • MySQL server high traffic makes websites really slow or unable to load

    - by Holapress
    Lately we have been having a lot of problems with our MySQL server: websites become really slow, or can't be loaded at all. The server is a dedicated server that only runs our MySQL database. I have been running some tests using a profiler (JetProfiler) and a stress-testing tool (loadUI). If I use loadUI to open 50 simultaneous connections to one of our websites that runs a fairly big query, the website already becomes unable to load. One of the things that worries me is that JetProfiler always shows a Threads_connected of 1.00, and it seems that when it hits around 2.00 I'm unable to connect. The three big peaks are from loadUI tests: the first with 15 simultaneous connections, which left the website loadable but really slow; the second with 40 simultaneous connections, which already made it impossible to load; and the third with 100 connections, which also kept it from loading. Another thing that worries me is that JetProfiler says all the queries used are full table scans; could that be the problem? The test website runs three queries: one for a menu that outputs around 1,000 rows, one for the ads with around 560 rows, and a big one to get posts with around 7,000 rows (see screenshot below). I have also monitored the CPU of the server and there seems to be no problem there; even when loadUI opens a lot of connections, the CPU stays low. I can't figure out the main cause of the websites being unable to load under high traffic, so if anyone has suggestions for testing, or ideas about what might cause the problem, please let me know.
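    Full table scans on every query would explain connections piling up under load: each request holds its connection while MySQL walks whole tables. EXPLAIN shows whether an index is used; the table and column names below are made up for illustration, substitute the real ones:

        EXPLAIN SELECT * FROM posts WHERE category_id = 3 ORDER BY created_at DESC LIMIT 20;
        -- "type: ALL" in the output means a full scan; an index covering the
        -- filtered and sorted columns usually fixes it:
        ALTER TABLE posts ADD INDEX idx_category_created (category_id, created_at);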

    Read the article

  • Touchpad scroll slow and jumpy

    - by IR
    I have a laptop with a Synaptics touchpad, running Windows 7 x64. When I use the scrolling region of the touchpad in some applications, for example Visual Studio 2008, Notepad, and Windows Media Player 12, scrolling is very slow: if I pull along the edge of the touchpad slowly, the program scrolls one row at a time (regardless of the number-of-lines setting in the mouse configuration), but if I pull quickly, the program instantly jumps about 20 rows, making it way too fast. In some applications, like Firefox, scrolling works as expected. Changing the touchpad's scroll-speed setting does not help: making it slower avoids the 20-row jump but is then horribly slow, and making it faster causes the jumps all the time. I have tried both the generic Synaptics drivers and the "special" drivers HP provides, but both have the same problem (except that the generic one can't adjust the scrolling speed, which didn't help anyway); with Windows' generic drivers the scrolling region doesn't work at all. Other mice I've tried with a scroll wheel work as they should.

    Read the article

  • Copy/paste speed very slow for a large number of files on Windows [closed]

    - by Arno2501
    I've run the following test: I created a folder containing 15,000 files of 400 bytes each, using this batch script:

        @ECHO off
        SET times=15000
        FOR /L %%i IN (1,1,%times%) DO (
            fsutil file createnew filename%%i.txt 400
        )

    Then I copied it on my Windows computer using this command:

        robocopy LargeNumberOfFiles\ LargeNumberOfFiles2\

    After it completed, I could see that the transfer rate was 915,810 bytes/sec, which is less than 1 MB/s; it took several seconds to copy 7 MB. Please note that this is very slow. I tried the same with a folder containing a single 50 MB file, and the transfer rate was 1,219,512,195 bytes/sec (yes, over 1 GB/s): instantaneous. Why does copying a large number of files take so much time and so many resources on a Windows filesystem? Note that I tried the same thing on a Linux system running on the same computer in a virtual machine (VMware Player) with an ext3 filesystem, using the cp command, and the copy was instantaneous. Also note the following:

      - no antivirus;
      - I've tested this behaviour on multiple Windows computers (always NTFS) and always get comparable results (transfer rate under 1 MB/s, 7-8 seconds on average to copy 7 MB);
      - I've tested on multiple Linux ext3 systems and the copy is always instantaneous for that amount (15,000 files of 400 bytes).

    The question is about understanding what makes the Windows filesystem so slow at copying large numbers of files compared to, for instance, a Linux one.
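    Most of the time goes into per-file overhead (create, set attributes, close; each a round trip through NTFS and its MFT) rather than moving bytes, which is why one 50 MB file is fast and 15,000 tiny ones are not. On Windows 7 and later, robocopy can at least overlap that overhead across threads, a hedged mitigation rather than an explanation:

        :: /MT is available in the Windows 7+ robocopy (8 threads if no count is given)
        robocopy LargeNumberOfFiles\ LargeNumberOfFiles2\ /MT:32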

    Read the article

  • PHP pages working slow from time to time

    - by user1038179
    I have a VPS with a limit of 2 GB of RAM and 8 CPU cores, and five sites on it (one of them just for testing, with no visitors except me). All five sites are image galleries, like wallpaper sites. Last week I noticed a problem on one site (the main domain, used for name servers, and also the one with the most traffic). That site has two image galleries: an old static HTML gallery made a few years ago, and the main one, powered by the ZENphoto CMS. I also have that same gallery CMS on two other sites on the same VPS (one running site and one testing site), and the other two sites run a different PHP-driven gallery.

    The problem is that after some time (anywhere from 10 minutes to a few hours after an Apache restart), pages on the main site become very slow to load, or I get a 503 Service Temporarily Unavailable error, so pages become unavailable. But it is just the part with the new CMS gallery: the old static HTML pages keep working fine and fast, and at the same time the two other sites with the same CMS gallery, and the two with the different PHP gallery, all keep working fine and fast. I thought it must be something with the CMS on the main site, because the other sites were working nicely. Then I tried to open the contact and guest-book pages on the main site, which are outside the CMS but are also PHP pages, and they would not load either, while the very same contact PHP scripts were working on the other sites at the same time. So when the site starts to hang, ONLY PHP-generated content stops working, only on that one main site; static pages keep working. If I restart Apache, everything works nicely and fast again for a while, and then the PHP pages on the main site become slow once more. If I don't restart Apache, the slowness lasts for some time (minutes or hours, depending on traffic), during which PHP-driven content loads very slowly or is unavailable on that site; then at some moments everything starts working fast again for a while, and so on. In high-traffic hours PHP content loads slowly or is unavailable; in low-traffic hours it is sometimes fast and sometimes a little slower than usual. Once again: only on that main site, and only the PHP-driven pages; static pages are fast even in the highest-traffic hours, and the other sites, even those with the same CMS, stay fast. Currently I have about 7,000 unique visitors per day on that site, but it worked nicely even with 11,500 per day; the VPS serves about 17,000 visitors in total across all sites (about 3 pages per unique visitor).

    When the site starts to slow down, Apache status sometimes shows something like this:

        mod_fcgid status: Total FastCGI processes: 37

        Process: php5 (/usr/local/cpanel/cgi-sys/php5)
        Pid    Active  Idle  Accesses  State
        11300      39    28         7  Working
        11274      47    28         7  Working
        11296      40    29         3  Working
        11283      45    30         3  Working
        11304      36    31         1  Working
        11282      46    32         3  Working
        11292      42    33         1  Working
        11289      44    34         1  Working
        11305      35    35         0  Working
        11273      48    36         2  Working
        11280      47    39         1  Working
        10125     133    40        12  Exiting(communication error)
        11294      41    41         1  Exiting(communication error)
        11277      47    42         2  Exiting(communication error)
        11291      43    43         1  Exiting(communication error)
        10187     108    43        10  Exiting(communication error)
        10209      95    44         7  Exiting(communication error)
        10171     113    44         5  Exiting(communication error)
        11275      47    47         1  Exiting(communication error)
        10144     125    48         8  Exiting(communication error)
        10086     149    48        20  Exiting(communication error)
        10212      94    49         5  Exiting(communication error)
        10158     118    49         5  Exiting(communication error)
        10169     114    50         4  Exiting(communication error)
        10105     141    50        16  Exiting(communication error)
        10094     146    50        15  Exiting(communication error)
        10115     139    51        17  Exiting(communication error)
        10213      93    51         9  Exiting(communication error)
        10197     103    51         7  Exiting(communication error)

        Process: php5 (/usr/local/cpanel/cgi-sys/php5)
        Pid    Active  Idle  Accesses  State
        7983     1079     2       149  Ready
        7979     1079    11       151  Ready

        Process: php5 (/usr/local/cpanel/cgi-sys/php5)
        Pid    Active  Idle  Accesses  State
        7990     1066     0        57  Ready
        8001     1031    64        35  Ready
        7999     1032    94        29  Ready
        8000     1031    91        36  Ready
        8002     1029    34        52  Ready

        Process: php5 (/usr/local/cpanel/cgi-sys/php5)
        Pid    Active  Idle  Accesses  State
        7991     1064    29       115  Ready

    Active and Idle are the time active and the time since the last request, in seconds. When the site is working nicely, there are no lines with "Exiting(communication error)".

    System info:

        Total processors: 8
        Processor #1: Vendor GenuineIntel, Name Intel(R) Xeon(R) CPU E5440 @ 2.83GHz, Speed 88.320 MHz, Cache 6144 KB
        (all other seven are the same)

        Linux vps.nnnnnnnnnnnnnnnnn.nnn 2.6.18-028stab099.3 #1 SMP Wed Mar 7 15:20:22 MSK 2012 x86_64 x86_64 x86_64 GNU/Linux

    Current memory usage:

                     total      used      free  shared  buffers  cached
        Mem:       8388608    882164   7506444       0        0       0
        -/+ buffers/cache:     882164   7506444
        Swap:            0         0         0
        Total:     8388608    882164   7506444

    Current disk usage:

        Filesystem  Size  Used  Avail  Use%  Mounted on
        /dev/vzfs   100G   34G    67G   34%  /
        none

    System details:

        Running on: Apache/2.2.22 (Unix) mod_ssl/2.2.22 OpenSSL/0.9.8e-fips-rhel5 DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 mod_fcgid/2.3.6
        Powered by: PHP/5.3.10

    Current configuration:

        Default PHP version (.php files): 5
        PHP 5 handler: fcgi
        PHP 4 handler: suphp
        Apache suEXEC: on
        Apache Ruid2: off

    Apache configuration (the following settings have been saved):

        fileetag: All
        keepalive: On
        keepalivetimeout: 3
        maxclients: 150
        maxkeepaliverequests: 10
        maxrequestsperchild: 10000
        maxspareservers: 10
        minspareservers: 5
        root_options: ExecCGI, FollowSymLinks, Includes, IncludesNOEXEC, Indexes, MultiViews, SymLinksIfOwnerMatch
        serverlimit: 256
        serversignature: Off
        servertokens: Full
        sslciphersuite: ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP:!kEDH
        startservers: 5
        timeout: 30

    I hope I explained my problem clearly. Any help would be nice.
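    Those "Exiting(communication error)" states have one well-known trigger with mod_fcgid + PHP: PHP FastCGI children exit on their own after PHP_FCGI_MAX_REQUESTS requests (500 by default), and if mod_fcgid is allowed to route more requests than that to a process, requests die mid-flight when the child quits. A hedged config sketch with mod_fcgid 2.3.x directive names (values are starting points, not tuned for this server, and whether they reach the cPanel wrapper in use here is an assumption worth verifying):

        # Keep mod_fcgid's per-process request cap at or below PHP's own
        # PHP_FCGI_MAX_REQUESTS (default 500), so PHP never exits mid-stream
        FcgidMaxRequestsPerProcess 500

        # Bound the pool and reap idle/stuck children predictably
        FcgidMaxProcesses  100
        FcgidIdleTimeout   120
        FcgidIOTimeout     60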

    Read the article

  • Is SiteCore slow and buggy?

    - by Larsenal
    I've seen plenty of negative comments regarding performance and general bugginess; however, to be fair, most of them look like they were from the v5.3 timeframe. Have they fixed all of those issues in v6.0? Is it an excellent product? Some examples of the complaints:

      - Maybe it's just a case of user error, but this guy says, "some pages take as much as 20s to render…" (Source)
      - Here, in the comments, one fellow remarks, "Sitecore backend is incredible slow. Sitecore developement is really pain, it takes from 2 minutes to start sitecore and many many seconds to do small backend operations. They claim to have a quick client, but that is a BIG LIE. All developers in my company really hate sitecore for being so slow." (Source)
      - Another search yielded, "Sitecore's users listed three issues as number one: licensing, the server as a resource hog, and the site's slow responsiveness." (Source)

    Read the article

  • What can cause throughput to become really slow when an ISAPI filter implements SF_NOTIFY_SEND_RAW_DATA?

    - by Gerald
    I have an ISAPI filter for IIS6 that I've been developing for a while, but I just noticed something disturbing: anytime I have the filter installed and I download a file, the download becomes really slow. From a remote machine I get ~120 KB per second without the filter installed, and ~45 KB per second with it installed. This seems to be related to the SF_NOTIFY_SEND_RAW_DATA callback: whenever I register for this callback I get the slow downloads; when I don't register for it, everything is fine. Even if I make my HttpFilterProc function just return immediately, like this:

        DWORD WINAPI HttpFilterProc(PHTTP_FILTER_CONTEXT pfc,
                                    DWORD notificationType,
                                    LPVOID pvNotification)
        {
            return SF_STATUS_REQ_NEXT_NOTIFICATION;
        }

    I've also tried returning SF_STATUS_REQ_HANDLED_NOTIFICATION, with the same result. Is it possible that some build setting on my DLL is causing slow execution of the callback function, or is this just the way it's going to be with ISAPI?
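    One common explanation is that registering for raw-data notifications takes IIS off its fastest send path, since every outgoing byte must now be offered to the filter instead of being handed straight to the kernel, so some slowdown is inherent rather than a build setting. It is at least worth confirming the filter registers only for notifications it really handles; a sketch of GetFilterVersion with SF_NOTIFY_SEND_RAW_DATA left out (constants from the ISAPI SDK headers; the notification set is illustrative):

        #include <windows.h>
        #include <httpfilt.h>

        BOOL WINAPI GetFilterVersion(PHTTP_FILTER_VERSION pVer)
        {
            pVer->dwFilterVersion = HTTP_FILTER_REVISION;
            strcpy(pVer->lpszFilterDesc, "Example filter");
            // Request only the notifications actually handled; omitting
            // SF_NOTIFY_SEND_RAW_DATA lets IIS keep its fast send path.
            pVer->dwFlags = SF_NOTIFY_ORDER_DEFAULT
                          | SF_NOTIFY_PREPROC_HEADERS
                          | SF_NOTIFY_END_OF_REQUEST;
            return TRUE;
        }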

    Read the article

  • pyplot: really slow creating heatmaps

    - by cvondrick
    I have a loop that executes its body about 200 times. In each iteration it does a sophisticated calculation, and then, for debugging, I wish to produce a heatmap of an NxM matrix. But generating this heatmap is unbearably slow and significantly slows down an already slow algorithm. My code is along the lines of:

        import numpy
        import matplotlib.pyplot as plt

        for i in range(200):
            matrix = complex_calculation()
            plt.set_cmap("gray")
            plt.imshow(matrix)
            plt.savefig("frame{0}.png".format(i))

    The matrix, from numpy, is not huge: 300 x 600 doubles. Even if I do not save the figure and instead update an on-screen plot, it's even slower. Surely I must be abusing pyplot (Matlab can do this, no problem). How do I speed this up?
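    One likely culprit in the snippet above: each plt.imshow call adds another image to the same implicit figure, so iteration N redraws a stack of N images and the loop gets slower as it goes. A sketch of a fix, with a stand-in for the real calculation: write each array straight to a PNG with plt.imsave (no Figure/Axes machinery at all) and use a non-interactive backend:

        import numpy as np
        import matplotlib
        matplotlib.use("Agg")  # no GUI event loop; must be set before importing pyplot
        import matplotlib.pyplot as plt

        def complex_calculation():
            return np.random.rand(300, 600)  # stand-in for the real computation

        for i in range(200):
            matrix = complex_calculation()
            # imsave maps the array directly to pixels: one file per iteration,
            # no accumulated images and no repeated figure setup
            plt.imsave("frame{0}.png".format(i), matrix, cmap="gray")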

    Read the article

  • AFP painfully slow

    - by yairchu
    Copying a file over AFP took 40 minutes, but using scp it took only 7 minutes. Why is AFP so slow? My setup:

      - D-Link DIR-300 wifi router
      - iMac with Snow Leopard serving AFP
      - MacBook with Leopard as the client

    Read the article
