Search Results

Search found 17336 results on 694 pages for 'richard long'.

  • Importance of SEO in Designing a Website

    If you are serious about making a successful online presence, you first need to understand that SEO is a long-term approach that you have to follow from the very start of designing your website. Many people wonder which comes first - search engine optimization or web design? The truth is that you need to plan for both from the design of the very first page of your website.

    Read the article

  • How to Make Money Online by Creating a Website? Part 1

    Who doesn't want to make money online these days? Everybody is looking for different ways: some are looking for quick ways, some for easy ways, and others are looking for ways that will help them make money in the long term. In this article I am going to tell you how to make money online by creating a website.

    Read the article

  • Reasons to Hire an Expert SEO Company

    SEO outsourcing is a major trend at the moment, so hiring a full-time SEO expert can be a very good option for getting your work done. Because an expert can handle the job far more efficiently than anyone else, and generally costs you little in the long run, you can easily recover the expense in revenue.

    Read the article

  • VMWare ESXi, change the default path for a VM

    - by glenatron
    For some reason VMWare ESXi has decided that one of my VMs is on a completely different path to the path it is actually on. So my VM is on /vmfs/volumes/long-guid-here/my-vm-name but when I try to open it I get the message "File <unspecified filename> was not found." Which is not really surprising, as unspecified filenames are quite difficult to locate. I thought it was just the swap file, which was down in the .vmx file as /vmfs/volumes/long-guid-here/old-vm-name/old-vm-name.vmsd but when I changed that in the vmx it made no difference. What I can't figure out is where VMWare is getting the old-vm-name from - when I look in the "Settings" pane it believes the working file location to be "[datastore-name] old-vm-name\" and I can't find anywhere to change it. Now the files themselves are all named for old-vm-name - so the directory is /my-vm-name/old-vm-name.vmx and so on. Is this the cause of my problems or is there some arcane configuration option elsewhere around the VMWare machine that I need to be tinkering with?
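
    One way people usually untangle a stale registration like this is to unregister the VM and re-register it from its .vmx under the current path, using ESXi's vim-cmd tool. A hedged sketch (assumes shell/Tech Support Mode access; the vmid placeholder and exact paths are assumptions, not taken from this setup):

      # List registered VMs and note the Vmid of the affected one
      vim-cmd vmsvc/getallvms
      # Make sure it is powered off before touching the registration
      vim-cmd vmsvc/power.getstate <vmid>
      # Drop the stale registration, then register the .vmx at its real location
      vim-cmd vmsvc/unregister <vmid>
      vim-cmd solo/registervm /vmfs/volumes/long-guid-here/my-vm-name/old-vm-name.vmx

    Re-registering rebuilds the "working location" shown in the Settings pane, which may be where the phantom old-vm-name path is coming from.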

    Read the article

  • How do I stop Sophos anti virus from scanning directories that are under source control

    - by user26453
    From googling it seems it's well known that SophosAV, as well as other AV programs, has issues with how it interacts with, and can inhibit, source control utilities like TortoiseHG or TortoiseSVN. One solution is to exclude directories under source control from on-access scanning as detailed here on Sophos's support site. There is a corollary article that mentions some issues related to this, namely the need to place multiple entries for exclusions based on the possibility of the location being accessed through the short vs. long name (e.g., Progra~1 vs. "Program Files"). One other twist is I am using a junction to relocate my user directory, C:\Users\Username, to a second hard drive, E:. Since I am not sure how this interacts, I have included the source control directory as it is nested in both locations. As a result, I have added the two entries to the on-access scanning exclusions (and to be on the safe side the on-demand exclusions as well, although this should only come into play when I select a parent directory of the exclusion to be scanned on-demand, but still). You'll notice I have no need to add extra exclusions for those locations based on short vs. long name distinctions. The two exclusions I have, then, for both on-access and on-demand scanning, are: C:\Users\Username\source-control-directory E:\source-control-directory However, this does not seem to work, as TortoiseHG still lags terribly in response to any request once the AV software starts scanning when the directory is accessed via TortoiseHG. I can verify without a doubt that Sophos is causing the problems: I can completely disable on-access scanning, and once this is done TortoiseHG responds very fast to all operations. I cannot leave this disabled obviously, but since the exclusions don't seem to be working, what next?

    Read the article

  • How many bits for sequence number using Go-Back-N protocol.

    - by Mike
    Hi Everyone, I'm a regular over at Stack Overflow (software developer) who is trying to get through a networking course. I've got a homework problem I'd like to have a sanity check on. Here is what I got. Q: A 3000-km-long T1 trunk is used to transmit 64-byte frames using Go-Back-N protocol. If the propagation speed is 6 microseconds/km, how many bits should the sequence numbers be? My Answer: For this question, what we need to do first is lay out the base knowledge. What we are trying to find is the size of the largest sequence number we should use with Go-Back-N. To figure this out we need to figure out how many packets can fit into our link at a time and then subtract one from that number. This will ensure that we never have two packets with the same sequence number at the same time in the link. Length of link: 3,000km Speed: 6 microseconds / km Frame size: 64 bytes T1 transmission speed: 1544kb/s (http://ckp.made-it.com/t1234.html) Propagation time = 6 microseconds / km * 3000 km = 18,000 microseconds (18ms). Convert 1544kb to bytes = 1544 * 1024 = 1581056 bytes Transmission time = 64 bytes / 1581056bytes / second = 0.000040479 seconds (0.4ms) So then if we take the 18ms propagation time and divide it by the 0.4ms transmission time we will see that we are going to be able to stuff (18 / 0.4) 45 packets into the link at a time. That means that our sequence number should be 2 ^ 45 bits long! Am I going in the right direction with this? Thanks, Mike
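
    For comparison, here is a rough awk sketch of the textbook way these numbers are usually run, treating T1 as 1.544 Mbit/s and sizing the window over a full round trip (frame out plus acknowledgement back); it is a sanity-check aid, not a verified answer to the homework:

      awk 'BEGIN {
        rate = 1544000                 # assumed T1 line rate in bits/s
        tx   = (64 * 8) / rate         # ~0.33 ms to clock out one 64-byte frame
        prop = 6e-6 * 3000             # 18 ms one-way propagation delay
        rtt  = tx + 2 * prop           # time until the first frame can be acked
        win  = rtt / tx                # frames that fit in that interval
        bits = 1; while (2^bits < win + 1) bits++
        printf "window ~ %.0f frames -> about %d sequence-number bits\n", win, bits
      }'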

    Read the article

  • directory with 980MB meta data, millions of files, how to delete it? (ext3)

    - by Alexandre
    Hello, so I'm stuck with this directory: drwxrwxrwx 2 dan users 980M 2010-12-22 18:38 sessions2 The directory's contents are small - just millions of tiny little files. I want to wipe it from the filesystem but have been unable to. My first try was: find sessions2 -type f -delete and find sessions2 -type f -print0 | xargs -0 rm -f but had to stop because both caused escalating memory usage. At one point it was using 65% of the system's memory. So I thought (no doubt incorrectly) that it had to do with the fact that dir_index was enabled on the system. Perhaps find was trying to read the entire index into memory? So I did this (foolishly): tune2fs -O^dir_index /dev/xxx Alright, so that should do it. Ran the find command above again and... same thing. Crazy memory usage. I hurriedly ran tune2fs -Odir_index /dev/xxx to reenable dir_index, and ran to Server Fault! 2 questions: 1) How do I get rid of this directory on my live system? I don't care how long it takes, as long as it uses little memory and little CPU. By the way, using nice find ... I was able to reduce CPU usage, so my problem right now is only memory usage. 2) I disabled dir_index for about 20 minutes. No doubt new files were written to the filesystem in the meantime. I reenabled dir_index. Does that mean the system will not find the files that were written before dir_index was reenabled, since their filenames will be missing from the old indexes? If so, and I know these new files aren't important, can I maintain the old indexes? If not, how do I rebuild the indexes? Can it be done on a live system? Thanks!
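
    One low-memory approach that often gets suggested for question 1 (a sketch, not something from the original thread) is to let rsync mirror an empty directory over the full one; rsync 3.x processes directories incrementally, so it does not have to hold millions of names in memory at once the way the find/xargs runs apparently did:

      mkdir /tmp/empty
      # Mirroring an empty source deletes everything under sessions2/.
      # nice/ionice keep CPU and I/O impact low on a live box; drop ionice if it is not installed.
      nice -n 19 ionice -c3 rsync -a --delete /tmp/empty/ sessions2/
      rmdir sessions2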

    Read the article

  • When tab groups are loaded, Firefox becomes unresponsive for minutes (Unresponsive script)

    - by unor
    I have several tab groups (~ 20) in Firefox. I can start the browser without any problems. However, as soon as I … click on the "Group tabs" icon in the toolbar, or right-click on a tab and hover over "Move to tab group", … Firefox becomes unresponsive/freezes for a rather long time (more than 2 minutes). It seems to load all tab groups (it doesn't load all the pages! I deactivated this in the settings). While this is happening, I get several "Unresponsive script" warnings, like: Script: chrome://global/content/bindings/tabbox.xml:0 (most of the time) Script: chrome://global/content/bindings/tabbox.xml:418 Script: chrome://browser/content/tabview.js:400 Script: chrome://browser/content/tabview.js:522 Script: resource://modules/sessionstore/SessionStore.jsm:3578 Script: resource:///components/PageThumbsProtocol.js:79 (rare) Script: resource://gre/modules/XPCOMUtils.jsm:323 (rare) (probably also other warnings, I just haven't recorded them yet) On all of these I click "Continue". After ~ 2-3 minutes and 3-5 warnings, I can use Firefox again. Now I can switch tab groups without any problems. Why is this happening? How can I prevent the long loading time? Is there maybe an about:config setting I could try? I started Firefox in Safe Mode (= without any add-ons): the problem still exists.

    Read the article

  • Apache: https to https redirect

    - by Klaas van Schelven
    I'm trying to get Apache to redirect all http and https traffic to a single endpoint www.example.org. The http part is easy: <VirtualHost *:80> ServerName example.org Redirect permanent / https://www.example.org/ </VirtualHost> # long list of other domains, all redirecting to https://www.example.org/ <VirtualHost *:80> ServerName www.example.org Redirect permanent / https://www.example.org/ </VirtualHost> I'm trying to do something similar for the https. It is my understanding that I need to specify one specific IP address, because the Host header is also sent encrypted. So the below works: <VirtualHost xx.xx.xx.xx:443> ServerName www.example.org # actual stuff happening here </VirtualHost> However, when I start adding the redirects to the config, like so: <VirtualHost xx.x.xx.xx:443> ServerName example.org Redirect permanent / https://www.example.org/ </VirtualHost> # long list of other domains stuff breaks. $ apache2ctl configtest [warn] VirtualHost xx.xx.xx.xx:443 overlaps with VirtualHost xx.xx.xx.xx:443, the first has precedence, perhaps you need a NameVirtualHost directive If I add a directive like so: NameVirtualHost xx.xx.xx.xx:443 Connecting to the (ssl part of the) server starts to fail. How do I solve this?
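
    A minimal sketch of what the name-based SSL side usually ends up looking like on Apache 2.2 (the certificate paths are placeholders; this assumes a certificate that covers both names, e.g. a wildcard, since clients without SNI are always handed the first vhost's certificate):

      NameVirtualHost xx.xx.xx.xx:443

      <VirtualHost xx.xx.xx.xx:443>
          ServerName example.org
          SSLEngine on
          # placeholder paths - reuse whatever the working www vhost already uses
          SSLCertificateFile    /etc/ssl/certs/example.org.crt
          SSLCertificateKeyFile /etc/ssl/private/example.org.key
          Redirect permanent / https://www.example.org/
      </VirtualHost>

      <VirtualHost xx.xx.xx.xx:443>
          ServerName www.example.org
          SSLEngine on
          SSLCertificateFile    /etc/ssl/certs/example.org.crt
          SSLCertificateKeyFile /etc/ssl/private/example.org.key
          # actual stuff happening here
      </VirtualHost>

    The usual reason the SSL side starts failing once NameVirtualHost is added is that the redirect-only vhost has no SSLEngine/certificate of its own, so the handshake for those names has nothing to serve.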

    Read the article

  • "Could not claim interface on camera: -6" when trying to connect usb camera (Kinect)

    - by rzetterberg
    I have installed the freenect library from openkinect.org. With that library there is a demo application which you can run from the terminal to test out the Kinect. However when I run this command I get the following output: richard@behemoth:~$ sudo freenect-glview Kinect camera test Number of devices found: 1 Could not claim interface on camera: -6 Could not open device This particular error is thrown by the library libusb by the function libusb_claim_interface and the error -6 corresponds to the LIBUSB_ERROR_BUSY. So my guess is that it has something to do with mounting the usb, rather than specifically the freenect library or the Kinect itself. So my question is how can I find out what resource is using this interface and how can I free it so that I can access it? Edit: What I have tried so far (just to be sure): Rebooted Plugged-out, plugged-in Tried different usb ports Restarted udev Additional information that might be useful: /etc/fstab: # /etc/fstab: static file system information. # # Use 'blkid -o value -s UUID' to print the universally unique identifier # for a device; this may be used with UUID= as a more robust way to name # devices that works even if disks are added and removed. See fstab(5). # # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sda1 during installation UUID=1c73f217-ac8d-451b-8390-7a680628a856 / ext4 errors=remount-ro 0 1 # swap was on /dev/sda5 during installation UUID=bb49bd29-07ec-45a0-bbab-46fb8362b06b none swap sw 0 0 sudo uname -r: Linux behemoth 3.0.0-14-generic-pae #23-Ubuntu SMP Mon Nov 21 22:07:10 UTC 2011 i686 i686 i386 GNU/Linux cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=11.10 DISTRIB_CODENAME=oneiric DISTRIB_DESCRIPTION="Ubuntu 11.10"
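
    On 3.0-series kernels a commonly reported culprit for LIBUSB_ERROR_BUSY with the Kinect is the in-kernel gspca_kinect video driver claiming the camera before libfreenect can. A hedged check-and-unbind sketch (the module name is the commonly cited one, not confirmed on this machine):

      # See whether a kernel video driver has bound to the Kinect camera
      lsmod | grep -i gspca
      lsusb -t
      # If gspca_kinect shows up, unload it and retry the demo
      sudo modprobe -r gspca_kinect
      sudo freenect-glview
      # Optional: keep it from loading on the next boot
      echo "blacklist gspca_kinect" | sudo tee /etc/modprobe.d/blacklist-kinect.conf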

    Read the article

  • Can't compile Ruby 1.9.2 with OpenSSL 1.0.0c on CentOS 5

    - by pstinnett
    I'm trying to install Ruby 1.9.2 on CentOS 5.5. I get through most of the make process, but when it tries to compile OpenSSL I get an error. Below is the error output: compiling openssl make[1]: Entering directory `/sources/ruby-1.9.2-p136/ext/openssl' gcc -I. -I../../.ext/include/x86_64-linux -I../.././include -I../.././ext/openssl -DRUBY_EXTCONF_H=\"extconf.h\" -fPIC -O3 -ggdb -Wextra -Wno-unused-parameter -Wno-parentheses -Wpointer-arith -Wwrite-strings -Wno-missing-field-initializers -Wno-long-long -o ossl_x509.o -c ossl_x509.c In file included from ossl.h:201, from ossl_x509.c:11: openssl_missing.h:71: error: conflicting types for ‘HMAC_CTX_copy’ /usr/include/openssl/hmac.h:102: error: previous declaration of ‘HMAC_CTX_copy’ was here openssl_missing.h:95: error: conflicting types for ‘EVP_CIPHER_CTX_copy’ /usr/include/openssl/evp.h:459: error: previous declaration of ‘EVP_CIPHER_CTX_copy’ was here make[1]: *** [ossl_x509.o] Error 1 make[1]: Leaving directory `/sources/ruby-1.9.2-p136/ext/openssl' make: *** [mkmain.sh] Error 1 Any help would be greatly appreciated! I'm not a master at Linux by any means, but I was able to successfully install this version of Ruby on our dev server. Our live server is running a newer version of OpenSSL, which I'm assuming is why it's breaking. Just not sure what the fix is!
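
    One workaround that is often suggested for this kind of header clash (a sketch, assuming it is acceptable to keep a private OpenSSL build; the version and prefix are placeholders) is to compile a separate OpenSSL and point Ruby's openssl extension at it instead of the system headers:

      # Build a private OpenSSL under /opt/openssl (placeholder prefix)
      cd /usr/local/src/openssl-1.0.0c
      ./config --prefix=/opt/openssl shared
      make && sudo make install

      # Rebuild Ruby against it (run make distclean first if the tree was already configured)
      cd /sources/ruby-1.9.2-p136
      ./configure --with-openssl-dir=/opt/openssl
      make && sudo make install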

    Read the article

  • Citrix client slow to launch

    - by user706837
    Was wondering if anyone else has experienced the Citrix client launching very slowly. While I'm a Windows SA by trade, I consider myself Novice+ on Linux, but I doubt that's the problem. This is the simple scenario: 1. Log in to the Citrix server to work from home 2. Click on the published application; this typically starts the local Citrix client. 3. The Citrix client should start and you're off. The problem is between #2 and #3: I click on the application and 8 out of 9 times there is a 60 second delay and then I get an SSL connection error. I suspect this error is misleading since the connection took too long to open. But I don't know how to prove it (or fix it). I'm able to successfully manually launch wfcmgr without errors, so this leads me to believe the Citrix client is installed correctly. I even leave it running thinking this may help, but I don't see a difference with or without it running first. The only times I'm able to connect successfully are when the Citrix client starts up a few seconds after clicking on the application. I've searched online for articles that might help, but tried a number of fixes without much difference. I even tried "ln -sf /dev/urandom /dev/random" as suggested by this article, but no dice: http://forums.citrix.com/message.jspa?messageID=1381276 My System (specs that may be relevant): Sony VAIO Laptop VGN-NW270F Linux Mint 11.04 Problem using: Firefox and Chrome Any help would be appreciated. Just trying to either find an answer or guidance on how to determine why it's taking so long to launch the Citrix client. Thanks

    Read the article

  • smartctl not actually running self tests?

    - by canzar
    I want to run the smartctl self tests to check the health of the drives in my RAID array (PERC 5/i). The array is on sda and comprises six drives. I can check the status using sudo smartctl /dev/sda -d megaraid,0 -a And I see that SMART is available and enabled on all the drives. I have tried to run self tests using sudo smartctl /dev/sda -d megaraid,0 -t short and sudo smartctl /dev/sda -d megaraid,0 -t long I have also tried it on all of the drives 0-5. No matter what I try, when I run: sudo smartctl /dev/sda -d megaraid,0 -l selftest I always get the same result, which seems to always report that I have never run a self test. /dev/sda [megaraid_disk_00] [SAT]: Device open changed type from 'megaraid' to 'sat' ===START OF READ SMART DATA SECTION === SMART Self-test log structure revision number 1 No self-tests have been logged. [To run self-tests, use: smartctl -t] From what I read, I should have no problem running the short and long self tests on the array while it is mounted. Does anyone else have experience running these tests on a PERC 5/i raid array who could lend some insight into what is causing the problem? (smartmontools release 5.40 dated 2009-12-09 at 21:00:32 UTC)
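
    A small loop (a sketch reusing the exact smartctl syntax already shown above) at least rules out querying the wrong megaraid index: kick off the short test on all six exported disks, wait, and then read each self-test log:

      for n in 0 1 2 3 4 5; do
          sudo smartctl /dev/sda -d megaraid,$n -t short
      done
      # the short test typically needs a couple of minutes
      sleep 180
      for n in 0 1 2 3 4 5; do
          echo "=== megaraid disk $n ==="
          sudo smartctl /dev/sda -d megaraid,$n -l selftest
      done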

    Read the article

  • Mounting 2.5" SSD in Antec Atlas 550's 3.5" bay

    - by cecilkorik
    Just got some new SSDs to add to my servers, unfortunately there doesn't appear to be anywhere to actually mount them. The cases are Antec Atlas 550 mid-tower server cases. The problem is that the 3.5" bays are intended for 3.5" hard drives (obviously) not SSDs, so they are mounted from the bottom with rubber grommets to reduce vibration. There are NO side-holes in any of the 3.5" bays, all mounting has to be done from the bottom of the drive, through the thick rubber grommets, which requires special extra-long screws with large heads. Those screws will not work with the SSD, the thread is too coarse, 2.5" drives apparently use M3 screws with a finer pitch. The case's 3.5" bays have holes aligned for 2.5" drives and by moving the grommets into the appropriate holes they line up with the SSD. But that's as far as I can get. I don't have and cannot find any screws that use the finer M3 pitch that are long enough and have the wide head. I can't remove the grommets because the screw holes are much too wide without them, the entire head of the screw fits through. And again, there are no side-holes, so I can't use the mounting bracket that came with the drive nor any other 2.5"-3.5" bracket I've found. Here is a pic of what I am dealing with (note that the SSD is just resting on the case, it's not currently screwed in if that's not clear) Any ideas for safely mounting these little guys without resorting to duct tape or superglue would be very appreciated. If anyone knows where I could buy the appropriate type of M3 threaded screw or has any other ideas please help me out here.

    Read the article

  • Virtual bridged networking with VLAN, could not ping

    - by v.yegy
    I need to build a virtual network with VLANs between two virtual hosts - which can be LXC or VirtualBox guests running Ubuntu or Windows XP. I tried LXC and VirtualBox with Ubuntu and found it difficult to make it work even without VLANs, but was successful with VirtualBox and XP. vbox-xp1 --- br1 ---------------- br2 ---- vbox-xp2 The config is: brctl addbr br1; brctl addbr br2 ifconfig br1 up; ifconfig br2 up stp br1 off; stp br2 off ip link add name br1-br2-l0 type veth peer name br1-br2-l1 sudo brctl addif br1 br1-br2-l0 sudo brctl addif br2 br1-br2-l1 vbox XP 1 and 2 have their networking bridged to br1 and br2 respectively. The adapter is Intel PRO/1000 MT Server, with the driver installed in the guests. Configured IPs and the two hosts pinged! VLAN config: ip link add link br1 name br1-2.5 type vlan id 5 brctl addif br2 br1-2.5 create VLAN 5 in XP 1 and 2 and assign IP addresses Pinging with this config does not work. A Wireshark trace on interface br1-br2-l1 / br1-2.5 shows that one ping results in ~240 ping packets, each growing by 4 bytes - the first one being correct at 60; the ping does not reach the other host, as I can see the MAC is not learnt [arp -a]. -- If br1-2.5 is not configured, I see untagged packets on br1-br2-l1/0, but they still do not reach the other host as the MAC is not learnt. If br1-br2-l0/1 is brought down, even if br1-2.5 is up, I could not see any packets. I tried ebtables, but still could not get a correct config to work. -- If anyone here is aware of a working configuration, please let me know. I need to build a network of switches, so it seems I have a very long way to go. Sorry for a very long question. Thanks and regards, vy
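
    Before chasing the growing-packet symptom it is worth confirming what the kernel actually thinks the topology is; a few read-only checks (nothing here changes state):

      brctl show                          # which ports are enslaved to br1 and br2
      cat /proc/net/vlan/config           # existing VLAN sub-interfaces and their parents
      ip -d link show br1-2.5             # confirm the 802.1Q id and parent device
      tcpdump -e -n -i br1-br2-l1 -c 20   # -e prints MAC addresses and any VLAN tag on the wire

    In particular, brctl show makes it obvious whether br1-2.5 (a VLAN child of br1) really ended up as a port of br2, which is the unusual part of this layout.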

    Read the article

  • Time not propagating to machines on Windows domain

    - by rbeier
    We have a two-domain Active Directory forest: ourcompany.com at the root, and prod.ourcompany.com for production servers. Time is propagating properly through the root domain, but servers in the child domain are unable to sync via NTP. So the time on these servers is starting to drift, since they're relying only on the hardware clock. When I type "net time" on one of the production servers, I get the following error: Could not locate a time-server. More help is available by typing NET HELPMSG 3912. When I type "w32tm /resync", I get the following: Sending resync command to local computer The computer did not resync because no time data was available. "w32tm /query /source" shows the following: Free-running System Clock We have three domain controllers in the prod.ourcompany.com subdomain (overkill, but the result of a migration - we haven't gotten rid of one of the old ones yet.) To complicate matters, the domain controllers are all virtualized, running on two different physical hosts. But the time on the domain controllers themselves is accurate - the servers that aren't DCs are the ones having problems. Two of the DCs are running Server 2003, including the PDC emulator. The third DC is running Server 2008. (I could move the PDC emulator role to the 2008 machine if that would help.) The non-DC servers are all running Server 2008. All other Active Directory functionality works fine in the production domain - we're only seeing problems with NTP. I can manually sync each machine to the time source (the PDC emulator) by doing the following: net time \\dc1.prod.ourcompany.com /set /y But this is just a one-off, and it doesn't cause automated time syncing to start working. I guess I could create a scheduled task which runs the above command periodically, but I'm hoping there's a better way. Does anyone have any ideas as to why this isn't working, and what we can do to fix it? Thanks for your help, Richard
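
    A hedged sketch of the usual first step on the affected member servers (run from an elevated prompt; this assumes the Windows Time service itself is healthy and merely mis-configured):

      rem Point the server back at the domain hierarchy and restart the service
      w32tm /config /syncfromflags:domhier /update
      net stop w32time
      net start w32time
      w32tm /resync /rediscover
      rem Should now report a DC rather than "Free-running System Clock"
      w32tm /query /source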

    Read the article

  • Puppet file transfer slow

    - by Noodles
    I have a puppet master and slaves in different datacenters. The latency between them is ~40ms. When I run "puppet agent --test" on a slave to apply the latest manifest, it takes ~360 seconds to finish. After doing some digging I can see the main cause of the slowdown is file transfers. It seems it's taking ~10 seconds to transfer each file. The files are only small (configuration files), so I can't understand why they would take so long. This is an example of a file in my manifest: file { "/etc/rsyncd.conf" : owner => "root", group => "root", mode => 644, source => "puppet:///files/rsyncd/rsyncd.conf" } Running puppet-profiler I see this: 10.21s - File[/etc/rsyncd.conf] It also seems I cannot update more than one server at once using puppet. If I run two servers at the same time then puppet takes twice as long. I have changed the puppet master from using webrick to mongrel, but this doesn't seem to help. This is making deploying changes painful. A simple config change can take an hour to roll out to all servers.

    Read the article

  • Is there any way to detect when nginx has completed a graceful shutdown?

    - by Daniel Vandersluis
    I have a ruby on rails application which is running on passenger and nginx, with one main webserver and multiple application servers. I am trying to update my deployment process in order to minimize (or ideally, remove) any downtime caused by the deployment. The main roadblock right now is that passenger takes some time to restart (ie. reload the application), so in order to get around this, I want to stagger my restarts so that only one app server gets restarted at a time. In order to do this without losing any long-running passenger processes, I am thinking I need to gracefully shut down the app server's nginx instance, which will cause it to no longer accept new connections but continue to process the existing ones; as well, HAProxy will detect that the app server is down and route new requests to the other server. However, assuming that there is a long-running process, I am not sure how to detect when the graceful shutdown has completed so that I can start it back up. Since the shutdown is caused by sending a signal (ie. kill -QUIT $( cat /var/run/nginx.pid )), and the kill command will return immediately, I cannot combine commands (ie. kill ... && touch restarted), as the touch command will execute immediately, even if nginx hasn't completed its shutdown. Is there any good way to do this?
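
    One approach (a sketch using the pid file path from the question; the marker file is a placeholder) is to poll the master PID after sending QUIT: on a graceful shutdown the master only exits once every worker has finished its in-flight requests, so the PID disappearing is the "done" signal:

      #!/bin/bash
      pid=$(cat /var/run/nginx.pid)
      kill -QUIT "$pid"

      # kill -0 only tests for process existence; loop until the master is gone
      while kill -0 "$pid" 2>/dev/null; do
          sleep 1
      done

      touch /tmp/nginx-stopped   # placeholder marker for the deploy script to watch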

    Read the article

  • How can I do an SELINUX filesystem relabel without rebooting first?

    - by Skaperen
    I can touch the file /.autorelabel and reboot, and during initialization on the way back up it will do the SELinux relabel for me. But I want to do this in a different situation, where the system has just been copied to a hard drive image. I can chroot to the originating file tree, or chroot to the just populated device image, and run it. I just can't find anything that says what should be run. This image is being made into an AMI on AWS EC2, and contains CentOS 6.3. But the time it takes to relabel is too long (6 minutes or more). I want to move the relabel to the image build, where the extra time is not an issue (because it happens once instead of every time an AMI is launched). I can make this relabel be the very last thing just before the filesystem is unmounted for the last time until it becomes an AMI and will launch. I just need to know what to call to do it. I have searched man pages with no luck. I have searched system init scripts, but where /.autorelabel is detected, it is unclear what is happening. Documents like http://www.centos.org/docs/5/html/5.2/Deployment_Guide/sec-sel-fsrelabel.html only tell how to do things that still really do the work after a reboot. I need to have the work done BEFORE the "reboot" (unmount, build AMI, and launch ready to go). The big point is ... yes there will be a reboot ... but I want the relabel work to be done before that so it won't be done every time an AMI is launched (because it takes so long).
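
    A hedged sketch of doing the relabel against the mounted image tree instead of waiting for /.autorelabel (this assumes the CentOS 6 targeted policy and that the image is mounted at /mnt/image; chrooting in and running fixfiles is the other common variant):

      # Relabel the image in place using the policy's file_contexts spec;
      # -r makes setfiles treat /mnt/image as the root when matching paths.
      setfiles -r /mnt/image -v \
          /mnt/image/etc/selinux/targeted/contexts/files/file_contexts /mnt/image

      # Alternative, using the tools inside the image itself:
      # chroot /mnt/image fixfiles relabel

    If the relabel is done at image-build time, make sure no stray /.autorelabel file is left in the image, or the boot-time relabel will run anyway on first launch.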

    Read the article

  • Locating memory leak in Apache httpd process, PHP/Doctrine-based application

    - by Sam
    I have a PHP application using these components: Apache 2.2.3-31 on Centos 5.4 PHP 5.2.10 Xdebug 2.0.5 with Remote Debugging enabled APC 3.0.19 Doctrine ORM for PHP 1.2.1 using Query Caching and Results Caching via APC MySQL 5.0.77 using Query Caching I've noticed that when I start up Apache, I eventually end up with 10 child processes. As time goes on, each process will grow in memory until each one approaches 10% of available memory, which begins to slow the server to a crawl since together they grow to take up 100% of memory. Here is a snapshot of my top output: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1471 apache 16 0 626m 201m 18m S 0.0 10.2 1:11.02 httpd 1470 apache 16 0 622m 198m 18m S 0.0 10.1 1:14.49 httpd 1469 apache 16 0 619m 197m 18m S 0.0 10.0 1:11.98 httpd 1462 apache 18 0 622m 197m 18m S 0.0 10.0 1:11.27 httpd 1460 apache 15 0 622m 195m 18m S 0.0 10.0 1:12.73 httpd 1459 apache 16 0 618m 191m 18m S 0.0 9.7 1:13.00 httpd 1461 apache 18 0 616m 190m 18m S 0.0 9.7 1:14.09 httpd 1468 apache 18 0 613m 190m 18m S 0.0 9.7 1:12.67 httpd 7919 apache 18 0 116m 75m 15m S 0.0 3.8 0:19.86 httpd 9486 apache 16 0 97.7m 56m 14m S 0.0 2.9 0:13.51 httpd I have no long-running scripts (they all terminate eventually, the longest being maybe 2 minutes long), and I am working under the assumption that once each script terminates, the memory it uses gets deallocated. (Maybe someone can correct me on that). My hunch is that it could be APC, since it stores data between requests, but at the same time, it seems weird that it would store data inside the httpd process. How can I track down which part of my app is causing the memory leak? What tools can I use to see how the memory usage is growing inside the httpd process and what is contributing to it?
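
    Whatever the leak turns out to be, a common stopgap while investigating (a sketch for the prefork MPM in httpd.conf; the numbers are illustrative assumptions, not tuned values) is to recycle children before they balloon:

      <IfModule prefork.c>
          StartServers          5
          MinSpareServers       5
          MaxSpareServers      10
          MaxClients           10
          # Recycle each child after N requests so leaked memory is handed back;
          # 0 means children are never recycled.
          MaxRequestsPerChild 500
      </IfModule>

    Lowering PHP's memory_limit in php.ini likewise bounds how much any single request can pin inside a child before that child is recycled.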

    Read the article
