Search Results

Search found 151 results on 7 pages for 'lsof'.

Page 5/7 | < Previous Page | 1 2 3 4 5 6 7  | Next Page >

  • SSH session closing whilst virtualenv session stays open (I think)

    - by ing0
    I've been developing some sites using Flask recently (running on Debian within a virtualenv), and when I am testing I can run it on a port, let's say port 5000. So I run the script like so: . env/bin/activate <- go into virtual environment python file.py <- run python script And I will be given this message: Running on http://0.0.0.0:5000/ So this all works great and I can access my site on this port fine. However... my rubbish ISP always does this thing where it resets something around 1am every morning. I have no idea what this is; everything runs as normal but I always get disconnected from any open SSH sessions. This leaves the script running, and all I can do is call: lsof -i which will show me the process, but if I kill it and then rerun it things get weird. The: Running on http://0.0.0.0:5000 message still shows but I cannot connect to it anymore. I've tried changing the port number and it seems the only thing that works is trying again later on or on another day. Now I'm assuming that something on my server resets in between these times and I would like to think it was maybe that virtualenv session timing out, but I cannot find out how to do this manually. Does anyone know?
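
    One way out of that state, sketched below on the assumption that the old process is still bound to the port, is to find whatever is listening on port 5000, kill it, and restart the script detached from the SSH session so a dropped connection can no longer orphan it (the port and file names are taken from the question):

        # find and kill whatever still holds port 5000
        sudo lsof -t -i tcp:5000 | xargs -r kill
        sudo lsof -t -i tcp:5000 | xargs -r kill -9   # only if it lingers

        # restart detached from the terminal so the nightly disconnect can't orphan it
        . env/bin/activate
        nohup python file.py > flask.log 2>&1 &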

    Read the article

  • Can't find disk usage in one directory

    - by Xster
    Similar questions are asked frequently but no suggested answers solved my issue. I have some disk space usage that I can't find as well. In df Filesystem 1K-blocks Used Available Use% Mounted on /dev/sda1 144183992 136857180 2652 100% / udev 2013316 4 2013312 1% /dev tmpfs 808848 876 807972 1% /run none 5120 0 5120 0% /run/lock none 2022116 76 2022040 1% /run/shm overflow 1024 0 1024 0% /tmp I checked the inodes, I checked lsof for +L1 or deleted files, I rebooted, I checked for files hidden behind mounts but none of them were the issue. It grows periodically and I'm running out of things to delete to feed the beast. It's all in the home directory of the only user I have. In du in ~ du -h --max-depth=1 192K ./.nv 2.1M ./.gconf 12K ./Pictures 1.6M ./.launchpadlib 12K ./Public 24K ./.TemporaryItems 8.9M ./.cache 12K ./Network Trash Folder 28K ./.vnc 11M ./.AppleDB 48K ./.subversion 1.9G ./.xbmc 8.0K ./.AppleDesktop 12K ./.dbus 81M ./.mozilla 12K ./Music 160K ./.gnome2 44K ./Downloads 692K ./.zsh 236K ./.AppleDouble 64K ./.pulse 4.0K ./.gvfs 1.4M ./.adobe 44K ./.pki 44K ./.compiz-1 168K ./.config 1.4M ./.thumbnails 12K ./Templates 912K ./.gstreamer-0.10 8.0K ./.emacs.d 92K ./Desktop 1.3M ./.local 12K ./Ubuntu One 12K ./Documents 296K ./.fontconfig 12K ./.qt 12K ./.gnome2_private 20K ./.ssh 20K ./.mission-control 12K ./Videos 12K ./Temporary Items 640K ./.macromedia 124G . I can't find a way to figure out how it got to that 124G in that directory. There are no mount points in home.
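
    Worth noting that du --max-depth=1 only prints subdirectories, so large regular files sitting directly in the home directory never get a line of their own; the sketch below checks for those and re-sums space pinned by deleted-but-open files (the 100M threshold is just an example):

        # large regular files directly in ~ (du --max-depth=1 does not list individual files)
        find ~ -maxdepth 1 -type f -size +100M -exec ls -lh {} \;

        # space still pinned by files that were deleted while a process holds them open
        sudo lsof -nP +L1 | awk 'NR>1 {sum[$1]+=$7} END {for (p in sum) printf "%s %.0f MB\n", p, sum[p]/1048576}'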

    Read the article

  • Redis 2.0.3 would not let go of deleted appendonly.aof file after BGREWRITEAOF

    - by Alexander Gladysh
    Ubuntu 10.04.2, Redis 2.0.3 (more details at the end of the question). My AOF file for Redis is getting too large, to the point where it soon would threaten to take whole free disk space on my small-HDD VPS box: $ df -h Filesystem Size Used Avail Use% Mounted on /dev/xvda 32G 24G 6.7G 78% / $ ls -la total 3866688 drwxr-xr-x 2 redis redis 4096 2011-03-02 00:11 . drwxr-xr-x 29 root root 4096 2011-01-24 15:58 .. -rw-r----- 1 redis redis 3923246988 2011-03-02 00:14 appendonly.aof -rw-rw---- 1 redis redis 32356467 2011-03-02 00:11 dump.rdb When I run BGREWRITEAOF, the AOF file shrinks, but disk space is not freed: $ ls -la total 95440 drwxr-xr-x 2 redis redis 4096 2011-03-02 00:17 . drwxr-xr-x 29 root root 4096 2011-01-24 15:58 .. -rw-rw---- 1 redis redis 65137639 2011-03-02 00:17 appendonly.aof -rw-rw---- 1 redis redis 32476167 2011-03-02 00:17 dump.rdb $ df -h Filesystem Size Used Avail Use% Mounted on /dev/xvda 32G 24G 6.7G 78% / Sure enough, Redis is still holding the deleted file: $ sudo lsof -p6916 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME ... redis-ser 6916 redis 7r REG 202,0 3923957317 918129 /var/lib/redis/appendonly.aof (deleted) ... redis-ser 6916 redis 10w REG 202,0 66952615 917507 /var/lib/redis/appendonly.aof ... How can I workaround this issue? I can restart Redis this time, but I would really like to avoid doing this on a regular basis. Note that I can not upgrade to 2.2 (upgrade to 2.0.4 is feasible though). More information on my system: $ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 10.04.2 LTS Release: 10.04 Codename: lucid $ uname -a Linux my.box 2.6.32.16-linode28 #1 SMP Sun Jul 25 21:32:42 UTC 2010 i686 GNU/Linux $ redis-cli info redis_version:2.0.3 redis_git_sha1:00000000 redis_git_dirty:0 arch_bits:32 multiplexing_api:epoll process_id:6916 uptime_in_seconds:632728 uptime_in_days:7 connected_clients:2 connected_slaves:0 blocked_clients:0 used_memory:65714632 used_memory_human:62.67M changes_since_last_save:8398 bgsave_in_progress:0 last_save_time:1299014574 bgrewriteaof_in_progress:0 total_connections_received:17 total_commands_processed:55748609 expired_keys:0 hash_max_zipmap_entries:64 hash_max_zipmap_value:512 pubsub_channels:0 pubsub_patterns:0 vm_enabled:0 role:master db0:keys=1,expires=0 db1:keys=18,expires=0
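
    One workaround that avoids a restart, assuming losing the old (already superseded) AOF contents is acceptable, is to truncate the unlinked file through its /proc descriptor so the kernel can free the blocks; the pid 6916 and fd 7 below come from the lsof output above, and this is a generic Linux trick rather than anything Redis-specific:

        # confirm which descriptor points at the deleted AOF
        sudo ls -l /proc/6916/fd | grep deleted

        # truncate the unlinked file; its blocks are released immediately
        sudo sh -c '> /proc/6916/fd/7'

        df -h /   # the space should be back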

    Read the article

  • Apache process consuming all memory on the server

    - by jemmille
    I have an apache process that suddenly appears on a particular server. When it shows up it starts consuming memory at a very rapid rate, then moves on to all the swap. In all it consumes about 11GB (including swap) of memory and the server eventually becomes unresponsive. The load on the server is under 1 at all other times. The process runs as nobody and I am having a hard time tracking down the source. If I run strace on the process, all it does is continuously dump out mprotect over and over again. If I run lsof -p <pid>, I get this, but only sometimes: httpd 19229 nobody 152u IPv4 175050 crawl-66-249-67-216.googlebot.com:62336 (CLOSE_WAIT) httpd 19229 nobody 153u IPv4 179104 crawl-66-249-71-167.googlebot.com:58012 (ESTABLISHED) As long as I catch it, I can kill the process and the server almost immediately stabilizes. I have one site on the server that is getting a few thousand hits a day that I think might be the source, but I still can't find the exact reason. Also, this is a cPanel server and I have upcp'd the server, rebuilt apache with easy apache, and rebuilt httpd.conf. It is not spawning any related processes, meaning I can't find any php, mysql, cgi, etc. processes that relate to this process. It's just a loner process that balloons fast and consumes every last MB of memory. This is on a XenServer 5.6 based VM. No other servers in the cluster are having this issue.

    Read the article

  • Apache2 process stuck at 100% cpu, CLOSE_WAIT socket lingering

    - by mmazing
    I've troubleshooted the heck out of this today, and I can't seem to find any information on how to determine what is happening exactly. Basically, on my development server, another developer is causing CLOSE_WAIT connections that eat up one or more apache2 processes for several hours if I don't restart apache2. strace on any of the processes yields no information, only that it was able to attach. mod_proxy is not enabled. KeepAlive is on, KeepAliveTimeout is 15 seconds, MaxKeepAliveRequests is 100. From what I've been reading, this may or may not be an apache issue at all, just that that's how CLOSE_WAIT works (the server is waiting for a FIN packet to close the connection). I just can't believe that a server would be crippled so easily by not receiving a packet from a remote host telling it to close the connection. Especially without any intervention for well over an hour. Any tips? I'm about to pull my hair out. Edit : Also, there are no unusual entries in any apache log files. Edit 2: lsof -i shows only a single CLOSE_WAIT per hanging process. (That's what has been bothering me about this, as most other discussions talk about many CLOSE_WAIT connections, while I only have one per process.) The nature of the code that is running (php) doesn't really lend itself to closing open connections and whatnot. I can run the same code that he is executing with the same session data, and not result in a hanging process.
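
    With a reasonably recent lsof (4.82 or later) the stuck sockets can be filtered by TCP state, which at least ties each CLOSE_WAIT to a worker PID for closer inspection; the PID below is a placeholder:

        # Apache sockets sitting in CLOSE_WAIT, with the owning worker PID
        sudo lsof -a -c apache2 -iTCP -sTCP:CLOSE_WAIT -nP

        # is that worker spinning in userland or blocked in the kernel?
        pid=12345                        # hypothetical: a PID from the output above
        ps -o pid,stat,wchan:30 -p $pid
        sudo strace -f -p $pid -e trace=network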

    Read the article

  • Linux: prevent VNC from swapping like mad

    - by Weezy
    I'm accessing a MacMini (with MacOS X 10.4) from my Linux machine using VNC and there's an issue that is driving me crazy... My Linux machine has 4 GB of ram and I run a lot of various apps on it and I've got no issue at all. It's all snappy and don't hear the hard disk swapping/read/writing too often. Now with VNC, the hard disk is swapping like mad... When I'm moving things on the OS X desktop. So I was thinking of creating a ramdisk and forcing the temp VNC files to go into that ramdisk but the problem is I can't find any temp files. I've attempted to do that: #!/bin/bash while [ true ] do lsof | grep vnc done And eyeball parse the output to try to find some temp file: no luck. The VNC version I'm using is this one: $ vncviewer -version VNC Viewer Free Edition 4.1.1 for X - built Jan 30 2009 19:33:16 Copyright (C) 2002-2005 RealVNC Ltd. No matter how much data is coming from the Mac, there should be plenty of memory (4 GB of ram) so there's really no reason to swap like crazy. This is driving me mad. Any help as to how I could solve this problem is most welcome because this is literally driving me nuts.
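
    vncviewer normally keeps its frame buffers in anonymous memory rather than temp files, so there may be nothing for a ramdisk to capture; a sketch for confirming what the viewer actually has open and whether it is really swapping (per-process I/O counters need a 2.6.20+ kernel):

        pid=$(pgrep -x vncviewer | head -1)
        ls -l /proc/$pid/fd            # any temp files at all?
        watch -n 2 cat /proc/$pid/io   # cumulative read/write bytes for the viewer

        vmstat 2 10                    # the si/so columns show whether the disk noise is really swap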

    Read the article

  • How to stop RAID5 array while it is shown to be busy?

    - by RCola
    I have a raid5 array and need to stop it, but while trying to stop it getting error. # cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sde1[3](F) sdc1[4](F) sdf1[2] sdd1[1] 2120320 blocks level 5, 32k chunk, algorithm 2 [3/2] [_UU] unused devices: <none> # mdadm --stop mdadm: metadata format 00.90 unknown, ignored. mdadm: metadata format 00.90 unknown, ignored. mdadm: No devices given. # mdadm --stop /dev/md0 mdadm: metadata format 00.90 unknown, ignored. mdadm: metadata format 00.90 unknown, ignored. mdadm: fail to stop array /dev/md0: Device or resource busy and # lsof | grep md0 md0_raid5 965 root cwd DIR 8,1 4096 2 / md0_raid5 965 root rtd DIR 8,1 4096 2 / md0_raid5 965 root txt unknown /proc/965/exe # cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sde1[3](F) sdc1[4](F) sdf1[2] sdd1[1] 2120320 blocks level 5, 32k chunk, algorithm 2 [3/2] [_UU] # grep md0 /proc/mdstat md0 : active raid5 sde1[3](F) sdc1[4](F) sdf1[2] sdd1[1] # grep md0 /proc/partitions 9 0 2120320 md0 While booting, md1 is mounted ok but md0 failed for some unknown reason # dmesg | grep md[0-9] [ 4.399658] raid5: allocated 3179kB for md1 [ 4.400432] raid5: raid level 5 set md1 active with 3 out of 3 devices, algorithm 2 [ 4.400678] md1: detected capacity change from 0 to 2121793536 [ 4.403135] md1: unknown partition table [ 38.937932] Filesystem "md1": Disabling barriers, trial barrier write failed [ 38.941969] XFS mounting filesystem md1 [ 41.058808] Ending clean XFS mount for filesystem: md1 [ 46.325684] raid5: allocated 3179kB for md0 [ 46.327103] raid5: raid level 5 set md0 active with 2 out of 3 devices, algorithm 2 [ 46.330620] md0: detected capacity change from 0 to 2171207680 [ 46.335598] md0: unknown partition table [ 46.410195] md: recovery of RAID array md0 [ 117.970104] md: md0: recovery done. # cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sde1[0] sdf1[2] sdd1[1] 2120320 blocks level 5, 32k chunk, algorithm 2 [3/3] [UUU] md1 : active raid5 sdc2[0] sdf2[2] sde2[3](S) sdd2[1] 2072064 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]
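
    The md0_raid5 entry in lsof is the array's own kernel thread, not an external user, so "busy" usually means the device is still mounted, used as swap, or claimed by LVM; a checklist sketch (device names from the question):

        grep md0 /proc/mounts              # still mounted somewhere?
        grep md0 /proc/swaps               # used as swap?
        sudo fuser -vm /dev/md0            # processes using a filesystem on it
        sudo pvs 2>/dev/null | grep md0    # claimed as an LVM physical volume?

        # once nothing holds it
        sudo mdadm --stop /dev/md0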

    Read the article

  • CentOS - mdadm raid1 drive won't mount to default location

    - by danny
    I'm running CentOS 5.5; the system, boot, swap, etc. is all on /dev/sda and I have two identical single-partition drives /dev/sdb1 /dev/sdc1 that are configured in RAID1 (using mdadm). It was working fine (configured to mount to /mnt/data in the fstab file) and I recently let yum install a couple of automatic updates without paying attention to what they were, and now it doesn't work. RAID is working fine (dmesg shows it gets loaded correctly). mdstat shows: # cat /proc/mdstat Personalities : [raid1] md0 : active raid1 sdc1[1] sdb1[0] XXXX blocks [2/2] [UU] unused devices: <none> Additionally, I can mount it anywhere other than its default directory (i.e. the following works, and I can read data off the drives). # mount /dev/md0 /mnt/data2 EXT3-fs warning: mounting fs with errors, running e2fsck is recommended But when I run the following I get: # mount -a mount: /dev/sdb1 already mounted or /mnt/data busy It says nothing is mounted when I try to umount /dev/sdb1 or umount /mnt/data, so I assume it's the second of those errors. However, lsof | grep mnt shows nothing. The weird thing is that I can save files in /mnt/data. So something is obviously mounted there, but when I try to umount it I get the error that nothing is mounted. /etc/mtab doesn't mention any of the partitions or files I am trying to work with, and fstab just has that one line I mentioned above that is supposed to mount my RAID partition. Again, it was all working fine until those updates. On Google I've found a few things about dmraid interfering with mdadm after an update, but I yum remove'd dmraid and rebooted and it didn't help. I'm really confused and need to get this working to get on with my work!
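
    Since files can be saved under /mnt/data, something is almost certainly mounted there even though /etc/mtab has lost track of it; /proc/mounts is the kernel's own list and never goes stale, so a sketch along these lines should show and clear the phantom mount:

        grep -E 'md0|/mnt/data' /proc/mounts   # the kernel's view, independent of mtab

        # if it is listed, unmount by path, check the fs, and remount cleanly
        sudo umount /mnt/data
        sudo fsck.ext3 -f /dev/md0             # the earlier mount warned about fs errors
        sudo mount -a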

    Read the article

  • nginx server over https using up all available file handles

    - by mmr
    Hi all, So I have an nginx server that's working over https with Sinatra. When I try to download a jnlp file in a configuration that works fine over Mongrel and http (no s), the nginx server fails to serve the file with a 504 error. Subsequent checking of the logs states that this error is due to overflowing the available number of file handles, ie, "24: too many open files". Running sudo lsof -p <nginx worker pid> gets me a huge list of files, all looking like: nginx 1771 nobody 11u IPv4 10867997 0t0 TCP localhost:44704->localhost:https (ESTABLISHED) nginx 1771 nobody 12u IPv4 10868113 0t0 TCP localhost:https->localhost:44704 (ESTABLISHED) nginx 1771 nobody 13u IPv4 10868114 0t0 TCP localhost:44705->localhost:https (ESTABLISHED) nginx 1771 nobody 14u IPv4 10868191 0t0 TCP localhost:https->localhost:44705 (ESTABLISHED) nginx 1771 nobody 15u IPv4 10868192 0t0 TCP localhost:44706->localhost:https (ESTABLISHED) nginx 1771 nobody 16u IPv4 10868255 0t0 TCP localhost:https->localhost:44706 (ESTABLISHED) nginx 1771 nobody 17u IPv4 10868256 0t0 TCP localhost:44707->localhost:https (ESTABLISHED) nginx 1771 nobody 18u IPv4 10868330 0t0 TCP localhost:https->localhost:44707 (ESTABLISHED) nginx 1771 nobody 19u IPv4 10868331 0t0 TCP localhost:44708->localhost:https (ESTABLISHED) nginx 1771 nobody 20u IPv4 10868434 0t0 TCP localhost:https->localhost:44708 (ESTABLISHED) Increasing the number of files that can be opened is no help, because then nginx just blows right past that limit. And no wonder, it looks like it's in some kind of loop to pull all available files. Any idea what's going on, and how to fix it?
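
    The paired localhost:44704->localhost:https and localhost:https->localhost:44704 lines are the classic signature of nginx proxying to itself, so each request spawns another request until the descriptor limit is hit; a sketch of what to check (the paths and the limit value are assumptions):

        # does any proxy_pass/fastcgi_pass or rewrite point back at this same server on 443?
        grep -rnE 'proxy_pass|fastcgi_pass|rewrite' /etc/nginx/

        # watch the loopback connections accumulate
        sudo lsof -a -c nginx -iTCP@127.0.0.1 -nP | head

        # nginx.conf, top level: per-worker descriptor ceiling (buys time, does not fix a loop)
        #   worker_rlimit_nofile 8192;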

    Read the article

  • MySQL not releasing temp file descriptors

    - by Wakaru44
    For the past few days, we've been experiencing some serious problems with our MySQL installation: MySQL keeps opening temporary files (normal behaviour) but these files are never released. The consequence is that, eventually, the disk space is exhausted and we have to restart the service and clean up /tmp manually. Using lsof, we see something like this: mysqld 16866 mysql 5u REG 8,3 0 692 /tmp/ibyWJylQ (deleted) mysqld 16866 mysql 6u REG 8,3 0 707 /tmp/ibf5adsT (deleted) mysqld 16866 mysql 7u REG 8,3 0 728 /tmp/ibGjPRyW (deleted) mysqld 16866 mysql 8u REG 8,3 0 5678 /tmp/ibMQDLMZ (deleted) mysqld 16866 mysql 13u REG 8,3 0 5679 /tmp/ibQAnM42 (deleted) Maybe it's not related, but when we shut down the server, the files are finally freed, and we can see the following warnings in the MySQL log: 121029 7:44:27 [Warning] /usr/local/mysql/bin/mysqld: Forcing close of thread 1333 user: 'xxx' 121029 7:44:27 [Warning] /usr/local/mysql/bin/mysqld: Forcing close of thread 1156 user: 'yyy' 121029 7:44:27 [Warning] /usr/local/mysql/bin/mysqld: Forcing close of thread 1151 user: 'zzz' where 'xxx', 'yyy' and 'zzz' are distinct MySQL users (and the only 3 users with active connections to the database). We have a few theories: There is a problem in the OS that keeps file handles open. Could it be possible that the OS "delete" operation blocks the threads until shutdown? This may explain the warning at shutdown and the fact that files are finally deleted when the process dies. Until now, data sets were so small that temp files were relatively small and there was enough time to release the file handles without exhausting disk space. We are using MySQL 5.5 on RHEL 6.2 with the default kernel.
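
    Before blaming the OS it helps to measure how much space the unlinked temp files actually pin and whether long-running connections are what keeps them open (implicit on-disk temp tables live until the owning statement or connection ends); a measurement sketch using the pid from the lsof output above:

        # total space pinned by mysqld's deleted temp files
        sudo lsof -nP -p 16866 | awk '/\(deleted\)/ {sum += $7} END {printf "%.0f MB pinned\n", sum/1048576}'

        # look for long-running statements or idle-in-transaction connections that could be holding them
        mysql -e "SHOW FULL PROCESSLIST"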

    Read the article

  • Centos running Apache Tomcat keep getting "java.net.SocketException: Too many open files"

    - by Gerard Moroney
    We're running Apache Tomcat 7.0.41 on CentOS 6 with java version "1.7.0_21". We were getting a lot of "too many open files" errors so I did some research. The consensus was that it was to do with the number of open files. So I did the following: Increased max files in /etc/security/limits.conf soft nofile 100000 hard nofile 100000 Rebooted the server Checked the limits were valid for the user which was to run the process [app_admin@xxx ~]$ ulimit -Hn 100000 [app_admin@xxx ~]$ ulimit -Sn 100000 Monitored open files on the server using the lsof command What I observed was that when the total open files reached circa 13000 and Tomcat had around 4500 open files, the error reappeared. I am confused. I thought it would have resolved the problem, but clearly I don't fully understand the root cause and also how to set the parameter correctly. To (maybe) help, I have not modified the server.xml file for Tomcat (although I'm tempted). I don't want to start fiddling with that and make things worse. I'm more than happy to share any more information if someone can give me some hints on where to start looking.
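
    /etc/security/limits.conf is applied by PAM at login, so a Tomcat started by an init script can still be running with the old default (often 1024) even though an interactive shell reports 100000; the limits that matter are the ones attached to the running JVM, which can be checked directly:

        # effective limit of the running Tomcat process, not of a login shell
        pid=$(pgrep -f org.apache.catalina.startup.Bootstrap | head -1)
        grep 'open files' /proc/$pid/limits

        # descriptors it actually has open right now
        ls /proc/$pid/fd | wc -l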

    Read the article

  • Apache is running the wrong version of PHP.

    - by The Rook
    I am trying to enable cURL in PHP, and possibly update PHP as well. I have run into a roadblock where Apache seems to be running the wrong version of PHP. Here is some evidence. #lsof | grep php httpd 18397 nobody 135w REG 8,3 242 528537 /usr/local/apache/modules/libphp5.so I download the latest PHP 5.2.13 and build from source. I run service httpd stop, do a make install and then I manually overwrite /usr/local/apache/modules/libphp5.so just to be safe. Using ldd I can see that it's my fresh binary compiled with cURL (the old one doesn't have cURL): ldd /usr/local/apache/modules/libphp5.so | grep curl libcurl.so.4 => /usr/local/lib/libcurl.so.4 (0x0064e000) I execute phpinfo() and 5.2.3 is being reported instead of my new 5.2.13, and "curl" is nowhere to be found. What am I doing wrong? Why is cURL disabled even though it's statically linked? This is a RHEL 5.5 system.
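
    A way to pin this down is to check which libphp5.so the running httpd actually has mapped, whether another copy exists that a different LoadModule line could be loading, and to do a full stop/start (a graceful restart keeps the old module mapped); the paths follow the question's layout and are assumptions:

        sudo lsof -nP | grep libphp5.so | awk '{print $1, $2, $NF}' | sort -u   # what is really mapped
        sudo find / -name libphp5.so -ls 2>/dev/null                            # any other copies?
        grep -rn libphp5 /usr/local/apache/conf/

        sudo /usr/local/apache/bin/apachectl stop
        sudo /usr/local/apache/bin/apachectl start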

    Read the article

  • NRPE: Unable to read output with check_connections plugin

    - by Wlodzimierz
    I'm using a plugin which gives me warnings or crits based on established connections. If I run it on the local machine it gives: root@graber:/usr/lib/nagios/plugins# ./check_connections -w 1 -c 5 -C sshd CRITICAL Established connections: 6 I know, I run as root. But: Rights to the file: root@graber:/usr/lib/nagios/plugins# ls -all check_connections -rwxr-xr-x 1 nagios nagios 5459 2012-07-06 10:19 check_connections /etc/sudoers: root@graber:/usr/lib/nagios/plugins# cat /etc/sudoers Defaults env_reset root ALL=(ALL:ALL) ALL %admin ALL=(ALL) ALL nagios ALL=(ALL) NOPASSWD: /usr/bin/lsof nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ /etc/nagios/nrpe.cfg: nrpe_user=nagios nrpe_group=nagios dont_blame_nrpe=1 command_prefix=/usr/bin/sudo command[check_connections]=/usr/lib/nagios/plugins/check_connections -w 1 -c 5 -C sshd log from remote: 2012-07-06T11:12:49+02:00 graber nrpe[25928]: Handling the connection... 2012-07-06T11:12:49+02:00 graber nrpe[25928]: Host address is in allowed_hosts 2012-07-06T11:12:49+02:00 graber nrpe[25928]: Host is asking for command 'check_connections' to be run... 2012-07-06T11:12:49+02:00 graber nrpe[25928]: Running command: /usr/lib/nagios/plugins/check_connections -w 1 -c 5 -C sshd 2012-07-06T11:19:11+02:00 graber nrpe[26100]: Return Code: 2, Output: NRPE: Unable to read output Why is this happening? I'm out of ideas, I've searched Google for 2 days now :)
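
    NRPE's "Unable to read output" hides the real error, so it is worth reproducing the exact call as the nagios user with sudo in non-interactive mode, where a password prompt or sudoers mismatch shows up immediately; a diagnostic sketch (the log path assumes a Debian/Ubuntu layout):

        # run exactly what NRPE runs, as nagios, refusing to prompt for a password
        sudo -u nagios /usr/bin/sudo -n /usr/lib/nagios/plugins/check_connections -w 1 -c 5 -C sshd
        echo "exit code: $?"

        # sudo's side of the story
        sudo grep sudo /var/log/auth.log | tail -20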

    Read the article

  • How to tell statd to use portmap on a non-localhost ipadress?

    - by jneves
    How can I make statd connect to an IP address other than 127.0.0.1? I have a server that is connected to 2 different networks (one public, another private). I want it to provide an NFS share for only the private network. The host is an Ubuntu 8.04 box. The private IP address is 192.168.1.202. I changed /etc/default/portmap to add: OPTIONS="-i 192.168.1.202" The command lsof -n | grep portmap returns: portmap 10252 daemon cwd DIR 202,0 4096 2 / portmap 10252 daemon rtd DIR 202,0 4096 2 / portmap 10252 daemon txt REG 202,0 15248 13461 /sbin/portmap portmap 10252 daemon mem REG 202,0 83708 32823 /lib/tls/i686/cmov/libnsl-2.7.so portmap 10252 daemon mem REG 202,0 1364388 32817 /lib/tls/i686/cmov/libc-2.7.so portmap 10252 daemon mem REG 202,0 31304 16588 /lib/libwrap.so.0.7.6 portmap 10252 daemon mem REG 202,0 109152 16955 /lib/ld-2.7.so portmap 10252 daemon 0u CHR 1,3 960 /dev/null portmap 10252 daemon 1u CHR 1,3 960 /dev/null portmap 10252 daemon 2u CHR 1,3 960 /dev/null portmap 10252 daemon 3u unix 0xecc8c3c0 4332992 socket portmap 10252 daemon 4u IPv4 4332993 UDP 192.168.1.202:sunrpc portmap 10252 daemon 5u IPv4 4332994 TCP 192.168.1.202:sunrpc (LISTEN) portmap 10252 daemon 6u REG 0,12 289 3821511 /var/run/portmap_mapping I defined in /etc/hosts the following: 192.168.1.202 server.local In /etc/default/nfs-common I changed STATDOPTS to: STATDOPTS="--name server.local" Yet when I run /etc/init.d/nfs-common start it fails to start. The log shows: Jun 8 06:37:44 cookwork-web1 rpc.statd[9723]: Version 1.1.2 Starting Jun 8 06:37:44 cookwork-web1 rpc.statd[9723]: Flags: Jun 8 06:37:44 cookwork-web1 rpc.statd[9723]: unable to register (statd, 1, udp). An strace -f rpc.statd -n server.local results in a lot of lines, including this one: sendto(9, "\200]3\362\0\0\0\0\0\0\0\2\0\1\206\240\0\0\0\2\0\0\0\1"..., 56, 0, {sa_family=AF_INET, sin_port=htons(111), sin_addr=inet_addr("127.0.0.1")}, 16) = 56
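
    The strace line shows the registration packet going to 127.0.0.1:111, so binding portmap only to 192.168.1.202 is exactly what makes rpc.statd's portmapper registration fail; a quick comparison with rpcinfo confirms it (rpcinfo ships with the portmap tools):

        rpcinfo -p 192.168.1.202   # should list the portmapper and any registered services
        rpcinfo -p 127.0.0.1       # expected to fail while portmap no longer listens on loopback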

    Read the article

  • python reports socket in use, netstat and others claim its not

    - by captainmish
    We have a strange socket issue with a RHES3 box: Python 2.4.1 (#1, Jul 5 2005, 19:17:11) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-52)] Type "help", "copyright", "credits" or "license" for more information. >>> import socket >>> s = socket.socket() >>> s.bind(('localhost',12351)) Traceback (most recent call last): File "<stdin>", line 1, in ? File "<string>", line 1, in bind socket.error: (98, 'Address already in use') This seems normal, lets see what has that socket: # netstat -untap | grep 12351 {no output} # grep 12351 /proc/net/tcp {no output} # lsof | grep 12351 {no output} # fuser -n tcp 12351 {no output, repeating the python test fails again} # nc localhost 12351 {no output} # nmap localhost 12351 {shows port closed} Other high ports work fine (eg 12352 works) Is there something magic about this port? Is there somewhere else I can look? Where does python find out that socket is in use that netstat doesnt know about? Any other way I can find out what/if that socket is?

    Read the article

  • kill a hung mount process

    - by John P
    I have a virtual machine drive that ran out of space, so I shutdown the VM, extended the volume using lvextend. After resizing the partition (ext3), I ran e2fsck on it, and it found and corrected errors. Unfortunately, when I ran efsck one more time, there were more errors that had to be fixed. I went through 3 rounds of e2fsck before I decided to try mounting it to clean up some space manually. I tried mounting it, but the mount process hung. I tried to "kill -9" the mount process, but that did not kill it. I killed the parent process, but that did not kill it either. Any ideas on how to kill a rogue mount process? Some evidence: ps -l 13292 F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD 4 R 0 13292 1 99 85 0 - 17964 - ? 11:27 mount /dev/mapper/xen7-123p3 /tmp/p3/ lsof -p 13292 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME mount 13292 root cwd DIR 9,2 4096 25264129 /root mount 13292 root rtd DIR 9,2 4096 2 / mount 13292 root txt REG 9,2 61656 2916434 /bin/mount mount 13292 root mem REG 9,2 144776 31457282 /lib64/ld-2.5.so mount 13292 root mem REG 9,2 1718232 31457284 /lib64/libc-2.5.so mount 13292 root mem REG 9,2 23360 31457291 /lib64/libdl-2.5.so mount 13292 root mem REG 9,2 43808 31457783 /lib64/libblkid.so.1.0 mount 13292 root mem REG 9,2 247496 31457331 /lib64/libsepol.so.1 mount 13292 root mem REG 9,2 95464 31457337 /lib64/libselinux.so.1 mount 13292 root mem REG 9,2 154640 31457491 /lib64/libdevmapper.so.1.02 mount 13292 root mem REG 9,2 17936 31457472 /lib64/libuuid.so.1.2 mount 13292 root mem REG 9,2 56438208 12684878 /usr/lib/locale/locale-archive mount 13292 root 0u CHR 136,11 0t0 13 /dev/pts/11 (deleted) mount 13292 root 1u CHR 136,11 0t0 13 /dev/pts/11 (deleted) mount 13292 root 2u CHR 136,11 0t0 13 /dev/pts/11 (deleted) umount -f /tmp/p3/ umount2: Invalid argument umount: /tmp/p3/: not mounted
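
    A mount stuck inside the kernel normally sits in uninterruptible sleep (state D), and no signal, including SIGKILL, is delivered until the blocked call returns, which is why the kill attempts do nothing; a sketch for confirming that with the PID from the question:

        ps -o pid,stat,wchan:30,cmd -p 13292   # "D" state and the kernel routine it is stuck in
        dmesg | tail -30                       # ext3/lvm errors here often explain the hang

        # sometimes a lazy detach of the target lets it finish; otherwise only a reboot clears a D-state task
        sudo umount -l /tmp/p3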

    Read the article

  • shared hosting with malware, .htaccess file gets modified every 2 hours or so

    - by apache
    I spent all day today chasing malware on the shared hosting for one of my clients. The issue is as follows: every 2 hours or so the .htaccess file and all other .htaccess files get modified; at the top of the file these lines are added: <IfModule mod_rewrite.c> RewriteEngine On RewriteCond %{HTTP_REFERER} ^.*(google|ask|yahoo|youtube|wikipedia|excite|altavista|msn|aol|goto|infoseek|lycos|search|bing|dogpile|facebook|twitter|live|myspace|linkedin|flickr)\.(.*) RewriteRule ^(.*)$ http://pasla-ghwoo.ru/rqpgfap?8 [R=301,L] </IfModule> and at the bottom: ErrorDocument 400 http://pasla-ghwoo.ru/rqpgfap?8 ErrorDocument 401 http://pasla-ghwoo.ru/rqpgfap?8 ErrorDocument 403 http://pasla-ghwoo.ru/rqpgfap?8 ErrorDocument 404 http://pasla-ghwoo.ru/rqpgfap?8 ErrorDocument 500 http://pasla-ghwoo.ru/rqpgfap?8 The main problem is that I'm not root on the server, and cannot sudo, as this is shared hosting with hundreds of websites. Typical good commands like dmesg, lsof, dtrace, chattr and many others are not available to me as I'm not root. I can't find out who is modifying the .htaccess files; how do I get that info? My guess is some PHP script, called from outside via command and control, is changing them. This seems to relate to this: http://blog.unmaskparasites.com/2009/09/11/dynamic-dns-and-botnet-of-zombie-web-servers/ How do I find out who is modifying .htaccess files without being root?
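
    Without root the realistic angle is to work inside the account itself: list what changed around each two-hour rewrite and grep the docroot for the usual obfuscation primitives an injector needs; a sketch, where ~/public_html is an assumed docroot and the patterns are common indicators rather than a guaranteed catch:

        # anything in the account modified in the last 3 hours, newest last
        find ~ -type f -mmin -180 -not -name .htaccess -printf '%T@ %p\n' 2>/dev/null | sort -n | tail -30

        # typical dropper fingerprints
        grep -rlE --include='*.php' 'base64_decode|eval\(|gzinflate|str_rot13' ~/public_html

        # if inotify-tools happens to be available to the account, watch one file live
        inotifywait -m -e modify,attrib ~/public_html/.htaccess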

    Read the article

  • tftpd starts randomly

    - by Mutant
    A few days ago my Little Snitch filter started popping up tftpd. I'd never seen this before, so I immediately started freaking out, thinking my Mac has been compromised. I can't find anything unusual on the system. The process usually dies before I can trace it (Little Snitch never allowed the connection, it just left the popup up). I finally caught it once, and found this: [10:32]: sudo lsof -nlP | fgrep tftp Password: tftpd 1924 18446744 cwd DIR 1,3 1326 2 / tftpd 1924 18446744 txt REG 1,3 29856 163979456 /usr/libexec/tftpd tftpd 1924 18446744 txt REG 1,3 600576 163686622 /usr/lib/dyld tftpd 1924 18446744 txt REG 1,3 303300608 189014898 /private/var/db/dyld/dyld_shared_cache_x86_64 tftpd 1924 18446744 0u IPv4 0x34a76100fcbb06e3 0t0 UDP *:55818 tftpd 1924 18446744 2u IPv4 0x34a76100f1113c53 0t0 UDP *:69 [10:32]: ps ax | fgrep 1924 1924 ?? S 0:00.00 /usr/libexec/tftpd -i /private/tftpboot 1949 s000 S+ 0:00.00 fgrep 1924 For the life of me I can't figure out what is starting this. Nothing in cron, LaunchDaemons, etc. Google searches haven't yielded much either. The connection IP is different each time. So my question is: Has anyone seen anything like this before?
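
    On OS X tftpd ships as an on-demand launchd job rather than a cron entry, and if that job is loaded, launchd itself holds UDP 69 and spawns tftpd whenever a packet arrives, which would match the lsof output above; a check sketch using the stock plist path:

        sudo launchctl list | grep -i tftp
        cat /System/Library/LaunchDaemons/tftp.plist

        # unload it so nothing can spawn tftpd on demand
        sudo launchctl unload -w /System/Library/LaunchDaemons/tftp.plist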

    Read the article

  • Find which files an apache process is writing to?

    - by Haluk
    We have this apache process which becomes IO-bound from time to time. Using atop, we can see it is a write operation. Using lsof -p <PID> we can see a list of files open by the httpd process. First we thought "log" files must be the problem, so we turned them off just to test. However, write operations still continue. We will continue testing a few other things. For instance, we use PHP session variables a lot; maybe PHP session files are getting all the writing. But is there a way to quickly identify files which get written to by the httpd process? This way we can focus our efforts on those files. UPDATE: We used the strace command as suggested. Here are two lines from the output. write(23, "\27\0\0\0\3SET CHARACTER SET utf8", 27) = 27 write(23, "\17\0\0\0\3SET NAMES utf8", 19) = 19 We do not have a mysql process on this server. So is strace also showing what is being written to an ethernet port? UPDATE2: During high IO load, the process which consumes most of the write resources gives the following output to strace -e trace=write -p <PID>: --- SIGCHLD (Child exited) @ 0 (0) --- write(9, "!", 1) = 1 write(19, "OPTIONS * HTTP/1.0\r\nUser-Agent: Apache (internal dummy connection)\r\n\r\n", 70) = 70 However I cannot figure out where these are being written to.
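
    The fd numbers in those write() calls can be mapped back to real objects while the process runs, which answers the question directly; a sketch with a placeholder PID, using the descriptors 19 and 23 from the strace output:

        pid=12345                                  # hypothetical: the busy httpd child from atop
        ls -l /proc/$pid/fd/19 /proc/$pid/fd/23    # file, pipe or socket?
        sudo lsof -nP -p $pid -a -d 19,23

        # "SET NAMES utf8" is MySQL client protocol, so fd 23 is almost certainly a
        # socket to a database on another host, not a local file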

    Read the article

  • Relinking a deleted file

    - by mbac32768
    Sometimes people delete files they shouldn't, a long-running process still has the file open, and recovering the data by catting /proc/<pid>/fd/N just isn't awesome enough. Awesome enough would be if you could "undo" the delete by running some magic option to ln that would let you re-link to the inode number (recovered through lsof). I can't find any Linux tools to do this, at least with cursory Googling. What do you got, serverfault? EDIT1: The reason catting the file from /proc/<pid>/fd/N isn't awesome enough is because the process which still has the file open is still writing to it. A delete removes the reference to the inode from the filesystem namespace. What I want is a way of re-creating the reference. EDIT2: 'debugfs ln' works but the risk is too high since it frobs raw filesystem data. The recovered file is also crazy inconsistent. The link count is zero and I can't add links to it. I'm worse off this way since I can just use /proc/<pid>/fd/N to access the data without corrupting my fs.
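
    A middle ground, short of debugfs, is to pause the writer so a copy through /proc is self-consistent, then resume it; this cannot restore the original directory entry, but it avoids touching raw filesystem data. A sketch with placeholder PID/fd values:

        pid=4242; fd=5                       # hypothetical: writer PID and the fd of the deleted file
        kill -STOP $pid                      # stop writes so the copy is consistent
        cp /proc/$pid/fd/$fd /srv/recovered.dat
        kill -CONT $pid                      # it keeps writing to the (still unlinked) inode
        # to make the copy the live file, the writer has to be restarted (or told to
        # reopen its output) against the recovered path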

    Read the article

  • PC to USB transfer slow

    - by Vipin Ms
    I'm having trouble with USB transfer,not with external hard disk. Transfer starts with like, for the transfer of 700MB file it starts with 30mb/s and towards the end it stops at 0s and stays put for like 3-4 mins to transfer the last bit. I have tried different USB devices, but no luck. Is it a bug? Another important point is, in Kubuntu there is no such issue. So is it something related to Gnome? I'm using Ubuntu 11.10 64bit. Somebody please help, it's really annoying. Here are the details. PC all of my drives are in ext4. USB I tried ext3,ntfs and fat32. All having the same problem. Here are my USB controllers details: root@LAB:~# lspci|grep USB 00:1a.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03) 00:1a.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03) 00:1a.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03) 00:1a.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03) 00:1d.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03) 00:1d.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03) 00:1d.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03) 00:1d.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03) Here is an example of one transfer. I connected one of my 4GB usb device. Nov 24 12:01:25 LAB kernel: [ 1175.082175] userif-2: sent link up event. Nov 24 12:01:25 LAB kernel: [ 1695.684158] usb 2-2: new high speed USB device number 3 using ehci_hcd Nov 24 12:01:25 LAB mtp-probe: checking bus 2, device 3: "/sys/devices/pci0000:00/0000:00:1d.7/usb2/2-2" Nov 24 12:01:26 LAB mtp-probe: bus: 2, device: 3 was not an MTP device Nov 24 12:01:26 LAB kernel: [ 1696.132680] usbcore: registered new interface driver uas Nov 24 12:01:26 LAB kernel: [ 1696.142528] Initializing USB Mass Storage driver... Nov 24 12:01:26 LAB kernel: [ 1696.142919] scsi4 : usb-storage 2-2:1.0 Nov 24 12:01:26 LAB kernel: [ 1696.143146] usbcore: registered new interface driver usb-storage Nov 24 12:01:26 LAB kernel: [ 1696.143150] USB Mass Storage support registered. Nov 24 12:01:27 LAB kernel: [ 1697.141657] scsi 4:0:0:0: Direct-Access SanDisk U3 Cruzer Micro 8.02 PQ: 0 ANSI: 0 CCS Nov 24 12:01:27 LAB kernel: [ 1697.168827] sd 4:0:0:0: Attached scsi generic sg2 type 0 Nov 24 12:01:27 LAB kernel: [ 1697.169262] sd 4:0:0:0: [sdb] 7856127 512-byte logical blocks: (4.02 GB/3.74 GiB) Nov 24 12:01:27 LAB kernel: [ 1697.169762] sd 4:0:0:0: [sdb] Write Protect is off Nov 24 12:01:27 LAB kernel: [ 1697.169767] sd 4:0:0:0: [sdb] Mode Sense: 45 00 00 08 Nov 24 12:01:27 LAB kernel: [ 1697.171386] sd 4:0:0:0: [sdb] No Caching mode page present Nov 24 12:01:27 LAB kernel: [ 1697.171391] sd 4:0:0:0: [sdb] Assuming drive cache: write through Nov 24 12:01:27 LAB kernel: [ 1697.173503] sd 4:0:0:0: [sdb] No Caching mode page present Nov 24 12:01:27 LAB kernel: [ 1697.173510] sd 4:0:0:0: [sdb] Assuming drive cache: write through Nov 24 12:01:27 LAB kernel: [ 1697.175337] sdb: sdb1 After that I initiated one transfer. 
lsof -p 3575|tail -2 mv 3575 root 3r REG 8,8 1719599104 4325379 /media/Misc/The Tree of Life (2011) DVDRip XviD-MAXSPEED/The Tree of Life (2011) DVDRip XviD-MAXSPEED www.torentz.3xforum.ro.avi mv 3575 root 4w REG 8,17 1046347776 15 /media/SREE/The Tree of Life (2011) DVDRip XviD-MAXSPEED/The Tree of Life (2011) DVDRip XviD-MAXSPEED www.torentz.3xforum.ro.avi Here are the total time spent on that transfer. root@LAB:/media/SREE# time mv /media/Misc/The\ Tree\ of\ Life\ \(2011\)\ DVDRip\ XviD-MAXSPEED/ /media/SREE/ real 11m49.334s user 0m0.008s sys 0m5.260s root@LAB:/media/SREE# df -T|tail -2 /dev/sdb1 vfat 3918344 1679308 2239036 43% /media/SREE /dev/sda8 ext4 110110576 60096904 50013672 55% /media/Misc Do you think this is normal?? Approximately 12 minutes for 1.6Gb transfer? Thanks.
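
    A fast start followed by a long stall at the end is the usual signature of the page cache absorbing the copy and then flushing it to a slow stick; the copy is only really done when the dirty pages drain. A sketch for watching that and for bounding the buffering (the sysctl names are standard, the values and file name are just examples):

        watch -n 1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'     # watch writeback drain during a copy

        time sh -c 'cp bigfile /media/SREE/ && sync'                 # honest timing, including the final flush

        sudo sysctl -w vm.dirty_bytes=50331648 vm.dirty_background_bytes=16777216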

    Read the article

  • Free RAM disappears - Memory leak?

    - by Izzy
    On a fresh started system, free reports about 1.5G used RAM (8G RAM alltogether, Ubuntu 12.04 with lightdm and plasma desktop, one konsole window started). Having the apps running I use, it still consumes not more than 2G. However, having the system running for a couple of days, more and more of my free RAM disappears -- without showing up in the list of used apps: while smem --pie=name reports less than 20% used (and 80% being available), everything else says differently. free -m for example reports on about day 7: total used free shared buffers cached Mem: 7459 7013 446 0 178 997 -/+ buffers/cache: 5836 1623 Swap: 9536 296 9240 (so you can see, it's not the buffers or the cache). Today this finally ended with the system crashing completely: the windows manager being gone, apps "hanging in the air" (frameless) -- and a popup notifying me about "too many open files". Syslog reports: kernel: [856738.020829] VFS: file-max limit 752838 reached So I closed those applications I was able to close, and killed X using Ctrl-Alt-backspace. X tried to come up again after that with failsafeX, but was unable to do so as it could no longer detect its configuration. So I switched to a console using Ctrl-Alt-F2, captured all information I could think of (vmstat, free, smem, proc/meminfo, lsof, ps aux), and finally rebooted. X again came up with failsafeX; this time I told it to "recover from my backed-up configuration", then switched to a console and successfully used startx to bring up the graphical environment. I have no real clue to what is causing this issue -- though it must have to do either with X itself, or with some user processes running on X -- as after killing X, free -m output looked like this: total used free shared buffers cached Mem: 7459 2677 4781 0 62 419 -/+ buffers/cache: 2195 5263 Swap: 9536 59 9477 (~3.5GB being freed) -- to compare with the output after a fresh start: total used free shared buffers cached Mem: 7459 1483 5975 0 63 730 -/+ buffers/cache: 689 6769 Swap: 9536 0 9536 Two more helpful outputs are provided by memstat -u. Shortly before the crash: User Count Swap USS PSS RSS mail 1 0 200 207 616 whoopsie 1 764 740 817 2300 colord 1 3200 836 894 2156 root 62 70404 352996 382260 569920 izzy 80 177508 1465416 1519266 1851840 After having X killed: User Count Swap USS PSS RSS mail 1 0 184 188 356 izzy 1 1400 708 739 1080 whoopsie 1 848 668 826 1772 colord 1 3204 804 888 1728 root 62 54876 131708 149950 267860 And after a restart, back in X: User Count Swap USS PSS RSS mail 1 0 212 217 628 whoopsie 1 0 1536 1880 5096 colord 1 0 3740 4217 7936 root 54 0 148668 180911 345132 izzy 47 0 370928 437562 915056 Edit: Just added two graphs from my monitoring system. Interesting to see: everytime when there's a "jump" in memory consumption, CPU peaks as well. Just found this right now -- and it reminds me of another indicator pointing to X itself: Often when returning to my machine and unlocking the screen, I found something doing heavvy work on my CPU. Checking with top, it always turned out to be /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch -background none. So after this long explanation, finally my questions: What could be the possible causes? How can I better identify involved processes/applications? What steps could be taken to avoid this behaviour -- short from rebooting the machine all X days? 
I was running 8.04 (Hardy) for about 5 years on my old machine, never having experienced the like (always more than 100 days uptime, before rebooting for e.g. kernel updates). This now is a complete new machine with a fresh install of 8.04. In case it matters, some specs: AMD A4-3400 APU with Radeon(tm) HD Graphics, using the open-source ati/radeon driver (so no fglrx installed), 8GB RAM, WDC WD1002FAEX-0 hdd (1TB), Asus F1A75-V Evo mainboard. Ubuntu 12.04 64-bit with KDE4/Plasma. Apps usually open more or less permanently include Evolution, Firefox, konsole (with Midnight Commander running inside, about 4 tabs), and LibreOffice -- plus occasionally Calibre, Gimp and Moneyplex (banking software I'm already using for almost 20 years now, in a version which did fine on Hardy).
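
    The file-max message points at descriptor leakage rather than a plain memory leak, and counting open descriptors per process usually names the culprit (X itself, or a client leaking pipes and sockets against it); a counting sketch:

        cat /proc/sys/fs/file-nr     # allocated, free, system maximum

        # top descriptor holders
        for p in /proc/[0-9]*; do
            n=$(ls "$p/fd" 2>/dev/null | wc -l)
            printf '%6d %s\n' "$n" "$(cat "$p/comm" 2>/dev/null)"
        done | sort -rn | head -15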

    Read the article

  • Toggle Android emulator network traffic from emulator invocation

    - by highphi
    I'm working on scripts to manage large numbers of Android emulators and I need to disable all network traffic on some of them. Because I'm doing all of this on a headless server, I cannot use the F8 hotkey described in the emulator documentation. I'm currently routing the TCP traffic through a null proxy by using emulator-arm ... -http-proxy 0.0.0.0:0 and this blocks the traffic that I want it to. I thought this was working well until I noticed some strange error messages while running my scripts. The console started outputting "accept too many open files" and checking the open files with lsof reveals numerous messages stating "can't identify protocol" ... emulator- 19463 username 19u sock 0,6 0t0 1976595845 can't identify protocol emulator- 19463 username 20u sock 0,6 0t0 1976595847 can't identify protocol ... The only "solution" I found to this is to kill all of the emulators and then wait until this limit is reached again, which is hardly a solution at all. Is there another way to do this while invoking the emulator? Am I incorrectly using the -http-proxy switch to block the traffic? Other people found solutions to block traffic by manually using airplane mode, but this isn't feasible for me as I'm controlling emulators via scripts. I could send keyevents to the emulator with my script and turn on airplane mode, but I would prefer something more reliable than this.
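
    If the goal is simply that the guest sees no network, the radios can be toggled from the host over adb, which scripts can do per emulator; the commands below are standard on most emulator images of that era, but behaviour varies by Android version, so treat this as an assumption to verify:

        SERIAL=emulator-5554          # hypothetical serial; take it from `adb devices`
        adb -s "$SERIAL" shell svc data disable
        adb -s "$SERIAL" shell svc wifi disable

        # or flip airplane mode and announce the change
        adb -s "$SERIAL" shell settings put global airplane_mode_on 1
        adb -s "$SERIAL" shell am broadcast -a android.intent.action.AIRPLANE_MODE --ez state true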

    Read the article

  • How to find other end of unix socket connection?

    - by depesz
    I have a process (dbus-daemon) which has many open connection over UNIX sockets. One of these connections is fd #36: =$ ps uw -p 23284 USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND depesz 23284 0.0 0.0 24680 1772 ? Ss 15:25 0:00 /bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session =$ ls -l /proc/23284/fd/36 lrwx------ 1 depesz depesz 64 2011-03-28 15:32 /proc/23284/fd/36 -> socket:[1013410] =$ netstat -nxp | grep 1013410 (Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.) unix 3 [ ] STREAM CONNECTED 1013410 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD =$ netstat -nxp | grep dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1013953 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1013825 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1013726 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1013471 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1013410 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1012325 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1012302 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1012289 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1012151 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011957 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011937 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011900 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011775 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011771 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011769 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011766 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011663 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011635 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011627 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011540 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011480 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011349 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011312 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011284 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011250 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011231 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011155 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011061 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011049 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011035 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011013 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1010961 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1010945 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD Based on number connections, I assume that dbus-daemon is actually server. Which is OK. But how can I find which process is connected to it - using the connection that is 36th file handle in dbus-launcher? Tried lsof and even greps on /proc/net/unix but I can't figure out a way to find the client process.
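
    On kernels new enough to support UNIX_DIAG, ss prints each unix socket's inode together with its peer's inode, so the two ends can be matched without digging through /proc/net/unix; a sketch using the inode from the lsof output above (the peer value is a placeholder):

        sudo ss -xp | grep 1013410      # the dbus-daemon end; the Peer column shows the other inode

        peer=999999                     # hypothetical: the peer inode printed on that line
        sudo ss -xp | grep "$peer"      # the line whose local inode matches names the client process
        sudo lsof -U | grep "$peer"     # cross-check with lsof's unix-socket listing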

    Read the article

  • Double MySQL Running on Mountain Lion

    - by Norris
    After I've done a full restart, my Apache PHP server doesn't connect to Local MySQL ( connecting via 127.0.0.1 because localhost for some reason fails always ). So I did this today: ? ~ mysqladmin shutdown -u root -p Enter password: ? ~ mysqladmin shutdown -u root -p Enter password: mysqladmin: connect to server at 'localhost' failed error: 'Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)' Check that mysqld is running and that the socket: '/tmp/mysql.sock' exists! Which basically means that I succeeded in shutting down mysql. But as soon as I did - Apache PHP successfully connect to MySQL and my local sites work without a hiccup until the next restart. Here are a few other details: (as you can tell - I've installed MySQL via brew) ? ~ sudo ps aux | grep mysql N 4774 0.0 0.0 2432768 620 s000 S+ 9:53AM 0:00.00 grep mysql N 4772 0.0 2.6 3030168 440688 ?? S 9:51AM 0:00.29 /usr/local/Cellar/mysql/5.6.13/bin/mysqld --basedir=/usr/local/Cellar/mysql/5.6.13 --datadir=/usr/local/var/mysql --plugin-dir=/usr/local/Cellar/mysql/5.6.13/lib/plugin --bind-address=127.0.0.1 --log-error=/usr/local/var/mysql/N.local.err --pid-file=/usr/local/var/mysql/N.local.pid N 4686 0.0 0.0 2433432 1000 ?? S 9:51AM 0:00.01 /bin/sh /usr/local/opt/mysql/bin/mysqld_safe --bind-address=127.0.0.1 N 4362 0.0 2.7 3120276 458728 ?? S 9:47AM 0:00.45 mysqld ? ~ lsof -i | grep mysql mysqld 4362 N 16u IPv6 0x76959e40691f9f93 0t0 TCP *:mysql (LISTEN) This is the weird thing: ? ~ killall -9 mysqld MySQL Is dead! Apache doesn't connect. Then, when I run: ? ~ sudo mysqladmin shutdown -u root -p Enter password: Apache is (again) able to successfully connect to MySQL. As far as I understand this means that I have two mysql servers setup and both of them are trying to start up at the same time, but I don't have the slightest idea on how to fix it. I've tried brew reinstalling but that didn't help. ? ~ which mysqladmin /usr/local/bin/mysqladmin ? ~ whence -p mysql /usr/local/bin/mysql
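
    Two mysqld processes at once usually means two start mechanisms are racing (typically a Homebrew LaunchAgent plus some other launchd item or leftover installer package); listing what launchd knows about normally settles which one to unload. A sketch, where the plist names are the usual defaults and therefore assumptions:

        launchctl list | grep -i mysql
        ls ~/Library/LaunchAgents /Library/LaunchAgents /Library/LaunchDaemons 2>/dev/null | grep -i mysql

        # unload whichever entry should not be starting MySQL, e.g.
        launchctl unload -w ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist
        # sudo launchctl unload -w /Library/LaunchDaemons/com.mysql.mysqld.plist   # hypothetical name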

    Read the article

< Previous Page | 1 2 3 4 5 6 7  | Next Page >