How Do I Stop NFS Clients from Using All of the NFS Server's Resources?

Posted by Ken S. on Server Fault
Published on 2013-10-28 20:35 UTC; indexed on 2013-10-28 21:55 UTC


I have an NFSv4 server running on Ubuntu 12.04 LTS. It is the main repository for the web assets that four external nginx web servers mount and serve to site visitors. Each of these servers connects via a read-only mount, and checking the mounts on any of them shows:

10.0.0.90:/assets on /var/www/assets type nfs4 (ro,addr=10.0.0.90,clientaddr=0.0.0.0)

The NFS master's /etc/exports file contains entries like this for each server:

/mnt/lvm-ext4 10.0.0.40(ro,fsid=0,insecure,no_subtree_check,async)

The problem I'm seeing is that these clients eventually consume all the RAM on the NFS server and cause it to crash. If I run watch free -m, I can see the used memory creep up until it's exhausted, then watch the buffers/cache figure fall toward zero before the server locks up and requires a reboot.
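One way to narrow down where the RAM is actually going is to check whether it sits in reclaimable kernel caches (dentry/inode/nfsd slabs are a common culprit on busy NFS servers) rather than in process memory. This is only a diagnostic sketch, not a fix; slabtop ships with the procps package on Ubuntu:

```shell
# Snapshot the relevant /proc/meminfo fields: if Slab/SReclaimable grows
# as MemFree shrinks, kernel caches are holding the memory, not processes.
grep -E '^(MemFree|Buffers|Cached|Slab|SReclaimable|SUnreclaim)' /proc/meminfo

# Show the largest slab caches by total size (reading /proc/slabinfo needs root).
sudo slabtop -o -s c | head -n 15
```

Comparing two snapshots taken a few hours apart should show which cache, if any, is growing without bound.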

There is a memory leak somewhere that is causing this, and the optimal solution would be to find it and fix it, but in the meantime I need a way for the NFS server to protect itself from connected clients consuming all its RAM. There must be some setting that limits the resources clients can use, but I can't seem to find it.
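As a stopgap while the leak is untracked, one approach is to lean on standard Linux knobs rather than NFS-specific ones. The sysctls below are real kernel parameters, but whether they help with this particular leak is an assumption; the values are illustrative only:

```shell
# Reclaim dentry/inode caches more aggressively (default is 100;
# higher values make the kernel prefer dropping these caches).
sudo sysctl vm.vfs_cache_pressure=200

# Keep a reserve of free memory so the box can still reclaim
# under pressure instead of locking up (value in kB; tune to taste).
sudo sysctl vm.min_free_kbytes=131072

# Persist across reboots:
echo 'vm.vfs_cache_pressure=200' | sudo tee -a /etc/sysctl.conf
echo 'vm.min_free_kbytes=131072' | sudo tee -a /etc/sysctl.conf
```

On Ubuntu you can also cap the number of nfsd threads via RPCNFSDCOUNT in /etc/default/nfs-kernel-server, which bounds per-thread server memory, though it won't stop a cache-driven leak.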

I've tried adjusting the rsize and wsize mount options, but they don't seem to help or even be related.

Thanks for any tips.

© Server Fault or respective owner
