I have multiple machines sharing a home directory via an NFS share used by
6-10 users. All machines, including the one running the NFS server, are used
to run computational experiments. It is rare but possible that an experiment
causes an out-of-memory (OOM) condition. Although the offending user process
may eventually get killed, I would like to know how an OOM event can affect the
NFS server and, in turn, the other machines. I tried searching for this
but could not find a specific answer. Also, are there any measures I can take
to prevent OOM from affecting the NFS share?
NFS Server Configuration:
Intel Core i7-9700, 32 GB RAM, SWAP 32 GB and Graphics TITAN RTX
Other machines have similar configurations.
By default, when Linux runs out of memory it uses a heuristic to decide which processes to kill in order to recover enough memory to continue. This is often undesirable: in many cases (probably including this one) it would be better to kill the process that caused the out-of-memory condition.
You can set the
vm.oom_kill_allocating_task sysctl to make the OOM killer kill the process whose allocation ran the system out of memory.
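As a sketch (assuming a recent Linux kernel and root access on the NFS server; the drop-in file name is an arbitrary choice), the sysctl can be inspected and set like this:

```shell
# Show the current setting; 0 means the default heuristic victim selection
sysctl vm.oom_kill_allocating_task

# Kill the allocating task instead (takes effect immediately, lost on reboot)
sudo sysctl -w vm.oom_kill_allocating_task=1

# Persist the setting across reboots
echo 'vm.oom_kill_allocating_task = 1' | sudo tee /etc/sysctl.d/90-oom.conf
```

Note that this only changes which process the OOM killer selects first; it does not prevent OOM conditions from occurring.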
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.