Memory limits in systemd

sblandin asked:

I am using systemd to run Celery, a Python distributed task queue, on an Ubuntu 18.04 LTS server.

I have to schedule mostly long-running tasks (taking several minutes to execute), which, depending on the data, may end up eating a large amount of RAM. I want to put in place some sort of safety measure to keep memory usage under control.

I have read here that you can control how much RAM is used by a systemd service with the MemoryHigh= and MemoryMax= options.

I have used these options in my Celery service setup, and watched with htop what happens to a Celery worker when it reaches the given limits.

The service stops executing and is put in the "D" (uninterruptible sleep) state, but it stays there and the allocated memory is not freed.

Is it possible for systemd to kill the memory-eating process?

My answer:


The process should already be killed if it reaches MemoryMax=. But if it passes MemoryHigh= then it may be throttled and suspended in the manner you describe. The problem is that once you hit MemoryHigh=, you may never reach MemoryMax=: if the process isn't actually releasing memory, it will just sit there mostly suspended forever.

If you want the service to die once it uses a specific amount of memory, then don't specify MemoryHigh= at all, only MemoryMax=.
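As a sketch, a drop-in override for the Celery unit might look like the following (the unit name celery.service and the 2G limit are assumptions; adjust them to your setup):

```
# /etc/systemd/system/celery.service.d/memory.conf
# Hypothetical drop-in; unit name and limit are examples, not your actual values.
[Service]
# Hard limit: when the unit's cgroup exceeds this, the kernel
# OOM killer terminates processes in the unit.
MemoryMax=2G
# Deliberately no MemoryHigh= here: with it set, the service can be
# throttled/suspended before MemoryMax= is ever reached.
```

You can create this drop-in with `systemctl edit celery.service` and then restart the service. Note that MemoryHigh= only has an effect on the unified (cgroup v2) hierarchy, so behavior can differ depending on how your system is booted.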

Ultimately your developers should find and resolve the memory leaks in the application.


View the full question and any other answers on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.