Memstop: Use LD_PRELOAD to delay process execution when low on memory

ingve | 62 points

On Android, I do something similar.

If you have an app that absolutely needs a lot of memory (transcoding HD videos in my case), do this before running it:

  - Spawn a new process and catch the Unix signals for that process, specifically SIGINT. 
  - Have that new process allocate all the memory your actual process will need. 
  - The OS will kill other apps via its Low Memory Killer (LMK) daemon. 
  - Since you caught the SIGINT in the parent process, you can avoid the Android "Application X is using too much memory" notification that pops up to the user, making it a silent failure. 
  - Lastly, sleep a few ms to let Android's LMK kill other apps.

Now run your memory-intensive process. This works for games too. A huge hack, but needed for some of the lower-end devices that don't have GBs of RAM. It's also needed because Java apps will just eat memory until something tells them to clean up; you need an over-allocation to fail in order to trigger the whole memory-freeing process on Android.
AnotherGoodName | 22 days ago

I've seen energy-aware scheduling, literally decades of effort that culminated in the EEVDF scheduler, all so we could have a good scheduler that works well on desktops, servers, and HPC... and, alongside all of that, a giant parallel effort to prevent the OOM killer from triggering or to influence it to behave better.

I really wonder whether a "simple" memory-aware scheduler that punished tasks whose memory behavior (allocation or access) slows down the system would be enough. It doesn't happen anymore, but some years ago it was relatively easy to soft-crash a system just by trying to open a file significantly larger than physical RAM. By "soft-crashing" I mean the system became so slow that it was faster to reboot than to wait for it to recover on its own. What if such a process were punished for slowing down the system by being slowed down itself (descheduled, or given less CPU time) so that, no matter what it did, the other tasks stayed fast enough that it could be killed (even manually) without soft-crashing the system? Is there a reason memory-aware scheduling was never explored, or am I wrong and it was explored and proved not to work?

marcodiego | 22 days ago

I assumed it paused the program while it's running, e.g. by intercepting malloc calls or something, but no, it just delays the startup.

I'm wondering what the value of using LD_PRELOAD is, rather than just being a wrapper command that takes the command to execute as arguments. I guess it's easier to inject into a preexisting build system because it's all configured via environment variables?
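For illustration, the general LD_PRELOAD startup-delay pattern can be sketched like this (not Memstop's actual code; the `MIN_FREE_KB` variable name is made up here):

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Parse MemAvailable (in kB) out of /proc/meminfo; -1 on failure. */
static long mem_available_kb(void) {
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f) return -1;
    char line[256];
    long kb = -1;
    while (fgets(line, sizeof line, f)) {
        if (sscanf(line, "MemAvailable: %ld kB", &kb) == 1) break;
    }
    fclose(f);
    return kb;
}

/* A constructor in a preloaded library runs before the program's main(),
   so it can poll available memory and sleep until enough is free. */
__attribute__((constructor))
static void wait_for_memory(void) {
    const char *env = getenv("MIN_FREE_KB");  /* hypothetical threshold variable */
    long min_kb = env ? atol(env) : 0;
    long avail;
    while ((avail = mem_available_kb()) >= 0 && avail < min_kb)
        sleep(1);                             /* delay startup until memory frees up */
}
```

Built with `gcc -shared -fPIC delay.c -o delay.so` and injected via `LD_PRELOAD=./delay.so make -j8`, the constructor runs before every spawned process's main(), which is what makes it easy to drop into an existing build system without changing any commands.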

meatmanek | 22 days ago

I just wanted to point out that GNU parallel has built-in options to do the same thing when running parallel processes that could possibly overwhelm the computer.

  --memfree size
    Minimum memory free when starting another job. The size can be postfixed with K, M, G, T, P, k, m, g, t, or p (see UNIT PREFIX).

imp0cat | 22 days ago

There's no information about the design philosophy, e.g. whether it triggers on RSS or virtual memory. And I'd think adding swap would be recommended as a place to stow away the stopped processes?

A naive approach might end in deadlock when all the processes that could free up memory are stopped.

rini17 | 22 days ago

1. You don't need this. Just run programs inside a cgroup and set a memory limit (systemd-run lets you do this in a single convenient command). When the program reaches its memory limit, it will be throttled.

2. It's also often a bad idea. If you slow down a process, you are also stopping it from _releasing_ memory.

nialv7 | 22 days ago

This could be a nice systemd unit option.

ape4 | 22 days ago

OOM killer, eat your heart out :D This is great. There are security implications when using LD_PRELOAD, but I like it! More programs like this for parallel computing, please.

d00mB0t | a month ago