Does disabling "NUMA interleave" in the BIOS cause memory page-out (when CPU 1 has no free memory left) to HDD on all dual-CPU systems?

huseyin tugrul buyukisik asked:

For an example system, a Dell with dual Xeon Silver 4114 CPUs and 24 GB per CPU: how would it work if my application allocates 24 GB at once? Should I be concerned about the write life of my SSD because of pagefile usage?

Note about memory for the example: 6x8GB distributed equally to both CPUs (probably 3 sticks on a single memory controller on each CPU).

My answer:

Disabling node interleave in the BIOS doesn’t cause your system to swap.

It would be more accurate to say that disabling this option puts NUMA memory placement under OS control. The option exists for people who run ancient operating systems that aren't NUMA-aware. When it's enabled, the system hides the NUMA topology from the OS, presenting memory as a single uniform node and interleaving pages across the sockets in hardware. This is generally not what you want with a modern NUMA-aware OS such as Linux or Windows.

A more detailed explanation comes from Dell’s BIOS Performance and Power Tuning Guidelines:

Another option in the Memory Settings screen of the BIOS is Node Interleaving. This option is disabled
by default, which means that NUMA mode is enabled. Conversely, enabling Node Interleaving means
the memory is interleaved between memory nodes, and there is no NUMA presentation to the
operating system. Most modern operating systems have been NUMA-aware for many years, and
schedulers have been optimized to ensure memory allocations are localized to the correct (closest)
NUMA node. However, these schedulers are not perfect, and to achieve maximum performance with a
given workload (especially for benchmarks), manual intervention may be required to “pin” workload
processes and threads to specific cores, ensuring that memory is always allocated to the local NUMA
node for any given logical processor.

For some applications, where the memory required is larger than the memory available in a single
memory node (such as very large databases), the memory allocation must necessarily occur on the
remote node as well. It is in cases such as this, or for other applications that cannot be easily localized
to a single socket/NUMA node combination, where enabling Node Interleaving could have positive
effects. Enabling Node Interleaving was most common several years ago when NUMA was a relatively
recent arrival to x86 servers and operating systems were not as NUMA aware. However, this situation
has radically improved, and today the number of customers that need to enable Node Interleave is
diminishing. Note that enabling Node Interleaving is not supported for 4P E5-4600 configurations.

For a current NUMA-aware operating system, this option should be left disabled, and NUMA should be tuned within the OS instead.
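On Linux, for example, that tuning can be done with the `numactl` utility. A minimal sketch (the binary name `./myapp` and the node numbers are assumptions for a two-socket box like the one in the question):

```shell
# Show the NUMA topology: nodes, CPUs per node, and free memory per node
numactl --hardware

# Run an application with both its CPUs and its memory bound to node 0,
# so all allocations stay local to that socket
numactl --cpunodebind=0 --membind=0 ./myapp

# For a working set larger than one node (e.g. a 24 GB allocation on a
# machine with 24 GB per socket), interleave pages across all nodes so
# neither node is exhausted while the other sits idle
numactl --interleave=all ./myapp
```

Note that by default Linux uses a first-touch policy: pages land on the node of the CPU that first writes them, and when that node fills up the kernel falls back to the remote node long before it swaps. So a 24 GB allocation on this system spills into the other socket's memory, not onto the SSD's pagefile.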

View the full question and any other answers on Server Fault.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.