Configuring Hyper-V Virtual Machine NUMA Topology

In this post I will show you how to customize the virtual non-uniform memory access (NUMA) configuration of a virtual machine. You will rarely have to look at these advanced settings outside of an exam scenario, but there is one situation in which tuning them can optimize the performance of a virtual machine.

Why Customize Virtual NUMA?

In a previous article, “What Is Non-Uniform Memory Access (NUMA)?” I introduced NUMA and showed how Hyper-V works with it. If your virtual machine does not use Dynamic Memory, then Hyper-V will reveal the physical NUMA architecture of the host that the virtual machine is running on. This allows the guest OS (Windows or Linux) and NUMA-aware services (such as SQL Server) to align the assignment of virtual RAM with the processes running on the virtual processors (vCPUs).
For example, imagine you have a host with four 8-core processors without hyperthreading. That host has a total of 32 logical processors (LPs), allowing you to run some pretty big virtual machines on Windows Server 2012 (or later) Hyper-V. The host would typically have four NUMA nodes with eight cores each, one for each processor, and the RAM would be split up evenly between the host nodes if you followed the server manufacturer’s best practices.
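The arithmetic behind that example host can be sketched as follows; the total RAM figure is an assumption added purely for illustration, not taken from the article.

```python
# Illustrative sketch of the example host's NUMA layout: four 8-core
# sockets, no hyperthreading, with an assumed 256 GB of RAM split
# evenly across nodes per the manufacturer's best practices.
sockets = 4
cores_per_socket = 8
total_ram_gb = 256  # hypothetical figure for illustration

logical_processors = sockets * cores_per_socket  # 32 LPs in total
numa_nodes = sockets                             # one node per processor
cores_per_node = cores_per_socket                # 8 cores per node
ram_per_node_gb = total_ram_gb // numa_nodes     # RAM split evenly

print(logical_processors, numa_nodes, cores_per_node, ram_per_node_gb)
# -> 32 4 8 64
```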

Now imagine that you create a virtual machine with 16 vCPUs and statically assign it half of the host’s memory. The virtual machine will be informed, when it starts up, that it occupies two NUMA nodes and where the virtual RAM and vCPUs reside within those nodes. The guest OS can then schedule processes with RAM assigned from one node, and other processes with RAM from the other node, without unnecessarily spanning the physical NUMA nodes. This optimizes the performance of the services running in the virtual machine and of the host.
What if you need to live migrate that running virtual machine to a host that has eight quad-core processors? The second host has enough processor and RAM capacity to run the virtual machine. But something is not optimal. The NUMA layout of this host is different: there are four cores per NUMA node instead of eight. A running virtual machine cannot learn the new NUMA layout on the fly; the virtual NUMA topology is presented to the guest only at boot. After the migration is complete, the virtual machine will have mismatched virtual NUMA awareness and will be spanning the physical NUMA nodes of the second host. That will reduce the performance of the guest’s services and of the second host.
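The mismatch above comes down to simple division. This sketch (with the numbers from the example, nothing more) shows how each 8-vCPU virtual NUMA node fits in one physical node on the first host but must span two nodes on the second:

```python
# Sketch: how many physical NUMA nodes are needed to back one virtual
# NUMA node? The 16-vCPU VM booted on a host with 8-core nodes, so it
# built two 8-vCPU virtual nodes.
def nodes_spanned(vcpus_per_virtual_node, cores_per_host_node):
    """Physical NUMA nodes needed to back one virtual NUMA node."""
    # Ceiling division: a partial fit still occupies a whole extra node.
    return -(-vcpus_per_virtual_node // cores_per_host_node)

virtual_node_size = 16 // 2  # two virtual nodes of 8 vCPUs each

print(nodes_spanned(virtual_node_size, 8))  # original host -> 1 (fits)
print(nodes_spanned(virtual_node_size, 4))  # second host  -> 2 (spans)
```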

Change the NUMA Topology of a Virtual Machine

If you cannot deploy hosts with identical NUMA architectures then you can tweak the advanced processor settings of large virtual machines to achieve optimal performance as those virtual machines migrate around your cloud. Note that these settings are quite advanced and really only need to be changed if you are working with genuinely large virtual machines (more than four vCPUs) that span NUMA nodes.
You can find the NUMA topology settings if you edit the configuration of a virtual machine and expand Processor.

Figure: Editing the virtual NUMA of a virtual machine.

By default, Hyper-V will supply the settings to the virtual machine from the current host when the virtual machine starts up. You can customize three settings.

  • Maximum number of processors: A value from 1 to 32 that defines the maximum number of processors in any NUMA node that the virtual machine might encounter on any host.
  • Maximum amount of memory (MB): From 8 MB to 256 GB, this value defines the maximum amount of RAM that any NUMA node on any host will have.
  • Maximum NUMA nodes allowed on a socket: Some processors with lots of cores can have more than one NUMA node. You can configure the maximum that should be encountered on any host with this setting. Values can range from 1 to 64.
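The three settings and their documented ranges can be captured in a small validation sketch. The field names here are my own shorthand for illustration, not Hyper-V API names:

```python
# Sketch: check a proposed virtual NUMA topology against the ranges
# listed above. Keys are illustrative names, not Hyper-V identifiers.
LIMITS = {
    "max_processors": (1, 32),          # processors per virtual NUMA node
    "max_memory_mb": (8, 256 * 1024),   # 8 MB to 256 GB, expressed in MB
    "max_nodes_per_socket": (1, 64),
}

def validate(settings):
    """Raise ValueError if any setting falls outside its allowed range."""
    for name, (low, high) in LIMITS.items():
        value = settings[name]
        if not low <= value <= high:
            raise ValueError(f"{name}={value} outside {low}-{high}")
    return True

# Example: cap the VM at 4-core, 32 GB virtual NUMA nodes.
print(validate({"max_processors": 4,
                "max_memory_mb": 32 * 1024,
                "max_nodes_per_socket": 1}))  # -> True
```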

So what settings do you use? It would be a good idea to run the Get-VMHostNumaNode cmdlet on every host specification that your large virtual machine could live migrate to. The results of that cmdlet will identify the smallest NUMA node configuration in your Hyper-V host farm. You can then customize your very large virtual machine to suit that configuration so that it will never span a NUMA node as it live migrates around your data center.
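That "smallest node wins" logic can be sketched as below. The host names and node figures are made up for illustration; in practice you would collect the real numbers with Get-VMHostNumaNode:

```python
# Sketch: given per-host NUMA node figures (the kind of data that
# Get-VMHostNumaNode reports), find the smallest node in the farm and
# use it to cap the VM's virtual NUMA topology. Hosts are hypothetical.
host_nodes = [
    {"host": "HV01", "cores": 8, "memory_mb": 65536},  # 8-core nodes
    {"host": "HV02", "cores": 4, "memory_mb": 32768},  # 4-core nodes
]

# The safe maxima are the minima across the farm: a virtual NUMA node
# sized this way fits inside a physical node on every host.
max_processors = min(n["cores"] for n in host_nodes)
max_memory_mb = min(n["memory_mb"] for n in host_nodes)

print(max_processors, max_memory_mb)  # -> 4 32768
```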