This post describes the preview of improved networking performance that Microsoft has recently started rolling out for Azure virtual machines.
Microsoft announced at the Ignite 2016 conference that it has started work to greatly improve the networking performance of its entire global fleet of virtual machines. We will see reduced latency and jitter, and maximum bandwidth improvements of 33 to 50 percent. This is being done by leveraging hardware offloads that are available in Windows Server Hyper-V.
Note: Azure runs the same Hyper-V hypervisor that we can use in Windows Server. Microsoft’s public cloud is currently running Windows Server 2012 and Windows Server 2012 R2 Hyper-V.
The results of these improvements will benefit us in several ways.
One example that Microsoft shared is that SQL Server database In-Memory OLTP transaction performance was improved by 1.5x in testing. They also reached speeds of 25 Gbps when testing with DS15v2 and D15v2 virtual machines.
So how did Microsoft make these improvements? They “simply” turned on features that move network processing off the host OS (the management OS, in Hyper-V parlance) and onto the physical NIC hardware.
By default, a Hyper-V virtual machine sends and receives traffic by routing packets from the vNIC, through the VMbus (via an integration service), to the virtual switch running in the management OS, and then to the drivers of the physical NIC; return traffic follows the same path in reverse.
SR-IOV, a feature available to us since Windows Server 2012 Hyper-V, enables a virtual function (VF – think of it as an instance of the physical NIC’s driver running inside a VM) to connect directly to the physical function (PF) of an SR-IOV-capable physical NIC in the host; each PF can expose many VFs. By enabling SR-IOV for a virtual NIC, you bypass the circuitous route through the virtual switch and allow network packets to travel directly between the VF and the PF, reducing latency and jitter and potentially increasing bandwidth. This is what Microsoft has started to roll out in Azure, and what you can turn on in the settings of a virtual machine/NIC if you sign up for the preview.
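Inside a Linux guest, you can spot the difference: with SR-IOV enabled, the VF shows up as a PCI device (typically a Mellanox virtual function in Azure), whereas the synthetic VMbus NIC does not appear on the PCI bus at all. A minimal Python sketch of that check, run against a sample `lspci` output (the device strings here are illustrative; real output varies by host hardware):

```python
import re

# Sample lspci output from a Linux VM. The Mellanox model string is
# illustrative only -- the actual device depends on the Azure host hardware.
SAMPLE_LSPCI = """\
0000:00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX Host bridge
0001:00:02.0 Ethernet controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
"""

def find_sriov_vfs(lspci_output: str) -> list:
    """Return lspci lines that look like SR-IOV virtual functions."""
    return [
        line for line in lspci_output.splitlines()
        if re.search(r"virtual function", line, re.IGNORECASE)
    ]

vfs = find_sriov_vfs(SAMPLE_LSPCI)
print(vfs)  # prints the single Mellanox VF line from the sample
```

On a VM without the preview enabled, the same check would return an empty list, because all traffic still flows over the VMbus synthetic path.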
It’s no secret that there is some amount of processing involved on the host to translate physical network traffic into software-defined network traffic, and vice versa. Microsoft added an offload for NVGRE processing in Windows Server 2012 R2 Hyper-V, enabling hosts to pass this work to compatible NICs, including the Mellanox models that are widely used within Azure. The benefit is that host processor capacity is freed up for virtual machines, and once again, networking performance should improve for both the host and its virtual machines.
Microsoft is rolling out the new networking performance improvements worldwide, but you can opt into a preview program to test the enhancements for yourself. Availability is limited at this time. The supported OS images include:
Only two regions are supported at this time:
And only two high-end virtual machine sizes are supported:
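For reference, once a subscription is accepted into the preview, turning the feature on comes down to a single property on the network interface resource. Here is a hedged sketch of an ARM-style resource body, built in Python; the property name `enableAcceleratedNetworking` matches the preview documentation, but names and required fields may change before general availability:

```python
import json

# Hypothetical ARM-template fragment for a NIC with accelerated networking
# turned on. ipConfigurations and other required NIC settings are omitted
# for brevity; this is a sketch, not a deployable template.
nic_resource = {
    "type": "Microsoft.Network/networkInterfaces",
    "name": "myVmNic",
    "properties": {
        "enableAcceleratedNetworking": True,
    },
}

print(json.dumps(nic_resource, indent=2))
```

The VM attached to such a NIC must be one of the supported sizes in a supported region, or deployment will fail.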