Today I’ll walk you through the basics of Hyper-V Network Virtualization (HNV), including how multitenant computing works and its challenges. I’ll also go into the concepts of how HNV is implemented in Windows Server 2012, Windows Server 2012 R2, and System Center.
Much of what Microsoft has done with Hyper-V and System Center in the 2012 and 2012 R2 generations was based on their own development and experiences in Windows Azure, as well as the feedback that was gathered from hosting companies. A key trait of a cloud is multitenancy, in which multiple customers of the cloud (known as “tenants”) rent space in the cloud and expect to be isolated from each other. Imagine Ford and General Motors both wanting to use the services of the same public cloud. They must be isolated. In doing so, the cloud operator (the hosting company) must ensure that:
As Microsoft found out, achieving this level of isolation with traditional solutions isn’t easy. How is this done with physical networking? The answer is probably to have one or more virtual local area networks (VLANs) per tenant, with physical firewall routing/filtering to isolate the VLANs. However, this is not VLANs’ intended purpose. They were meant to provide a way to subnet IP ranges on a single LAN and to control broadcast domains.
Somehow, the poor network engineer was convinced that VLANs should be twisted into a security solution. Doing this simply was not scalable for the following reasons.
Microsoft announced a solution to the limits of VLANs in the cloud using a new feature that was codeveloped for Windows Server 2012 Hyper-V and Windows Azure. This new feature was called Hyper-V Network Virtualization (HNV). This is based on a more general concept called Software Defined Networking (SDN).
The goal of SDN is to virtualize the network. That will, and should, provoke the response, “What the <insert expletive of choice>?” Calm down. Take a step back. Think back on the computer room of yore, full of physical servers that took up rack space and electricity. What did we do with them? We turned them into software-defined machines. Yes, we virtualized them. Now they exist only as software representations. And from then on, we (normally) only ever deploy virtual machines whenever someone asks for a “server.” That’s what SDN does: It takes a request for a “network” and deploys a virtual network, one that doesn’t exist in the physical realm of top-of-rack switches, core switches, and so on, but instead exists in the hypervisor.
The core advantages of adopting HNV over VLANs are:
These are just the basic benefits of HNV. There is more, as you will see later.
SDN and HNV abstract IP address spaces. This is done using two types of address: the customer address (CA), which is the IP address assigned to a virtual machine and seen by the tenant, and the provider address (PA), which is the IP address used by the host on the physical network to carry that tenant’s traffic.
HNV actually has two ways of working: IP address rewrite and Network Virtualization using Generic Routing Encapsulation (NVGRE). NVGRE is the recommended option, so it is the one we will focus on.
A Basic Example of NVGRE.
The diagram above illustrates how two tenants can reside on the same cloud and how virtual machines on the same virtual network can communicate using NVGRE. In this example, note that a single virtual network is created for Ford that spans multiple hosts. The same is true for General Motors. The virtual machine Ford1 is assigned a CA of 10.0.1.10, and so is GM1. The two virtual machines reside on different virtual networks, so there is no issue with the overlapping IP addresses; they simply will not route to each other because of tenant network isolation.
If Ford1 needs to communicate with Ford2, the packet from the CA of Ford1 will be encapsulated by HostA. HostA has a lookup table that tells it where Ford2 resides (on HostB). HostA will send the encapsulated packet from Ford1 to HostB, where the encapsulation is stripped and the packet is passed to Ford2.
Ford3, Ford4, Ford5, etc., can be deployed on either HostA or HostB. Each will get its own CA (assigned either by the cloud or by the tenant), but thanks to NVGRE there is no need for more than one PA per host. That keeps IP management simpler and more scalable.
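To make the lookup-table idea a little more concrete, here is a minimal Python sketch of the concept. This is not how Hyper-V actually implements NVGRE; the virtual subnet IDs, the provider addresses, and the function names are assumptions made up for illustration, and the “packet” is just a dictionary.

```python
# Minimal sketch of NVGRE-style lookup and encapsulation (illustrative only).
# Each entry maps (virtual subnet ID, customer address) -> provider address of
# the host where that VM currently lives. One PA per host serves many CAs.

LOOKUP_TABLE = {
    # Ford's virtual subnet (VSID 5001 is an assumed example value)
    (5001, "10.0.1.10"): "172.16.0.1",   # Ford1 on HostA
    (5001, "10.0.1.11"): "172.16.0.2",   # Ford2 on HostB
    # General Motors' virtual subnet (VSID 6001 is assumed) - same CA, no clash
    (6001, "10.0.1.10"): "172.16.0.1",   # GM1 on HostA
}

def encapsulate(vsid, src_ca, dst_ca, payload):
    """Wrap a tenant packet in an outer packet addressed to the destination host's PA."""
    dst_pa = LOOKUP_TABLE.get((vsid, dst_ca))
    if dst_pa is None:
        raise ValueError("Destination is not in this virtual subnet")
    return {
        "outer_dst": dst_pa,   # routed by the physical network
        "vsid": vsid,          # NVGRE carries the virtual subnet ID in the GRE key field
        "inner": {"src": src_ca, "dst": dst_ca, "payload": payload},
    }

def decapsulate(packet):
    """The receiving host strips the outer header and hands the inner packet to the VM."""
    return packet["inner"]

# Ford1 (10.0.1.10) talking to Ford2 (10.0.1.11):
wire_packet = encapsulate(5001, "10.0.1.10", "10.0.1.11", b"hello")
print(wire_packet["outer_dst"])   # 172.16.0.2 (HostB's assumed PA)
print(decapsulate(wire_packet))   # original CA-addressed packet for Ford2
```

Note how GM1 reuses the CA 10.0.1.10 without any conflict, because the lookup key includes the virtual subnet ID as well as the address.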
It is rare that a tenant wants just one network. The example shown above is very simplistic, and to be honest, it doesn’t accurately represent what HNV provides. It is a basic illustration to present a concept, often shown as Red Corp and Blue Corp during Microsoft presentations. It’s time to go further down the rabbit hole and see what HNV can do.
Dividing VM networks into VM subnets.
In reality, a tenant often wants multiple subnets in its virtual network. HNV allows this, as you can see in the above diagram. Again, in this example, Ford and General Motors have virtual networks, referred to as VM networks in System Center Virtual Machine Manager (VMM). This provides tenant isolation. Both Ford and General Motors require more than one subnet, so these are provided in the form of VM subnets. These subnets have IDs, IP ranges, and a default gateway, just like a subnet does in the physical network.
Now, in the above diagram we can see that Ford and General Motors have each deployed two VM subnets in their own VM networks. The Ford VM subnets are routed to each other, and the General Motors VM subnets are routed to each other, but Ford, General Motors, and the hosting company cannot route to each other. In the case of General Motors, the VMs in VM subnet 5002 will have a default gateway of 192.168.1.1 and the VMs in VM subnet 5003 will have a default gateway of 192.168.2.1. VMs in each VM subnet will use their respective default gateways to communicate with other VM subnets in the same VM network.
What if a tenant wants to isolate VM subnets from each other? In this case, you deploy another VM network, as VM networks are the unit of isolation. Any VM subnet in a VM network is routed to all other VM subnets in that VM network, while VM subnets in one VM network are isolated from VM subnets in other VM networks.
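As a rough illustration of that isolation rule, the following Python sketch models VM networks and VM subnets as plain dictionaries. The General Motors subnet IDs and gateways come from the example above; the Ford values and the function name are assumptions for illustration only.

```python
# Sketch of the isolation rule: the VM network is the unit of isolation.
# VM subnets within one VM network route to each other via their default
# gateways; VM subnets in different VM networks never route to each other.

VM_NETWORKS = {
    "Ford": {
        5000: {"prefix": "10.0.1.0/24", "gateway": "10.0.1.1"},        # assumed values
        5001: {"prefix": "10.0.2.0/24", "gateway": "10.0.2.1"},        # assumed values
    },
    "General Motors": {
        5002: {"prefix": "192.168.1.0/24", "gateway": "192.168.1.1"},  # from the example
        5003: {"prefix": "192.168.2.0/24", "gateway": "192.168.2.1"},  # from the example
    },
}

def can_route(network_a, subnet_a, network_b, subnet_b):
    """Return True if traffic is allowed to route between the two VM subnets."""
    if network_a != network_b:
        return False                      # different VM networks: isolated
    subnets = VM_NETWORKS[network_a]
    return subnet_a in subnets and subnet_b in subnets

print(can_route("General Motors", 5002, "General Motors", 5003))  # True
print(can_route("Ford", 5000, "General Motors", 5002))            # False
```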
It’s all well and good to have services running on these nicely isolated virtual machines, but there are PCs that will want to connect to them to access SharePoint farms and to send or receive email, and there are web farms that need to be published to the Internet. How exactly do VMs on abstracted networks communicate with the outside world via their NVGRE-encapsulated networking?
This is made possible using an NVGRE appliance. We have returned to the high-level view of NVGRE and added an NVGRE appliance (sometimes called a gateway) to the following illustration. This appliance allows us to extend the virtualized NVGRE networks into the physical world.
An NVGRE appliance.
The VM networks are compartmentalized in the NVGRE gateway to continue the isolation of the VM networks through to the physical layer, without any need for VLANs.
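Conceptually, you can think of the gateway as keeping one routing compartment per VM network. The Python sketch below is only an illustration of that idea, not how any real appliance is built; the virtual subnet IDs, external addresses, and names are assumptions.

```python
# Sketch of an NVGRE gateway keeping one routing compartment per VM network,
# so tenant isolation continues up to the physical edge without VLANs.

class Compartment:
    """Per-tenant routing state inside the gateway (illustrative)."""
    def __init__(self, vm_network, external_ip):
        self.vm_network = vm_network
        self.external_ip = external_ip   # address used on the physical side (assumed)

COMPARTMENTS = {
    # VSIDs and external IPs are assumed example values
    5001: Compartment("Ford", "131.107.1.10"),
    6001: Compartment("General Motors", "131.107.1.11"),
}

def forward_outbound(encapsulated_packet):
    """Decapsulate a packet from a host and send it out of that tenant's own compartment."""
    vsid = encapsulated_packet["vsid"]
    compartment = COMPARTMENTS[vsid]     # isolation: state is keyed by VM network
    inner = encapsulated_packet["inner"]
    # Route/NAT the inner packet using only this tenant's compartment state
    return {"src": compartment.external_ip, "dst": inner["dst"], "payload": inner["payload"]}

# A Ford VM reaching a host on the physical network (addresses are examples):
packet_from_host = {
    "vsid": 5001,
    "inner": {"src": "10.0.1.10", "dst": "203.0.113.50", "payload": b"GET /"},
}
print(forward_outbound(packet_from_host))
```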
The lack of available NVGRE appliances delayed the adoption of SDN and HNV, but these appliances are starting to appear from companies such as Iron Networks, Huawei, and F5.
System Center VMM plays several critical roles in HNV.
There are some significant feature improvements and introductions in Windows Server and System Center (WSSC) 2012 R2 that we will highlight.
Damian Flynn (MVP, Cloud and Datacenter Management) and Nigel Cain (Microsoft) are writing a series of articles to explain the concepts of HNV and how to design and implement a solution using System Center. This series is expanding over time and is essential reading for anyone who wants to learn about the intricacies of HNV. There is a steep learning curve, but it is one worth climbing.
At TechEd 2013, Greg Cusanza (Microsoft) and Charlie Wen (Microsoft) presented two sessions (MDC-B350 and MDC-B351) that are must-watch content if you are reviewing the concepts of HNV. In them, they explain how HNV works and how it can be designed and implemented, and they go over the new features of Windows Server and System Center 2012 R2.