In previous posts I explained what Windows Server containers offer, and how containers work in Windows Server 2016 Technical Preview 3 (TPv3). In this post, I’ll explain how containers can be connected to the network. And let’s face it, the born-in-the-cloud services that you will deploy via Windows Server Containers will be pretty useless if you cannot connect them to a network!
Imagine that you’re going to deploy application or operating system virtualization into a virtual machine. These containers are going to connect to a network, via a virtual machine’s virtual NIC, through a host’s virtual switch, and be connected to a VLAN via a physical top-of-rack switch. Now let me complete the picture by saying that there will be an additional virtual switch and virtual NIC inside of the virtual machine. Are you feeling confused yet? Don’t worry, this stuff, like most of IT, sounds confusing at first, but when you compartmentalize the components in how you visualize the solution, it’s actually not that hard at all.
Let’s build up the solution a bit at a time so you understand what is going on. Queue up Visio! We’ll start with the networking of a Windows Server 2016 Hyper-V host. The diagram below shows a physical server with Hyper-V enabled. It also has a pair of physical NICs connected to a top-of-rack (TOR) switch. Windows Server 2016 Hyper-V gives us Switch Embedded Teaming (SET); this integrates the functionality of NIC teaming into the virtual switch, and allows fault-tolerant aggregation of the uplinks (the physical NICs) to connect virtual machines (and the host, optionally) to the TOR switch. Additional virtual NICs can be added to the host to network the host via converged networking.
I’ve kept the host networking design simple in this example so we can focus on the VM host and containers. An external virtual switch with SET is used to connect the host and virtual machines to the TOR switch.
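On the physical host, that SET-enabled external switch is created by passing more than one physical NIC to New-VMSwitch. A quick sketch (the switch and NIC names here are my own examples):

```powershell
# Create an external virtual switch with Switch Embedded Teaming (SET)
# by binding it to two physical NICs. -AllowManagementOS $true also
# connects the host itself via a virtual NIC.
New-VMSwitch -Name "External-SET" `
    -NetAdapterName "pNIC1", "pNIC2" `
    -EnableEmbeddedTeaming $true `
    -AllowManagementOS $true
```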
The next step is to deploy a virtual machine to host the containers, otherwise known as a VM host. There’s nothing new here; the virtual machine has a virtual NIC (vNIC) that is connected to a port on the Hyper-V host’s external virtual switch. You can deploy VLANs as normal, by trunking the ports on the TOR switch and entering the VLAN ID in the properties of the vNIC.
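Tagging the VM host’s vNIC with a VLAN is done from the physical host as usual; for example, assuming a VM host named VMHost1 and VLAN 102 (both names are examples):

```powershell
# Put the VM host's vNIC into access mode on VLAN 102.
# Remember to trunk the TOR switch ports to match.
Set-VMNetworkAdapterVlan -VMName "VMHost1" -Access -VlanId 102
```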
The Containers role is enabled in the VM host, unlocking functionality that will be new to most Hyper-V veterans; this is where the learning curve really begins.
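In TPv3 the role is installed like any other Windows feature, followed by a reboot (feature naming in a technical preview is, of course, subject to change):

```powershell
# Inside the VM host: enable the Containers feature, then reboot
Install-WindowsFeature -Name Containers
Restart-Computer
```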
A virtual switch is created inside the virtual machine that will be the VM host. Say what? Yes, you can deploy a virtual switch inside of a virtual machine that connects to the vNIC of the virtual machine. That vNIC is connected to the Hyper-V host’s virtual switch, which is connected to the TOR switch. The VM host’s virtual switch will be used to network containers. The VM host is also connected to the network by a virtual NIC, otherwise known as allowing “the management operating system to share the network adapter.”
A Windows Server Container can be connected to the virtual switch in the VM host. This can be done at the time of creation or afterward. Each container will have its own IP address and that leads us to another topic.
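Based on the container cmdlets in TPv3 (which are very likely to evolve in later previews), connecting a container at creation time or afterward looks roughly like this; the container, image, and switch names are my own examples:

```powershell
# Connect a container to the VM host's virtual switch at creation time
New-Container -Name "Container1" `
    -ContainerImageName "WindowsServerCore" `
    -SwitchName "ContainerSwitch"

# Or attach an existing container to the switch afterward
Get-Container -Name "Container1" |
    Connect-ContainerNetworkAdapter -SwitchName "ContainerSwitch"
```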
There are two ways to manage and deploy IP addresses to containers. These two methods are both dynamic, because containers are supposed to be things that we deploy on the fly and that are often short-lived. Containers are also things that we can afford to lose, as born-in-the-cloud service design accounts for this.
The New-VMSwitch PowerShell cmdlet offers a new kind of virtual switch that you can create, called NAT. This implementation of NAT works much like the NAT you would configure on your edge firewall.
The VM host has an IP address that is valid on the VLAN or subnet that it is connected to. An additional network range (172.16.0.0/12 in my example) is assigned to the NAT virtual switch; this range will be used to configure IPv4 addresses for each container.
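Using the TPv3 syntax (again, preview syntax that may change), creating that NAT switch inside the VM host looks like this:

```powershell
# Create a NAT virtual switch inside the VM host; containers connected
# to it will be given addresses from the 172.16.0.0/12 range
New-VMSwitch -Name "ContainerNAT" `
    -SwitchType NAT `
    -NATSubnetAddress "172.16.0.0/12"
```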
A NAT mapping table is configured on the VM host to allow the external network to access services in containers. For example, we can assign TCP 60001 on the VM host to forward traffic to TCP 80 on Container 1. This means that if I browse to the IP address of the VM host on TCP 60001 then that request will be sent to Container 1. I could create a similar rule for Container 2 using TCP 60002.
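The mapping itself is created with Add-NetNatStaticMapping in the VM host. A sketch of the Container 1 rule above, assuming the NAT is named ContainerNAT and Container 1 holds 172.16.0.2 (both assumptions of mine):

```powershell
# Forward TCP 60001 on the VM host to TCP 80 on Container 1.
# -ExternalIPAddress 0.0.0.0 listens on any address of the VM host.
Add-NetNatStaticMapping -NatName "ContainerNAT" `
    -Protocol TCP `
    -ExternalIPAddress 0.0.0.0 `
    -ExternalPort 60001 `
    -InternalIPAddress 172.16.0.2 `
    -InternalPort 80
```

A matching rule for Container 2 would simply swap in TCP 60002 and Container 2’s internal IP address.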
The benefit of this approach is that it is massively scalable. We could, in theory, create NAT rules for thousands of TCP ports on the VM host, and consume just a single IPv4 address on the network.
You might prefer a more traditional form of virtual networking and want to connect your containers to the LAN without using NAT. The second option, DHCP, deploys an external virtual switch in the VM host instead of a NAT virtual switch. Containers use DHCP to retrieve an IP address from the LAN and communicate on the LAN using their own MAC address. Note that this requires MAC spoofing to be enabled in the virtual machine’s vNIC settings.
This is a simpler form of networking and removes the need to configure NAT mapping rules in the VM host (a form of guest/host spanning that we dislike in the service provider world), albeit at the cost of assigning one IPv4 address to each container.
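The DHCP option needs one step on the physical Hyper-V host and one inside the VM host; the VM, switch, and adapter names below are my own examples:

```powershell
# On the physical Hyper-V host: allow the VM host's vNIC to present
# the containers' own MAC addresses to the network
Get-VM -Name "VMHost1" | Set-VMNetworkAdapter -MacAddressSpoofing On

# Inside the VM host: create an external virtual switch bound to the
# VM host's network adapter; containers on it will lease IPs via DHCP
New-VMSwitch -Name "ContainerDHCP" `
    -NetAdapterName "Ethernet" `
    -AllowManagementOS $true
```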