Managing Windows Server Containers with PowerShell: Connecting to a Network

In this article series, I’m showing how you can manage Windows Server Containers on Windows Server 2016 (WS2016) Technical Preview 3 (TPv3) using PowerShell. I’ve already covered where the files are stored, what kinds of files are used, how to create and start a new container, the methods for interacting with a container, and how to create a new container image. In this post, we’ll cover how you can deploy a new service on the network using containers.

Deploying Containers from Custom Container Images

Let’s assume the following for this post:

  • You have created a container image with nginx web server installed.
  • You have a VM host with a NAT virtual switch.
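
Before going further, you can quickly sanity-check those assumptions. This is just a sketch, not part of the original walkthrough; "Virtual Switch" and "ContainerNat" are simply the names used later in this post, so substitute your own:

Get-VMSwitch -Name "Virtual Switch"   # the NAT virtual switch that the containers will attach to
Get-NetNat -Name "ContainerNat"       # the NAT object that will hold the static port mappings
Get-ContainerImage                    # confirm that DemoImage1 and its WindowsServerCore parent are present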

The goal is to deploy three identical containers on the VM host. In Part 2 of this series, we created a container image called DemoImage1, which is based on the WindowsServerCore container OS image. When we create a new container from DemoImage1, its parentage means that the WindowsServerCore container OS image is automatically linked as a dependency:

New-Container -ContainerImageName "DemoImage1" -Name "Web1" -SwitchName "Virtual Switch"

This cmdlet will create a new container that is identical to the one that was used to create the container image.

Creating lots of identical containers in seconds (Image credit: Aidan Finn)

The repository linking system creates the new container in about a second, and we can repeat the command with a new container name to instantly create clones of the container. In other words, we could deploy a large application or web farm of hundreds of instances in seconds, especially if you use a loop. The following will create containers Web4 to Web10 in just a few seconds:

4..10 | % { New-Container -ContainerImageName "DemoImage1" -Name "Web$_" -SwitchName "Virtual Switch" }

Using PowerShell to automate Windows Server Containers (Image credit: Aidan Finn)

I bet you’re thinking that PowerShell and born-in-the-cloud applications are looking pretty impressive right about now.

You can use Start-Container to launch a container:

Start-Container Web1

Or you could start them all at the same time:

Start-Container Web*

Make sure that the service installed in the container has started; Microsoft’s nginx example requires you to start nginx manually.
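
For example, assuming nginx was extracted to C:\nginx inside the container image (that path is my assumption; use whatever location your image actually has), something like this will start it remotely:

Invoke-Command -ContainerId (Get-Container Web1).ContainerId -ScriptBlock { Set-Location C:\nginx; Start-Process .\nginx.exe }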

Connecting Containers to the Network

If you decided to use DHCP networking, then your containers should get a DHCP-assigned address, provided that the virtual and physical networks and the DHCP server are correctly configured for the broadcast domain.

However, if you went with NAT addressing, then each of your containers will get a private IP address from the NAT virtual switch. This makes things a little more complicated to set up, but it is more scalable: we can run a large farm of containerized web servers behind a single IP address on the LAN by using NAT. Each container has a web server listening on port 80, and we will configure NAT rules on the VM host to intercept traffic on an unused TCP port and forward it to TCP 80 in the container. For example:

Container    NAT Port
Web1         60001
Web2         60002
Web3         60003
Web4         60004
Web5         60005
Web6         60006
Web7         60007
Web8         60008
Web9         60009
Web10        60010

The format of a rule works something like this:

  • Inbound traffic arrives at the VM host on port X
  • The traffic is forwarded to the IP address of the container on port Y

We need the private IP address of the Web1 container:

Invoke-Command -ContainerID (Get-Container Web1).ContainerId -ScriptBlock {(Get-NetIPAddress -AddressFamily IPv4).IPAddress}

If that reports a private IP address of 192.168.250.5, then we can create a NAT rule for the Web1 container:

Add-NetNatStaticMapping -NatName "ContainerNat" -Protocol TCP -ExternalIPAddress 0.0.0.0 -InternalIPAddress 192.168.250.5 -InternalPort 80 -ExternalPort 60001

Create a NAT rule for a Windows Server Container (Image credit: Aidan Finn)

If you test this by browsing to the name or IP address of the VM host on port 60001, it will most likely fail. Unlike a Hyper-V host, the VM host itself handles the network traffic, and therefore the Windows Firewall on the VM host filters that traffic by default. If the Windows Firewall is active, you will have to create a rule to allow TCP 60001 inbound to the VM host:

New-NetFirewallRule -Name "TCP60001" -DisplayName "HTTP on TCP/60001" -Protocol tcp -LocalPort 60001 -Action Allow -Enabled True

Now the service is up and running in the container. Below, I have browsed to the IP address of the VM host on the TCP port mapped to the Web1 container:

The container is now a ready web server on the LAN (Image credit: Aidan Finn)

All you have to do now is script the NAT and firewall rules for the remaining containers, configure a network load balancing appliance, and your scaled-out web farm is up and running in less time than it would take to deploy a single virtual machine web server.
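
As a rough sketch of that scripting (my own loop, reusing the "ContainerNat" name and the 6000x port convention from the table above, and assuming each container reports one non-loopback IPv4 address), Web2 through Web10 could be wired up like this:

2..10 | ForEach-Object {
    $name = "Web$_"
    $port = 60000 + $_
    # Ask the container for its private IPv4 address, skipping the loopback address
    $ip = Invoke-Command -ContainerId (Get-Container $name).ContainerId -ScriptBlock { (Get-NetIPAddress -AddressFamily IPv4 | Where-Object { $_.IPAddress -ne "127.0.0.1" } | Select-Object -First 1).IPAddress }
    # Forward the external port on the VM host to TCP 80 inside the container
    Add-NetNatStaticMapping -NatName "ContainerNat" -Protocol TCP -ExternalIPAddress 0.0.0.0 -InternalIPAddress $ip -InternalPort 80 -ExternalPort $port
    # Open the matching inbound port in the Windows firewall on the VM host
    New-NetFirewallRule -Name "TCP$port" -DisplayName "HTTP on TCP/$port" -Protocol tcp -LocalPort $port -Action Allow -Enabled True
}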