Sponsored: Understanding Windows Containers


Containers are a lightweight virtualization solution that, unlike VMs, use the kernel of the host OS and other shared resources to create sandboxes with their own process space and network interfaces, which can be configured independently of the host. Windows uses namespace isolation, resource control, and process isolation technologies to restrict the files, network ports, and running processes each container can access, so apps running in containers can't interact with, or even see, apps running in the host OS or in other containers.
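You can see this isolation for yourself from the Docker command line. The commands below are an illustrative sketch (the image tag is an assumption, not something specified in this article); a process listing taken inside the container shows only the container's own process space, not the host's:

```
# Start an interactive Windows Server Core container
# (image tag is illustrative and will vary by host OS version)
docker run -it mcr.microsoft.com/windows/servercore:ltsc2019 powershell

# Inside the container, list running processes. Only processes in the
# container's own namespace appear; host processes are invisible.
Get-Process
```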

Containers vs. Virtual Machines

There are significant technical differences between containers and VMs. The most important is that containers use a shared kernel, so despite the use of namespace isolation, a malicious user might be able to exploit a design flaw or security vulnerability to break out of a container. VMs provide better isolation, but load a full OS instance into memory, plus any required application libraries, resulting in a large memory and disk footprint that reduces efficiency, portability, and VM density.
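Windows lets you choose between these trade-offs per container. As a sketch of how this looks with the Docker CLI (the image tag is an assumption), the `--isolation` flag selects either the shared-kernel model or Hyper-V isolation, which wraps the container in a lightweight utility VM with its own kernel:

```
# Process isolation: shares the host kernel (higher density, weaker isolation)
docker run --isolation=process mcr.microsoft.com/windows/servercore:ltsc2019 cmd

# Hyper-V isolation: each container gets its own lightweight VM and kernel,
# trading some density and startup speed for stronger isolation
docker run --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2019 cmd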

Containers are more efficient because files, directories, and running processes are shared between containers. Only when a change is made, or a new file is added, does a container get its own distinct copy of a file provided by the host OS, and even then only the blocks that have changed. When containers are started for the first time, they see what appears to be the file system of a freshly installed OS, even if changes have been made to the host, and the running memory holds only what would be present just after the OS has booted.
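Docker can report exactly what a container has diverged from its image, which makes this copy-on-write behavior visible. A rough sketch (container name, file path, and image tag are illustrative assumptions):

```
# Start a named container and modify one file inside it
docker run --name demo mcr.microsoft.com/windows/servercore:ltsc2019 cmd /c "echo hello > C:\greeting.txt"

# List only what changed relative to the shared image:
# A = added, C = changed, D = deleted
docker diff demo
```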

Docker Images

Images are the transportable component of containers; they describe what each container looks like and can be used by multiple containers at the same time. A base image is usually a minimal version of the host OS — so in the case of Windows, that means Server Core or Nano Server — and should contain the core components that an app needs to run. For example, you might add to a base image specific server roles and features that your app requires but that are not included in the OS out of the box. Images are stored in a local repository, and can also be stored in public or private registries hosted by Docker.
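A Dockerfile describes how such an image is built. The fragment below is a minimal sketch of adding a server role on top of a Windows base image (the image tag, feature name, and paths are illustrative assumptions, not taken from this article):

```dockerfile
# Start from a minimal Windows Server Core base image
FROM mcr.microsoft.com/windows/servercore:ltsc2019

# Add a server role the app needs that isn't in the base image
RUN powershell -Command Install-WindowsFeature Web-Server

# Copy the app's content into the image
COPY ./site C:/inetpub/wwwroot
```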

Docker images provide an easy way to distribute applications so that they can run on any compatible OS and, because of their small footprint, be moved and started in seconds. But the clever part is that the images are broken up into layers. The base image is shipped only once. A database layer might be added, and then an application layer. If changes are made to a layer, only the layer containing the modified files is updated.
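In a Dockerfile, each instruction produces its own layer, so the layering described above falls out naturally from how images are built. A sketch, with illustrative image tags and paths:

```dockerfile
# Base OS layer: shipped and cached once
FROM mcr.microsoft.com/windows/servercore:ltsc2019

# Server-role layer: rebuilt only if this instruction changes
RUN powershell -Command Install-WindowsFeature Web-Server

# Application layer: an app change rebuilds only this final layer
COPY ./app C:/app
```

Running `docker history` against the built image lists these layers and their sizes, which makes it easy to see that an application update ships only the topmost layer.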

Developers can pull the components they need, such as databases and web servers, and leave someone else to worry about the setup and maintenance. And although it's possible to place large monolithic applications into containers, componentization allows legacy apps to be restructured into a more serviceable microservices architecture.
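Pulling a ready-made component is a single command. The image names below are illustrative examples of pre-built Windows component images rather than a fixed list:

```
# Pull pre-built component images instead of installing and
# configuring the software yourself (image names are illustrative)
docker pull mcr.microsoft.com/windows/servercore/iis
docker pull microsoft/mssql-server-windows-developer
```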

Containers can run directly on a developer's system, providing an isolated runtime environment for apps, then be lifted and shifted to a pre-production environment, and finally to a production system, even if those systems use different underlying hardware or OS configurations. It's worth noting that because Linux containers rely on shared Linux APIs, they can run only on Linux systems; similarly, Windows containers can run only on Windows systems. Docker can, however, be used to manage containers on both platforms using the same toolset.

If you’d like to learn how to configure containers in Windows Server 2016, including how to isolate them using Hyper-V, try out Microsoft’s free, hands‑on virtual lab, Build your first container using Docker on Hyper‑V.