Azure Virtual Machine Announcements from Build
Microsoft made several announcements regarding Azure virtual machines at the recent Build conference. This post will describe those announcements.
M-Series Virtual Machines
Although it was presented quietly, a massive new series of virtual machines has been announced. How massive? These machines have so many processors and so much RAM that the Hyper-V team had to increase the maximum limits in their testing for Windows Server 2016.
Note that the new machine series mentioned in this article run on Windows Server 2016 Hyper-V in the Azure data centers.
Scott Guthrie has recently been showing off the new M128ms virtual machine during his Red Shirt Tour in Europe. This machine has 128 virtual processors and 3.5TB RAM. Yes, you read that correctly. It has three point five terabytes of RAM!
A page on running SAP on Azure lists three M-Series virtual machines:
- M64ms: 1.75TB RAM, 2TB local disk
- M128s: 2TB RAM, 4TB local disk
- M128ms: 3.5TB RAM, 4TB local disk
The s designation leads us to believe that the local disk is flash-based. A large temp drive would be required to host a paging file for virtual machines with this much RAM.
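If you want to see which sizes are actually offered in a region, the Azure CLI can query the catalog. The following is a minimal sketch, assuming Azure CLI 2.0 (`az`) is installed and you have logged in with `az login`; the region name is just an example.

```shell
# List all VM sizes available in a region:
az vm list-sizes --location westeurope --output table

# Use a JMESPath query to show only the M-Series sizes:
az vm list-sizes --location westeurope \
  --query "[?starts_with(name, 'Standard_M')].{Name:name, vCPUs:numberOfCores, MemoryMB:memoryInMb}" \
  --output table
```

The `name`, `numberOfCores`, and `memoryInMb` fields come straight from the `az vm list-sizes` output, so the query works without any custom tooling.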
Dv3-Series Virtual Machines
We have known about the Dv3-Series of virtual machines for a while. Microsoft introduced a promotional price for Dv2 virtual machines in April and reduced it again in May, with the goal of getting more people onto the D-family of virtual machines. This came with a promise: the new Dv3-Series, running on newer hosts with better performance, would be priced the same as the promotional Dv2 rate. A conversion should be pretty easy.
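That conversion is just a resize operation. Here is a minimal sketch using the Azure CLI; the resource group, VM name, and target size are example values, not anything from the announcement.

```shell
# Resize an existing Dv2 VM to the equivalent Dv3 size:
az vm resize \
  --resource-group myResourceGroup \
  --name myVm \
  --size Standard_D2_v3

# If the target size is not available on the VM's current hardware
# cluster, deallocate first, resize, and start it again:
az vm deallocate --resource-group myResourceGroup --name myVm
az vm resize --resource-group myResourceGroup --name myVm --size Standard_D2_v3
az vm start --resource-group myResourceGroup --name myVm
```

The deallocate step matters because the Dv3 machines run on newer hosts; a stopped-deallocated VM can be placed on any cluster that offers the new size.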
This strategy appears to have worked well. At the time of writing, there are host shortages for DSv2- and FS-Series virtual machines in some regions around the world. Note that these two series share the same host hardware.
Ev3-Series Virtual Machines
This newest family of virtual machines will be based on the 2.3GHz Intel Broadwell E5-2673 v4 processor and will support up to 432GB RAM. This new series offers a lot of memory; beyond being newer, it remains to be seen how it will differ from the G-Series.
ND-Series Virtual Machines
This is a new variant in the N (NVIDIA) family of virtual machines, using NVIDIA Tesla P40 GPUs. These machines are intended for deep learning and artificial intelligence (AI), for those of you who are not afraid of the robot uprising.
A larger GPU memory assignment of 24GB enables a larger neural net to be kept in GPU memory, and the chipset offers twice the performance of the previous generation for AI workloads. Other HPC workloads can also benefit from this series of virtual machines, including scientific calculations, simulations, graphics rendering, and more.
NCv2-Series Virtual Machines
Microsoft is releasing a successor to the NVIDIA-based NC-Series virtual machines. This series is based on NVIDIA Tesla P100 GPUs for compute workloads.
Nested Virtualization
The Dv3- and Ev3-Series virtual machines will run on Windows Server 2016 Hyper-V hosts that support Hyper-Threading and nested virtualization. Scott Guthrie demonstrated Hyper-V running inside an M128ms virtual machine, with virtual machines running inside of the huge M-Series virtual machine.
This new ability to run Hyper-V in Azure will allow you to execute compute on your own terms. Those of you trying to learn Hyper-V will no longer need to purchase hardware; you will just need pay-as-you-go virtual machines that you power up and down as required. This can help to save money.
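A lab like that can be stood up entirely from the command line. The following is a hedged sketch, assuming the Azure CLI and an active subscription; the resource group, VM name, admin username, and size are example values, and `Standard_D4_v3` stands in for any nested-virtualization-capable size.

```shell
# Create a Dv3-Series Windows Server 2016 VM for a Hyper-V lab
# (the CLI will prompt for an admin password):
az vm create \
  --resource-group myResourceGroup \
  --name hypervLab \
  --image Win2016Datacenter \
  --size Standard_D4_v3 \
  --admin-username azureadmin

# Enable the Hyper-V role inside the guest without logging in:
az vm run-command invoke \
  --resource-group myResourceGroup \
  --name hypervLab \
  --command-id RunPowerShellScript \
  --scripts "Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart"

# Deallocate the lab when you are done so you stop paying for compute:
az vm deallocate --resource-group myResourceGroup --name hypervLab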
A normal container shares a kernel with other containers and the host. There is no secure boundary between containers, or between containers and the host, which means you should only use normal containers for internal, trusted code.
The real reason for nested virtualization is that it allows containers to be deployed as Hyper-V Containers. With Hyper-V, each container gets its own mini-kernel that is isolated from the host and from every other container. This boundary means that you can execute hostile code in a container without worrying that it can break out to the host or to other containers.
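On a Windows container host, the difference is a single Docker flag. This sketch uses the `microsoft/nanoserver` base image as an example; any Windows container image would do.

```shell
# Default process isolation: the container shares the host kernel.
docker run --rm --isolation=process microsoft/nanoserver cmd /c echo hello

# Hyper-V isolation: the container runs inside its own utility VM
# with its own kernel, isolated from the host and other containers.
docker run --rm --isolation=hyperv microsoft/nanoserver cmd /c echo hello
```

Nothing else about the image or the workload changes; only the isolation boundary does.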
Azure Cloud Shell
The clues appeared in the Azure Portal shortly before the conference started, when a new >_ icon appeared in the top-right corner. Clicking this icon deploys a container in Azure, and that container gives you a command line shell running in Azure.
This means that you can run the latest command line tools against your Azure subscription(s) without installing or running a shell on your local machine. The real win is a faster interactive experience; anyone who has run PowerShell from their PC against Azure can tell you about the interactive latency.
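Because the Azure CLI comes pre-installed in the Cloud Shell's Bash environment, you can start querying your subscription immediately. A few example commands:

```shell
# Confirm which subscription the shell is signed in to:
az account show --output table

# List the virtual machines and resource groups you can see:
az vm list --output table
az group list --output table
```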
The Cloud Shell launched with support for Bash, with PowerShell support coming later. Linux and open source workloads appear to dominate Azure too!
Cross-Region Azure Site Recovery
Azure Site Recovery (ASR) for Azure virtual machines was announced at Ignite last September. The news at Build was that this feature is coming soon; at a recent cloud architects training course that Microsoft ran in Bellevue, WA, I heard the date given as the end of May.
Once this service arrives, it will change how we implement cross-region fault tolerance for Azure virtual machines. Instead of having to create duplicate deployments, we will be able to replicate most virtual machines from one region to any other Azure region, and execute planned or unplanned failovers in the event of a disaster or a planned migration.