How to Choose an Azure Virtual Machine
In this post, I’m going to give you an updated answer to one of the most common questions that I encounter while selling Azure to my customers: How do I select and size an Azure virtual machine?
Order from the Menu
Azure is McDonald’s, not a Michelin-starred restaurant. You take the burger their way, not yours, and you cannot say how you want your steak cooked. You order from the menu you are given, but you get your order quickly and you can have a lot of it. You cannot say, “I’d like a machine with 4 cores, 64 GB RAM, and a 200 GB C: drive.” That simply is not possible in Azure. Instead, there is a preset list of machine series, and within each series, there is a preset list of sizes.
The C: drive is always 127 GB (unless you upload your own template), no matter what the pricing pages claim as the disk size (that figure is actually the size of the temp drive). Any data you have goes onto a data drive, whose size (and therefore cost) you specify. Remember that storage (OS and data disks) costs extra!
Sizing a Virtual Machine
There are two things to consider here. The first is quite common sense: The machine will need as much RAM, CPU, and disk as your operating system and service(s) will consume; that’s no different from how you sized on-premises physical or virtual machines in the past.
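Because you order from a preset menu rather than specify exact specs, sizing boils down to finding the smallest preset that satisfies your workload. Here is a minimal sketch of that selection; the size catalog and hourly rates below are hypothetical illustrations, not live Azure data, so always check the current pricing pages.

```python
# Pick the cheapest preset size that satisfies a workload's requirements.
# The catalog below is a hypothetical illustration, not real Azure data.
SIZES = [
    # (name, cores, ram_gb, hourly_usd) -- illustrative figures only
    ("A1", 1, 1.75, 0.06),
    ("A2", 2, 3.5, 0.12),
    ("D2", 2, 7.0, 0.19),
    ("D3", 4, 14.0, 0.38),
]

def smallest_fit(cores_needed, ram_needed_gb):
    """Return the cheapest preset size meeting the CPU and RAM requirements."""
    candidates = [s for s in SIZES
                  if s[1] >= cores_needed and s[2] >= ram_needed_gb]
    return min(candidates, key=lambda s: s[3]) if candidates else None

print(smallest_fit(2, 4))  # → ('D2', 2, 7.0, 0.19): A2 has too little RAM
```

The point of the sketch: you round your requirement up to the nearest preset; you never get exactly the specs you would have chosen yourself.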
What is different is that you are in a cloud-scale environment that offers high availability (HA) via scale-out instead of densely packed hardware. On-premises, you load hosts, networks, and storage with dual-this and redundant-that. That’s not how the big-three clouds work; they offer HA through sheer scale. This is reinforced by Microsoft’s SLA for virtual machines: your machines only qualify for the SLA if you deploy two or more instances of each machine in an availability set.
The other factor of cloud-scale computing is that you should deploy an army of ants, not a platoon of giants. Big virtual machines are extremely expensive. A more affordable way to scale is to deploy smaller machines that share a workload and can be powered on (billing starts) and off (billing stops) based on demand, possibly using the Auto-Scale feature.
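The economics of the army-of-ants approach come from per-minute billing: deallocated machines stop billing, so a fleet of small machines that scales with demand can undercut one big machine that runs 24/7. A rough back-of-the-envelope comparison, using entirely hypothetical hourly rates:

```python
# Compare the monthly cost of one big VM running 24/7 against a fleet of
# small VMs that scale with demand. All hourly rates are hypothetical.
BIG_VM_HOURLY = 1.60    # one "giant", always on
SMALL_VM_HOURLY = 0.20  # one "ant"
HOURS_PER_MONTH = 730

# Suppose demand needs 8 ants at peak (8 hours/day, 30 days) but only 2
# off-peak; powered-off (deallocated) machines stop billing entirely.
peak_hours = 8 * 30
offpeak_hours = HOURS_PER_MONTH - peak_hours

giant_cost = BIG_VM_HOURLY * HOURS_PER_MONTH
ants_cost = SMALL_VM_HOURLY * (8 * peak_hours + 2 * offpeak_hours)

print(f"giant: ${giant_cost:.2f}, ants: ${ants_cost:.2f}")
# → giant: $1168.00, ants: $580.00
```

The exact numbers don’t matter; the shape of the result does. The fleet only pays for capacity while it is needed, which is exactly what Auto-Scale automates.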
Choosing a Virtual Machine Series
Browse to the HPE or Dell sites and have a look at the standard range of rack servers. You’ll find DL360s, R420s, DL380s, R730s, and so on. Each of these is a series of machines, and within that series, you’ll find a range of preset sizes. Azure works the same way: once you select a series, you find the size that suits your workload, and the per-hour price (billed per minute) of running that machine is listed.
Let’s take a look at the different series of Azure virtual machines. Please remember that not all of the series are in all regions.
The A-Series
This is the lowest-end and cheapest series of machine in Azure. The A-Series (Basic and Standard) runs on hosts with AMD Opteron 4171 HE 2.1 GHz CPUs. This processor is designed for core density and power efficiency rather than horsepower, so it’s fine for lighter loads such as small web servers, domain controllers, and file servers.
The Basic A-Series machines have some limitations:
- Data disks are limited to 300 IOPS each.
- You cannot load balance Basic A-Series machines. This means you cannot use NAT in an ARM/CSP deployment via an Azure load balancer.
- Auto-Scale is not available.
I like this series for domain controllers because my deployments are not affected by the above, and it keeps the costs down.
This is the most common machine that I have encountered in Azure. Using the same hardware as the Basic A-Series, the Standard A-Series has some differences:
- Data disks are limited to 500 IOPS, which is the norm for Standard Storage (HDD) accounts.
- You can use Azure load balancing.
- Auto-Scaling is available to you.
There are a few specialized machines in this series. The A8 and A9 sizes have large allocations of RAM (56 GB and 112 GB, respectively) and a second NIC, a low-latency 40 Gbps InfiniBand adapter, for HPC workloads. The A10 and A11 are similar but lack the InfiniBand networking.
The D-Series
When I think D-Series, I think “D for disk.” The key feature of the D-Series machine is that the temp drive is on a host-local SSD volume. This makes the paging file (on the temp drive) really fast, and DBAs can place their caching/temp databases on this non-persistent volume. More performance is possible because this is the first Azure machine series to offer an Intel Xeon processor: the Intel Xeon E5-2660 2.2 GHz CPU, to be precise.
The DS-Series
When you see S in the name of an Azure virtual machine series, it designates that the machine can use SSD-based Premium Storage accounts for OS and/or data disks. The DS-Series is identical in sizing and pricing to the D-Series, but you can opt for SSD storage to increase IOPS (beyond the 500 IOPS of Standard Storage) and reduce latency; note that Premium Storage costs extra.
Microsoft recommends the DS-Series for SQL Server workloads, and that has led to some of my customers asking questions when they get their first bill. Such a blanket generalization is unwise: some SQL Server workloads are fine with HDD storage, and some will require SSD. If you need lots of IOPS, then Premium Storage is the way to go, but don’t forget that you can aggregate Standard Storage data disks to get more IOPS.
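Aggregating Standard Storage disks is simple arithmetic: each data disk tops out at roughly 500 IOPS, so you stripe enough disks together (with Storage Spaces, for example) to reach your target. A quick sketch of that calculation:

```python
import math

# Each Standard Storage data disk tops out at roughly 500 IOPS; striping
# several disks together aggregates their IOPS. This is a rule-of-thumb
# calculation, ignoring per-VM data disk limits and striping overhead.
IOPS_PER_STANDARD_DISK = 500

def disks_needed(target_iops):
    """Minimum number of Standard Storage data disks to reach target IOPS."""
    return math.ceil(target_iops / IOPS_PER_STANDARD_DISK)

print(disks_needed(2000))  # → 4: four striped disks reach ~2,000 IOPS
```

Remember that each series/size also caps how many data disks you can attach, so for very high IOPS targets Premium Storage may be the only option.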
The Dv2-Series
The Dv2-Series is a successor to the D-Series, with the Intel Xeon E5-2673 v3 2.4 GHz CPU, which can reach 3.1 GHz using Intel Turbo Boost Technology 2.0. Microsoft claims this will offer 35% more speed than the original D-Series.
If you need to run a workload with the best available processor, then the Dv2-Series is the type to select.
The G-Series
G is for Goliath. The G-Series virtual machines offer much more RAM per core than any other virtual machine in Azure, all the way up to 448 GB RAM, based on hosts with a 2.0 GHz Intel Xeon E5-2698B v3 CPU. If you need a lot of memory, these are the machines to choose.
The GS-Series
There’s that S designation again! The GS-Series takes the G-Series and adds a Premium Storage capability for OS and/or data disks, the same way that the DS-Series does for the D-Series. The price of the GS-Series is the same as that of the G-Series, plus the cost of the Premium Storage.
The F-Series
This name reminds me of a pickup truck, and I expect that Microsoft intends the brand-new F-Series to be the workhorse of the future, possibly replacing the A-Series and more. The F-Series uses the same Intel Xeon E5-2673 v3 2.4 GHz CPU as the Dv2-Series, with the same 3.1 GHz turbo boost. The F-Series also uses SSD for the temporary drive.
The differences with the F-Series are:
- A new numbering system where the name tells you how many cores the machine runs on; an F4 has 4 virtual CPUs.
- You can get access to cores more affordably than with the Dv2-Series. An F2 with 2 cores and 4 GB RAM is cheaper than a D2 v2 with 2 cores and 7 GB RAM.
- RAM is delivered in more recognizable amounts of 2 GB, 4 GB, 8 GB, instead of 1.75 GB, 3.5 GB, 7 GB.
- A new naming system.
Starting with the F-Series, a letter designator will be added to the end of the name to mark specializations. For example, you can deploy an F2s machine instead of an F2 machine, where the “s” indicates that the machine can use Premium Storage for the OS and/or data disks.
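The new convention is regular enough to decode mechanically: the number is the core count, and a trailing “s” flags Premium Storage support. A small sketch of that decoding (an illustration of the naming convention only, not an official API):

```python
import re

# Decode an F-Series size name under the new naming scheme:
# the number is the core count; a trailing "s" means Premium Storage.
def parse_f_size(name):
    m = re.fullmatch(r"F(\d+)(s?)", name)
    if not m:
        raise ValueError(f"not an F-Series size: {name}")
    return {"cores": int(m.group(1)), "premium_storage": m.group(2) == "s"}

print(parse_f_size("F2s"))  # → {'cores': 2, 'premium_storage': True}
print(parse_f_size("F4"))   # → {'cores': 4, 'premium_storage': False}
```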
Microsoft talks about the F-Series being good for web servers and other similar workloads. That’s a common task, so this gives me further reason to think that this pickup truck will supplant the A-Series.
The N-Series
Microsoft has announced the N-Series machines, which are not yet generally available. The N name stands for NVIDIA, and that’s because the hosts will have NVIDIA chipsets, which are presented directly to the virtual machines using a new Hyper-V feature called Discrete Device Assignment (DDA).
You could use the NVIDIA chipsets for powering graphics over RemoteFX (remote desktop), but I expect that the chipsets will be most popular with HPC workloads where the computation abilities of the M60 and K80 GPUs will reduce work times.
Note: Microsoft has now announced a new naming standard with the F-Series, so I expect the above N-Series sizes will be renamed.