Designing a Hyper-V Virtual Machine
When you specify a physical server, you typically figure out the requirements of the operating system and application; determine support and performance requirements; and configure the required storage, memory, processors, networking, and so on. This doesn’t really change with Hyper-V, or any virtualization platform for that matter. In this post I will discuss some of the things you should consider when configuring a virtual machine to run on Hyper-V.
Support for Virtualization
Will your desired guest operating system and software (a) run in a Hyper-V virtual machine and (b) support Hyper-V or virtualization? You cannot just assume that your software and OS will work and be supported.
Microsoft has listed the supported guest operating systems on TechNet. Note that several Linux distributions are listed at this time (this list continues to grow).
- Red Hat Enterprise Linux
- SUSE Linux Enterprise Server
- openSUSE
- Oracle Linux (thanks to a unique Oracle/Microsoft partnership)
“Supported” and “works” have very different meanings for Microsoft.
- Supported: The above Linux distributions are fully supported. This means Microsoft can provide technical support and engineer fixes for any issue you might have.
- Works: Lots of x86 or x64 operating systems work on Hyper-V. Any Linux with a modern kernel will even have the Hyper-V drivers built-in. However, Microsoft will not provide technical support for these unsupported guest OSs.
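If you want to confirm what a guest is actually offering the host, the Hyper-V PowerShell module can report the state of each integration service per VM. A quick sketch (the VM name "lin-web01" is a placeholder):

```powershell
# List the integration services and their state for a guest
# ("lin-web01" is a hypothetical VM name).
Get-VMIntegrationService -VMName "lin-web01" |
    Select-Object Name, Enabled, PrimaryStatusDescription
```

A Linux guest with the built-in Hyper-V drivers will show most of these services as enabled, even if the distribution itself is not on the supported list.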
You next need to check the support statement for the software that you want to run. This is information you need to get from the software vendor. For example, Microsoft posts their information on the SVVP site and Oracle’s support statement for Hyper-V can be found here.
Note that support statements are sometimes more than just yes/no decisions. Some software, such as SharePoint or Exchange, has very detailed guidance on how to specify a virtual machine and on which virtualization features (not just Hyper-V's) it supports.
Finally, you need to consider hardware. Can a Hyper-V virtual machine scale to your needs? For all but a handful of servers, Hyper-V can do it: 64 virtual processors, 1 TB RAM, up to 256 virtual hard disks of 64 TB each, the ability to use iSCSI and Fibre Channel LUNs, and Shared VHDX. But peripheral hardware can be a challenge. There is no SCSI device pass-through, PCI virtualization, or USB pass-through. If special hardware is required, such as an ISDN card, then virtualization is not possible for that machine.
Configuring the Virtual Machine
In a traditional virtualization deployment, you create each virtual machine according to the needs of the service that runs within it. In a cloud where self-service is the rule, you will need to create lots of hardware profiles (using the System Center Virtual Machine Manager (SCVMM) approach) and/or provide a means for tenants to customize virtual machines at or after deployment. In the hosting world, this is a means of generating more revenue – you do want the tenant to consume and pay for more resources, after all!
Generation 1 or Generation 2 VM
Windows Server 2012 R2 Hyper-V added a new, optimized, UEFI-based virtual machine hardware type called Generation 2. While this is supported by SCVMM 2012 R2, many customers might be conservative and stick with Generation 1. Alternatively, going with Generation 2 will prepare your VMs for whatever Microsoft has planned for the future. At this point, I have not formed an opinion on this decision. You might need to consult with your backup solution vendor for support.
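One thing to remember when scripting deployments: the generation is chosen at creation time and cannot be changed afterwards. A minimal sketch using the Hyper-V PowerShell module (the VM name, memory, and switch name are placeholders):

```powershell
# Create a Generation 2 (UEFI-based) VM; the generation cannot be
# changed after creation. Name, memory, and switch are examples only.
New-VM -Name "app01" `
       -Generation 2 `
       -MemoryStartupBytes 1GB `
       -SwitchName "External-vSwitch"
```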
Processors
Each virtual processor will consume resources from a logical processor (a core or a thread) on the host. Assign only enough virtual processors to keep the CPU run queue length in the guest OS at a reasonable level. Do not over-assign virtual processors; the unused vCPUs will needlessly occupy host logical processors and therefore starve other virtual machines of the ability to become active.
Some applications require that a virtual machine has complete ownership of half or a full logical processor on the host for each vCPU. You can use the Virtual Machine Reserve setting to accomplish this.
Configuring processor settings in a Hyper-V VM.
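Reserving half of a logical processor per vCPU, for example, can be expressed with Set-VMProcessor; the Reserve value is a percentage of each virtual processor's share of a logical processor. A sketch (the VM name is hypothetical):

```powershell
# Give the VM 4 vCPUs and reserve 50% of a logical processor
# for each one ("sql01" is a hypothetical VM name).
Set-VMProcessor -VMName "sql01" -Count 4 -Reserve 50
```

Set the reserve to 100 if the application vendor demands a full logical processor per vCPU.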
Memory
Do not assume that Dynamic Memory is supported. For example, the Exchange Mailbox role restricts memory optimization on all virtualization platforms. There are two approaches to setting the Maximum RAM setting:
- The default: Maximum RAM is set to 1 TB. Memory will be assigned to the guest as required, up to this limit or the limit of the host (probably encountered first). I really dislike this “elastic” setting; memory leaks in a guest OS plus grumpy tenants who refuse to pay will ruin your day… regularly.
- Realistic: If you know a service requires a maximum of 8 GB RAM then set that as the maximum RAM of the VM. Allow the VM to balloon down when idle.
What about startup RAM? In production, I will set this to at least 1 GB. This is enough for a tenant to be able to install SQL Server without opening a helpdesk call. Ideally, I set startup RAM to whatever is required to get the (potentially) installed services working correctly.
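These startup and maximum choices map directly onto Set-VMMemory. A sketch with illustrative values (the VM name and sizes are examples, not recommendations):

```powershell
# Enable Dynamic Memory with a realistic ceiling instead of the
# 1 TB default. All names and values are examples.
Set-VMMemory -VMName "app01" `
             -DynamicMemoryEnabled $true `
             -StartupBytes 1GB `
             -MinimumBytes 512MB `
             -MaximumBytes 8GB
```

The minimum is what the VM can balloon down to when idle; keep it high enough that the guest's services stay healthy.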
Storage
Deploy storage in the VM as if you were planning LUNs in a physical server. Use the first disk for the operating system. If you plan to use Hyper-V Replica, then place the paging file in a second VHDX file (to allow exclusion from replication). All data should be stored in dedicated VHDX files on a SCSI controller. This gives more granular control and allows you to take advantage of WS2012 R2 online resizing of VHDX files.
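That layout (OS disk, separate paging disk, dedicated data disks on the SCSI controller) might be scripted like this; the paths and sizes are illustrative:

```powershell
# Create a dynamically expanding data disk and attach it to the
# VM's SCSI controller (VM name, paths, and sizes are examples).
New-VHD -Path "D:\VMs\app01\Data1.vhdx" -SizeBytes 100GB -Dynamic
Add-VMHardDiskDrive -VMName "app01" -ControllerType SCSI `
                    -Path "D:\VMs\app01\Data1.vhdx"

# WS2012 R2 can resize a SCSI-attached VHDX while the VM is running:
Resize-VHD -Path "D:\VMs\app01\Data1.vhdx" -SizeBytes 200GB
```

Online resizing only works for VHDX files attached to the virtual SCSI controller, which is another reason to keep data off the IDE controller in Generation 1 VMs.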
Networking
I always use the synthetic network adapter for production because the legacy NIC has a performance cost. The legacy NIC is the only way to get PXE boot in a Generation 1 VM, but I only use that in a lab. I deploy production VMs from templates (with SCVMM) or from a generalized (Sysprep) VHDX file (without SCVMM).
I will usually only have one NIC in a VM; the NIC is connected to a virtual switch and the virtual switch is connected to a NIC team. There are rare exceptions:
- A VM must connect to multiple physically isolated networks.
- SR-IOV is being used and two NICs in the VM are required for NIC teaming.
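Adding a second synthetic NIC for one of those exceptions is a one-liner; the VM and switch names are placeholders:

```powershell
# Attach an additional synthetic NIC to a second, physically
# isolated virtual switch (names are examples).
Add-VMNetworkAdapter -VMName "app01" -SwitchName "DMZ-vSwitch"
```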
Start Up and Shutdown
You can control and/or delay the automatic start up of a virtual machine. I will automatically start a virtual machine if it was running when the host was shut down. I try to delay the start up by a few minutes. You can stagger those start ups if you want to model service dependencies.
The shutdown action is more interesting. If you choose to place virtual machines in a saved state when the host is shut down, then Hyper-V will maintain a .BIN file with the VM to reserve disk space for the current amount of RAM in the VM. You should account for this disk space when planning physical storage. In WS2012 and later, Hyper-V won’t keep that .BIN file if you choose an alternative setting.
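Both behaviors are per-VM settings on Set-VM. A sketch of a delayed automatic start plus a clean guest shutdown on host shutdown (rather than a saved state, so no .BIN disk reservation); the VM name and delay are examples:

```powershell
# Restart the VM automatically (if it was running), delayed by
# 120 seconds, and shut the guest down cleanly when the host
# shuts down instead of saving state (avoids the .BIN reservation).
Set-VM -Name "app01" `
       -AutomaticStartAction StartIfRunning `
       -AutomaticStartDelay 120 `
       -AutomaticStopAction ShutDown
```

Note that ShutDown relies on the guest's integration services responding; an unresponsive guest can delay the host shutdown.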