How to Build a Windows Server 2012 R2 Hyper-V Test Lab

As someone whose job it is to learn, teach, and evangelize Windows Server and Hyper-V, I consider the lab a critical tool. With this in mind, today I’m going to discuss what to consider when building Windows Server 2012 R2 Hyper-V test lab environments and share what I’ve learned from building my own.

Windows Server 2012 R2 Hyper-V Lab Options

There is quite a bit to learn about Windows Server 2012 R2 (WS2012 R2) Hyper-V, especially if you have not yet used Windows Server 2012 (WS2012) Hyper-V, which was a huge release. Many of the important changes are related to networking and storage, and this means you need hardware that is capable of supporting these features.

Storage Options

Let’s start with the most economical storage option: iSCSI. It is probably the most commonly encountered storage connectivity solution in the market. iSCSI typically means an expensive hardware SAN, but there are other options. While a hardware solution might support attractive features such as Offloaded Data Transfer (ODX) and TRIM, and it may give you the option to play with a hardware Volume Shadow Copy Service (VSS) provider for Cluster Shared Volume (CSV) backup, acquiring a hardware SAN just for a lab will be a challenge unless you work for a hardware reseller that needs a realistic demo lab.

Some businesses have chosen to deploy software-based third-party iSCSI SAN products that convert a server (possibly running Windows Server) into a storage appliance. That can be too expensive for a lab, especially a home lab. A more economical option is to use the iSCSI target that is built into WS2012 and WS2012 R2. This is a feature that you can turn on to share the storage in your server with other application servers (such as Hyper-V hosts) for data storage. You can quite happily use the iSCSI target as the shared storage for a Hyper-V cluster.
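If you want to try this, here is a minimal PowerShell sketch for a WS2012 R2 server acting as the iSCSI target; the role name is real, but the disk path, size, target name, and initiator IQNs are placeholder values for a hypothetical lab.

    # Install the iSCSI Target Server role service on the storage server
    Install-WindowsFeature -Name FS-iSCSITarget-Server

    # Create a virtual disk that will be presented to the Hyper-V hosts as a LUN
    New-IscsiVirtualDisk -Path "D:\iSCSIVirtualDisks\CSV1.vhdx" -SizeBytes 200GB

    # Create a target and allow the two lab Hyper-V hosts to connect to it
    New-IscsiServerTarget -TargetName "HyperVCluster" `
        -InitiatorIds "IQN:iqn.1991-05.com.microsoft:host1.lab.local",
                      "IQN:iqn.1991-05.com.microsoft:host2.lab.local"

    # Map the virtual disk to the target so the initiators can see it
    Add-IscsiVirtualDiskTargetMapping -TargetName "HyperVCluster" -Path "D:\iSCSIVirtualDisks\CSV1.vhdx"

On each Hyper-V host, the built-in iSCSI initiator (New-IscsiTargetPortal and Connect-IscsiTarget) would then connect to that target.
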
Windows Server 2012 also introduced the ability to virtualize Fibre Channel host bus adapters (HBAs) so that virtual machines can connect directly to LUNs in a Fibre Channel SAN. While this feature becomes less important in WS2012 R2 thanks to shared VHDX, some people might still want to learn how to implement it. There is no easy way to say this: Fibre Channel storage is the most expensive option for a SAN, and it is really unlikely that you will be able to learn this functionality unless you work for a hardware reseller or have a decommissioned SAN at work that supports WS2012 or WS2012 R2.

One of the more interesting storage options in WS2012 and WS2012 R2 is SMB 3.0 storage; this is where Hyper-V stores virtual machines on a file share. There is much to learn with this new technology, starting with the architecture, which could be a simple file share on a single server or it could be a Scale-Out File Server (SOFS) made up of multiple servers with shared storage. That SOFS shared storage could be a SAN, an iSCSI target, or Storage Spaces on just a bunch of disks (JBOD).
If you choose the SOFS with Storage Spaces (on a JBOD) option, then there are also new features in WS2012 R2 to learn that leverage SSDs and traditional hard disks to create tiered storage. The JBOD of a SOFS must support at least two servers connecting to it at once (for two nodes in the SOFS cluster) and the disks must be dual-port SAS. Here’s the bad news: The SSDs that support this currently cost well over $1,000, and you’ll need at least four of them to see any performance. You don’t need SSDs to learn the basics of a SOFS, but they are nice to have – especially if you need to run competitive customer demonstrations.
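
To give a flavor of the tiering piece, here is a hedged PowerShell sketch for a WS2012 R2 server with a Storage Spaces certified JBOD attached; the pool, tier, and virtual disk names and the tier sizes are made-up lab values.

    # Pool every physical disk in the JBOD that is available for pooling
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "LabPool" `
        -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

    # Define an SSD tier and an HDD tier (tiering is new in WS2012 R2)
    $ssdTier = New-StorageTier -StoragePoolFriendlyName "LabPool" -FriendlyName "SSDTier" -MediaType SSD
    $hddTier = New-StorageTier -StoragePoolFriendlyName "LabPool" -FriendlyName "HDDTier" -MediaType HDD

    # Create a tiered, mirrored virtual disk that the SOFS can turn into a CSV
    New-VirtualDisk -StoragePoolFriendlyName "LabPool" -FriendlyName "VMStore1" `
        -ResiliencySettingName Mirror -StorageTiers $ssdTier, $hddTier `
        -StorageTierSizes 100GB, 900GB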

Networking Options

I probably spent as much time learning the networking of WS2012 as I did learning Hyper-V itself. For novices there is an incredible amount to take in. Some of the functionality can be learned and demonstrated on basic hardware, and some of it will require a very nice budget. Your first step will be to understand the requirements of Hyper-V networking and use this to calculate the number of NICs you will need in your servers.
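One common way to keep the physical NIC count down in a lab is converged networking; the following is a minimal PowerShell sketch assuming two spare 1 GbE ports named NIC1 and NIC2, with the team, switch, virtual NIC names, and bandwidth weight as illustrative values only.

    # Team the physical NICs and bind a single virtual switch to the team
    New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1", "NIC2" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
        -MinimumBandwidthMode Weight -AllowManagementOS $false

    # Carve out management OS virtual NICs instead of buying more physical ports
    Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40
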
WS2012 networking plays a huge role in SMB 3.0 storage. You can use and learn SMB 3.0 storage on unenhanced 1 GbE networks, but some of the newer features require specific hardware if you want hands-on experience. SMB 3.0 can leverage Remote Direct Memory Access (RDMA) to offload the processing and reduce the latency of SMB 3.0 networking. The NICs and switches with RDMA support are not cheap. You can choose 10 GbE iWARP NICs, RoCE NICs, or InfiniBand networking. Of the three, iWARP is probably the cheapest option, but bear in mind that you will require at least one 10 GbE switch with support for Data Center Bridging (DCB).
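If you do get RDMA-capable kit, the hedged PowerShell below shows one way to confirm that SMB Direct has something to work with and to tag SMB traffic for DCB; the adapter names and the 802.1p priority value are examples only.

    # Confirm the NICs expose RDMA and that SMB can see RDMA-capable interfaces
    Get-NetAdapterRdma
    Get-SmbServerNetworkInterface

    # Install DCB and give SMB (TCP 445) traffic its own priority class
    Install-WindowsFeature -Name Data-Center-Bridging
    New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
    Enable-NetQosFlowControl -Priority 3
    Enable-NetAdapterQos -Name "RDMA1", "RDMA2"
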
Single Root I/O Virtualization (SR-IOV) allows virtual machines to bypass the management OS networking stack and connect directly to the physical NICs in a host. It sounds like a very nice feature for minimizing network latency for virtual machines and increasing host capacities to huge levels, but in my years working in IT, I’ve only ever encountered one company that would even need this feature. Adopting SR-IOV comes at a cost – it requires host (motherboard, NICs, drivers, and firmware) support and you lose all of the functionality that the Hyper-V virtual switch offers. If you do want to learn SR-IOV on a budget then choose NICs that offer it… and buy Dell servers. I really dislike recommending a specific brand, but in my opinion IBM has woeful support for System Center and HP is stuck in 2009 (SR-IOV support is only in HP’s most expensive hardware).
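For the curious, here is a hedged PowerShell sketch of what enabling SR-IOV looks like on a host that supports it; the switch, adapter, and VM names are placeholders.

    # Check what the hardware reports before spending any money
    Get-NetAdapterSriov

    # SR-IOV must be enabled when the virtual switch is created; it cannot be turned on afterwards
    New-VMSwitch -Name "SRIOVSwitch" -NetAdapterName "10GbE-1" -EnableIov $true

    # Ask for a virtual function on the virtual machine's network adapter
    Set-VMNetworkAdapter -VMName "TestVM01" -IovWeight 100

    # Verify whether the switch actually got SR-IOV working, and why not if it failed
    Get-VMSwitch -Name "SRIOVSwitch" | Format-List IovEnabled, IovSupport, IovSupportReasons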

Hyper-V

There are two aspects to Hyper-V: the software and the hardware. TechNet, long the best option for acquiring Microsoft software for evaluation and learning, is coming to an unfortunate end. MSDN offers the same access to infrastructure software, but it is much more expensive because it includes much more content that is intended for developers. That leaves you with Hyper-V Server and time-limited evaluation copies of Windows Server. My preference is to use a full install of Windows Server in production rather than the Core install, so naturally I’d have to recommend using the time-limited evaluation copies of Windows Server in a lab. But you could deploy Windows 8 or Windows 8.1 on a PC and use the Remote Server Administration Tools (RSAT) to remotely manage the free and fully featured Hyper-V Server, which does not have a time limit.
On the hardware front, you don’t necessarily need servers. If you are on a tight budget you could use PCs or laptops.  The hardware requirements are simple:

  • A CPU that supports hardware-assisted virtualization (Intel VT-x or AMD-V).
  • Data Execution Prevention (no execute bit) turned on in the BIOS.
  • Sufficient RAM and disk space for the management OS and virtual machines.

Any machine that has an Intel Core i3, i5, or i7 processor should support Hyper-V, and it will also give you Second Level Address Translation (SLAT), which enables further virtualization functionality such as RemoteFX.
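
Before repurposing a PC, you can check these requirements from PowerShell; a minimal sketch, relying on WMI properties that Windows 8/WS2012 and later expose:

    # Query the CPU for virtualization, SLAT, and monitor mode support,
    # and the OS for Data Execution Prevention
    Get-CimInstance -ClassName Win32_Processor |
        Select-Object Name, VirtualizationFirmwareEnabled,
                      VMMonitorModeExtensions, SecondLevelAddressTranslationExtensions
    (Get-CimInstance -ClassName Win32_OperatingSystem).DataExecutionPrevention_Available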

My Home Lab

I needed to build a home lab on a budget when I wrote a book on WS2012 Hyper-V. The lab existed purely so that I could learn and document how to implement the features of Hyper-V. I already had some old hardware:

  • A six-year-old tower PC that became my domain controller, iSCSI target, and SMB 3.0 file server. I added a USB drive to give it extra storage capacity; the performance was bad but I was more concerned with capacity on a shoestring than a fast environment.
  • A three-year-old laptop, which became an additional Hyper-V host for Hyper-V Replica. I needed a third host so I could replicate to/from a Hyper-V cluster.

Speaking of the Hyper-V cluster, I purchased two tower PCs with i5 processors and the minimum amount of OEM RAM. I expanded the RAM to 16 GB in each by buying memory from a third-party manufacturer; this was much more economical than buying it from the OEM. Each PC came with a single 1 GbE NIC, which is not nearly enough, so I supplemented this by adding a quad-port adapter to each of the hosts, giving me five NICs per host.

My home test lab.

The networking was simple: I bought a simple unmanaged switch from a consumer electronics store and picked up a few network cables from work. (Those things are crazy expensive via retail!) There are no VLANs in this lab. I instead prefer to use simple IP subnetting, with addresses such as 192.168.1.0/24, 192.168.2.0/24, 10.1.1.0/24, and so on. This makes the lab easy to reconfigure at the software layer without having to delve into switch management.
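In practice that just means giving each role’s NIC an address in its own subnet; a minimal PowerShell sketch, with interface aliases and addresses that are only examples:

    # One flat, unmanaged switch; the subnets alone keep the traffic types apart
    New-NetIPAddress -InterfaceAlias "Management" -IPAddress 192.168.1.11 -PrefixLength 24
    New-NetIPAddress -InterfaceAlias "Live Migration" -IPAddress 192.168.2.11 -PrefixLength 24
    New-NetIPAddress -InterfaceAlias "Cluster" -IPAddress 10.1.1.11 -PrefixLength 24
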
I rebuilt the lab after writing each chapter in the book. Instead of wasting time by reinstalling the management OS on each host every time, I configured the hosts to boot from VHD. Windows 7 (now Windows 8) was installed on each machine, and I configured a second boot entry for a VHD file on the C: drive. The VHD contained the configured management OS, and I retained a safe copy of the file. A host reset was easy – reboot the PC into Windows 7 (now Windows 8) and copy over the VHD file. That took just a few minutes.
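Creating that second boot entry is a few bcdedit commands run from an elevated prompt; a hedged sketch, assuming the prepared image sits at C:\Lab\Host.vhdx and with {guid} standing in for the identifier that the first command prints:

    # Clone the current boot entry and point the copy at the VHD
    bcdedit /copy '{current}' /d "WS2012 R2 Lab Host"
    bcdedit /set '{guid}' device vhd=[C:]\Lab\Host.vhdx
    bcdedit /set '{guid}' osdevice vhd=[C:]\Lab\Host.vhdx
    bcdedit /set '{guid}' detecthal on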

The Lab at Work

My job revolves around delivering bespoke training, demonstrating and evangelizing the technology, and going up against the competition. To do that I need really good kit, one that can perform at the levels and demonstrate the advanced features that customers expect to see. That means my shoestring-budget home lab just would not cut it.
The lab has gone through two generations now. The first generation was purchased during the beta of WS2012, at a time when some of the advanced hardware wasn’t readily available, and none of the manufacturers had a clue of what RDMA, DCB, ODX, and so on, were (sadly it has not gotten much better). We purchased an SFP+ 10 Gbps switch and lots of 10 GbE NICs to put in the servers.
I have learned an important lesson in lab management over the years: You need to separate lab management (patching, remote access, content, and so forth) from the actual demo lab. This gives you the ability to maintain clean management with quick and reliable rebuild processes. That’s why there are two untrusted domains in the lab:

  • Lab: This contains a PC that is a domain controller for the lab and a storage server that also runs Hyper-V. On here I create virtual storage (iSCSI and SMB 3.0) and management systems for the demo domain.
  • Demo: This is where the demonstrations and learning/test environments are built. This can be easily reset. I even do this remotely via VPN access through the lab domain.

The number of machines was limited, but I was able to do quite a bit. For example, I ran System Center in virtual machines that were stored and placed on the lab storage machine. I also ran a virtual SOFS that was made up of three WS2012 virtual machines:

  • one iSCSI target
  • two virtual file servers that used the iSCSI target as their shared storage

With WS2012 R2 Hyper-V you could (and I briefly did) replace this with two virtual file servers using shared VHDX.

Generation 1 work lab.

WS2012 R2 places a bigger emphasis on Storage Spaces and SMB Direct, and this has forced us to expand the lab. We recently added two more servers and a JBOD that is certified for Storage Spaces. Three changes are being made to the lab:

  • SMB Direct: We have replaced the server manufacturer’s 10 GbE NICs with SFP+ iWARP NICs. This will allow me to do SMB Direct for storage and for Live Migration.
  • Scale-Out File Server: The JBOD will be used to demonstrate and teach WS2012 R2 file-based storage, including SOFS and Storage Spaces.
  • Servers: The two old hosts are being repurposed as the nodes in the SOFS. The new servers (64 GB RAM and two 6-core CPUs each), which support SR-IOV and Consistent Device Naming, are being used as the Hyper-V hosts.

Generation 2 work lab.

Once again, I have refrained from creating VLANs on the switch (this one is a managed switch). I just prefer having a completely flexible environment. The reward came 1.5 days after configuration started: I live migrated a 56 GB RAM Linux virtual machine from one host to another in just 35 seconds.
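Speeds like that rely on letting Live Migration use the SMB transport, which is what allows it to ride on SMB Direct and SMB Multichannel over the iWARP NICs; a hedged sketch of the relevant host settings, with the concurrency limit below chosen arbitrarily:

    # Allow live migrations and move the traffic onto SMB 3.0
    Enable-VMMigration
    Set-VMHost -VirtualMachineMigrationPerformanceOption SMB -MaximumVirtualMachineMigrations 2
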
Building this lab taught me another lesson: if you can, buy 2U servers with lots of free expansion card slots. The 1U servers that I selected come with just a single full-height slot and a single half-height (or low profile) slot. This means you have to be very selective about picking add-ons for your servers.
While you can learn quite a bit about Hyper-V on just a single machine, the real learning comes when you have three or more machines. This allows you to explore storage, clustering, Live Migration, and Hyper-V Replica. You can minimize the spending by using PCs, but you will need to spend quite a bit more if you want to learn the advanced hardware-dependent features.