Last Update: Sep 04, 2024 | Published: Sep 30, 2015
Switch Embedded Teaming (SET) is a new way of deploying converged networking in Windows Server 2016 (WS2016), making it easier to take advantage of fewer and bigger network connections in Hyper-V hosts. In this article, I’ll explain what Switch Embedded Teaming (SET) is and discuss the advantages it has over legacy networking. I’ll also explain what we’re able to do with SET in Windows Server 2012 and Windows Server 2012 R2.
I wrote quite a few articles on converged networking soon after I joined the ranks at the Petri IT Knowledgebase. I know that few people have used these features of Windows Server, because almost every time I’m in the field, I encounter hosts with gaggles of 1 GbE NICs, and those who have 10 GbE NICs are often using costly blade chassis switching solutions that offer a hardware-based alternative. So here’s a quick reminder of Microsoft’s software-defined converged networking technologies, which have been with us since Windows Server 2012.
In the days of Windows Server 2008 and Windows Server 2008 R2 Hyper-V, we had large collections of 1 GbE networks, each made up of 1 or 2 teamed physical NICs. Each network was assigned an individual role, such as management, cluster communications, Live Migration, iSCSI, and so on. Such a design is depicted below.
There are several problems associated with this kind of design:
- The host needs a large number of physical NICs, each of which must be purchased and cabled.
- Every one of those NICs consumes a physical switch port, adding further cost.
- Each role is limited to the bandwidth of its own 1 GbE connection, even when the other networks sit idle.
All of these problems help illustrate why we use converged networking. With this kind of design, we can deploy fewer physical NICs, connect to fewer switch ports, and get access to faster networking. Using software, we carve up the aggregated bandwidth of those physical NICs into a small number of software-defined networks. The roles that we had before, such as management, backup, storage, cluster communications, Live Migration, and so on, can be spread out across the networks in a coordinated manner. Service levels can be managed using quality-of-service (QoS) rules on the host, in a virtual machine’s guest OS, and on the physical network. In effect, what Microsoft gave us in Windows Server 2012 was the ability to deploy very flexible networking in every Windows Server, something that often cost enterprise customers an extra $35,000 for every blade chassis that they deployed.
Here is a diagram of a Windows Server 2012 R2 host design with four NICs that takes advantage of converged networking:
The above design simplified host deployment quite a bit, and with Consistent Device Naming (CDN), it’s easy to automate the configuration using PowerShell. You can also deploy it at scale using System Center Virtual Machine Manager (SCVMM) during bare-metal host deployment.
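As a rough illustration, a Windows Server 2012 R2 converged configuration like the one above might be scripted along these lines. The NIC names, vNIC names, and QoS weights are placeholders for this sketch, not values from the diagram:

```powershell
# Team two of the physical NICs for virtual switch traffic
# (the remaining NICs would be reserved for SMB Direct)
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "pNIC1","pNIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Create a virtual switch on the team with weight-based QoS and no default host vNIC
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Carve out host virtual NICs for the different roles
Add-VMNetworkAdapter -ManagementOS -Name "Management"    -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster"       -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"

# Assign QoS weights so each role is guaranteed a share of bandwidth under contention
Set-VMNetworkAdapter -ManagementOS -Name "Management"    -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "Cluster"       -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
```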
With that in mind, there’s still room for improvement. If you are implementing SMB 3.0 storage with SMB Direct (RDMA), then you have no choice but to use at least four NICs in the host, as above. Wouldn’t it be better if we could deploy a single pair of fault-tolerant NICs and converge all network roles onto it? In other words, we want to converge RDMA traffic through the virtual switch, but this isn’t possible in Windows Server 2012 R2.
Secondly, why do we need a NIC team? Physical switches have the concept of multiple uplink ports that work as an aggregated unit. Why can’t the virtual switch do that and further simplify the design?
With the release of the Windows Server 2016 Technical Preview 3, Microsoft revealed that Windows Server 2016 will support a new, simpler, and more efficient way to do converged networking that’s an evolution of what we could do in Windows Server 2012 R2.
The first improvement is that we will not require a NIC team to converge networks in a Hyper-V host. Instead, as you can see below, the physical NICs in the host are connected directly to the virtual switch; this is called Switch Embedded Teaming (SET). The familiar concepts of Windows Server NIC teaming are still there; they simply live inside the virtual switch.
With a SET configuration, networking admins are saved from the separate NIC teaming step; the physical NICs in the host effectively become the uplinks of the virtual switch.
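Based on what Microsoft has shown in Technical Preview 3, creating the SET switch should collapse the team-then-switch process into one step, roughly like this (the switch and adapter names are placeholders):

```powershell
# Create a virtual switch with Switch Embedded Teaming directly on the physical NICs;
# no New-NetLbfoTeam step is needed
New-VMSwitch -Name "SETSwitch" -NetAdapterName "pNIC1","pNIC2" -EnableEmbeddedTeaming $true
```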
A SET configuration also provides a major improvement: RDMA can be converged. This means that we can create virtual NICs in the management OS on the SET switch and enable RDMA on those virtual NICs.
Some of those virtual NICs might carry SMB 3.0 traffic, which can now leverage SMB Direct, giving storage, Live Migration, and redirected I/O communication lower latency and less impact on the host CPU and virtual machine services.
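Assuming the SET switch from the previous sketch, the RDMA-capable host virtual NICs for SMB traffic might be added and enabled along these lines (the vNIC names are illustrative):

```powershell
# Add two host virtual NICs for SMB 3.0 traffic on the SET switch
Add-VMNetworkAdapter -ManagementOS -Name "SMB1" -SwitchName "SETSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "SMB2" -SwitchName "SETSwitch"

# Enable RDMA on the new host vNICs so SMB Direct can use them
Enable-NetAdapterRdma -Name "vEthernet (SMB1)","vEthernet (SMB2)"
```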
Because we are still in the technical preview stage with WS2016, things are probably going to change. Until then, here are some notes on this feature.
The following are compatible with SET:
The following are incompatible with SET:
Note that host-side QoS is on the incompatible list, which makes DCB even more important. I suspect that QoS will be supported at a later date.
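If your RDMA NICs are RoCE-based, DCB with Priority Flow Control is what protects SMB Direct traffic on the physical network. A host-side configuration might look something like this; the priority and bandwidth values are examples and must match what is configured on the physical switches:

```powershell
# Install DCB support and tag SMB Direct (port 445) traffic with priority 3
Install-WindowsFeature Data-Center-Bridging
New-NetQosPolicy "SMBDirect" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable lossless behaviour (PFC) for priority 3 only, and reserve bandwidth for it
Enable-NetQosFlowControl -Priority 3
New-NetQosTrafficClass "SMBDirect" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Apply DCB on the physical NICs
Enable-NetAdapterQos -Name "pNIC1","pNIC2"
```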
You can use up to eight NICs as uplinks in a SET switch, and unlike with NIC teaming, the NICs must be identical. There is no concept of standby NICs; all members are active, and if one member fails, its traffic is redirected across the remaining NICs.
You should enable VMQ with SET. Note that VMQ should not be used with 1 GbE networking, so SET is best used with 10 GbE or faster networking. Configure *RssBaseProcNumber and *MaxRssProcessors to assign cores, not hyperthreads, for processing interrupts for each SET member NIC.
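For example, on a host with hyperthreading enabled (where only even-numbered logical processors are physical cores), the processor assignment for each SET member NIC could be set like this; the processor numbers and adapter names are illustrative:

```powershell
# Assign cores, not hyperthreads, to each SET member NIC
Set-NetAdapterAdvancedProperty -Name "pNIC1" -RegistryKeyword "*RssBaseProcNumber" -RegistryValue "2"
Set-NetAdapterAdvancedProperty -Name "pNIC1" -RegistryKeyword "*MaxRssProcessors"  -RegistryValue "8"
Set-NetAdapterAdvancedProperty -Name "pNIC2" -RegistryKeyword "*RssBaseProcNumber" -RegistryValue "18"
Set-NetAdapterAdvancedProperty -Name "pNIC2" -RegistryKeyword "*MaxRssProcessors"  -RegistryValue "8"
```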
At this point, Live Migration is not supported with SET. In my opinion, this will change in a later preview release; in fact, Microsoft will need to change it to make SET a truly useful design option.
Switch Embedded Teaming has the potential to be a very useful design for Hyper-V host deployments in the future. Right now, SET is more theory than practice, but once Live Migration and QoS support are added, SET will become my norm for deploying WS2016 hosts in my lab.