Creating a NIC Team and Virtual Switch for Converged Networks
A common solution for creating a converged network for Hyper-V is to aggregate links using a Windows Server 2012 (WS2012) or Windows Server 2012 R2 (WS2012 R2) NIC team, such as in the below diagram. We will look at how you can do this in Windows Server and then connect a virtual switch that is ready for Quality of Service (QoS) to guarantee minimum levels of service to the various networks of a Hyper-V cluster.
What Is a NIC Team?
A NIC team gives us load balancing and failover (LBFO) through link aggregation:
- Load Balancing: Traffic is balanced across the physical NICs (team members) that make up the NIC team. The method of balancing depends on the version of Windows Server and whether or not the NIC team will be used for virtualization.
- Failover: The team members of a NIC team are fault tolerant. If a team member fails or loses connectivity then the traffic of that team member will be switched automatically to the other team member(s).
Converging networks through a NIC Team and Virtual Switch.
NIC teams have two types of setting. The first is the teaming mode:
- Switch Independent: The NIC team is configured independently of the top-of-rack (TOR) switches. It allows you to use a single (physical or stack) or multiple independent TOR switches.
- Static Teaming (switch dependent): With this inflexible design, each switch port connected to a team member must be configured for that team member.
- LACP (switch dependent): Switch ports must be configured to use the Link Aggregation Control Protocol (LACP), which allows the switch and the NIC team to automatically identify and negotiate the team member connections.
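As a sketch of how these modes map to PowerShell, the -TeamingMode parameter of New-NetLbfoTeam accepts a value for each of the three modes (the team and NIC names below are placeholders, not part of the design in this post):

```powershell
# Placeholder names (DemoTeam, NIC1, NIC2); only one of these would be run.

# Switch independent - no switch-side configuration required.
New-NetLbfoTeam -Name DemoTeam -TeamMembers NIC1,NIC2 -TeamingMode SwitchIndependent

# Static teaming - each connected switch port must be manually configured.
New-NetLbfoTeam -Name DemoTeam -TeamMembers NIC1,NIC2 -TeamingMode Static

# LACP - switch ports must be configured to negotiate the aggregation via LACP.
New-NetLbfoTeam -Name DemoTeam -TeamMembers NIC1,NIC2 -TeamingMode Lacp
```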
Load balancing configures how outbound traffic is spread across the NIC team. WS2012 has two options (with sub-options) and WS2012 R2 has three options (with sub-options):
- Address Hash: This method hashes address information (such as MAC addresses, IP addresses, or TCP/UDP ports) from each outbound packet and uses the result to decide which team member the traffic should traverse. This option is normally used for NIC teams that will not be connected to a Hyper-V virtual switch.
- Hyper-V Port: Each virtual NIC (management OS or virtual machine) is associated at start-up with a team member using round robin. That means a virtual NIC will continue to use the same single team member unless there is a restart. You still get link aggregation because the set of virtual NICs is spread across the set of team members in the NIC team.
- Dynamic: This is a new option in WS2012 R2. Traffic, even from a single virtual NIC, is spread fairly evenly across all team members.
Note that inbound traffic flow is determined by the TOR switch and is dependent upon the teaming mode and load balancing setting.
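In PowerShell, these methods correspond to values of the -LoadBalancingAlgorithm parameter: the Address Hash method is split into the TransportPorts, IPAddresses, and MacAddresses sub-options, alongside HyperVPort and (on WS2012 R2) Dynamic. As a sketch, assuming a placeholder team called DemoTeam, you can change and inspect the algorithm on an existing team:

```powershell
# Switch an existing team to the Hyper-V Port algorithm (DemoTeam is a
# placeholder name for illustration).
Set-NetLbfoTeam -Name DemoTeam -LoadBalancingAlgorithm HyperVPort

# Inspect the current teaming mode and load balancing algorithm.
Get-NetLbfoTeam -Name DemoTeam | Format-List TeamingMode, LoadBalancingAlgorithm
```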
We’ll keep things simple here, because you can write quite a bit about all the possible niche scenarios of NIC teaming. The teaming mode will often be determined by the network administrator. Switch independent is usually the simplest and best option.
If a NIC team will be used by a virtual switch, then:
- The NIC team should be used by that single virtual switch only. Don’t try to get clever: one NIC team = one virtual switch.
- Do not assign VLAN tags to the single (only) team interface that is created for the NIC team.
- On WS2012, load balancing will normally be set to Hyper-V Port.
- On WS2012 R2, it looks like Dynamic (the default option) will be the best way forward because it can spread outbound traffic from a single virtual NIC across team members, unlike Hyper-V port.
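To sanity-check the second point, you can inspect the team interface that Windows creates for the team. This is a sketch assuming the team is named ConvergedNetTeam; the default team interface should remain untagged (no VLAN ID set) when the team will host a virtual switch:

```powershell
# List the team interface(s) for the team and confirm no VLAN tag is assigned
# to the default interface.
Get-NetLbfoTeamNic -Team ConvergedNetTeam | Format-List Name, VlanID, Default
```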
Creating the NIC Team
Note: If you want to implement the VMM Logical Switch or use Windows/Hyper-V Network Virtualization then you should use VMM to deploy your host networking instead of deploying it directly on the host yourself.
There are countless examples of creating a WS2012 NIC team on the Internet using LBFOADMIN.EXE (the tool that is opened from a shortcut in Server Manager). Scripting is a better option, because a script will allow you to configure the entire networking stack of your hosts consistently across each computer, especially if you have servers that support Consistent Device Naming (CDN). Scripting is also more time efficient: simply write it once and run it in seconds on each new host. The following PowerShell examples require that the Hyper-V role is already enabled on your host.
The following PowerShell snippet will:
- Rename the CDN NICs to give them role-based names. Change the names to suit your servers. You should manually name the NICs if you do not have CDN-enabled hosts.
- Team the NICs in a switch independent team with Dynamic load balancing.
Rename-NetAdapter "Slot 2" -NewName VM1
Rename-NetAdapter "Slot 2 2" -NewName VM2
New-NetLbfoTeam -Name ConvergedNetTeam -TeamMembers VM1,VM2 -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
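After creating the team, it is worth verifying that it came up and that both members joined. A quick check, using the names from the snippet above:

```powershell
# Confirm the team exists and report its status (should be Up once both
# team members have connectivity).
Get-NetLbfoTeam -Name ConvergedNetTeam

# List the team members and their operational status.
Get-NetLbfoTeamMember -Team ConvergedNetTeam | Format-Table Name, OperationalStatus
```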
Converged Networks and Creating the Virtual Switch
In the converged networks scenario you should create the virtual switch using PowerShell. This allows you to enable and configure QoS. The following PowerShell snippet will:
- Create a virtual switch with weight-based QoS enabled and connect it to the new NIC team
- Reserve a share of bandwidth for any virtual NIC that does not have an assigned QoS policy. This share is assigned a weight of 50. That would be 50% if the total weight assigned to all items was 100.
New-VMSwitch "ConvergedNetSwitch" -NetAdapterName "ConvergedNetTeam" -AllowManagementOS $false -MinimumBandwidthMode Weight
Set-VMSwitch "ConvergedNetSwitch" -DefaultFlowMinimumBandwidthWeight 50
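You can then confirm that QoS was enabled correctly on the new switch. A quick check of the relevant properties:

```powershell
# BandwidthReservationMode should report Weight, and the default flow
# minimum bandwidth weight should be 50.
Get-VMSwitch ConvergedNetSwitch |
    Format-List Name, BandwidthReservationMode, DefaultFlowMinimumBandwidthWeight
```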
What Has Been Accomplished in the Design
Running these PowerShell cmdlets will create a NIC team that is ready for a Hyper-V virtual switch, and a virtual switch that is ready for Hyper-V converged networking.
Next up (in a future post) will be to create virtual NICs in the management OS and virtual machines to connect to the virtual switch.