Last Update: Nov 19, 2024 | Published: Sep 11, 2013
In this post we will show you how to implement Quality of Service (QoS) bandwidth management rules that are enforced by Data Center Bridging (DCB)-capable hardware.
QoS rules that are managed by DCB can be used when you are dealing with networking that does not pass through a Hyper-V virtual switch in the management operating system. This form of bandwidth management is enforced by DCB-capable hardware; your source/destination NICs and the intermediate switches must support DCB. The benefits of DCB are:
You can use DCB to provide QoS for non-virtual switch traffic in:
In this design we will be converging several networks on WS2012, as shown below.
A possible WS2012 converged networks design.
Two physical RDMA-capable NICs are being used for the host networking. Try not to overthink this! That is a common mistake for newcomers to this kind of networking. The design is quite simple:
This configuration is a requirement for SMB Multichannel communications to a cluster such as a Scale-Out File Server (SOFS). Each NIC has a single IP address. Critical services such as cluster heartbeat, live migration, and SMB Multichannel will fail over across the non-teamed NICs automatically if you have configured them correctly. For example, enable both NICs for live migration in your host/cluster settings. Also keep in mind that RDMA and NIC teaming are incompatible.
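If you want to confirm that SMB Multichannel can actually see and use both rNICs, a couple of built-in cmdlets give a quick sanity check (run these on the Hyper-V host once some SMB traffic has flowed to the SOFS):

# List the client-side interfaces SMB Multichannel can use, including
# whether they are RSS- and RDMA-capable
Get-SmbClientNetworkInterface

# Confirm that active SMB connections exist across both rNICs
Get-SmbMultichannelConnection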
Just as with QoS that is enforced by the virtual switch, your total assigned weight should not exceed 100. This makes the algorithm work more effectively and makes your design easier to understand, because the weights can then be read as percentages. On this host with 10 GbE networking we will assign weights of 40 to SMB Direct, 30 to Live Migration, 20 to Backup, and 10 to the cluster heartbeat.
There are two phases for implementing DCB bandwidth management:
In PowerShell, there are four core steps, which we will walk through below.
If you are using RDMA over Converged Ethernet (RoCE) NICs, then you must also enable Priority Flow Control (PFC) using Enable-NetQosFlowControl to prioritize the flow of SMB Direct traffic – think of this as the bus or car-pooling lane on a busy highway where SMB Direct gets an easy ride past the other traffic on the NICs’ circuitry.
The following will create the four required QoS rules for our example. Note that the IEEE 802.1p priority flag is being used. These priorities will be used to map the QoS rules (or policies) to the DCB traffic classes.
# Create a rule for the backup protocol sent to TCP port 10000
New-NetQosPolicy "Backup" -IPDstPort 10000 -Priority 7

# Create a rule for the RDMA/SMB Direct protocol sent to port 445.
# Note the use of NetDirectPort for RDMA
New-NetQosPolicy "SMB Direct" -NetDirectPort 445 -Priority 3

# Create a rule for the cluster heartbeat protocol sent to TCP port 3343
New-NetQosPolicy "Cluster" -IPDstPort 3343 -Priority 6

# Create a rule for Live Migration using the built-in filter
New-NetQosPolicy "Live Migration" -LiveMigration -Priority 5
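If you want to double-check the rules, Get-NetQosPolicy will list them along with the 802.1p priority that each one applies:

# Verify the QoS policies and their 802.1p priority values
Get-NetQosPolicy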
You can enable the DCB feature in Server Manager:
Enabling the Data Center Bridging role in Server Manager.
Alternatively, if you are writing a script to configure your networking, then you might use PowerShell to install the feature:
Install-WindowsFeature Data-Center-Bridging
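If you are scripting this across many hosts, you may want to confirm that the feature is present before continuing, for example:

# Confirm that the Data Center Bridging feature is installed
Get-WindowsFeature Data-Center-Bridging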
In the next step you will create the DCB traffic classes. Each cmdlet will create a class that matches the corresponding higher-level QoS rule:
# Create a traffic class on DCB NICs for backup traffic
New-NetQosTrafficClass "Backup" -Priority 7 -Algorithm ETS -Bandwidth 20

# Create a traffic class on DCB NICs for SMB Direct
New-NetQosTrafficClass "SMB Direct" -Priority 3 -Algorithm ETS -Bandwidth 40

# Create a traffic class on DCB NICs for Cluster
New-NetQosTrafficClass "Cluster" -Priority 6 -Algorithm ETS -Bandwidth 10

# Create a traffic class on Data Center Bridging (DCB) NICs for Live Migration
New-NetQosTrafficClass "Live Migration" -Priority 5 -Algorithm ETS -Bandwidth 30
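As with the QoS rules, you can review the traffic classes (including the default class that carries all remaining traffic) before moving on:

# List the DCB traffic classes with their ETS bandwidth and priorities
Get-NetQosTrafficClass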
The following step is a requirement for RoCE NICs and should not be used on iWARP or InfiniBand NICs. It will force RoCE NICs to prioritize SMB Direct (Priority 3) over other protocols.
Enable-NetQosFlowControl -Priority 3
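Optionally, you can also be explicit about the other priorities so that only the SMB Direct class (Priority 3) is treated as lossless; this is a common hardening step rather than part of the required configuration:

# Explicitly disable PFC for every priority except 3 (SMB Direct)
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7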
The first step will be to configure the DCBX setting so that the NICs use the DCB settings from the operating system instead of accepting configuration pushed from the switches:
Set-NetQosDcbxSetting -Willing $false
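Get-NetQosDcbxSetting will confirm the change; Willing should now report False:

# Confirm that the DCBX Willing setting is now false
Get-NetQosDcbxSetting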
The second step will be to enable DCB on the RDMA-capable NICs (rNICs). This is much easier if you have servers capable of Consistent Device Naming (CDN), because you can then predict the name of the required NICs on every host:
Enable-NetAdapterQos "rNIC1"
Enable-NetAdapterQos "rNIC2"
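Finally, you can check what the rNICs ended up with; Get-NetAdapterQos shows the operational traffic classes and flow control settings per adapter (the NIC names here follow the CDN example above):

# Show the operational DCB settings (traffic classes, PFC) on each rNIC
Get-NetAdapterQos "rNIC1"
Get-NetAdapterQos "rNIC2"

# Optionally confirm that RDMA is enabled on both adapters
Get-NetAdapterRdma "rNIC1","rNIC2"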
There is more work involved in configuring QoS that is managed by DCB, but the effort is worth it. Thanks to DCB you can take advantage of RDMA for economical and fast SMB 3.0 storage (WS2012 and WS2012 R2) and use SMB Direct to offload the processing of live migration (WS2012 R2).