Managing SMB Multichannel in SMB 3.0

Last Update: Sep 04, 2024 | Published: Jul 16, 2014


SMB 3.0 is the version of Microsoft's Server Message Block file-sharing protocol that was introduced in Windows Server 2012 (WS2012), and it evolved to SMB 3.02 in Windows Server 2012 R2 (WS2012 R2). One of the features of SMB 3.0 is SMB Multichannel, which can cause unexpected issues if left unmanaged. In this series, I will explain why you need to manage SMB Multichannel and how you can place constraints on the feature to control which NICs it uses. In this post, I'll start by looking at how SMB selects a NIC to use.

What is SMB Multichannel?

SMB Multichannel is one of the features added in SMB 3.0 that make this old protocol ready for enterprise-scale and cloud-scale data flow. The role of SMB Multichannel is to allow Windows Server (and the Windows client, too) to make the most of whatever network connections are available to push data as quickly as possible from a client to a server. SMB Multichannel works in two ways:

  1. Multiple streams over a NIC: SMB Multichannel is capable of sending many simultaneous streams of data across a network connection. The benefit is that SMB can fill a network, which is a good thing when you need data to go from A to B as quickly as possible. This capability overcomes limitations seen in previous versions of Windows when faced with high-bandwidth connections.
  2. Multiple NICs: During the initial connection, the SMB client and server perform a mutual discovery of capabilities and connections. If the two computers discover multiple network connections of the same speed and capabilities, then SMB Multichannel will automatically aggregate the bandwidth of those NICs and provide fault tolerance. You can add or remove NICs, and the flow of data will continue. (You can observe this capability exchange with the PowerShell sketch after this list.)
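
If you want to see this capability exchange for yourself, the SMB PowerShell cmdlets that shipped with WS2012 expose it. The following is a minimal sketch; the cmdlets are standard, but the exact output columns vary a little between Windows versions:

  # Run on the SMB client (for example, a Hyper-V host): lists the NICs the
  # client advertises during the Multichannel capability exchange, including
  # link speed and whether each NIC is RSS- or RDMA-capable.
  Get-SmbClientNetworkInterface

  # Run on the SMB server (for example, a file server node): the server-side
  # view of the same exchange.
  Get-SmbServerNetworkInterface

  # Once data is flowing, list the connections that SMB Multichannel
  # actually established and which interfaces they use.
  Get-SmbMultichannelConnection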

It is the latter behavior that commonly causes problems. Some Windows features use SMB Multichannel, and this can lead to unexpected use of networks. Alternatively, there are scenarios where people have implemented network designs but don't get the results they expected because SMB Multichannel went in a different direction. I'll explain why these things happen and how you can control them in the rest of this post.

How SMB Multichannel Chooses Different NICs to Use

There have been scenarios where people have implemented unusual network designs in their Hyper-V hosts and SMB 3.0 travels across NICs that they did not expect. The following diagram illustrates such an example. The Hyper-V host has been deployed using System Center Virtual Machine Manager (SCVMM) bare-metal deployment. The consultant incorrectly thought that they had to deploy the host with a physical NIC for the management OS, and they used management OS virtual NICs for SMB 3.0 storage traffic. Now the SMB 3.0 storage traffic is flowing via the physical management NIC (1 GbE) and not the dedicated SMB 3.0 storage virtual NICs (connected to a 10 GbE NIC team).

Incorrectly designed SMB Multichannel storage: the SMB traffic unexpectedly traverses the physical management NIC. (Image: Aidan Finn)

There is a logical reason for this, and it is the process that SMB Multichannel uses to select NICs. NICs are chosen in the following order (a PowerShell sketch for checking your own adapters follows the list):

  1. rNICs: Any NICs that support Remote Direct Memory Access (RDMA), known as SMB Direct in Windows, are chosen first. rNICs offer the lowest latency and the least impact on the processor when pushing through large amounts of data.
  2. RSS-enabled NICs: Any NIC that supports Receive Side Scaling (RSS) offers great scalability by not being restricted to core 0 on the server. You can learn more about RSS in the Petri IT Knowledgebase article, “Hyper-V Hardware Offloads for Networking.”
  3. The highest-speed NICs: A 10 GbE NIC offers more throughput than a 1 GbE NIC.
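
If you want to check where your own NICs land in this pecking order before SMB does, you can query the adapters directly. A minimal sketch (adapter capabilities and names will differ in your environment):

  # Which adapters support RDMA and have it enabled (SMB Direct candidates)?
  Get-NetAdapterRdma

  # Which adapters have RSS enabled? These are preferred over non-RSS NICs.
  Get-NetAdapterRss

  # Link speed is the final tie-breaker.
  Get-NetAdapter | Format-Table Name, LinkSpeed, Status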


When SMB 3.0 looked at the available NICs in this example, the ordering was:

  1. rNICs: None available.
  2. RSS-enabled NICs: Choose the one that is available, which is the management OS NIC.

Note that Windows Server 2012 R2 does offer virtual RSS (vRSS), but this feature is not available to management OS virtual NICs. This is why the SMB 3.0 traffic will not select the SMB1 and SMB2 virtual NICs that were provided for storage traffic.
Note: When building a scale-out file server, you also need to ensure that you configure the DNS settings (in the NICs' TCP/IP settings) and enable client access (in Failover Cluster Manager) for the desired storage NICs.
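
As a rough illustration of that note, the same two steps can be scripted; this is a sketch only, assuming the DNS setting in question is per-interface DNS registration, and “vEthernet (SMB1)” and “Storage1” are hypothetical names for a storage virtual NIC and its cluster network:

  # Let the storage NIC register its address in DNS so that clients
  # resolve the scale-out file server to this interface.
  Set-DnsClient -InterfaceAlias "vEthernet (SMB1)" -RegisterThisConnectionsAddress $true

  # Allow client access on the storage cluster network (the same setting as
  # "Allow clients to connect through this network" in Failover Cluster
  # Manager). Role 3 = cluster and client traffic.
  (Get-ClusterNetwork -Name "Storage1").Role = 3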

SMB 3.0 is More Than Storage

Windows Server uses SMB 3.0 for more than storage. Since WS2012, Failover Clustering has used SMB 3.0 for redirected IO. And if you are lucky enough to have 10 GbE or more of bandwidth for Live Migration, you can enable SMB Live Migration. Both of these features use SMB 3.0 in the same way that Hyper-V over SMB 3.0 does, and SMB Multichannel selects NICs in the same way. So it is important to understand this process when you are designing Hyper-V host or storage networking.
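
If you do enable SMB Live Migration on WS2012 R2, it is a per-host setting. A minimal sketch:

  # On each Hyper-V host: use SMB as the Live Migration transport so that
  # SMB Multichannel (and SMB Direct, if rNICs are present) can carry the traffic.
  Set-VMHost -VirtualMachineMigrationPerformanceOption SMB

  # Confirm the setting.
  Get-VMHost | Format-List VirtualMachineMigrationPerformanceOption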

Restrict SMB Multichannel to Selected NICs

In part 2 of this series, I will show you how to restrict SMB Multichannel to selected NICs to avoid spill-over into other networks.
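
As a quick preview, the constraint is applied on the SMB client with a single cmdlet. In this sketch, “FS01”, “SMB1”, and “SMB2” are hypothetical server and interface names:

  # Tell the SMB client to use only the listed interfaces when talking to FS01.
  New-SmbMultichannelConstraint -ServerName "FS01" -InterfaceAlias "SMB1", "SMB2"

  # Review (or remove) existing constraints.
  Get-SmbMultichannelConstraint
  # Remove-SmbMultichannelConstraint -ServerName "FS01" -InterfaceAlias "SMB1"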
