Managing SMB Multichannel in SMB 3.0

SMB 3.0 is Microsoft’s file-sharing protocol, introduced in Windows Server 2012 (WS2012); the protocol evolved to SMB 3.02 in Windows Server 2012 R2 (WS2012 R2). One of the features of SMB 3.0 is SMB Multichannel, which can cause unexpected issues if left unmanaged. In this series, I will explain why you need to manage SMB Multichannel and how you can place constraints on the feature to control which NICs it uses. In this post, I’ll start by looking at how SMB selects a NIC to use.

What is SMB Multichannel?

SMB Multichannel is one of the features added in SMB 3.0 that make this old protocol ready for enterprise-scale and cloud-scale data flows. The role of SMB Multichannel is to allow Windows Server (and the Windows client, too) to make the most of whatever network connections are available to push data as quickly as possible from a client to a server. SMB Multichannel works in two ways:

  1. Multiple streams over a NIC: SMB Multichannel can send many simultaneous streams of data across a single network connection. The benefit is that SMB can fill the link, which is a good thing when you need data to go from A to B as quickly as possible. This overcomes the limitations seen in previous versions of Windows when faced with high-bandwidth connections.
  2. Multiple NICs: During the initial connection, the SMB client and server perform a mutual discovery of capabilities and connections. If the two computers discover multiple network connections with the same speed and capabilities, then SMB Multichannel automatically aggregates the bandwidth across those NICs with fault tolerance. You can add or remove NICs, and the flow of data will continue.
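As a rough illustration of the second behaviour, here is a minimal sketch in Python. The function name and the tuple representation are invented for illustration; this is not the actual SMB negotiation protocol, just the idea that only NIC pairs with matching speed and capability are aggregated:

```python
def usable_bandwidth(client_nics, server_nics):
    """Sketch of SMB Multichannel aggregation: only NIC pairs with
    matching speed and capability are aggregated, and the usable
    bandwidth is the sum across all matching pairs.
    Each NIC is a (speed_gbps, rdma_capable) tuple."""
    total = 0
    server_pool = list(server_nics)
    for nic in client_nics:
        if nic in server_pool:      # a server NIC with matching speed/capability
            server_pool.remove(nic) # each server NIC pairs up only once
            total += nic[0]         # aggregate the link speed
    return total

# Two matched 10 GbE NICs on each side: SMB can use 20 Gbps in total.
print(usable_bandwidth([(10, False), (10, False)], [(10, False), (10, False)]))

# A mismatched 1 GbE NIC on the client finds no partner and adds nothing.
print(usable_bandwidth([(10, False), (1, False)], [(10, False), (10, False)]))
```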

It is the latter behaviour that is commonly a problem. Some Windows features use SMB Multichannel, and as a result we see unexpected use of networks. Alternatively, there are scenarios where people have implemented network designs but did not get the results they expected because SMB Multichannel sent traffic in a different direction. I’ll explain why these things happen and how you can control them in the rest of this post.

How SMB Multichannel Chooses Different NICs to Use

There have been scenarios where people implemented unusual network designs in their Hyper-V hosts and found SMB 3.0 travelling across NICs that they did not expect. The following diagram illustrates such an example. The Hyper-V host was deployed using System Center Virtual Machine Manager (SCVMM) bare-metal deployment. The consultant incorrectly thought that they had to deploy the host with a physical NIC for the management OS, and they used management OS virtual NICs for SMB 3.0 storage traffic. As a result, the SMB 3.0 storage traffic flows via the physical management NIC (1 GbE) and not the dedicated SMB 3.0 storage virtual NICs (connected to a 10 GbE NIC team).



Incorrectly designed SMB Multichannel storage. (Image: Aidan Finn)

There is a logical reason for this, and it is the process that SMB Multichannel uses to select NICs. NICs are chosen in the following order:

  1. rNICs: Any NICs that support Remote Direct Memory Access (RDMA), known as SMB Direct in Windows, are chosen first. rNICs offer the lowest latency and the least impact on the processor when pushing through large amounts of data.
  2. RSS-enabled NICs: Any NIC that supports Receive Side Scaling (RSS) offers great scalability by not being restricted to core 0 on the server. You can learn more about RSS in the Petri IT Knowledgebase article, “Hyper-V Hardware Offloads for Networking.”
  3. The highest speed NICs: A 10 GbE NIC offers more throughput than a 1 GbE NIC.
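To make the ordering concrete, here is a minimal sketch in Python. The NIC records and names are hypothetical (the real selection logic lives inside the SMB client), but the ranking rule is the one described above: rNICs first, then RSS capability, then raw link speed:

```python
from dataclasses import dataclass

@dataclass
class Nic:
    name: str
    rdma: bool       # SMB Direct capable (rNIC)
    rss: bool        # Receive Side Scaling capable
    speed_gbps: int  # link speed

def smb_preference(nics):
    """Rank NICs the way SMB Multichannel does: rNICs first,
    then RSS-capable NICs, then the highest link speed."""
    return sorted(nics, key=lambda n: (n.rdma, n.rss, n.speed_gbps), reverse=True)

# The scenario from the diagram: a 1 GbE physical management NIC with RSS,
# and two management OS virtual NICs on a 10 GbE team (no vRSS on WS2012 R2).
nics = [
    Nic("Management (physical)", rdma=False, rss=True,  speed_gbps=1),
    Nic("SMB1 (virtual)",        rdma=False, rss=False, speed_gbps=10),
    Nic("SMB2 (virtual)",        rdma=False, rss=False, speed_gbps=10),
]

for n in smb_preference(nics):
    print(n.name)
# The RSS-capable 1 GbE NIC ranks first, despite the faster virtual NICs.
```

Note that RSS outranks raw speed, which is exactly why the misconfigured host in the diagram pushes storage traffic down the slower management NIC.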

When SMB 3.0 looked at the available NICs in this design, the ordering was:

  1. rNICs: None available.
  2. RSS-enabled NICs: Choose the one that is available, which is the management OS NIC.

Note that Windows Server 2012 R2 does offer virtual RSS (vRSS), but this feature is not available to management OS virtual NICs. This is why the SMB 3.0 traffic does not select the provided SMB1 and SMB2 virtual NICs.

Note: When building a Scale-Out File Server, you also need to ensure that you configure the DNS settings (TCP/IP settings) and enable client access (Failover Cluster Manager) for the desired storage NICs.

SMB 3.0 is More Than Storage

Windows Server uses SMB 3.0 for more than storage. Since WS2012, Failover Clustering has used SMB 3.0 for redirected IO. And if you are lucky enough to have 10 GbE or more of bandwidth for Live Migration, you can enable SMB Live Migration. Both of these features use SMB 3.0 in the same way that Hyper-V over SMB 3.0 does, and SMB Multichannel selects NICs in the same way. It is therefore important to understand this process when you are designing Hyper-V host or storage networking.

Restrict SMB Multichannel to Selected NICs

In part 2 of this series, I will show you how to restrict SMB Multichannel to selected NICs to avoid spill-over into other networks.


Aidan Finn, Microsoft Most Valuable Professional (MVP), has been working in IT since 1996. He has worked as a consultant and administrator for the likes of Innofactor Norway, Amdahl DMR, Fujitsu, Barclays, and Hypo Real Estate Bank International, where he dealt with large and complex IT infrastructures, and at MicroWarehouse Ltd., where he worked with Microsoft partners in the small/medium business space.