Last Update: Sep 24, 2024 | Published: Jul 29, 2013
In this post we will show you how to create a Scale-Out File Server (SOFS) to provide scalable, transparently failing-over storage via SMB 3.0. This lab uses Windows Server 2012 R2, but the instructions also apply to Windows Server 2012.
(If you need to do some catching up, check out our previous articles on Windows Server 2012 SMB 3.0 File Shares: An Overview and Windows Server 2012: SMB 3.0 and the Scale-Out File Server.)
The design of the SOFS nodes in this example is shown in the diagram below, which illustrates a single node of the SOFS cluster. Each node has four NICs, used for two roles.
It is the SMB NICs that you would consider using Remote Direct Memory Access (RDMA) networking for SMB Direct. It isn't essential, but SMB Direct does give better performance, especially as your SMB 3.0 workload grows. You will implement QoS for the cluster and SMB 3.0 protocols. How this is done depends on whether you are using SMB Direct. RDMA requires Data Center Bridging (DCB) QoS rules, which must be supported by your NICs and by the switches along the end-to-end SMB path. If you will not be using RDMA/SMB Direct, you can implement simpler per-protocol QoS rules that are applied by the OS Packet Scheduler.
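If you want to script the QoS piece, the following is a minimal sketch of both approaches; the bandwidth weights, priority value, and NIC names (SMB1, SMB2) are assumptions for this lab rather than recommendations.

```powershell
# Option 1: per-protocol QoS enforced by the OS Packet Scheduler (no RDMA/SMB Direct).
# The weights are illustrative only.
New-NetQosPolicy "SMB"     -SMB -MinBandwidthWeightAction 50
New-NetQosPolicy "Cluster" -IPDstPortMatchCondition 3343 -MinBandwidthWeightAction 10

# Option 2: DCB rules for RDMA/SMB Direct (requires DCB support in the NICs and switches).
New-NetQosPolicy "SMB Direct" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
New-NetQosTrafficClass "SMB Direct" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Enable-NetQosFlowControl -Priority 3
Enable-NetAdapterQos -Name "SMB1","SMB2"
```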
The cluster is created, and one Cluster Shared Volume (CSV) has been created for each node in the file server cluster (two in this example). The CSVs can reside on block storage or on Storage Spaces.
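For reference, a cluster and CSVs like the ones described here could be created with something like the sketch below; the static address and the default "Cluster Disk" names are assumptions for this lab.

```powershell
# Create the two-node cluster used in this lab (the static address is an assumption).
New-Cluster -Name Demo-FSC1 -Node Demo-FS1, Demo-FS2 -StaticAddress 10.0.1.50

# Add the available disks to the cluster and convert one per node into a CSV.
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"
Add-ClusterSharedVolume -Name "Cluster Disk 2"
```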
An organizational unit (OU) has been created to store the computer objects of the SOFS cluster. This allows us to create a restricted delegation of security within AD. You need to delegate permission to create computer objects in this OU to the SOFS cluster's computer object (the cluster name object, or CNO). Failing to do this causes the most common failure in SOFS deployment: the SOFS role will attempt to start on the cluster and will fail, reporting that it cannot create the computer object for the SOFS role in AD.
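One scripted way to grant that delegation is sketched below; the OU distinguished name is an assumption for this lab, and you can achieve the same result with the Delegation of Control wizard in Active Directory Users and Computers.

```powershell
# Grant the cluster name object (Demo-FSC1$) the right to create computer objects
# in the SOFS OU. The OU path is an assumption for this lab.
dsacls "OU=SOFS,DC=demo,DC=internal" /G "DEMO\Demo-FSC1$:CC;computer"
```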
Another common error is one you see in a lab environment where the same SOFS is constantly being deployed and destroyed. You need to make sure that the SOFS role’s A records have been deleted from DNS.
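A quick way to check for and clear those stale records on the DNS server is sketched below; the zone name matches this lab and the cmdlets require the DNS Server PowerShell module.

```powershell
# Look for leftover A records from a previous Demo-SOFS1 deployment, then remove them.
Get-DnsServerResourceRecord -ZoneName "demo.internal" -Name "Demo-SOFS1" -RRType A
Remove-DnsServerResourceRecord -ZoneName "demo.internal" -Name "Demo-SOFS1" -RRType A -Force
```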
By default, only a network with a default gateway is enabled for both cluster use and client connectivity; networks without a gateway are set for cluster use only. The SOFS nodes must therefore be configured to use the SMB NICs (shown as Cluster1 and Cluster2 below) for client connectivity. These are the NICs that application servers (such as Hyper-V hosts) will connect to for SMB 3.0 data streaming. Note that SMB constraints will be enabled on the application servers to limit SMB Multichannel to their NICs that are on this private SMB network.
Edit the properties of each SMB NIC and check the box to Allow Clients To Connect Through This Network. This will create a warning that you can ignore in the case of a SOFS; the SOFS role will actually create a new computer object in Active Directory. That object will register DNS records for itself using each of the IP addresses of the nodes themselves. The SOFS does not use an IP address of its own. SMB clients (application servers) will use these DNS records to find the SOFS, and SMB constraints will restrict those SMB clients to just the NICs on the private network.
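The same settings can be scripted. In the sketch below, the cluster network names (Cluster1, Cluster2) are the ones used in this lab, and the interface aliases on the application servers are assumptions.

```powershell
# PowerShell equivalent of ticking "Allow Clients To Connect Through This Network".
(Get-ClusterNetwork "Cluster1").Role = 3   # 3 = cluster and client traffic
(Get-ClusterNetwork "Cluster2").Role = 3

# On each application server (for example, a Hyper-V host), constrain SMB Multichannel
# to the NICs on the private SMB network; the interface aliases are assumptions.
New-SmbMultichannelConstraint -ServerName Demo-SOFS1 -InterfaceAlias "SMB1","SMB2"
```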
The next step is very important. There are two kinds of clustered file server: File Server For General Use (the traditional active/passive file server for information-worker shares) and Scale-Out File Server For Application Data (the active/active SOFS for workloads such as Hyper-V and SQL Server). Make sure you select Scale-Out File Server For Application Data.
The next step is to name the Client Access Point (CAP). The CAP is the computer object that will be created in AD for Kerberos authentication and authorization, and the fully qualified domain name (FQDN) will be registered in DNS with the IP addresses of the nodes (NICs with client communications enabled – above). In this lab, the cluster is called Demo-FSC1.demo.internal. That is the name that is used to manage the cluster. The CAP will be called Demo-SOFS1.demo.internal. That will be the name that SMB clients (application servers) will use to connect to shares on the SOFS.
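If you prefer PowerShell to the High Availability Wizard, the role can be added and the CAP named in one line, shown here against this lab's cluster.

```powershell
# Add the Scale-Out File Server role and name its Client Access Point.
Add-ClusterScaleOutFileServerRole -Name Demo-SOFS1 -Cluster Demo-FSC1
```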
The new SOFS role will be added to the cluster, and the cluster will attempt to start the role. If the role fails to start, work through the common causes described earlier: missing AD delegation on the SOFS OU and stale A records left in DNS by a previous deployment.
You should find that a new computer object has been created and that each client-enabled NIC in the cluster has had its IP address registered in DNS under the A record of the SOFS CAP. Note how the two nodes (Demo-FS1 and Demo-FS2) have also registered their own overlapping A records in DNS.
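A quick way to verify both results (assuming the ActiveDirectory and DnsClient modules are available where you run this) is:

```powershell
# Confirm the CAP computer object was created.
Get-ADComputer Demo-SOFS1

# The CAP name should resolve to one A record per client-enabled NIC in the cluster.
Resolve-DnsName Demo-SOFS1.demo.internal -Type A
```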
What you have done up to now is the equivalent of setting up a SAN. The next step in a SAN is to provision LUNs; in our case, we will create shared folders in the scale-out file server.
You need to share the storage space of the cluster’s CSVs with application servers. This is done by creating shared folders and setting their permissions.
Note that it can take a minute or two for the SOFS role to come online on all nodes in the cluster; the share-creation wizard will fail to launch until this happens.
Note how the UNC path of the shared folder is automatically built up for you. Our SOFS CAP is called Demo-SOFS1. The share is called Virtual Disk 1. Clients will therefore connect to \\Demo-SOFS1\Virtual Disk 1. Clients connect to this single path to access this share, which resides on between 2 and 8 nodes in the cluster. SMB 3.0, DNS, and the SOFS role handle name resolution, connectivity, and failover “auto-magically.”
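If you would rather script the share than use the wizard, the following sketch creates the folder on a CSV and shares it, scoped to the SOFS CAP; the CSV path and group name are assumptions for this lab.

```powershell
# Create the folder on a CSV, then share it via the SOFS CAP (Demo-SOFS1).
New-Item -ItemType Directory -Path "C:\ClusterStorage\Volume1\Shares\Virtual Disk 1"
New-SmbShare -Name "Virtual Disk 1" -Path "C:\ClusterStorage\Volume1\Shares\Virtual Disk 1" `
    -ScopeName Demo-SOFS1 -FullAccess "demo\Hyper-V Hosts" -ContinuouslyAvailable:$true
```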
The one golden rule is to configure the permissions to be as restrictive as possible. Use security groups, even if they require member servers to be rebooted to pick up their group membership. This should only need to be done once, when a new host or application server is brought online – consider using Live Migration to eliminate service availability downtime.
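As a sketch of that golden rule (the group names demo\Hyper-V Hosts and demo\Hyper-V Admins are assumptions), you could tighten the share like this:

```powershell
# Remove the default Everyone entry and grant full control only to the required groups.
Revoke-SmbShareAccess -Name "Virtual Disk 1" -ScopeName Demo-SOFS1 -AccountName Everyone -Force
Grant-SmbShareAccess -Name "Virtual Disk 1" -ScopeName Demo-SOFS1 `
    -AccountName "demo\Hyper-V Hosts","demo\Hyper-V Admins" -AccessRight Full -Force

# Mirror the share permissions onto the folder's NTFS ACL.
Set-SmbPathAcl -ShareName "Virtual Disk 1"
```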
How many shares is the right number? There is no guidance yet, but you might consider the following.