Review: DataON Storage DNS-1640 JBOD for Storage Spaces

I have discussed Windows Server Storage Spaces in a number of previous articles here on the Petri IT Knowledgebase. A requirement of Storage Spaces is that you have a supported “just a bunch of disks” (JBOD) tray. You won’t find very many manufacturers in the Storage Spaces category on the Windows Server Catalog, and you might not even recognize most of the names there. One of those manufacturers is DataON Storage, one of the original makers of supported JBOD devices and a supplier of JBOD trays to Microsoft. I’ve been lucky enough to have had their 24-disk tray in my demo lab since last summer, and I’ve recently deployed one as part of a two-node Hyper-V cluster at work. In this article I will review the DataON DNS-1640 JBOD tray for use with Windows Server Storage Spaces.

Reminder: Storage Spaces

Microsoft added Storage Spaces in Windows Server 2012 (WS2012) and matured the feature in Windows Server 2012 R2 (WS2012 R2). The purpose of Storage Spaces is to provide a software alternative to hardware RAID. Please note that Storage Spaces is not the Windows RAID of the past. Windows RAID sucked. Storage Spaces is designed to be familiar to SAN administrators while overcoming some of the limitations of RAID.
Using a collection of dumb disks, as you would find in a JBOD, you can aggregate those disks into a Storage Pool. LUNs, known as Storage Spaces or Virtual Disks (depending on who you talk to and whether you use the GUI or PowerShell), are created from this disk pool. It is at this point that you configure your disk fault tolerance. We have two-way mirroring (like RAID 10), parity (like RAID 5), and further options, including hot spares (an approach that is now considered outdated in WS2012 R2). WS2012 R2 also added the ability to mix SSDs and HDDs to create automatically managed tiered storage; this gives you the speed of SSD with the economical bulk capacity of HDD.

Conceptual illustration of a storage pool with different kinds of virtual disks.
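
As a rough illustration of that workflow, here is a minimal PowerShell sketch; the pool and virtual disk names (“Pool1” and “Mirror1”) and the 500 GB size are hypothetical placeholders, not recommendations.

```powershell
# Gather every pool-eligible physical disk presented by the JBOD
$disks = Get-PhysicalDisk -CanPool $true

# Aggregate the disks into a new Storage Pool
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem -FriendlyName "*Storage Spaces*").FriendlyName `
    -PhysicalDisks $disks

# Carve a two-way mirrored virtual disk (the RAID 10-like option) from the pool
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Mirror1" `
    -ResiliencySettingName Mirror -Size 500GB -ProvisioningType Fixed
```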


Some services, such as virtualization and databases, bypass the write caches in storage controllers to avoid data corruption. This means that the storage controller’s write cache offers those services no performance boost. Storage Spaces allows you to take advantage of an SSD tier to improve the performance of services during spikes of write activity. By default, Write-Back Cache allocates 1 GB of the SSD tier (which happens to be the sweet spot) to each new virtual disk; you can change the size or disable this. During spikes in write activity, writes are sent to the SSD tier, committed to disk, and later demoted to the HDD tier of the virtual disk.
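
A hedged sketch of setting that cache explicitly when creating a virtual disk, reusing the hypothetical “Pool1” from the earlier example:

```powershell
# When the pool contains SSDs, a 1 GB write-back cache is allocated by default;
# -WriteCacheSize makes the choice explicit, and a value of 0 disables the cache
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VDisk1" `
    -ResiliencySettingName Mirror -Size 500GB -WriteCacheSize 1GB
```
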
The promise of Storage Spaces is software-defined storage (created in Windows via the GUI or PowerShell) that is universal across manufacturers (because the JBOD trays are “dumb”). You also get scalable storage that is more cost-effective than in-server or SAN storage, because you are acquiring economical JBOD trays and populating them with disks from (potentially) any manufacturer; you are no longer paying up to three times more for special-firmware disks supplied by the SAN or server maker.
Storage Spaces is a supported storage type for Windows Server failover clustering, joining traditional options such as iSCSI SAN, SAS-attached SAN, and Fibre Channel. Storage Spaces has a number of use scenarios, including:

  • Providing storage capacity to a non-clustered server.
  • Serving as the Cluster Shared Volumes (CSV) storage of a small (probably two-host) Hyper-V cluster.
  • Acting as the shared storage of a Scale-Out File Server (SOFS) that is then shared via SMB 3.0 to application servers, such as Hyper-V hosts.

The important thing to remember with Storage Spaces is that you cannot use just any old disk tray. You must use a JBOD that offers the hardware features required by Storage Spaces: shared SAS and SCSI Enclosure Services (SES). And for production systems, you should use a JBOD that is tested and supported for your version of Windows Server.
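
If you want to sanity-check a tray before trusting it with production data, something like the following should show whether Windows sees the enclosure via SES and whether the disks are pool-eligible (note that Get-StorageEnclosure requires WS2012 R2):

```powershell
# List the enclosures Windows can see via SES, with slot counts and health
Get-StorageEnclosure | Format-Table FriendlyName, NumberOfSlots, HealthStatus

# Confirm the disks present as SAS and report why any disk cannot be pooled
Get-PhysicalDisk | Format-Table FriendlyName, BusType, CanPool, CanPoolReason
```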

DataON Storage DNS-1640

The 2U DNS-1640 is DataON’s entry-level JBOD for Storage Spaces. It provides 24 x 2.5-inch disk slots, capable of hosting HDDs or SSDs. If 24 disks are not enough, you can daisy chain up to four JBOD trays for 96 disk slots. If that’s still not enough, the 4U, 60-disk DNS-1660 might do. Both JBOD models are certified for WS2012 and WS2012 R2 and can be used as cluster storage.
Thanks to SES, Storage Spaces knows the position of each disk in a Storage Pool, and this means that mirrored virtual disks can span trays. If you have (at least) three trays in a configuration, then you can afford to lose one of those trays and still retain all of your virtual disks and their data.
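
To get that behavior you create the virtual disk as enclosure-aware; a sketch, again using the hypothetical “Pool1”:

```powershell
# With at least three SES-capable trays in the pool, an enclosure-aware mirror
# keeps each copy of the data in a different tray
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TrayTolerant1" `
    -ResiliencySettingName Mirror -Size 500GB -IsEnclosureAware $true
```
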
A common question that I am asked when I talk about the DataON JBODs is: “Do they come with the disk caddies?” Yes, the JBOD comes complete with the disk caddies, so all you need to do is buy disks to populate them. A rail kit is also included for securely mounting the JBOD in a standard server rack (not a comms rack).

The 24 x 2.5-inch disk slots in the DataON Storage DNS-1640.

There is no administration console; everything is done from Windows Server via Storage Spaces (Server Manager, PowerShell, and/or Failover Cluster Manager). There are disk activity/health lights on the caddies, and status lights on the right-hand side of the tray. SES keeps Windows Server informed about disk location and health.
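
SES also lets Windows drive those lights. For example, you can blink a disk’s identification LED from PowerShell to find it in the tray; the disk name below is hypothetical:

```powershell
# Flash the identification LED of any unhealthy disk so it is easy to locate
Get-PhysicalDisk | Where-Object HealthStatus -ne "Healthy" |
    Enable-PhysicalDiskIndication

# Turn the LED off again once the disk has been replaced
Get-PhysicalDisk -FriendlyName "PhysicalDisk12" | Disable-PhysicalDiskIndication
```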

Disks

The beauty of Storage Spaces is that you can buy your disks from anywhere. In my early research I found that a 600 GB 10K disk from a server/SAN manufacturer cost over €600 (list price; over $800). I then looked for a similar disk on Amazon (not even trade price) and found it was just over €250 (about $340). The disks probably came from the same factory in Thailand, so what is the difference?
If you talk to a registered partner of one of the SAN manufacturers, they’ll tell you that each disk comes with specially written and tested firmware. This firmware gives you better performance and fault management. When you consider that this firmware appears to triple the cost of the disk, it sounds like a bunch of baloney. In truth, the firmware does improve the quality of the disk. However, that does not justify the cost increase, especially when a certain server/SAN manufacturer allegedly makes an 85 percent margin on their list price!
In theory you can insert any old dual-channel SAS disk into the DNS-1640, just as you can with any of the JBODs that support Storage Spaces. But this isn’t a good idea, as not all disks are created equal; that firmware really does make a difference. DataON continuously tests disks from a number of manufacturers and publishes its own hardware compatibility list (HCL) of recommended units for the DNS-1640.
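
A quick way to audit what is in the tray against that HCL is to dump each disk’s model and firmware revision:

```powershell
# Inventory the model and firmware of every disk so they can be checked
# against DataON's hardware compatibility list
Get-PhysicalDisk |
    Select-Object FriendlyName, Model, FirmwareVersion, MediaType, Size |
    Sort-Object Model | Format-Table -AutoSize
```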

Physical disks and virtual disks in my Storage Pool.

For clustered Storage Spaces you must use dual-channel SAS disks. You can use SATA disks with a DataON-supplied DNS-BJMB 9252 6Gb/s SAS interposer, but DataON strongly recommends using native SAS disks.
If you do want to use tiered storage, then you really need between four and eight SSDs per JBOD tray. SAS SSDs cost several thousand dollars each; don’t confuse them with “enterprise” SSDs that are intended for business PCs and laptops. This does make tiered Storage Spaces more expensive, but let’s be honest: tiered storage is not a typical requirement for small/medium businesses! An all-HDD deployment will suffice for most organizations of that scale. Only consider adding SSDs to your Storage Spaces deployment if you would have considered tiered storage in an alternative SAN installation.
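
For completeness, here is a hedged sketch of creating the two media tiers and a tiered virtual disk in PowerShell (WS2012 R2); the names and the 100 GB/900 GB split are illustrative, not recommendations:

```powershell
# Define an SSD tier and an HDD tier within the (hypothetical) pool
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD

# Create a mirrored virtual disk that spans both tiers
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredSpace1" `
    -StorageTiers $ssd, $hdd -StorageTierSizes 100GB, 900GB `
    -ResiliencySettingName Mirror
```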

Connectivity

This is a subject that most people who are new to Storage Spaces overthink. The architecture is actually quite simple. Let’s start with a simple example with a single shared JBOD.
Typically you will connect two clustered servers to use the JBOD as shared cluster storage. Each server will have a dual-port mini-SAS adapter configured with Windows Server MPIO. Two 6 Gbps SAS cables connect each server to the JBOD. Another common misconception is that the cables are a bottleneck: each SAS cable actually contains four channels running at 6 Gbps, providing a total of 24 Gbps of storage bandwidth, which is much faster than 4 Gbps or 8 Gbps Fibre Channel!
The JBOD has two interface modules (SIM Board 1 and SIM Board 2). Each SIM has a SAS IN 1 and a SAS IN 2 port. Server 1 is connected to SAS IN 1 on SIM Board 1 and to SAS IN 1 on SIM Board 2; Server 2 connects to the SAS IN 2 ports in the same way. This means that, thanks to MPIO, you have fault tolerance for the SAS adapter ports, the SAS cables, and the SIM Boards on the JBOD.
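
On each server, the MPIO side of that design takes only a few commands; a sketch, assuming the in-box Microsoft DSM:

```powershell
# Install the MPIO feature and let the Microsoft DSM claim SAS-attached devices
Add-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType SAS

# Least Blocks is the load-balancing policy commonly recommended for dual-path SAS
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LB
```
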
There are also two fault-tolerant PSUs, which should ideally be plugged into two different power circuits in the computer room.

Rear view of fault tolerant DNS-1640.

The third SAS port on each SIM Board (SAS OUT) is used to daisy chain up to four trays into a single storage brick. This scales your configuration out to 96 disk slots and, with at least three trays, provides tray fault tolerance.

Adding more DNS-1640 trays to scale out and add fault tolerance (single server connection).

Simplicity

There’s not much more to the DNS-1640 – it’s quite a simple device. That’s the magic of JBODs: All the work is done in Windows Server where your skills are easier to develop and are universal across JBOD manufacturers. There is a small manual for the DNS-1640 that provides racking instructions and guidance for disk placement.
The device itself is quite basic. That’s another strength. When I’ve asked DataON about their support experience, they told me that the JBODs have proven to be resilient. The only moving pieces are the third-party disks, and those are typically what needs to be replaced. Some customers will probably acquire additional spare disks with some of the savings made by going with Storage Spaces.
My experience is that I racked the JBOD and pretty much forgot about it. I’ve rebuilt my demo SOFS cluster numerous times, and I’ve not had to touch the JBOD since we acquired it. At work we just completed the migration of two non-clustered Hyper-V hosts to a cluster by adding a DNS-1640 (with HDDs only) and using Storage Live Migration to relocate the VM files from the D: drives to the CSVs on the JBOD. Storage Spaces and the JBOD just worked, making my life easier, and that’s exactly how we want it in the small business, as well as in the large enterprise or public cloud (where the DNS-1660 might fit better).
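
For reference, that last migration step is a one-liner per VM; the VM name and CSV path here are hypothetical:

```powershell
# Move a running VM's files from local storage to a CSV on the JBOD
Move-VMStorage -VMName "FileServer01" `
    -DestinationStoragePath "C:\ClusterStorage\Volume1\FileServer01"
```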