Last Update: Sep 04, 2024 | Published: May 18, 2016
In today’s Ask the Admin, I’ll explain the differences between four types of disk storage: Just a Bunch of Disks (JBOD), Direct Attached Storage (DAS), Network Attached Storage (NAS), and Storage Area Networks (SANs).
There’s a bewildering choice of storage options available today, and while some organizations are moving data to the cloud, there’s still an immediate requirement for onsite storage. If you’re looking into disk storage solutions, the first challenge is to understand the differences between the four main types of storage.
JBOD is a collection of disks in an enclosure presented to the OS either as individual drives or combined into larger logical volumes. However, there's no support for RAID fault tolerance or performance optimization. For some applications that doesn't matter: Exchange Server Database Availability Groups (DAGs), for example, replicate databases at the application level, so they're commonly stored on JBOD when budgets are limited.
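To make the "combination of drives as a larger logical volume" idea concrete, here's a minimal sketch of spanning two JBOD disks into a single volume with LVM on Linux. The device names (/dev/sdb, /dev/sdc) and volume names are assumptions for illustration; on Windows, a Storage Spaces simple volume achieves the same result.

```shell
# Sketch only: combine two JBOD disks into one spanned (concatenated)
# logical volume. Note this is plain concatenation -- there's no RAID
# redundancy or striping, which is exactly the JBOD trade-off.
pvcreate /dev/sdb /dev/sdc                 # mark both disks as LVM physical volumes
vgcreate jbod_vg /dev/sdb /dev/sdc         # group them into one volume group
lvcreate -l 100%FREE -n jbod_lv jbod_vg    # one logical volume spanning both disks
mkfs.ext4 /dev/jbod_vg/jbod_lv             # format the spanned volume
mount /dev/jbod_vg/jbod_lv /mnt/jbod       # present it to the OS as a single volume
```

If either physical disk fails, the whole spanned volume is lost, which is why this layout suits workloads that handle redundancy themselves.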
For more information on RAID, see “An Overview of RAID Storage Levels” on Petri.
Offering more than 'just a bunch of disks', Direct Attached Storage (DAS) connects disks directly to the host controller of a PC or server, without going through a switched network. Because there's no network latency or protocol overhead, DAS is typically faster than networked storage, but it doesn't allow hard disks to be directly assigned to multiple computers.
In its simplest form, DAS can be a single internal or external disk attached directly to a PC. And although it’s a quick and dirty way to address simple storage needs, it’s not a very flexible solution. SOHO/SMB DAS solutions look much like NAS from the outside, but instead of connecting to a switched network using Ethernet, DAS connects to a server or PC using FireWire, USB, eSATA, or other storage connection type. If you have an existing file server, DAS can be a good way to expand its storage capacity.
Although traditional DAS volumes can't be directly shared with multiple computers, 'shared DAS' uses a set of array controllers that allow more than one server to connect, providing the speed and lower cost of DAS with some of the flexibility of NAS and SAN.
Conceptually, Network Attached Storage (NAS) devices are computers running server software, usually based on Linux, that other devices on the network can access. NAS devices run file server software that allows devices to connect to network shares, and often additional software such as media and FTP servers. Just like on Windows Server, access to resources hosted on NAS can be controlled using permissions.
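As a concrete example, a Linux client might connect to an SMB share on a NAS like this. The server name (nas01), share name (backups), and account are invented for illustration:

```shell
# Sketch only: mount an SMB share exported by a NAS on a Linux client.
# Host, share, and credentials below are hypothetical.
mkdir -p /mnt/backups
mount -t cifs //nas01/backups /mnt/backups \
    -o username=svc_backup,vers=3.0
# The NAS enforces its own share and file permissions on svc_backup,
# just as a Windows file server would.
```

Windows clients connect to the same share by UNC path (\\nas01\backups), since the NAS speaks the same SMB protocol as a Windows file server.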
Devices connect to NAS through a switched Ethernet network, while the NAS itself offers the disk performance and fault tolerance features of DAS. Some NAS devices also allow networked servers to connect to logical volumes as iSCSI LUNs, presenting the volumes to the OS as if they were physically attached. For more information on iSCSI, see "Setup Windows Server 2012 R2 as an iSCSI Storage Server" on the Petri IT Knowledgebase.
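To illustrate what "presented as if physically attached" means in practice, here's a minimal sketch of a Linux server attaching an iSCSI LUN using the standard open-iscsi tools. The portal address and target IQN are assumptions:

```shell
# Sketch only: discover and log in to an iSCSI target exported by a NAS.
# The portal IP and IQN below are hypothetical.
iscsiadm -m discovery -t sendtargets -p 192.168.1.50   # list targets on the NAS
iscsiadm -m node -T iqn.2016-05.local.nas01:lun1 \
    -p 192.168.1.50 --login                            # attach the LUN
lsblk   # the LUN now appears as a local block device the OS can partition and format
```

Once logged in, the OS treats the LUN exactly like a locally attached disk, even though every block travels over Ethernet.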
NAS can be fast enough in many scenarios, but because disks aren't directly connected to servers, there's a performance hit. A Storage Area Network (SAN) combines the flexibility and sharing capabilities of NAS with much of the performance of DAS. Rather than devices connecting to storage via network shares, servers connect to SAN-hosted logical volumes directly over high-speed Fibre Channel or 10 Gigabit Ethernet networks.
SANs offer higher hardware utilization than NAS and can approach DAS performance, but they can also suffer performance degradation if not planned carefully or when under heavy load. And while many NAS solutions can present iSCSI LUNs, file-serving capabilities aren't built directly into SANs. It probably comes as no surprise that SANs are also the most expensive and complicated to set up of all the storage options available.