Why Blade Servers are the Wrong Choice

When you’re starting a new deployment of physical servers, probably with the intention of using them as virtualization hosts, you are going to have a basic choice: do you go with blade servers or rack servers? This issue is nearly as divisive as putting pineapple on a pizza.

Blade server fans will argue that blades are the only choice. Hardware resellers see large deals. And then there are those of us who have used blades in the past and understand the (how do I say this politely?) challenges. In this opinion post, I will explain why I think blade servers are the wrong choice for your data center. This post is written from the perspective of a Hyper-V engineer, but I think those working with other technologies may find that my reasoning spans hypervisors and operating systems.

What is a Blade Server?

Blade servers were an invention from a time when virtualization was still in its infancy. Deploying physical servers, with one operating system per physical machine, was the norm. As we now know, those machines were underutilized and consumed lots of space. Hardware manufacturers came up with a new solution.
A chassis provides a collection of shared resources including cooling, power, networking and storage connectivity. This chassis is populated with a number of blade servers. The total number of servers contained within the chassis varies depending on the height of the blade or the manufacturer of the solution.
For example, an HP C7000 chassis can hold 16 half-height blade servers (aka blades) or 8 full-height blades. A typical 42U rack can contain 4 of these chassis, for a total of 64 (4 x 16) half-height blade servers. This is superior to the maximum of 38 x 1U rack servers plus 2 x 48-port top-of-rack switches that one might get with a traditional deployment.
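
To make that raw-density arithmetic explicit, here it is as a throwaway PowerShell calculation; the figures are the ones from the example above, not a general rule:

    # Raw server density per 42U rack, using the figures above.
    $bladesPerChassis = 16                  # half-height blades in an HP C7000
    $chassisPerRack   = 4
    $bladeServersPerRack = $bladesPerChassis * $chassisPerRack   # 64

    $rackServersPerRack  = 38               # 1U servers, leaving room for 2 x 48-port ToR switches
    "Blades per rack: $bladeServersPerRack vs. rack servers per rack: $rackServersPerRack"
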
A blade contains processors and RAM just as you would expect with a traditional rack server. After that, things can be quite different. For example, some blade servers are configured to load their operating system from a SAN, while others will have just enough on-board disk capacity for an operating system. There are no traditional network cards or ports on the back of a blade.
Instead, there is a single interface that plugs into a backplane in the blade chassis. This interface provides power from the chassis to the blade, allows network connectivity, and with additional cards added to the blade, can provide additional networking and/or storage connectivity via the blade chassis.

Racks full of half-height HP ProLiant BL460 G7 blade servers (Source: hp.com)

Storage is often provided by a storage area network (SAN), which the blades connect to via special appliances in the back of the chassis. Alternatively, you can insert special storage blades into a chassis, but this all depends on the blade manufacturer.
Connectivity to the physical network is also provided by special switches in the back of the blade chassis. However, blades within a single chassis can communicate with each other over a very fast interconnect on the backplane. This can make Live Migration or vMotion within a chassis very fast.

Why Blade Servers are the Wrong Choice

Now that you have some idea of what a blade is, without getting caught up in manufacturer specifics, I will spend some time telling you why you’d be crazy to invest in this hardware platform.

Blade Server Vendor Lock-In

I like the idea of using a single set of hardware components that has been tightly tested together. But recent events make me think that this testing is more marketing than reality. With rack servers I can choose:

  • Servers from vendor A
  • Network cards from vendor B
  • Storage connection from vendor C
  • Switches from vendor D
  • Storage from vendor E

Compare that to a blade solution:

  • Blade chassis from vendor A
  • Server from vendor A
  • Network cards from vendor A
  • Storage connection from vendor A
  • Switches from vendor A
  • Storage probably from vendor A, but maybe from vendor B

With choice comes the power of negotiation and that leads to better pricing. You might get a nice bid price on your initial installation, but once you’re invested in the ecosystem, you have no choice but to continue spending money with that manufacturer. You are locked in, no matter what the pricing or performance might become.

Blade Server Pricing

I remember when a previous employer was shopping for a new server installation. A manufacturer convinced us that blades were cheaper to buy. And you know what? The salesman was right; when we looked at the list price of a blade server, it was cheaper than the equivalent 1U rack server. But this was deceptive.
The salesperson didn’t stress the cost of the shared components, such as the chassis itself, the PSUs, and the connection modules. At that time, an HP Flex-10 Virtual Connect module cost around $18,000, and you needed 2 of them to provide 10 GbE networking to up to 16 blade servers, each with 2 NICs. If any blade required an additional 2 NICs, then you would have needed 4 x Virtual Connect modules!

If I were installing a rack of hosts now, I’m pretty sure that I could install a pair of 10 GbE or InfiniBand switches at the top of the rack for a lot less than the cost of 8 (2 modules per chassis) or 16 (4 modules per chassis) connectivity modules.
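
To put rough numbers on that, here is a back-of-the-envelope sketch in PowerShell. The $18,000 module price is the figure quoted above; the top-of-rack switch price is purely an assumption, so substitute whatever your reseller quotes you:

    # Connectivity cost per rack: chassis modules vs. top-of-rack switches.
    $vcModulePrice     = 18000       # per Flex-10 Virtual Connect module (figure quoted above)
    $modulesPerChassis = 2           # 4 if any blade needs an extra pair of NICs
    $chassisPerRack    = 4
    $bladeConnectivityCost = $vcModulePrice * $modulesPerChassis * $chassisPerRack

    $torSwitchPrice    = 10000       # assumed price for one 10 GbE ToR switch (use your own quote)
    $rackConnectivityCost  = $torSwitchPrice * 2

    "Blade chassis modules: {0:N0} vs. ToR switches: {1:N0}" -f $bladeConnectivityCost, $rackConnectivityCost
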

Blade Server Flexibility

I like using best-of-breed components. Let me give you an example. If I were building Hyper-V hosts with the intention of using SMB 3.0 storage, then my specification might be as follows for my Scale-Out File Server nodes:

  • Dell R720 with dual Intel processors
  • Chelsio (iWARP/RDMA) or Mellanox (InfiniBand/RDMA) networking
  • LSI SAS external interfaces
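
On hardware like that, it is easy to confirm that SMB Direct (RDMA) is actually in play. A minimal check, assuming Windows Server 2012 or later and the in-box SMB and networking cmdlets, might look like this:

    # Do the NICs expose RDMA, and is SMB actually seeing it?
    Get-NetAdapterRdma | Where-Object Enabled                  # RDMA-capable NICs with RDMA enabled
    Get-SmbClientNetworkInterface | Where-Object RdmaCapable   # interfaces the SMB client sees as RDMA-capable
    # After driving some traffic to the Scale-Out File Server shares:
    Get-SmbMultichannelConnection                              # check the RDMA-capable columns in the output
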

I recently helped a company through the specification process for Hyper-V on SMB 3.0. This company had invested in Cisco UCS blade servers. This solution doesn’t use network cards as we know them. The customer consulted with Cisco and found that they could not use either iWARP or InfiniBand networking. They would have to use legacy 10 GbE instead. And when it came to the SOFS nodes, they would have to invest in rack servers, breaking from their standard, to insert supported SAS external interfaces.
That customer could have easily deployed the InfiniBand networking that they lusted after if they had installed rack servers, assuming that they had the required free PCIe expansion capacity.

Blade Servers and Service Level Agreements

Those who have studied evolution know that the over-specialization of a species can lead to extinction when the environment changes. Most of those who have installed Windows Server 2012 R2 on a blade server have discovered that vendor lock-in has led to exactly that kind of specialization. Emulex NICs are present in almost every manufacturer’s blade servers. That would be fine if Emulex NICs were stable. However, those who have deployed WS2012 R2 Hyper-V on these servers have discovered bugs in these NICs’ firmware and/or drivers.
The reaction of the blade manufacturers and Emulex has been worse than lax or sluggish. It has been atrocious, leaving many customers with unstable installations. The brands affected include HP and Dell, and I have heard that Hitachi (they sell blade servers too) might be in the same boat.
In the case of rack servers, we Hyper-V veterans know to avoid all things Broadcom. We can pick and choose components as we see fit.

Blade Servers and Converged Networking

Blade servers brought us the ability to converge networking through devices such as the HP Flex-10 connectivity module. With this device, we could carve up 10 GbE networking into multiple 1 GbE NICs and Fibre Channel over Ethernet (FCoE) SAN connectivity. That sounded pretty cool. Except:

  • Microsoft does not support these kinds of solutions: It’s not written anywhere publicly that I can find, but this is indeed the case as has been confirmed to me and others by networking program managers in the Windows Server group.
  • WS2012 introduced software-defined converged networking: We can do converged networking/fabrics using a few PowerShell cmdlets or by using logical networks in System Center Virtual Machine Manager. This doesn’t require $18,000 networking devices and will work with any NIC on the Windows Server hardware compatibility list. I can deploy the same solution across hardware from different manufacturers, and I can easily change the design and QoS implementation after the server/host is deployed (see the sketch after this list).
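
As a taste of what that looks like, here is a minimal sketch of a converged WS2012 R2 Hyper-V host built with the in-box Hyper-V cmdlets. The switch name, physical NIC name, and bandwidth weights are illustrative assumptions, not a recommended design:

    # Create a virtual switch on a 10 GbE NIC with weight-based bandwidth management.
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "10GbE-Port1" `
        -MinimumBandwidthMode Weight -AllowManagementOS $false

    # Carve out host virtual NICs for the traffic classes a Flex-10 module would have handled.
    Add-VMNetworkAdapter -ManagementOS -Name "Management"    -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster"       -SwitchName "ConvergedSwitch"

    # Apply QoS as relative weights; trivial to change after the host is deployed.
    Set-VMNetworkAdapter -ManagementOS -Name "Management"    -MinimumBandwidthWeight 10
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
    Set-VMNetworkAdapter -ManagementOS -Name "Cluster"       -MinimumBandwidthWeight 10
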

It’s About the Virtual Machines and Services, Dummy!

The last cry of the blade salesperson is that they can get up to 64 blades into a 42U rack and I can never do that with rack servers. That is true … to a certain extent. But to be quite honest, I really do not care. And for those of you who do, I’d beg you to leave 2006, when you dreamed of 10:1 virtual machine-to-host ratios, and join the rest of us in 2014, when we’re focused on virtual machines and services instead of physical servers. You see, I care about how many services I can run in a rack. If I ran a true data center, then I’d care about how many services I could run on a collection of racks, copying the fault domain concept of Microsoft’s Azure.
It’s been a few years since I ran a spreadsheet on this subject, but the last time I did, I found that the cheapest Hyper-V host to buy and own over 3 years (assuming that I could fill it with workloads) was a 5U server. I won’t fit too many of those into a rack, but I guess I wouldn’t need too many switch ports either!
Maybe you go another way and install half-U servers with 10 GbE or InfiniBand converged networking. That can give you maybe 80 servers in a rack … assuming that you have the capability to push enough electricity into that rack!
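
If you want to reason about it the way I do, the metric to compute is services (or virtual machines) per rack rather than servers per rack. Here is a toy PowerShell calculation; every figure in it is an assumption I have invented for illustration, so plug in your own capacity and power numbers:

    # Toy services-per-rack comparison. All figures below are illustrative assumptions.
    $vmsPerBlade     = 20                            # assumed VM capacity of a half-height blade
    $bladesPerRack   = 64
    $vmsPerBigHost   = 150                           # assumed VM capacity of a RAM-heavy 5U host
    $bigHostsPerRack = [math]::Floor((42 - 2) / 5)   # 42U rack, 2U reserved for ToR switches

    "Blades:   {0} VMs per rack" -f ($vmsPerBlade * $bladesPerRack)
    "5U hosts: {0} VMs per rack" -f ($vmsPerBigHost * $bigHostsPerRack)
    # Whichever numbers you plug in, it is the VM/service count (and the cost per VM)
    # that matters, not the raw server count.
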

Blade servers were a solution for a time when we needed many lower-capacity physical servers with complex connectivity. However, times have changed. Now we want fewer physical servers, with simpler and more customizable storage and networking. Flexibility and cost-effectiveness are critical, and in my opinion, this is why blade servers are the wrong choice.
Editor’s Note: Aidan isn’t alone in this opinion: Petri IT Knowledgebase contributor Brian Suhr presents a similar argument about rack servers over blade servers in a VMware-focused post that asks if traditional rack mount servers are winning back enterprise customers.