Microsoft Shares New Azure Server Specs

In this post I’ll discuss Microsoft’s newest contribution to the Open Compute Project (OCP), which gives us a peek behind the curtain, and suggests what the next generation of Azure hosts will look like.

The Open Compute Project Foundation

Back in 2009, Facebook was challenged by the incredible growth of its business; the infrastructure needed to keep up, and the company realized that traditional large data center infrastructure was not suitable for cloud-scale computing. Facebook needed to rethink IT and innovate. In 2011, Facebook, Intel, Rackspace, and others created the OCP to share what they had learned and built. The goal is that by sharing ideas and shedding traditional intellectual property concepts, each of the members can benefit by maximizing innovation, reducing complexity, increasing scale, and decreasing costs. Since the launch, a who’s who of cloud innovators has joined and contributed to the OCP, including:

  • Google
  • Apple
  • Dell
  • Cisco
  • Lenovo
  • Microsoft

Note that Bill Laing, Corporate Vice President of Cloud and Enterprise at Microsoft until September 2016, is listed as a member of the board of directors at the OCP. While at Microsoft, Laing worked on building Microsoft’s data center hardware.

Microsoft and the OCP

The first time I heard of the OCP was when Microsoft contributed some hardware designs in January 2014. Since then, Microsoft has been an active contributor. Those initial contributions were based on the server and data center designs that power Azure, and more recent submissions were based on the software-defined networking of Azure.
Although there’s little of actual use in these designs for me, I still find it interesting to see how different cloud-scale computing is from the norms that I deal with on a day-to-day basis. There is some nerd-gasm going on, but there is also useful information in there; it’s good to know how an Azure host is designed, because this helps me understand how an Azure virtual machine is engineered, and that impacts how I teach and make recommendations. This material might even give us a peek into the future of Windows Server, which Microsoft boasts is inspired by Azure.

Chassis and server design based on Microsoft’s cloud server specification [Image Credit: Microsoft]
We keep saying it, and it’s true: this is a very different Microsoft from the one that once considered open source a cancer. Microsoft believes in the OCP open-source hardware concept and wants to do more than just contribute. More than 90 percent of the servers purchased for Azure are based on OCP designs.

Project Olympus

Microsoft’s first contributions were based on finalized hardware. In a recent post, Kushagra Vaid, GM, Azure Hardware Infrastructure at Microsoft, said that this sort of contribution offers little to the open-source hardware community; the design is final and leaves little room for feedback or contributions, meaning that Microsoft gets little out of sharing its IP beyond a few positive headlines.
A new effort within the OCP, Project Olympus, seeks to make open-source hardware as agile as open-source software. Instead of sharing a finalized design, Microsoft is sharing a design that is 50 percent complete. This means that members of the OCP can review the designs, fork them, and make contributions; Microsoft can then learn from its partners and merge these changes into its design before locking it down.
To start this project, Microsoft has shared a 1U and 2U server design, a high-density storage expansion, a rack power distribution unit (PDU), and a rack management card. These standards-compliant components can be used individually or together as part of a modular cloud system.

The Project Olympus storage, servers, networking, and PDU [Image Credit: Microsoft]
I find the server to be interesting. This, combined with the rack, gives us a hint of what the next generation of Azure hosts might look like. Note that there’s a single networking switch and a single NIC: this is why understanding the concepts of fault domains and availability sets is important for Azure virtual machines; when you deploy at cloud scale, you overcome points of failure at the rack level, not at the component (NIC) level.
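To make the fault-domain idea concrete, here is a minimal, purely illustrative Python sketch of spreading the members of an availability set across fault domains (racks). This is not Azure’s real placement logic, and the VM names are invented; it only shows why losing one rack’s switch, PDU, or NIC should take down at most one instance of the workload.

```python
# Illustrative only: a toy round-robin placement of VMs across fault domains.
# Not Azure's actual allocator; it demonstrates why an availability set
# spread across fault domains survives the loss of a single rack
# (switch, PDU, and NIC are all rack-level failure points).

def place_vms(vm_names, fault_domain_count=3):
    """Assign each VM in an availability set to a fault domain (rack)."""
    placement = {}
    for index, vm in enumerate(vm_names):
        placement[vm] = index % fault_domain_count
    return placement

if __name__ == "__main__":
    # Hypothetical VM names for illustration.
    vms = ["web-vm-1", "web-vm-2", "web-vm-3", "web-vm-4"]
    for vm, fd in place_vms(vms).items():
        print(f"{vm} -> fault domain {fd}")
```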
The server features a 50 Gbps NIC. We know from some of the “R” virtual machines (and I know from private discussions) that Microsoft uses Mellanox hardware. It looks like Microsoft is using either the 100 GbE or the 200 GbE chipset from Mellanox; I suspect the latter, because a splitter cable would allow each switch port to be carved up into 4 x 50 Gbps connections.
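As a quick sanity check of that splitter math, here is a back-of-the-envelope calculation. The 200 GbE port speed is my assumption about the switch; only the 50 Gbps NIC figure comes from the published Project Olympus design.

```python
# Back-of-the-envelope check of the splitter idea described above:
# one 200 GbE switch port broken out into four 50 Gbps host connections.
# The switch port speed is an assumption, not a confirmed Azure figure.

switch_port_gbps = 200   # assumed Mellanox 200 GbE switch port
host_nic_gbps = 50       # NIC speed from the Project Olympus server design
hosts_per_port = switch_port_gbps // host_nic_gbps

print(f"One {switch_port_gbps} GbE port can feed "
      f"{hosts_per_port} x {host_nic_gbps} Gbps host connections")
```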
The PSUs have a built-in battery; this distributed battery concept negates the need for a centralized UPS. According to a story by The Next Platform, this concept cut the cost of a data center by one quarter!
The 1U Project Olympus server [Image Credit: Microsoft]
Note how the server uses NVMe SSDs. This isn’t a case of hyper-convergence; the previous rack diagram includes a JBOD, which suggests to me that the current model of converged (not hyper-converged) storage continues for the storage system in Azure. I would guess that the in-server NVMe drives are used in place of classic SSDs to store the temp drives of virtual machines. Most of the newer virtual machines use “SSD” storage for the temp drive (faster paging and disk-based caching), and using NVMe would give a lower cost per gigabyte and reduce the space consumed by storage. I wonder if these NVMe drives might eventually be switched out for NVDIMM devices instead.

It’s rare that we get to see what lies behind the doors of an Azure data center (without signing a bunch of NDAs!), so I find this public glimpse of the future to be quite interesting.