In Defense of the Server GUI

The GUI is dead; long live the GUI! You might remember Microsoft Ignite, where presentation after presentation declared the death of the UI on Windows Server 2016. Nano Server was what everyone would be doing. No one would ever log into a server again, and we’d be shooting servers in the head every five minutes. So, about that: let’s just say there’s been a bit of a step back toward reality. In this post-PC era, where I need a PC to fix my iPad or iPhone, the post-UI era sees the return of the UI.

The GUI is Dead

Microsoft has been trying to kill the server UI since Windows Server 2008, when it made the first big push for Server Core. Back then, I was on the bleeding edge, pushing Hyper-V into production at a hosting company the day after it was released. I spent months working with the beta and release candidate, and I listened to all the best practices and advice.
Part of that guidance from Microsoft was to deploy Server Core, a new stripped-down installation of Windows Server that removed all the windows from Windows, leaving me with just a command prompt, PowerShell, and remote management. I’d been a remote management junkie for as long as I could remember, and I was not afraid of scripting. At that point, I had written COBOL, Pascal, C, C++, batch scripting, REXX, and VBScript on VMS, various breeds of UNIX, and Windows over the years. I decided that Hyper-V on Server Core was the way forward, until I tried to manage, configure, and troubleshoot my hosts. The Windows element was fine, but dealing with hardware management and troubleshooting was beyond a nightmare; it was actually impossible. So I went with a full installation of Windows Server.
Windows Server 2008 R2 (W2008 R2) was released, and Microsoft boasted about an improved PowerShell and management experience for Server Core. I looked at it, and the original management and troubleshooting problems remained, so I stuck with a full installation. I ran a survey with some of my Hyper-V MVP colleagues, and we got a pretty large sample of responses from around the world. It was clear that the vast majority of respondents agreed with me: Server Core was too hard, and the benefits didn’t pay off.
Windows Server 2012 (WS2012) and WS2012 R2 were released. By this time, PowerShell was not only complete, but it was the only tool that offered access to all the features, and I include System Center in that statement. I started to learn PowerShell in earnest with the release of WS2012 because my role required me to create the same infrastructure over and over, plus I needed to be able to teach the advanced features of Hyper-V and related technologies. But I still stuck with the full UI, because nothing had changed: the majority of problems in Hyper-V are caused by hardware vendors, and when things go bad, the network can disappear (making remote management irrelevant), or the problem can hit all the hosts at the same time (try shooting the entire herd of servers then!). If you want examples of these sorts of issues, look at ODX on many SANs (most recently Dell EqualLogic), Emulex NICs, or VMQ enabled on 1 GbE NICs (disable it, please!).
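On that last point, here is a minimal PowerShell sketch of the kind of remediation I mean; it is my own example, not an official procedure, and it assumes the common “1 Gbps” LinkSpeed string, so review the output against your own NICs and vendor guidance before letting it act:

    # Sketch: disable VMQ on physical 1 GbE NICs, the usual culprits.
    # LinkSpeed strings and adapter names vary by driver; verify before acting.
    $oneGbNics = Get-NetAdapter -Physical | Where-Object { $_.LinkSpeed -eq '1 Gbps' }
    foreach ($nic in $oneGbNics) {
        $vmq = Get-NetAdapterVmq -Name $nic.Name -ErrorAction SilentlyContinue
        if ($vmq -and $vmq.Enabled) {
            Write-Output "Disabling VMQ on $($nic.Name)"
            Disable-NetAdapterVmq -Name $nic.Name
        }
    }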
Then, with Windows Server 2016, Microsoft declared that the UI was dead, deader than dead, and that anyone who used it was a lesser human. Microsoft even removed the option to do a full installation of Windows Server from the Technical Preview 2 (TP2) media. The correct way to install Windows was Server Core or Nano Server, which was initially restricted to the Hyper-V and Scale-Out File Server roles. A certain group of acolytes echoed this dogma, insulting IT pros for being lazy and failing to comprehend that there’s more to owning an infrastructure than just deploying it. And then Microsoft felt the angry feedback.

Misunderstanding

I wrote a series of articles about the importance of the GUI during the TP2 timeframe. Most people agreed with me, but the Kool-Aiders attacked me on a few fronts:

You should be doing PowerShell!

I do use PowerShell, and if those people had bothered to read my articles or watch my presentations, they would have seen lots of PowerShell being used. But good luck figuring out Emulex hell with cmdlets at 3:00 am.

You should be working remotely.

I do prefer to work remotely. But when the messy stuff hits the fan, there’s usually no better solution than getting onto the machine in question and using the tools there. By the way, that’s what Microsoft support engineers typically ask to do when they are called in.

You’re out of touch!

No, I am right out there on the bleeding edge. I’m usually far ahead of my customers. I just don’t choose to accept everything I am told as the truth. I will test, evaluate, and make informed decisions based on my experience and that of others. And for this reason, I will continue to deploy full installations of Windows Server on Hyper-V hosts and Scale-Out File Servers.

The install is smaller, there are fewer patches, the security is better …

Keep drinking, my friends! The smallest disk I can get in a server is 300 GB, so I don’t care whether my installation of Windows Server is a few MB or 12 GB. And let’s be clear, the paging file is going to consume space too, and its size is based on memory, not on the size of the OS. I don’t care whether I have one patch or 100; I’ll still be doing one reboot per month. And as for security, if you’re reading email or browsing the net from production servers, then you deserve everything bad that can happen to you.

You can add the GUI to Server Core

Yes, I can, unless I hit the weird bug on a well-patched system that prevents the GUI from being re-added to Windows Server. Even then, I have to add the feature and reboot the server, and that adds another 10 to 15 minutes of delay, thanks to hardware checks.
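For the record, the conversion itself is a one-liner; this is a sketch using the WS2012 R2-era feature names, and if the feature payload has been removed from the local system, -Source must point at the install media (the exact names and image index vary by release and edition):

    # Sketch: re-add the graphical shell and management tools to a Server Core install.
    Install-WindowsFeature -Name Server-Gui-Mgmt-Infra, Server-Gui-Shell -Restart
    # If the payload was removed locally, point at the install media, for example:
    # Install-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell -Source "wim:D:\sources\install.wim:2" -Restart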

Long Live the GUI

I am not an active IT pro; I learn, teach, evangelize, and write for a living. But my lab is like a production system for me; I work with consultants who do this stuff every day, and I interact with IT pros around the world via social media who own Hyper-V and Windows Server systems. Most of the people I have contact with might not be on the bleeding edge of what’s talked about at the likes of Ignite, but they are pretty current and keen to learn and do things right for their customers. So when they were told that they were lazy or stupid, they didn’t like it, and they voted to show their displeasure. One of the most voted-for items on the Windows Server User Voice was to bring back the full installation option in Windows Server.
This vote wasn’t because people were stuck in the past; it was because these Microsoft customers own and deal with real production systems running Windows Server and Hyper-V. While Core and Nano might be perfect for certain huge-scale deployments, customers who have to deal with faults on production systems can’t be running around the computer room or data center with a gun, shooting every time a driver or firmware starts bringing down a cluster or corrupting virtual hard disks. The cold, hard truth is that the most painful issues we see, especially in Hyper-V, are caused by drivers, firmware, and hardware offloads that have passed the Swiss cheese logo test.
Now we hear a more moderate line: Microsoft, the company that made Windows Server a success because of the GUI, loves GUIs (just not on the server), and the GUI is required for small-to-medium enterprises. In WS2016 Technical Preview 3 (TP3), the option to do a full installation returned, and I expect that most Windows Server customers will opt to deploy the GUI as they always have.

GUI-Less Servers Have a Place

Please don’t misinterpret my opinion; I do believe that GUI-less servers have a place. I believe in Hyper-V on Nano Server, but under three conditions:

  • I have a huge clustered environment, where the value of each host is reduced.
  • Microsoft disables all hardware offloads by default, as there are too many OEM-caused bugs.
  • The logo tests for drivers and firmware are made stronger, testing all features under stress and over time, with re-certification required for each new version. Any problems, and that version is publicly ejected. Right now, certain brands, such as Emulex, are tainted and should not be touched with a barge pole, yet they continue to be supported and widely deployed.


If I ran a huge cloud, I think I might run Nano Server as the OS on physical installations if these conditions were met. I also think that Nano will be a fine choice for virtual machines — see nested Hyper-V and container operating systems. Personally, I think the role of Server Core will be deprecated over time as more roles and features are added to Nano Server.
I hope that we get to the point where Nano Server turns each machine into a reliable appliance, something that I can discard at a moment’s notice. Right now, I believe that few of us are there. The infrastructure that we build on isn’t stable enough, and business applications are not “born in the cloud.” So a more reasoned and balanced approach is more business-ready than the GUI-less dogma that has been spread over the last year.