It’s time once again for every tech author to make dauntless divinations about what will happen in the IT industry in the year ahead. Hopefully I will take delivery of my robot chauffeur and moon buggy soon, but in the meantime, much like I did last year, I’ll make some predictions about what I think will happen in the world of Microsoft virtualization in 2015. I sure hope I do better this time around than I did for 2014!
We’re going to be hearing a lot about a technology called Docker. This snowball started to roll in recent months when Microsoft announced Azure support for Docker via a partnership with the company. Docker can work in Azure right now through a command-line interface (CLI) that was launched for Windows clients, and we know that Docker is coming to Windows Server in the future.
Docker is better thought of as an addition to Azure or Hyper-V than as a distinct competitor. In machine virtualization we have isolation between machines, with each machine running its own operating system; that security boundary contains an OS, libraries, and typically one application. Containerization, by contrast, happens per application deployment. A Docker engine runs on a single operating system, and containers for different applications are deployed on shared libraries hosted by that shared operating system. The Docker engine could be installed on a physical server, or it could run in the guest OS of a virtual machine (Azure or Hyper-V).
The benefit of Docker is that you can deploy applications nearly instantly from a library of containerized apps. This is a great benefit in a private deployment where change is frequent, such as in a DevOps environment; for example, a farm of just a few virtual machines could run a large number of application instances.
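To make that a bit more concrete, here is a minimal sketch of what that library-driven deployment looks like with the Docker CLI, run from a Windows client. The host name, ports, and the nginx image are purely illustrative, and I’m assuming a remote Docker engine that the Windows CLI client can reach over TCP.

```powershell
# Run from the Docker CLI on a Windows client (e.g., a PowerShell prompt),
# pointed at a hypothetical remote Docker engine; all names are illustrative.

# Pull a containerized app from a registry -- the "library" of apps.
docker -H tcp://mydockerhost:2375 pull nginx

# Start two instances of the app in seconds; each gets its own isolated
# container, but both share the engine's operating system and libraries.
docker -H tcp://mydockerhost:2375 run -d -p 8080:80 --name web1 nginx
docker -H tcp://mydockerhost:2375 run -d -p 8081:80 --name web2 nginx

# List the running containers.
docker -H tcp://mydockerhost:2375 ps
```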
I think we are going to hear a lot about Docker in 2015, with the trickle becoming a torrent. But I don’t think many IT pros will actually use Docker in 2015; it will be early days, and I think Docker’s market is geared toward larger organizations that demand a pace of change that even a flexible private/public cloud cannot offer.
At the moment, we purchase Windows Server (and therefore Hyper-V virtual machines running Windows Server) on a per-physical-processor basis. Each Windows Server license covers two physical processors, no matter how many cores are in each processor. If you license a physical host with two Intel E5-2699 v3 processors (18 cores each, for 36 cores or 72 logical processors with Hyper-Threading), then you can license that host for an unlimited number of virtual machines with just one copy of Windows Server Datacenter edition, which costs $6,155 with Open NL licensing. Remember that this license can be installed on the host, and you can enable Hyper-V for free.
Here’s the issue: Just a year ago, you most likely required four processors, and therefore two Datacenter licenses, to get a host with that sort of core density, so Microsoft would have made $12,310 (2 x $6,155) in Open NL license fees. Intel continues to increase the core density of hosts, and that is reducing Microsoft’s earning power in larger host deployments.
I believe that Microsoft will switch from per-pair-of-processors licensing to per-core pricing with the release of Windows Server vNext. I can understand why: each new generation of processors erodes their earnings under the current model. Hopefully Microsoft will base the per-core price on the typical processor (6-8 cores) that most of us still use, which would mean that we wouldn’t necessarily see a price rise.
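To illustrate why per-core pricing changes the sums, here is a quick back-of-the-envelope PowerShell calculation. The per-core rate is entirely my own guess (today’s $6,155 Datacenter price spread across a 16-core, two-socket host), not anything Microsoft has announced.

```powershell
# Hypothetical per-core pricing sketch; the per-core rate is my guess,
# not a published Microsoft price.
$datacenterOpenNL    = 6155           # covers two physical processors today
$guessedPerCorePrice = 6155 / 16      # ~ $385, assuming a 2 x 8-core "typical" host

# Two-socket host with 8-core processors: roughly the same cost as today.
$typicalHostCost = 2 * 8 * $guessedPerCorePrice     # ~ $6,155

# Two-socket host with 18-core E5-2699 v3 processors (36 cores).
$denseHostCost = 2 * 18 * $guessedPerCorePrice      # ~ $13,849

"Typical host: {0:C0}, dense host: {1:C0}" -f $typicalHostCost, $denseHostCost
```

Under that guessed-at model, a typical two-socket host would cost about the same as it does now, while the dense 36-core host would more than double in price, which is exactly the revenue Microsoft is currently leaving on the table.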
I could be lazy and tell you that Windows Server vNext will arrive in the second half of 2015, but everyone in the business knows that. Instead, I will take some chances: I am guesstimating that general availability (not RTM) of Windows Server vNext will be in September.
Microsoft typically uses the same naming system as games by EA Sports. The reasoning is simple: Microsoft’s financial year starts in July, so anything released after that point takes the following year’s name. The next version of Windows Server will be released in Microsoft’s financial year of 2016 and will be called Windows Server 2016.
If you attend any Microsoft events or have any dealings with a Microsoft employee, then expect to hear a lot about Azure in the first six months of 2015. Microsoft is going to be doing a lot to get you hooked on its public cloud; it wants customers to try out its public and hybrid offerings. If you’ve got an on-premises environment, then expect big pushes around online backup and disaster recovery in the cloud. Services such as site-to-site VPN and ExpressRoute have had recent upgrades, and the number of WAN partners will continue to grow. Microsoft Ignite (the super conference in Chicago in May 2015) will seem like one great big cloud conference!
No matter what you might think of the public cloud (and I was a skeptic of Microsoft’s offerings!), you cannot get away from it.
A lot more IT pros will be managing some services in Azure by the end of 2015 than at the end of 2014, but the focus will remain on hybrid cloud.
This is a depressing prediction by me. Most of you reading this article will not be using PowerShell to simplify management, deployment, and operations, or to enable automation. I am not a programmer, but I’ve taken to PowerShell: I used it to drive the demos in my presentation at TechEd Europe 2014. I use PowerShell to rebuild my demo lab at work. I use PowerShell scripts to provision large numbers of virtual machines in Hyper-V or Azure. And I use PowerShell to get predictable and fast results. There was a certain time investment to get all of that right, but as all scripters know, we start off by blatantly copying and pasting other people’s work from the many available online resources and tweaking from there. The investment, even in small deployments, is recouped later on when that script, function, or cmdlet is reused over and over again.
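As an example of what I mean, here is a minimal sketch of the kind of provisioning loop I use for a Hyper-V lab; the VM names, virtual switch, paths, and sizes are just placeholders.

```powershell
# Provision a small batch of Hyper-V virtual machines in one loop.
# The names, switch, paths, and sizes below are placeholders.
1..5 | ForEach-Object {
    New-VM -Name "DemoVM$($_)" `
           -MemoryStartupBytes 2GB `
           -Generation 2 `
           -SwitchName "External" `
           -NewVHDPath "D:\VMs\DemoVM$($_).vhdx" `
           -NewVHDSizeBytes 60GB
}
```

Run it once and the same lab comes back identically every time, which is exactly the predictable, repeatable result I was talking about.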
But no matter what I say, no matter what Jeffrey Snover says, and no matter what anyone in Microsoft says, most of you will ignore our advice on using PowerShell and will continue to use only a subset of the product that you have purchased by never venturing beyond the GUI.
A number of hyper-convergence specialist companies have been shouting very loudly and very expensively about the benefits of their kind of compute and storage deployment. I get the feeling (my opinion and not the official view of Petri.com) that these companies are more successful at getting investment and the attention of the media than at selling their products.
I think hyper-convergence will go the way of Tab, parachute pants, leg warmers, Furby, the Rubik’s Cube, and music on MTV, and we’ll get back to operating compute on one tier and storage on another, connected by a high-speed fabric.