Interview: Microsoft’s Edwin Yuen, Matt McSpirit Discuss Windows Server 2012 and Hyper-V 3.0

The Petri IT Knowledgebase editorial team recently had the opportunity to visit the Microsoft campus in Redmond, WA, and interview some members from the Windows Server 2012, System Center 2012, and SQL Server 2012 product teams. We recorded audio interviews with Microsoft employees from all of these teams, and we’re now happy to provide edited transcripts of those interviews to Petri IT Knowledgebase readers.

We’ll begin with our discussion with Edwin Yuen, director of cloud and virtualization strategy at Microsoft, and Matt McSpirit, a senior product marketing manager on the Windows Server team. In part two of our interview, Yuen and McSpirit discuss System Center 2012.

Petri: The latest release of Hyper-V in Windows Server 2012 has a raft of new features. Could you talk a little about what you feel are the three or four most important new features in this release of Hyper-V? [Download the free Windows Server 2012 trial edition]

Edwin Yuen: [From our customers] there has been a desire for greater scale, and you’re going to see that in Windows Server 2012 Hyper-V. You’re going to see massive scale in terms of the number of processors, memory, storage, and adapters. It’s across the board. We really feel it’s segment leading. It’s at or above anything the competitors have, and I think we are ahead in many areas. But really, it gives assurance to many people out there that we can scale to whatever workload they want to run, even if that is honestly beyond what they would normally do.

I think what we’ve shown consistently, even before this release, is that Hyper-V has always had very good performance, very close to what physical servers deliver.

We’ve scaled fairly linearly. If we can continue that performance scaling and that near-bare-metal performance, customers can be sure they can essentially run any workload they want.

Microsoft's Edwin Yuen

Edwin Yuen, director of cloud and virtualization strategy at Microsoft

When you have that possibility, it’s up to the customer to sit there and decide, “Do I want to virtualize or not virtualize? What’s right for the business? How does it affect the application?” Rather than starting with the gating factor of whether something can be virtualized, and setting aside anything that can’t, that constraint has been removed, and [the decision] is now really driven by what the application and the business requirements are.

[Other top features] include what we’re doing with networking and storage. [We’re] extending and really making the use of commodity storage a lot easier for a lot of customers, eliminating the requirement to have back-end shared storage to do things like clustering or migrations. A lot of customers have already invested in that, and if they have, we have additional features that let them take advantage of it. What we’re saying is, if you’ve already made the investment in [high availability, or HA] hardware and shared storage, we not only support it, we can actually take advantage of it and give you better performance. For those who haven’t invested in [HA for] specific scenarios, they now have the option to get things like failover or live migration, especially for scenarios like branch offices. It’s not just small businesses. A lot of times, when you’re putting a couple of servers here and a couple of servers there, you don’t want to have to build an entire infrastructure to enable virtualization scenarios. [We’re] really enabling virtualization scenarios without requiring the large traditional infrastructure they might have required in the past.

Matt McSpirit: Just to add to the scale side of the discussion, we really focused on three elements of scale. Everyone asks whether there is a workload that still can’t be virtualized, and we see very, very few now. If you think about a virtual machine, it can support 64 virtual processors and up to a terabyte of VM memory, without any additional licensing or other considerations… if you want to create a VM that big, you can create a VM that big… likewise hosts of up to 320 logical processors and four terabytes of RAM… and going forward, we support consolidation of up to a thousand VMs on a single host. That’s what we’ll support in [Windows Server 2012 Hyper-V].

When it comes to scale-out at the host cluster level, everyone talks about the cluster as a foundation for resiliency and failover. We support up to 64 physical nodes. That’s increased from 16 in 2008 R2 to 64 in 2012. So, 64 physical nodes with up to 4,000 virtual machines. There is scale-up for hosts, scale for individual VMs, and scale-out for clusters, and we’ve really invested in all of those areas. [Download the Hyper-V Server 2012 free trial]
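To make those scale-up numbers concrete, here is a minimal sketch of provisioning a large VM with the Hyper-V PowerShell module in Windows Server 2012; the VM name, path, and sizes are hypothetical and would need to fit your actual hardware.

```powershell
# Create a VM with a new dynamically expanding disk
# (name, path, and sizes are hypothetical)
New-VM -Name "SQL-Heavy01" -MemoryStartupBytes 64GB `
       -NewVHDPath "D:\VMs\SQL-Heavy01.vhdx" -NewVHDSizeBytes 500GB

# Scale it toward the new per-VM maximums: up to 64 virtual processors...
Set-VMProcessor -VMName "SQL-Heavy01" -Count 64

# ...and up to 1 TB of memory (static here; Dynamic Memory is also available)
Set-VMMemory -VMName "SQL-Heavy01" -StartupBytes 1TB
```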

Petri: What about the networking improvements in the next version of Hyper-V?

MM: Single-root I/O virtualization (SR-IOV) is an innovation in hardware that we are taking advantage of in Windows Server 2012. If you imagine how virtual networking normally works, with VMware and with Microsoft, you create a VM with a virtual network adapter, and that binds to a virtual switch, which is attached to a physical network adapter. That path goes through the virtualization stack, managed in software. SR-IOV effectively moves large portions of that path into the hardware and optimizes the connection from the virtual NIC through to the physical NIC. But you need an SR-IOV-capable adapter, which a number of our hardware partners are shipping. SR-IOV performance is significantly higher than a regular virtual network adapter.

Microsoft's Matt McSpirit

Matt McSpirit, senior product marketing manager at Microsoft

Customers who want to virtualize a workload that is really network intensive are going to be able to do that and see high levels of performance and throughput. But we don’t want to give performance while sacrificing agility. So, while you can attach a VM to an SR-IOV-capable NIC, we still allow you to live migrate the VM around, even from a host with an SR-IOV NIC the VM is using to a host that doesn’t have one. We’ll manage that live. Even though the VM is changing from SR-IOV to non-SR-IOV, we’ve got that flexibility. It’s the best of both worlds.
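As an illustration of how SR-IOV is surfaced to administrators, the sketch below enables it on a hypothetical switch and VM using the Hyper-V PowerShell module; the adapter, switch, and VM names are placeholders, and an SR-IOV-capable NIC and platform are assumed.

```powershell
# Create an external virtual switch with SR-IOV enabled
# (requires an SR-IOV-capable physical NIC; names here are placeholders)
New-VMSwitch -Name "SRIOV-Switch" -NetAdapterName "Ethernet 2" -EnableIov $true

# Connect the VM's virtual NIC to that switch and request an SR-IOV virtual function
Connect-VMNetworkAdapter -VMName "Web01" -SwitchName "SRIOV-Switch"
Set-VMNetworkAdapter -VMName "Web01" -IovWeight 100
```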
One of the ways we have integrated with hardware partners on the storage side is through a feature known as Offloaded Data Transfer. What that effectively means is, imagine you’ve got a task. It might be to clone a VM. It might be to move virtual disks around on the back end, for instance. Normally you would think, “Okay, the blocks need to get shifted on the SAN, maybe from one LUN to another.”

The data is all on the SAN, yet traditionally in Windows Server a copy operation would go via the host and over the network, using valuable host resources. With Offloaded Data Transfer, we can offload that work to the SAN, so the SAN does it without impacting the host. We demoed that technology at TechEd, in the keynote and in the regular breakout sessions as well. We showed demonstrations where the Offloaded Data Transfer (ODX)-enabled copy was finishing while the non-ODX copy was still initializing. There was that much more performance. Those are just some of the examples, from a scalability and performance standpoint, across networking, storage, and general VM scale, of why we know we can virtualize those really high-end, mission-critical workloads on Hyper-V.
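ODX is transparent to the caller: an ordinary copy between locations backed by an ODX-capable array is offloaded automatically, with no special API required. A hypothetical example (the paths are purely illustrative):

```powershell
# An ordinary copy between two volumes backed by an ODX-capable SAN.
# Windows Server 2012 negotiates offload tokens with the array, so the
# blocks are moved on the SAN itself rather than through the host.
Copy-Item -Path "E:\Templates\Base.vhdx" -Destination "F:\VMs\NewVM.vhdx"
```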

EY: I think you’ll see that what we’ve done in Windows Server 2012 Hyper-V is add a significant number of features that are, I think, directly applicable to administrators today and that have an impact. It’s not just a slightly better way of doing what they are doing today. There are features and functions that can dramatically change how they do what they do today, or how they do different work in the future. The other thing we want to point out is that a lot of the features we’ve brought up, and the way we’ve designed Hyper-V in 2012, are really designed for the future. It’s not simply that we are designing it to catch up to today’s needs. Much of it is designed for the future, so that as customers grow, the operating system and the capabilities are already there for them. The capability is already designed and built in for them.

They can grow into it, adapt to it, or take advantage of it. That gives them room to grow. It’s not simply about creating something that meets the needs of today or catches up to today’s requirements. Much of what we are talking about is really about the future.

MM: [On the administration side] there are administrators out there who have already adopted PowerShell or may want to. PowerShell is a fundamental, core part of the operating system in 2012, so much so that there are well over 2,000 [PowerShell] cmdlets now, from building a cluster to setting a VLAN ID on a virtual NIC to configuring settings or features within the operating system. A considerable amount of investment has been made not only to expose those interfaces for automation, but also in the new scripting environment that is built in, which helps administrators learn PowerShell with things like IntelliSense for rapid development of PowerShell scripts. We are really enabling administrators who are new to PowerShell, and existing ones, to streamline how they manage their Windows Server-driven IT.
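The two examples McSpirit cites map to short PowerShell commands. Here is a rough sketch; the cluster, host, and VM names and the address are hypothetical, and New-Cluster comes from the FailoverClusters module rather than the Hyper-V module:

```powershell
# Build a two-node failover cluster (FailoverClusters module; names are placeholders)
New-Cluster -Name "HV-Cluster01" -Node "HV-Host01","HV-Host02" `
            -StaticAddress 192.168.1.50

# Put a VM's virtual NIC onto VLAN 20 (Hyper-V module)
Set-VMNetworkAdapterVlan -VMName "Web01" -Access -VlanId 20
```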

EY: Yeah, and that’s where we see good interest. It continues the great tradition of people developing and sharing code, scripts, and shortcuts, things others hadn’t thought of, or solving problems. Instead of sitting there asking, “Does this feature do this? Can you go do this?” you can say, “Look, let’s just run these three commands. They’re reusable. Let’s all share them.” People really build on this. That fostering of sharing among admins has been integral to how people work. We are keeping that tradition alive by making sure that PowerShell can do pretty much anything you need to do within Windows Server.

MM: For our software partners, our ISVs, who have been developing the ecosystem around Windows Server for a number of years, PowerShell is a great universal interface that they can now plug into.

Petri: In addition to all the features we’ve discussed so far, are there any other big new features in Windows Server 2012 you’d like to let our readers know about?

MM: I would say, as I’ve mentioned before, live migration has been in Hyper-V since 2008 R2. It has been streamlined from a speed and efficiency perspective… in [Windows Server] 2008 R2 I could move one VM between two hosts at a time; now it’s an unlimited number through the GUI. What you specify in the GUI will depend on your network environment and your workloads, but there is no restriction we impose on how many VMs you can simultaneously migrate. Live migration does still require some form of shared storage, whether that’s a traditional SAN or, now, SMB storage. Organizations can use a simple file share, using SMB 3.0, new in 2012, to place virtual machines. It allows environments that want to simplify, and potentially move away from LUNs and configuring them on a SAN, to use simple file shares, where an administrator can just have a “\\HyperV” share for dropping VMs.
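Here is a minimal sketch of what that looks like in PowerShell, assuming a hypothetical file server named FS01 and Hyper-V host named HV-Host01 (all names, paths, and accounts are illustrative only):

```powershell
# On the file server: create an SMB 3.0 share for VM storage
# (matching NTFS permissions for the Hyper-V host computer accounts are also required)
New-SmbShare -Name "HyperV" -Path "D:\Shares\HyperV" `
             -FullAccess "CONTOSO\HV-Host01$", "CONTOSO\Domain Admins"

# On the Hyper-V host: place a new VM and its disk directly on the share
New-VM -Name "App01" -MemoryStartupBytes 4GB -Path "\\FS01\HyperV" `
       -NewVHDPath "\\FS01\HyperV\App01.vhdx" -NewVHDSizeBytes 60GB

# Raise the number of live migrations this host will perform at once
Set-VMHost -MaximumVirtualMachineMigrations 8
```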

That’s suitable for a cluster, which is an enabler of live migration. But we’ve introduced storage live migration as well. It’s the ability to move virtual disks between different locations, for instance, from a SATA-based LUN on a SAN to an SSD-based LUN. We’ll do that live, with no downtime. Take that and combine it with live migration and you’ve got something called shared-nothing live migration…
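A storage live migration reduces to a single cmdlet; here is a hedged example with a hypothetical VM name and destination path:

```powershell
# Move a running VM's virtual disks and configuration to a new location
# with no downtime (VM name and destination path are hypothetical)
Move-VMStorage -VMName "App01" -DestinationStoragePath "F:\SSD-VMs\App01"
```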

Petri: And “shared-nothing” is also the reason [Microsoft Hyper-V program manager] Jeff Woolsey carries around that Ethernet cable prop for his presentations?

MM: Exactly. That’s why we all carry network cables around. [Laughter] You can even do it wirelessly, in a live demo environment, but in an enterprise environment it would all be wired. What that enables an organization to do is have complete agility. If I’ve got a stand-alone server here, and I’m running some local workloads and want to do some testing on it, I can do that while it’s running. I can then move it into a production cluster, from standalone on local storage into a cluster that may have SMB or SAN storage. From there I could potentially move it to another stand-alone host or a completely different cluster. The idea is, if you can see the target and the source, you can move the virtual machine between the two. So it’s flexible.
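A shared-nothing live migration is driven by Move-VM; here is a sketch with hypothetical VM and host names and paths, assuming live migration has already been enabled on both hosts (for example, with Enable-VMMigration):

```powershell
# Shared-nothing live migration: move a running VM, including its storage,
# to another host with no shared storage between them (names and paths are
# hypothetical; live migration must be enabled on both hosts first)
Move-VM -Name "App01" -DestinationHost "HV-Host02" `
        -IncludeStorage -DestinationStoragePath "D:\VMs\App01"
```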

Read the second part of our two-part interview with Edwin Yuen and Matt McSpirit.