Interview: Microsoft's Edwin Yuen, Matt McSpirit Discuss System Center 2012
In this final part of our two-part interview with Microsoft’s Edwin Yuen and Matt McSpirit, we’ll touch on how Microsoft System Center is the linchpin of Microsoft’s private cloud strategy and how System Center has evolved over the years to integrate with platforms, tools, and technologies from other vendors, from Linux to VMware. [Download the free Windows Server 2012 trial edition]
Read part one, in which Yuen and McSpirit discuss Windows Server 2012 and Hyper-V 3.0.
Question: You had mentioned before our interview that your team is tasked with making sure that Microsoft products work well together. One of the things that is becoming clear is that the new Microsoft System Center 2012, when combined with Windows Server 2012, really does open up a whole lot of extra potential and capability on the private cloud front. If you were talking to an IT professional who has primarily used Hyper-V for server virtualization, what are some of the things you would tell him about combining System Center with Hyper-V 3.0 that would be the most effective and impactful for him?
Edwin Yuen: What System Center 2012 really gives that administrator is an additional level of capability and abstraction, so to speak. So what you have is the great power of virtualization and the infrastructure you’re going to be leveraging, networking, storage, and virtualization… you get the management you need there, the multiple server [management] that you have in [System Center] Virtual Machine Manager. You can then leverage all these other tools within System Center, and it becomes seamless, because you’re managing [both physical and virtual] assets.
You can do great monitoring and management of both the physical infrastructure and the application infrastructure, and of application performance itself, within things like Operations Manager. You have Configuration Manager for distribution and patching. You have things like Orchestrator, so now you have this great orchestration engine. I always bring up orchestration because it's not just automation. It's not a 1-2-3 macro, where you have ten steps and "I just want to do the ten steps with one button press." It's orchestration, and what that means is that there are decisions, and there's a flow to it. You can build a runbook and say, "In this situation I will do A, and B, and C, and D, and at D there might be a split point, or there might be a gate."
For example, you can use System Center and Operations Manager to detect a fault in an application, and when that fault occurs, trigger an action. The action could be, "Open a trouble ticket." The next step could be, "Wait for the trouble ticket to be accepted before proceeding," or it could proceed automatically. It could take inputs and outputs, and then go and do things like migrate a machine, add memory, add storage, reconfigure, move the SAN around, and then close trouble tickets, update CMDBs, or bring additional human elements or other elements into it. It's about using all these capabilities we have, and it goes beyond what a lot of administrators think of as the next step, which is, "I'm doing this by hand, or I'm using virtualization; I have the capability, so I'll just have it do it for me." Instead the questions become, "What is it going to do? How can I build it? How can I have it do much of what I would be there to do?"
It’s like I’m taking the smarts, and the brains, and the runbook that I would traditionally have used, and I’m automating the runbook rather than just automating steps. That’s a lot of what System Center 2012 can do for you. It really allows you to manage the applications, and then the infrastructure that’s underneath.
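The distinction Yuen draws between automating steps and automating the runbook can be sketched in plain code. This is a hypothetical illustration only: none of the function names below are real System Center or Orchestrator APIs; they stand in for the kinds of activities a runbook would chain together, including the branch points and the final cleanup.

```python
# Toy sketch of "automate the runbook, not just the steps."
# Every function is a hypothetical stand-in for an Orchestrator
# activity; none of these names are real System Center APIs.

def open_trouble_ticket(alert):
    print(f"ticket opened: {alert['kind']} on {alert['vm']}")
    return {"id": 1, "alert": alert, "open": True}

def add_memory(vm, gigabytes):
    print(f"added {gigabytes} GB to {vm}")

def live_migrate(vm, host):
    print(f"migrated {vm} to {host}")

def close_trouble_ticket(ticket):
    ticket["open"] = False
    print(f"ticket {ticket['id']} closed")

def handle_fault(alert):
    """One runbook: record the incident, branch on the situation,
    remediate, then clean up the records."""
    ticket = open_trouble_ticket(alert)
    if alert["kind"] == "memory_pressure":
        add_memory(alert["vm"], gigabytes=4)        # remediate in place
    elif alert["kind"] == "host_degraded":
        live_migrate(alert["vm"], "healthy-host")   # move the workload
    # a real runbook could add a gate here: wait for a human to accept
    close_trouble_ticket(ticket)
    return ticket

handle_fault({"kind": "memory_pressure", "vm": "app-vm-01"})
```

The point is the `if`/`elif` branch: a macro replays fixed steps, while a runbook encodes the decision flow an operator would otherwise walk through by hand.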
Matt McSpirit: I think System Center is an incredibly powerful aggregator of all these nuts and bolts that IT professionals have spent time wiring together and configuring on an individual level, and it gives them a level of visibility that lets them see much more than just, "Okay, is Hyper-V on? What's that virtual machine doing?" Previous versions of System Center consisted of a set of individually licensed components that could be integrated, but it was still work for the IT pro to integrate them together as part of a wider solution.
In System Center 2012, not only has licensing been simplified, so acquiring System Center is a single step for an organization, but when the IT pro implements Service Manager, it now automatically talks with Operations Manager, Configuration Manager, and Virtual Machine Manager. All of these pieces talk together, and that enables some of the scenarios Edwin alluded to, around things like self-service.
For example, take requesting cloud resources: I would like a private cloud to deploy some virtual servers into… [System Center] gives you that self-service, it gives you ongoing monitoring of those applications and workloads, it gives you that control, and it puts the right interfaces in front of the relevant people.
If I'm the application owner, for instance, I would go to the self-service portal in Service Manager and say, "I'd like some compute and resource," in a simple, menu-driven, restaurant-menu style: "I'll choose this, this, and this. I just want three large virtual machines to run in the private cloud. Here's my IO code, or my billing code," or whatever it may be. Hit submit, it gets approved, and the automation kicks in and triggers creation of a new private cloud called "Mark's New Campaign," or whatever it may be.
As the application owner who requested that resource, I now have a slightly different hat on, because I'm interrogating the app through a slightly different portal. I can see my VMs and my services, and so on, and we're providing that. That's just a simple example of a couple of the underlying System Center tools working together in harmony to deliver an answer to that particular problem. For an IT pro going forward, System Center is so powerful and integrated that if they can think of a scenario that would make their life easier, something they're always getting asked about, they can automate it.
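The request-approve-provision flow McSpirit walks through can be sketched as three small stages. To be clear, the classes and functions below are invented for illustration; they are not Service Manager or Virtual Machine Manager objects, just the shape of the workflow: a submitted request passes an approval gate, and only then does automation create the cloud and its VMs.

```python
# Hypothetical sketch of the self-service flow: request -> approval ->
# automated provisioning. These types are illustrative stand-ins, not
# real System Center objects.
from dataclasses import dataclass

@dataclass
class CloudRequest:
    owner: str
    vm_count: int
    size: str
    billing_code: str
    approved: bool = False

def approve(request):
    # In practice this is a human sign-off or a policy rule in the
    # service catalog, not an automatic rubber stamp.
    request.approved = True
    return request

def provision(request, cloud_name):
    """Automation kicks in only after the approval gate is passed."""
    if not request.approved:
        raise RuntimeError("request not yet approved")
    return {"cloud": cloud_name,
            "vms": [f"{request.owner}-vm-{i}" for i in range(request.vm_count)]}

req = CloudRequest(owner="mark", vm_count=3, size="large", billing_code="MKT-01")
cloud = provision(approve(req), "Mark's New Campaign")
```

The `approved` flag is the gate from the earlier runbook discussion: provisioning refuses to run until the request has cleared it.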
They can do that with System Center, and it allows them to abstract away the nuts and bolts of Hyper-V servers, ESXi servers, and XenServer hosts. Virtual Machine Manager can manage all three [primary hypervisor platforms]. So it abstracts them away, and for an IT pro managing a heterogeneous environment it's a real key enabler, helping them automate tasks at their discretion… it can help IT evolve from being a call center to being an enabler of the business, where they streamline how they deliver services to the rest of the business.
EY: It goes from managing the infrastructure to managing IT itself. You're going from just building VMs to understanding why you're building VMs. The example I give is live migration. Millions of live migrations have been done in the past, and if you think about it, all of them have been driven by a human being pressing a button, or by an understanding of how the host is doing. But now, if you integrate all these things together with System Center, business rules, events, and application needs can drive those live migrations, and drive them automatically. We're not just exposing a self-service portal where someone can build a VM, selecting this amount of memory, this image, and this template. He can now build a cloud, and not just build a cloud: he can build multiple VMs, an aggregate amount of compute. That aggregate compute is now within a logical construct, a logical construct that both the end user and the administrator understand. It's about managing more than just the resources.
MM: Just to add to that, I think a lot of IT pros are working in enterprises that have a lot of different technology investments, whether it's Microsoft, whether it's Linux workloads, different hardware and storage, whatever it may be. If they've already got investments in different management technologies from the likes of HP, CA, and BMC, built up over a number of years, and they've really struggled to get a holistic view and integrate those different tools, then System Center, and Orchestrator in particular, can really enable them to start gelling those mixed tools together.
For instance, say Operations Manager has registered an alert because it detected that a power supply has failed on a server, or that an application is having trouble, but I want to take that alert and register it as a ticket in a completely different help desk solution that isn't Microsoft-based, such as BMC Remedy. I'll pass that across with Orchestrator.
Orchestrator is almost the middle man: "I'll take this from Microsoft's side, and I'll pass it to," in this case, "BMC," or vice versa. The Orchestrator component is driven by its integration packs, which are ultimately its knowledge about those third-party tools, just as Operations Manager carries its knowledge in management packs.
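The "middle man" role is, at heart, translating one system's alert shape into another system's ticket shape. The sketch below shows that idea in miniature; the field names and severity mapping are invented for illustration, and real integration packs map far richer schemas than this.

```python
# Minimal sketch of bridging two management tools: take an alert in an
# Operations Manager-like shape and emit a ticket in a help-desk-like
# shape. All field names here are hypothetical.

def to_help_desk_ticket(scom_alert):
    """Translate a monitoring alert into a third-party ticket record."""
    severity_map = {"critical": 1, "warning": 2, "informational": 3}
    return {
        "summary": scom_alert["name"],
        "priority": severity_map.get(scom_alert["severity"], 3),
        "source": "SCOM",                      # where the alert came from
        "device": scom_alert["monitored_object"],
    }

alert = {"name": "Power supply failed",
         "severity": "critical",
         "monitored_object": "rack4-node07"}
ticket = to_help_desk_ticket(alert)
```

The translation table is the interesting part: it plays the role of an integration pack's knowledge, encoding how one product's vocabulary maps onto another's.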
It really helps IT pros who have been troubled by that mixed environment for a long time finally get that one view, even if they've got a mixed infrastructure of tools. Building heterogeneous private clouds with Microsoft is a core pillar of our story. It's about enabling them to integrate more effectively.
Question: I know that with Windows 7 and Windows Server 2008, from the marketing side, Microsoft employed a “better together” marketing strategy. It’s like if you get these two together, it’s almost like putting chocolate together with peanut butter… [laughter] …are you taking the same approach with System Center 2012 and Windows Server 2012 in terms of how you’re marketing the product?
EY: I think that they work really integrally together. We see System Center 2012 as a great enabler for the private cloud. It doesn’t require Windows Server 2012, but if you have Windows Server 2012, you can certainly do a lot more. It’s really one of those things where one and one equals three, so to speak. But System Center, as Matt has described, it works with Windows Server 2008 R2, it works with ESX, it works with XenServer, it’ll work with BMC, it’ll work with CA. It’s about bringing a management solution that leverages the existing hardware that a company has, but then adding this capability.
We've got all this great capability for cloud, and private cloud, and hybrid management, but we're firm believers that the best way to build up that environment is to use everything you already have. You're not going to throw out everything and buy a new block of servers and say, "That's going to be my private cloud."
You don't have to throw the baby out with the bathwater, so to speak. As I always tell people, I've got a six-year-old and a two-year-old. If they don't clean their room, I don't buy a new house; I get them to clean their room. The room is perfectly fine, and the stuff they have in it is great. We just want to put it in order. I'm not going to go out and buy a new house when cleaning the room solves my problem.

The efficiency you get from doing cloud, and really doing advanced virtualization, is partly the efficiency of using what you have and getting the most out of it, not necessarily buying new hardware and new software. Let's use the management software you have. Let's use the servers and the hardware we have, try to scale as large as we can, and get the best return on the investment. We obviously feel we provide IT pros and admins with the best software out there, with the capabilities and the functionality they need. But we also don't want that to get lost: we think we provide the best value for companies out there. They can take what they have, just go with it, and get the most use out of it.
Read part one of the Petri IT Knowledgebase interview with Microsoft’s Edwin Yuen and Matt McSpirit.