What You Need to Know Before Migrating VMs to Microsoft Azure
The momentum for deploying services in Microsoft Azure is starting to build. Many of the scenarios involve moving existing virtual machines (VMs) into Azure. To executives and salespeople, that’s a trivial matter for the IT folks to take care of. But for those IT folks, it is a challenge that must overcome the opposing forces of bandwidth limitations, acceptable downtime, and migration project deadlines.
In this first article in a two-part series on migrating VMs to Microsoft Azure, I’ll expand on the challenges that you face when deciding to move your VMs to Azure. In part two, I will discuss different methods you can consider for migrating your VMs into Azure.
The Challenges of Migrating Virtual Machines to Microsoft Azure
Imagine this scenario: It’s the start of a new work week and you walk into the office, preparing yourself for the usual onslaught of password resets, PC boot failures, and general annoyances that distract you from doing the more interesting engineering or project work. As you walk to your desk, your manager calls you into their office to inform you that the CIO has just committed the company to using Microsoft Azure, and you need to move lots of virtual machines off of your on-premises environment and up into Azure by some impossible date.
Does that sound implausible? I don’t think it is. In my experience, most executives and salespeople do not understand that you cannot simply lift and shift the masses of data that make up VMs and drop them into Azure as if you had access to a wormhole or a Star Trek transporter.
Unfortunately, there are four major and competing issues at play:
- Bandwidth limitations
- Incompatible platforms
- Acceptable downtime
- Project deadlines
A business purchases or leases Internet connectivity based on its normal consumption requirements. This includes things like e-mail, browsing, consuming Software-as-a-Service products from public clouds, and maybe the occasional moderate data transfer. Few people, if any, ever size their Internet connection for the possibility of having to move most or all of their computer data to the cloud. A small business will struggle to upload a few terabytes, and it’s equally a struggle for a large enterprise to move petabytes of data.
Imagine that you do decide to attempt that big upload to the cloud. What will happen to your bandwidth if the upload is left unmanaged? Every other Internet service will be crushed by the weight of the upload, and your business could cease to operate. That would be bad. You can limit the damage in a couple of ways: by performing serialized uploads or by implementing Quality of Service (QoS).
This should limit the bandwidth required to perform an upload, but it is going to take longer to migrate individual VMs to Azure, which will have consequences.
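To make the bandwidth problem concrete, here is a rough back-of-the-envelope calculation. The function name and the 80 percent efficiency factor are my own illustrative assumptions, not anything Azure provides; plug in your actual uplink speed and whatever cap your QoS policy allows:

```python
# Rough upload-time estimate for migrating a VM's virtual hard disk.
# Assumes a sustained fraction of the link rate (QoS cap, protocol
# overhead, and contention mean you rarely see the full line rate).
def upload_hours(vhd_size_gb, uplink_mbps, efficiency=0.8):
    """Hours to upload one VHD at a sustained fraction of the link rate."""
    bits = vhd_size_gb * 8 * 1000**3            # decimal GB -> bits
    usable_bps = uplink_mbps * 1000**2 * efficiency
    return bits / usable_bps / 3600

# Example: a 127 GB OS disk over a 50 Mbps uplink, throttled to 80%
print(round(upload_hours(127, 50), 1))          # roughly 7.1 hours
```

Multiply that by the number of VMs in scope, remember that serialized uploads mean one disk at a time, and the project-deadline problem discussed below becomes very real.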
Microsoft Azure is based on Windows Server 2012 Hyper-V. That means that moving an on-premises Hyper-V VM to Azure won’t require some monumental conversion. However, a lot of computer rooms and data centers are filled with legacy installations of VMware virtualization, and many have previously made the leap to Amazon Web Services (AWS). How do these customers move to Azure? A conversion is required, which often demands both tooling and downtime.
Just how are you going to upload your VMs to Azure? Are you going to shut down services, prepare the virtual machines for Azure, power them down on your on-premises infrastructure, upload the virtual hard disk files to Azure, and then create new virtual machines by re-using those VHD files and start the virtual machines up again? If so, how much downtime do you think is acceptable for each production VM that allegedly enables business operations?
For the consumer of the services that IT provides, downtime is the enemy. And for IT pros, downtime is stress that we do not want or need.
If you work for a large enterprise, there is a good chance that the executives have added Azure credit through the Enterprise Agreement. That is a ticking clock, because once it is activated, you have one year to consume that credit or lose it. Those who acquire Azure credit via Open agreements will hopefully be more cautious in their spending, so the economic pressure isn’t the same. But no matter how you acquire Azure, it is a safe bet that the decision maker will want to see the results of their plans as quickly as possible. Some arbitrary date will have been pulled from the nether regions of their body, and you need to finish migrating the agreed workloads to Azure by that point in time.
Hmm, that’ll be fun! You’ve got limited bandwidth to work with, you’ve got to move VMs maybe one at a time, you’re worried about downtime to services, and you’ve got the boss on your back stressing you about finishing the project as quickly as possible.
Is an Azure migration the right thing to do? It sounds like it might not be… until you consider the options for migrating workloads to Azure. There are different solutions for different scenarios, mainly because things are in flux at the moment, but they will settle down. We can limit downtime, probably the most important issue for a level-headed decision maker, and we can start making the most of an Azure investment as quickly as possible. But a starter of reality must be served before the boss can enjoy the main course of migration and the dessert of results: you have bandwidth limitations (though signing a contract for ExpressRoute would help mid-to-large companies that can justify it!), and you cannot deliver instant results.
Common Migration Gotchas
You cannot just pick up a VM and drop it into Azure. Azure does things differently than your typical Hyper-V host:
- Temporary drive: Azure reserves the D: drive on Windows and /mnt or /mnt/resource on Linux as a temporary drive. D: is often used by Hyper-V admins for data drives, so you will need to account for that before uploading a VM.
- KMS: Windows Server licenses, even those with Software Assurance, have no mobility rights. Therefore you will be switching the guest OS activation to KMS and configuring it to activate from Microsoft’s KMS service, and you will be paying for the license as a part of the cost of running the VM (as normal for Azure Windows virtual machines).
- Guest OS configurations: EFS must be disabled and Remote Desktop must be configured to be more resilient.
- Disks: The C: drive must be 127 GB or less. Different specs of virtual machines allow between 1 and 16 data disks, attached to the SCSI controller, and those data disks must each be 1 TB or less.
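On the KMS point above: once a migrated VM is running in Azure, you can point the guest at Microsoft’s KMS service from an elevated command prompt. The endpoint below is the documented Azure KMS host; treat the exact invocation as a sketch to verify against current guidance for your OS version:

```shell
REM Point the guest OS at Azure's KMS service, then attempt activation.
cscript //B %windir%\system32\slmgr.vbs /skms kms.core.windows.net:1688
cscript //B %windir%\system32\slmgr.vbs /ato
```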
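The disk limits above can be turned into a quick pre-flight check before you start any uploads. This is a minimal sketch under the limits stated here (127 GB OS disk, roughly 1 TB per data disk, a per-size cap on data disk count); the function and the 1023 GB constant are my own illustration, not an Azure tool:

```python
# Pre-flight check of a VM's disk layout against Azure's limits.
MAX_OS_DISK_GB = 127      # C: drive must be 127 GB or less
MAX_DATA_DISK_GB = 1023   # each data disk must be ~1 TB or less

def check_vm_disks(os_disk_gb, data_disks_gb, max_data_disks=16):
    """Return a list of problems; an empty list means the layout fits."""
    problems = []
    if os_disk_gb > MAX_OS_DISK_GB:
        problems.append(f"OS disk is {os_disk_gb} GB; limit is {MAX_OS_DISK_GB} GB")
    if len(data_disks_gb) > max_data_disks:
        problems.append(f"{len(data_disks_gb)} data disks; this VM size allows {max_data_disks}")
    for i, size_gb in enumerate(data_disks_gb):
        if size_gb > MAX_DATA_DISK_GB:
            problems.append(f"data disk {i} is {size_gb} GB; limit is {MAX_DATA_DISK_GB} GB")
    return problems
```

Run it against each candidate VM and you will know, before a single byte is uploaded, which machines need their disks restructured first. Remember that the data disk count varies by VM size, so pass the cap for the size you intend to use.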
And let’s not forget those VMs that aren’t running on Hyper-V! We need a way to be able to migrate them to Azure and ideally minimize the downtime.
Migrating to Azure won’t be a trivial task for anyone. You will need to do some work to ensure that your VM will run efficiently on Azure. However, when Microsoft completes its upgrade of the hypervisor (currently Windows Server 2012 Hyper-V, not even R2), we might see more per-machine and per-disk scalability and flexibility.
There are tools available now to aid you in your migration, but they’re not perfect. However, the ideal tool is on the way. You can learn about your options in my next article.