Last Update: Sep 04, 2024 | Published: Jan 12, 2016
Azure Backup is finally starting to generate interest among small and medium enterprises (SMEs), driven largely by the steady stream of improvements that Microsoft's Azure developers have been making to the service. In this article, I'll outline recent performance improvements in Azure Backup, several of which arrived with a recent update to the Azure Backup agent.
The biggest improvement I've witnessed so far is the launch of Microsoft Azure Backup Server (MABS) as part of Project Venus. It amazes me how few people are aware that Microsoft is giving away an enterprise-class backup product for free to customers of Azure Backup. This disk-to-disk-to-cloud solution, installed on an on-premises machine, will protect files and folders as well as Microsoft workloads such as Hyper-V virtual machines, SQL Server, Exchange, and SharePoint.
Cheap local disk storage is used for short-term retention, while Azure backup vaults are used for long-term retention, and all data is compressed and encrypted before it's sent to Microsoft. You have two options for a restore: you can restore from the local short-term storage, or you can reach further back in time and download from the Azure backup vault, which incurs no outbound data transfer charges with Azure Backup.
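To make that two-tier model concrete, here is a minimal Python sketch of the restore decision described above. It is purely illustrative, not MABS code, and the 14-day short-term retention window is an assumption I've picked for the example.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical illustration (not the MABS implementation) of the two-tier
# restore decision: recent recovery points come from local disk, older ones
# are downloaded from the Azure backup vault.
LOCAL_RETENTION = timedelta(days=14)  # assumed short-term retention window

def pick_restore_source(recovery_point_time, now=None):
    """Return which tier a restore for the given recovery point would use."""
    now = now or datetime.now(timezone.utc)
    if now - recovery_point_time <= LOCAL_RETENTION:
        return "local disk (short-term retention)"
    # Older recovery points live only in the vault; Azure Backup does not
    # charge for the outbound data transfer of a restore.
    return "Azure backup vault (long-term retention)"

now = datetime.now(timezone.utc)
print(pick_restore_source(now - timedelta(days=3), now))    # local disk
print(pick_restore_source(now - timedelta(days=120), now))  # Azure backup vault
```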
If you're in the market for a backup solution that protects Microsoft-centric workloads and provides automated off-site storage in the cloud, then you really need to look at Microsoft Azure Backup Server.
The time it takes to perform a backup is determined by several elements. Obviously, for disk-to-cloud backup, the time it takes to upload data to Azure is a big factor, but another is the time it takes to determine which files have changed since the last backup.
A recent update to Azure Backup introduced the use of the USN change journal. According to MSDN:
… the NTFS file system maintains an update sequence number (USN) change journal. When any change is made to a file or directory in a volume, the USN change journal for that volume is updated with a description of the change and the name of the file or directory.
Azure Backup can support up to 54 TB on a single file server volume. Adopting the USN change journal to track file changes has reduced the time it takes to protect volumes with as many as 2 million files, because the agent no longer has to scan the entire volume to work out what changed. Microsoft says that everyone's mileage will vary, but we should all see improvements thanks to the USN journal.
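Reading the USN journal itself requires Windows APIs, but the reason it helps is easy to show in a language-agnostic way. The Python sketch below is a conceptual comparison, not the agent's code: a full scan must look at every file, while journal-based change tracking only reads the records written since the last backup. The file counts and USN values are made up for the example.

```python
# Conceptual sketch (not the Azure Backup agent's code) of why journal-based
# change tracking beats a full scan: a full scan must examine every file's
# timestamp, while a journal replays only the change records appended since
# the last backup.

def full_scan(files, last_backup_time):
    """O(total files): check every file's modification time."""
    return {path for path, mtime in files.items() if mtime > last_backup_time}

def journal_scan(journal, last_usn):
    """O(changes since last backup): read only the new journal records."""
    return {record["path"] for record in journal if record["usn"] > last_usn}

# Toy, scaled-down data: 200,000 files, of which only 1,500 changed.
files = {f"file{i}.dat": 100 for i in range(200_000)}
journal = []
for i in range(1_500):
    files[f"file{i}.dat"] = 200                               # simulate a change
    journal.append({"usn": 10_000 + i, "path": f"file{i}.dat"})

# Both approaches find the same changed files; the journal just does far less work.
assert full_scan(files, last_backup_time=150) == journal_scan(journal, last_usn=9_999)
```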
A virtual hard disk (VHD) is used as a container to store metadata about the files being backed up. This cache previously required space equal to 15 percent of the volume being protected. That's not much on smaller volumes, but it can prove expensive on larger ones.
Improvements to the Azure Backup agent have led to Microsoft seeing a 3x improvement in space utilization by the cache; they are observing large volumes requiring just 5 percent of the volume's capacity for the VHD.
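Some quick back-of-the-envelope arithmetic shows what the drop from 15 percent to roughly 5 percent means in practice. The volume sizes below are examples I've chosen, with 54 TB matching the maximum supported volume mentioned earlier.

```python
# Back-of-the-envelope numbers for the metadata cache VHD described above.
def cache_size_tb(volume_tb, fraction):
    return volume_tb * fraction

for volume_tb in (1, 10, 54):
    old = cache_size_tb(volume_tb, 0.15)   # previous requirement: 15% of the volume
    new = cache_size_tb(volume_tb, 0.05)   # after the agent update: roughly 5%
    print(f"{volume_tb:>2} TB volume: cache shrinks from {old:.2f} TB to {new:.2f} TB")
```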
I've been working with Azure Backup since early 2014, and I've seen one consistent item of feedback: more retention, please. The last improvement to retention allowed you to keep up to 366 recovery points for up to 99 years. Think of that as V daily backups, X weekly backups, Y monthly backups, and Z yearly backups, all tucked away safely in a locally redundant (LRS) or geo-redundant (GRS) blob-based storage account in Azure.
I’ve yet to encounter a real business opportunity where a customer wants more than one month of retention in the cloud, but I know that some organizations have a genuine need for much more. The recent update to Azure Backup now allows you to keep up to 9999 recovery points for up to 99 years. That should suffice for almost everyone!
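If you want a feel for how a daily/weekly/monthly/yearly scheme stacks up against the new ceiling, the short sketch below adds up a sample policy. The policy numbers are invented for illustration; only the 9,999 recovery-point limit comes from the article.

```python
# Rough check of how a daily/weekly/monthly/yearly retention scheme adds up
# against the new ceiling of 9,999 recovery points.
MAX_RECOVERY_POINTS = 9_999

def total_points(daily, weekly, monthly, yearly):
    """V daily + X weekly + Y monthly + Z yearly recovery points."""
    return daily + weekly + monthly + yearly

# Example policy: 30 dailies, 104 weeklies (2 years), 60 monthlies (5 years),
# and 10 yearlies -- comfortably under the limit.
policy = total_points(daily=30, weekly=104, monthly=60, yearly=10)
print(policy, "recovery points;", "OK" if policy <= MAX_RECOVERY_POINTS else "over the limit")
```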
There were also other, less headline-worthy updates that will make a difference.
You can download and deploy the updated Azure Backup agent to avail of these general performance improvements to Microsoft’s cloud backup service.