Coming Soon: GET:IT Endpoint Management 1-Day Conference on September 28th at 9:30 AM ET

What's New with Azure Infrastructure – August 2021 Edition

I think you’ll find that there are quite a few announcements this month. The summer quiet period is over, and we’re into a whole new development/release semester at Microsoft, not to mention that the countdown to the usual peak release season for Microsoft Ignite has started – most releases announced at Ignite actually happen well before the event. You should also note the announced retirements of several Microsoft Azure services.

VM Retirements

Microsoft made a lot of announcements about Azure features/SKUs being retired in the future; that news includes a bunch of virtual machine SKUs:

Many moons ago, the Basic A-Series and the Standard A-Series (what you might call A_v1) were the only options for Azure virtual machines. Microsoft eventually expanded the roster, adding a newer A_v2 that was slightly more affordable, the various D-Series (more performance), and the Bs-Series (much more affordable for “bursty” workloads).

Early customers commonly used the A-Series. And I’d safely say that many accidentally deployed “A_v1” sizes instead of the A_v2. Those machines are probably still sitting there today, unchanged, even though there are better options for performance, SLA, and cost optimization. Those customers need to act – as do customers using any of the other affected series (above) that will be turned off when their retirement date comes.
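A quick way to spot machines on the retiring series is to check each VM size name against the v1 A-Series naming patterns. The sketch below is a hypothetical illustration – the patterns are assumptions based on Azure’s size-naming convention (e.g. “Basic_A1”, “Standard_A4”; the unaffected v2 sizes end in “_v2”), and the inventory list is invented:

```python
# Hypothetical sketch: flag VM sizes from the retiring Basic/Standard A-series (v1).
# The name patterns are assumptions based on Azure's size-naming convention;
# v2 sizes end in "_v2" and are not affected by this retirement.

def is_retiring_a_series(size: str) -> bool:
    """Return True if a VM size name looks like a v1 A-series SKU."""
    if size.startswith("Basic_A"):
        return True
    if size.startswith("Standard_A") and not size.endswith("_v2"):
        return True
    return False

# Example inventory (names invented for illustration):
inventory = ["Standard_A2", "Standard_A2_v2", "Basic_A1", "Standard_D2s_v3"]
to_migrate = [s for s in inventory if is_retiring_a_series(s)]
print(to_migrate)  # ['Standard_A2', 'Basic_A1']
```

In practice you would feed this a real inventory, e.g. the size column from your subscription’s VM list, rather than a hard-coded list.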


If you are changing sizes, the correct thing to do is to use a structured approach to optimize your costs and performance. Sometimes, Azure Advisor is going to help – but I’ve rarely found that the advice was there or even relevant. You can get some external tooling, but I’ve been told that many of those tools cost more than the money that can be saved – a pointless purchase, one might say! You already have a lot of what you need to change your SKUs in the best ways:

  • Knowledge of the workloads
  • Access to Azure Monitor/Log Analytics

If you have a nice baseline of performance metrics (at least a month during a typical period of usage), then you can use the low/peak/average data to identify size changes. You might see underutilized machines that can be reduced in size, overutilized machines that need more resources, or infrequently used machines that might be suited to the Bs-Series. Armed with some workload knowledge, you can make users happier with faster access to services, or the shareholders happier with smaller Azure bills.
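The triage above can be sketched as a simple classification. This is a minimal illustration only – the thresholds (20%/60%/80%) are assumptions, not recommendations, and in practice the inputs would come from a month of Azure Monitor/Log Analytics CPU data rather than hard-coded numbers:

```python
# A minimal sketch of the right-sizing triage described above.
# The thresholds are assumed examples; tune them to your own workloads.

def classify_vm(avg_cpu: float, peak_cpu: float) -> str:
    """Suggest an action from a month of CPU utilization percentages."""
    if peak_cpu < 20:
        return "downsize"            # underutilized: shrink the SKU
    if avg_cpu < 20 and peak_cpu >= 60:
        return "consider Bs-Series"  # idle most of the time, occasional bursts
    if avg_cpu > 80:
        return "upsize"              # sustained pressure: more resources
    return "leave as-is"

print(classify_vm(avg_cpu=8, peak_cpu=75))  # consider Bs-Series
```

The point is not the specific numbers but the shape of the decision: low peak means shrink, low average with high peaks means “bursty”, and a high average means the machine genuinely needs more.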

But your optimizations should not stop there! Armed with some “licensing” knowledge you can save a lot more money over several years – it depends on how you acquire licenses from Microsoft. Reserved instances can discount the compute cost of your virtual machines (and other types of resources) by quite a bit. And “on-premises” Windows Server licenses with Software Assurance can be assigned to Azure virtual machines (Hybrid Use Benefit/HUB) to remove the Windows cost from Windows Server virtual machines.
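The way the two savings stack can be shown with some back-of-the-envelope arithmetic. Every number below is invented for illustration – these are not real Azure prices or discount rates – but the structure is the point: a pay-as-you-go Windows VM price splits into a compute part and a Windows licence part; a reserved instance discounts the compute part, and HUB removes the Windows part entirely:

```python
# Illustrative arithmetic only -- all rates below are invented, not Azure pricing.
payg_monthly = 200.0   # hypothetical pay-as-you-go price (compute + Windows)
windows_share = 60.0   # hypothetical Windows-licence portion of that price
ri_discount = 0.40     # hypothetical reserved-instance discount on compute

compute = payg_monthly - windows_share           # HUB removes the Windows part
with_ri_and_hub = compute * (1 - ri_discount)    # RI discounts what remains
print(with_ri_and_hub)  # 84.0 -- well under half the pay-as-you-go price
```

Multiply that kind of per-machine difference across a fleet and several years, and the “licensing knowledge” savings mentioned above become very real.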

All that might sound like a lot of work, but it can do something that everyone wants: reduce the cost of The Cloud.

Azure Backup Archive Tier

One of the first services I put a lot of time into in Azure was Azure Backup; that was because it was a very affordable, easy-to-use service that any kind of customer or partner could use on-prem or in the cloud. A recent announcement, General availability: Azure Backup now supports Archive Tier for backups of SQL Server in Azure VMs, reminded me that Azure Backup has recently added support for storing backup data in the Archive Tier – and that I completely forgot to give them some credit!

Way back when, Azure Backup was very simple. It supported very little, and retention periods were measured in days. We bashed them over that again and again. Eventually, we were given up to 99 years of retention, depending on the source. Imagine retaining 99 years of VM backups! How much storage would that consume? Even with the ability to age your backups (keeping selected daily backups as weekly backups, weeklies as monthlies, monthlies as yearlies), you could be talking about a lot of storage. But who would ever do that? I was talking to a local government just this week that needs to retain some VM backups for 99 years.
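To put a rough number on that, consider what an aged (“grandfather-father-son”) policy keeps around over 99 years. The policy and sizes below are assumed examples, not a specific Azure Backup plan:

```python
# Rough sketch: recovery points kept by an assumed aging policy over 99 years.
# 14 dailies, 8 weeklies, 12 monthlies, and one yearly point per year for 99 years.
daily, weekly, monthly, yearly = 14, 8, 12, 99
recovery_points = daily + weekly + monthly + yearly
print(recovery_points)  # 133

avg_point_gb = 100  # assumed average size of one recovery point
print(recovery_points * avg_point_gb)  # 13300 GB per VM, before any compression
```

Even with aggressive aging, a single VM can accumulate on the order of 100+ recovery points – which is exactly why cheap long-term storage matters.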

That’s why customers, including me, asked for support for an archive tier in Azure Backup – somewhere to store data for long-term retention that normally doesn’t need to be touched and can be on ultra-affordable physical storage. I had long chats with Azure Backup program managers, and for one reason or another, this was genuinely a difficult problem to solve. I have told many people two things about the Azure Backup team:

  • They listen to feedback – even if they are not able to act on it.
  • They are very smart people.

And here we are: Azure Backup now has an archive tier that supports Azure virtual machines and SQL Server in Azure virtual machines.

Other Announcements from Microsoft

Azure Storage

Networking

Azure Virtual Machines

Azure Virtual Desktop

App Services

Azure Backup & Site Recovery

Management

Security

Azure Automation

Miscellaneous

And Now for Something Different

I’m not what I would call an expert on Windows Admin Center (WAC), but I’ve used it – actually in an Azure virtual machine to manage the guest OS of Azure virtual machines. The free product is clearly getting love from the right people in Microsoft – it’s gradually getting bigger and better. It’s still not as good as logging into a machine (yes, with a GUI, folks), but it gives you a central admin point that handles a lot of the necessary daily tasks.

A recent update brought the best feature yet – automatic upgrades. It’s easy to take your eye off the ball, and quickly you can find yourself 2-3 versions behind on a product. Automatic upgrades save me time and ensure that I always have the sharpest tool to work with.

A feature I think is a total waste is the “Azure Portal integration”. The documentation (the weakest element of WAC) implies that I have to install WAC on any machine I want to manage using this feature – really!?! I talked with a colleague about this recently. He suggested that we roll out WAC using Azure AD Application Proxy instead – we ended up bypassing the Azure Portal (for a better admin experience), re-using the single WAC deployment we already had, and still protecting the workload with Azure AD authorization and Multi-Factor Authentication. The result is that an admin or operator can browse to a URI for WAC outside of the Azure Portal, sign in using their Azure AD credentials (with MFA) if they haven’t already done so in their browser, and start using WAC securely from anywhere without a VPN connection.


Aidan Finn, Microsoft Most Valuable Professional (MVP), has been working in IT since 1996. He has worked as a consultant and administrator for the likes of Innofactor Norway, Amdahl DMR, Fujitsu, Barclays, and Hypo Real Estate Bank International, where he dealt with large and complex IT infrastructures, and MicroWarehouse Ltd., where he worked with Microsoft partners in the small/medium business space.