Everything You Need to Know About Azure Infrastructure – August 2019 Edition
The half-year point has passed, meaning that Azure (and Windows/Windows Server) is into a new planning & development cycle, and we’re on the run-up to Microsoft Ignite, the pinnacle Microsoft event for enterprise technology. Things are starting to warm up after a cool summer of releases. In August, we got dedicated hosts, news on VMware support in Azure, loads of storage news, and, most amazingly of all, support for assessing Hyper-V for migration to Azure was finally launched after being “announced” at the previous two annual Ignite events!
Azure Introduces Dedicated Hosts
I guess everyone at this point knows that you can deploy virtual machines in Microsoft Azure. Most of the time, people are directly using these virtual machines, running either Windows Server or Linux. Sometimes the virtual machines are hidden, and probably are more likely to run Linux. And sometimes there are no virtual machines at all – “serverless” is the latest trend.
Microsoft recently turned 180 degrees and released Azure dedicated hosts. A dedicated host is a physical server that you get exclusive access to, for running Azure virtual machines with either Windows or Linux. You don’t get access to the host operating system or its management – just exclusive use of its resources. Primarily, this is intended for those who are worried about compromises such as a breakout attack. With this, you get a choice of (control over) the underlying hardware: the host type (which determines the series & quantity of virtual machines it can host), the processor brand/features, and the number of cores.
Today, the choice of hardware is limited to hosts capable of running Ds_v3 and Es_v3 virtual machines, but Fs_v2 support is coming soon. You can use Azure Hybrid Benefit for Windows Server and SQL Server licensing to reduce the cost of the hosts – note that the price of a host covers the hardware plus the OS and any Microsoft software, such as SQL Server.
Those of you who are big enough to require dedicated hosts should bear in mind that they will affect how you engineer high availability – there is no SLA for virtual machines/services hosted on a single host. You can create a host group and specify how many fault domains it will span – each host is placed by round-robin into a fault domain in the host group. There is also the option of using availability zones; you can assign a host group to a specific zone and use multiple groups spanning up to 3 zones. And you can combine both high availability features for higher levels of resilience.
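To picture how that placement works, here is a minimal sketch of the round-robin assignment of hosts to fault domains described above – plain Python with hypothetical names, an illustrative model rather than any Azure SDK call:

```python
# Illustrative model (NOT an Azure SDK call) of how hosts in a host group
# are spread round-robin across the group's fault domains.
def place_hosts(host_names, fault_domain_count):
    """Return {host_name: fault_domain_index} using round-robin placement."""
    return {name: i % fault_domain_count for i, name in enumerate(host_names)}

# Three hosts in a host group spanning 2 fault domains:
placement = place_hosts(["host-0", "host-1", "host-2"], fault_domain_count=2)
# host-0 and host-2 land in fault domain 0; host-1 lands in fault domain 1,
# so VMs spread across different hosts also end up in different fault domains.
print(placement)
```

The point of the model: once hosts are spread across fault domains, placing your VMs on different hosts automatically spreads them across those domains too.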
Why Recommendations In Azure Are Not Best Practice
Hands up if you remember Microsoft’s “best practice analyzers” from around ten years ago? Keep those hands up if you thought those tools were more useful than harmful. I’m guessing that most hands were put down.
Automated tools that tell you how best to configure things would be useful … if the recommendations were correct. But often, they are silly, wrong, or downright dangerous. Two Azure tools spring to mind:
- Azure Advisor
- Security Center, specifically Recommendations and Security Score.
I have spent most of this year building mission-critical secure infrastructures. I began with a lot of knowledge, but I wanted the opinion of these tools.
Security Center insisted that I enable the firewall feature of every storage account. I did – and it broke every management feature in Azure. Security Center disabled that recommendation during the summer months.
Azure Advisor recommends that I reconfigure my virtual machines to use availability sets for a higher SLA (99.9%). The problem is, they are deployed into availability zones with a 99.99% SLA! Meanwhile, Azure Advisor consistently fails to recommend (despite all the diagnostics and insights monitoring that I have enabled) that some Ds_v3 virtual machines might be better suited as Bs virtual machines, saving me lots of money without harming application performance.
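To put those two SLA figures in perspective, here is a quick back-of-the-envelope calculation (plain Python) of the maximum downtime each SLA permits in a 30-day month:

```python
# Maximum permitted downtime per 30-day month for a given SLA percentage.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes

def max_downtime_minutes(sla_percent):
    """Downtime budget (in minutes) left over by an SLA over a 30-day month."""
    return (1 - sla_percent / 100) * MINUTES_PER_MONTH

print(max_downtime_minutes(99.9))   # availability set SLA: ~43.2 minutes
print(max_downtime_minutes(99.99))  # availability zone SLA: ~4.3 minutes
```

In other words, Advisor’s suggestion would trade a roughly 4-minute monthly downtime budget for a 43-minute one – ten times worse.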
Back in Security Center, I’m told to enable just-in-time VM access for all my virtual machines – including ones that you cannot RDP to directly. And I’m informed that I should install diagnostics agents into the guest OS of virtual appliances that do not support the Log Analytics (“OMS”) agent.
I have enough knowledge to ignore these warnings. Now I use Security Center purely as a “have I thought of everything?” checklist – definitely not as an “I must do everything recommended here” list. The problem is that some people (managers and customers) view these recommendations as gospel from Microsoft and insist on the recommended changes.
Be careful what a robot recommends. Think of Siri when you are relying on artificial intelligence to guide your high availability and security – even my three-year-old tells Siri to shut up when she accidentally triggers on my watch.
Other Announcements from Microsoft
Here are other Azure IaaS headlines from the past month:
- Moving your VMware resources to Azure is easier than ever
- New Azure Blueprint simplifies compliance with NIST SP 800-53
- We’re making Azure Archive Storage better with new lower pricing
- When to use Azure Service Health versus the status page
- High Availability Add-On updates for Red Hat Enterprise Linux on Azure
- Disaster recovery of Azure disk encryption (V2) enabled virtual machines
- Better security with enhanced access control experience in Azure Files
- Announcing new AMD EPYC™-based Azure Virtual Machines
- Introducing NVv4 Azure Virtual Machines for GPU visualization workloads
- Introducing the new HBv2 Azure Virtual Machines for high-performance computing
- Building resilient Azure ExpressRoute connectivity for business continuity and disaster recovery
- Improving Azure Virtual Machines resiliency with Project Tardigrade
- Geo Zone Redundant Storage in Azure now in preview
- Announcing the general availability of Azure Ultra Disk Storage
- Azure Ultra Disk Storage: Microsoft’s service for your most I/O demanding workloads
- Azure Archive Storage expanded capabilities: faster, simpler, better
- Azure Security Center single click remediation and Azure Firewall JIT support
- Plan migration of your Hyper-V servers using Azure Migrate Server Assessment
- Preview of custom content in Azure Policy guest configuration
- Azure and VMware innovation and momentum
- Azure Load Balancer becomes more efficient
- Latency is the new currency of the Cloud: Announcing 31 new Azure edge sites
- Microsoft Azure available from new cloud regions in Switzerland
- Track the health of your disaster recovery with Log Analytics
And Now for Something Different
Virtualization is boring. Yup, I said it. I spent 10 years writing about Microsoft virtualization, promoting Hyper-V, sharing knowledge, writing books, and having a great time bashing vFanboys. And then virtualization plateaued. Worse still, Microsoft and VMware became “friendly”. I fondly look back on the days when Jeff Woolsey got into blog wars with VMware, trading barbs on features, performance, and pricing. I even got a little in on the action and had a campaign launched against me by some VMware employees on Twitter.
But now, Microsoft is back (after previously being banned) as a speaker at VMworld, showing off things like VMware Workstation running side-by-side with Hyper-V! The old Hyper-V guy inside of me is screaming and would pull my hair out … if I wasn’t already bald. VMware gets support before Hyper-V from Azure. VMware virtualization is even running in Azure – and supported by VMware. Seriously, what’s a guy gotta do to get a marketing war going again?
I am joking … mostly … but if you follow virtualization on any platform, then you know that features aren’t being added like they used to be. A new brand of NIC gets RDMA, a new kind of flash can be plugged into the motherboard, and someone gets another few million IOPS in their HCI – which is about as useful as a Bugatti Chiron to me. The hypervisor is a commodity now, being used by other things. It’s good enough. These days, you hear “Hyper-V” more often in the context of Windows 10 preview builds than anything else. That’s probably a sign of maturity, but it makes the tech less interesting. My posts and conference sessions on Hyper-V started to dwindle back in 2015, partly because I had shifted focus to Azure, but mostly because I ran out of interesting things to write about. Just a few weeks ago, I logged into an (Azure) virtual machine for the first time in months. I’ve moved to the next level up … building the cloud infrastructures that sit on top of the hypervisor (Azure runs on Hyper-V) and power the business services.
Who knows, maybe all this will one day swing around. In abstract ways, the IT world does swing back and forth between centralized and distributed computing. But who knows what OS or hypervisor we will all be using when the next swing comes.