Everything You Need to Know About Azure Infrastructure – March 2019 Edition
In my monthly summary, I gather up all the Azure infrastructure news from the past month. You can tell that a big conference (Build) is coming: there were lots of announcements this month. Since November the news had been a tiny trickle, and that continued into February; then things started to pick up.
A Big Month for Storage
Various groups in Azure storage had news to share this month:
- Azure Data Box family now enables import to Managed Disks: If you need to migrate massive amounts of virtual machine data, and you’re OK with an offline (and slow) import process via courier, then this might be the answer to your needs.
- Azure Premium Block Blob Storage is now generally available: Claus Joergensen, previously of Storage Spaces and Storage Spaces Direct, has brought his experience of driving performance in software-defined storage to Azure. If you need to work with many small blobs and require low latency, then Premium Blob Storage might be for you.
- High-Throughput with Azure Blob Storage: High-Throughput Block Blob (HTBB) improves the write throughput when working with larger blobs.
- Azure Blob Storage lifecycle management generally available: The process that allows you to automatically move blobs between hot, cool and archive tiers is now available for production use.
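As a rough illustration, a lifecycle management policy is just a JSON rule set attached to the storage account. The sketch below is a minimal example: the rule name and the `logs/` prefix are placeholders, and the day thresholds are arbitrary values you would tune to your own data.

```json
{
  "rules": [
    {
      "name": "age-out-logs",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "logs/" ]
        },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 },
            "delete": { "daysAfterModificationGreaterThan": 365 }
          }
        }
      }
    }
  ]
}
```

The idea is simple: blobs that haven’t been modified for 30 days drop to cool, 90 days to archive, and after a year they’re deleted, all without any scripting on your part.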
- Azure Data Box family meets customers at the edge: A family of hardware and virtual appliances is now available to extend Azure into your data center. Note that Data Box Gateway is a virtual appliance (Hyper-V or VMware) that succeeds StorSimple 1200.
- Blob storage interface on Data Box is now generally available: You now have the ability to copy blob data into Azure storage via this appliance using REST APIs.
- Larger, more powerful Managed Disks for Azure Virtual Machines: Premium SSD, Standard SSD, and Standard HDD disks now support sizes up to 32 TiB. Max IOPS is up to 20,000 per Premium SSD managed disk. Max throughput is now up to 900 MB/second per Premium SSD managed disk. Note that the new 8 TiB, 16 TiB, and 32 TiB sizes don’t have support from Azure Site Recovery or Azure Backup yet.
- Azure Storage support for Azure Active Directory-based access control generally available: You can use AAD accounts/groups to set permissions on blobs and queues. This will be useful for systems that require strict isolation to data or queue (messaging) access.
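To make that concrete, here is a minimal Python sketch (stdlib only, helper names hypothetical) of what an AAD-authorized blob request looks like on the wire: instead of an account key or SAS token, the request carries an OAuth bearer token issued for the storage resource, and access is governed by the RBAC role assigned to the AAD account or group. In practice you would use the Azure SDK or the az CLI rather than building requests by hand, and token acquisition is out of scope here.

```python
# Sketch: the shape of an AAD-authorized Azure Blob request.
# build_blob_headers and blob_url are hypothetical helpers; the token
# would come from Azure AD (e.g. via the Azure SDK).

def build_blob_headers(bearer_token: str) -> dict:
    """Headers for a blob request authorized with Azure AD instead of a key/SAS."""
    return {
        # OAuth 2.0 bearer token issued for the storage resource,
        # scoped by the RBAC role (e.g. Storage Blob Data Reader).
        "Authorization": f"Bearer {bearer_token}",
        # AAD auth requires a storage service version that understands it.
        "x-ms-version": "2017-11-09",
    }

def blob_url(account: str, container: str, blob: str) -> str:
    """URL of a blob in the given account/container."""
    return f"https://{account}.blob.core.windows.net/{container}/{blob}"

if __name__ == "__main__":
    headers = build_blob_headers("<token-from-azure-ad>")
    print(blob_url("mystorageacct", "invoices", "2019-march.pdf"))
    print(headers["Authorization"])
```

The practical upshot: you can grant or revoke data-plane access per AAD identity via role assignments, rather than sharing an account key that unlocks everything.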
- AzCopy support in Azure Storage Explorer now available in public preview: The handy AzCopy tool now has a built-in interface in Azure Storage Explorer, albeit in preview.
I’d love to spend about 10,000 words talking about networking, but I’m not going to go there just yet. Instead, let’s talk about the continued expansion of Azure’s global footprint.
Microsoft made news by becoming the first of the “big clouds” to open regions (each made up of multiple data centers) in Africa:
- South Africa West “in” Cape Town
- South Africa North “in” Johannesburg
I say “in” because anyone with access to Google Maps can quickly figure out that some of the stated locations are … inaccurate.
These new data centers will bring local deployments of Azure to Africa, followed by Office 365 and Dynamics 365. Other regions are planned and will come online in the not-too-distant future, adding more localized capacity to Microsoft’s 100,000+ mile fiber network – thought to be the second largest private WAN on the planet.
Other Announcements from Microsoft
Here are other Azure IaaS headlines from the past month:
- Announcing new capabilities in Azure Firewall
- Announcing new Azure Security Center capabilities at RSA 2019
- Secure server access with VNet service endpoints for Azure Database for MariaDB
- Intel and Microsoft bring optimizations to deep learning on Azure
- Create a transit VNet using VNet peering
- Simplify disaster recovery with Managed Disks for VMware and physical servers
- Simplifying your environment setup while meeting compliance needs with built-in Azure Blueprints
- Hardware innovation for data growth challenges at cloud-scale
- Microsoft Azure portal March 2019 update
- Azure Backup for SQL Server in Azure Virtual Machines now generally available!
- Azure Container Registry virtual network and Firewall rules preview support
- Reducing security alert fatigue using machine learning in Azure Sentinel
- March 2019 changes to Azure Monitor Availability Testing
- Windows Virtual Desktop now in public preview on Azure
- Analysis of network connection data with Azure Monitor for virtual machines
And Now for Something Different
Those with grey (or no) hair might remember the first versions of Windows Server Failover Clustering, or even Windows Server Datacenter Edition. You could only purchase those software/hardware/support solutions from a limited number of vendors, in highly tested configurations. The resulting solution was supposed to be more stable, but a side-effect was that the hardware was often 2x or 3x more expensive than buying the components by themselves – and buying the components alone didn’t get you access to the software.
Microsoft later democratized access to the software, and that played a big role in helping Windows Server 2003 through 2012 R2 win the mission-critical role in the data center.
Then along came a new hardware-integrated feature: Storage Spaces Direct (S2D). This feature depends heavily on the capabilities, firmware, and drivers of high-performance disks, NICs, and switches. If any one component performs below expectations, it can bring down the entire cluster – that’s the nature of converged (Storage Spaces or SAN) and hyper-converged infrastructure (HCI, such as S2D, Nutanix, and so on).
Those of you who have worked with Hyper-V or S2D know that hardware, drivers, and firmware are a risk. Certain brands are a known problem. Sadly, some server manufacturers rely exclusively on those risky brands, and purchasing bureaucracies don’t account for that when creating approved vendor lists.
It appears that Microsoft is taking a pinch of the past and a smidgen of the present to ensure that S2D customers have a quality experience. Azure Stack is lending more than a brand name (and some news headlines) to Azure Stack HCI; also coming with the name is a quality-controlled collection of more than 70 solutions from 15 hardware partners. Unlike Azure Stack, this set of partners includes specialists such as DataON, which is widely regarded as one of the best vendors in the field.
What are you getting? Is it Azure? No – and that’s why I’ve filed this story under “something different”. What you get with Azure Stack HCI is not Azure; it is Hyper-V on Storage Spaces Direct. That said, it integrates natively with Azure in many ways, including, but not limited to:
- Azure Site Recovery (also VMware)
- Azure Monitor (also VMware)
- Cloud Witness
- Azure Backup (also VMware)
- Azure Update Management (also Linux)
- Azure Network Adapter (probably best for small business)
- Azure Security Center (also Linux, but best on Windows Server 2016+)
I guess Azure is the new Windows – remember when everything from Microsoft had to be branded as “Windows …”? Now it appears that everything has to be branded as “Azure …”.