
When you hear about some giant server with crazy memory or processors, or a SAN that supports immense amounts of flash storage, you’ll most likely think to yourself, “I’ll never see one of those, let alone have the chance to play with one.”
The great thing about having access to a public cloud like Azure is that you can get your hands on almost anything in the catalog, and you only pay for it by the minute while you use it. That gives you the chance to play with the big stuff, even if only for a short amount of time!
I decided to deploy the GS5, which Microsoft says is the largest virtual machine available in any of the “big three” clouds. I’ll explain what I got from this machine, how it performed when stressed, and how much it cost me.
At the moment, the GS5 is the premium virtual machine in Azure. The GS-Series is based on the G-Series, which runs on hosts with 2.0 GHz Intel Xeon E5-2698B v3 processors. The G- and GS-Series virtual machines offer much more RAM per core than any other Azure virtual machine specification. The GS1 starts with two cores and 28 GB RAM, far more memory than the DS2, which has two cores and 7 GB RAM. These machines are intended for extremely memory-intensive workloads, possibly using RAM to cache data instead of disk.
The GS-Series gives you the option to deploy the OS and data disks on HDD-based Standard Storage or SSD-based Premium Storage. The temporary disk is based on a host-local SSD drive. You can have up to 64 data disks offering up to 80,000 IOPS with a transfer rate of 2,000 MB per second.
So, let’s think about that for a second. This is a machine with:
- 32 cores of 2.0 GHz Intel Xeon E5-2698B v3
- 448 GB RAM
- Up to 64 x 1,023 GB Premium Storage (SSD) data disks, roughly 64 TB of provisioned capacity
- Up to 80,000 IOPS and 2,000 MB per second of disk throughput
How often would you ever get to fire that machine up, and what would it cost you? I decided to deploy such a machine in Azure on my spare MSDN Premium subscription, which includes a nice Azure credit benefit. Note that Windows virtual machines are charged at Linux rates for MSDN customers because the Windows license is covered by MSDN benefits, terms, and conditions.
To speed up deployment and clean-up, I deployed a V2 virtual machine using Azure Resource Manager, placing all of the dependent resources in a single resource group. Next, I used Azure Resource Manager and PowerShell to create 64 x 1,023 GB P30 data disks, with caching disabled, and attach them to the virtual machine.
$vm = Get-AzureRmVM -Name pdwe1 -ResourceGroupName pdwe

# Create and attach 64 empty 1,023 GB (P30) data disks on LUNs 0-63, with host caching disabled
for ($i = 1; $i -lt 65; $i++)
{
    CLS
    Write-Host "Disk $i"
    $diskname = "premdata$i"
    $l = $i - 1
    Add-AzureRmVMDataDisk -VM $vm -Name "$diskname" -VhdUri "https://pdwe.blob.core.windows.net/vhds/$diskname.vhd" -LUN $l -Caching None -DiskSizeInGB 1023 -CreateOption Empty
}

# Push the updated disk configuration to the virtual machine
Update-AzureRmVM -ResourceGroupName "pdwe" -VM $vm
The following screenshot shows the configuration of the resulting machine.
A fully utilized Azure GS5 virtual machine (Image Credit: Aidan Finn)
I logged into the machine and did what every nerd does when faced with a big machine: I launched Task Manager and took a look around.
Task Manager in an Azure GS5 virtual machine. (Image Credit: Aidan Finn)
In Disk Management, I could see the OS disk and the 64 data disks:
Disk Management in a fully utilized GS5 Azure virtual machine. (Image Credit: Aidan Finn)
The storage pool and the resulting E: volume. (Image Credit: Aidan Finn)
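A volume like that can be built by pooling the data disks with Storage Spaces. The following is a minimal sketch of that approach; the pool and volume names, the column count, and the formatting options are assumptions for illustration, not the exact settings used for this test.

# Pool every attachable data disk into a single Storage Spaces pool
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "DataPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

# Create one simple (striped) virtual disk across all 64 columns
New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "DataDisk" -ResiliencySettingName Simple -NumberOfColumns 64 -UseMaximumSize

# Initialize, partition, and format the virtual disk as the E: volume
Get-VirtualDisk -FriendlyName "DataDisk" |
    Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter E -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data" -Confirm:$false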
The only problem was that I had a limited amount of time to reconfigure the tests to hit a peak. I managed to hit 58,817 IOPS before I ran out of time and money. It’s not 80,000 IOPS, but it’s more than I’d hit before. I did observe spikes of up to 77,000 IOPS, so I probably wasn’t far from the optimal test configuration when Azure shut the subscription down due to lack of credit.
The best disk IOPS results I had before I ran out of credit. (Image Credit: Aidan Finn)
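A tool such as Microsoft’s DiskSpd can generate this kind of small-block random load. The command below is only a sketch; the block size, thread count, queue depth, read/write mix, and file path are assumptions for illustration, with the 5,000 MB test file size matching the file mentioned in the cost discussion below.

# Hypothetical DiskSpd run: 4 KB random reads against a 5,000 MB test file on the E: volume,
# 32 threads with 64 outstanding I/Os each, caching disabled, 60-second duration, latency stats collected
.\diskspd.exe -c5000M -d60 -b4K -r -w0 -t32 -o64 -Sh -L E:\iotest.dat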
I normally deploy Azure resources in either the East US 2 or North Europe regions. I deployed this machine in West Europe (Amsterdam) because the GS5 spec was available there, and because it let me isolate the billing to see how much the test would cost.
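Not every size is offered in every region, so it is worth checking before you deploy. This sketch uses the AzureRM cmdlets of the time to confirm that the GS5 is listed in West Europe.

# List the VM sizes offered in West Europe and look for the GS5
Get-AzureRmVMSize -Location "West Europe" | Where-Object { $_.Name -eq "Standard_GS5" }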
The machine wasn’t running for very long, so it ‘only’ cost me €4.49 (€9.3607 per hour) to run. However, the Premium Storage capacity cost me much, much more. The machine was deployed in the evening, and that’s when I ran my first tests. I planned to run more tests the following day, but Azure shut down the subscription because the remaining credit ran out. Overnight, the storage consumed €151 of credit! Note that the test file on the data volume was just 5,000 MB; Premium Storage is billed on the provisioned size of each disk rather than on the data you actually write, so 64 provisioned P30 disks accrue charges even when they are almost empty.
A breakdown of the GS5 virtual machine costs. (Image Credit: Aidan Finn)
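Because the whole deployment lived in a single resource group, tearing everything down, and stopping the storage charges, is one command. This sketch assumes the "pdwe" resource group name from the earlier script.

# Delete the VM, disks, network, and storage in one go by removing the resource group
Remove-AzureRmResourceGroup -Name "pdwe" -Force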
I was lucky enough to have some spare credit that allowed me to run these tests in Azure. With it, I ran the biggest machine in Microsoft Azure’s cloud, and, yes, it was big. I wish I had more time to stress the disks, RAM, and CPU, but Azure ended playtime before I was ready.
This leads me to another point. All too often, newbies to the cloud assume that service, and therefore machine, designs should be the same in the cloud as they would be on-premises. Public clouds are designed and priced to suit smaller machines. Services should be designed to scale out, ideally on the fly, based on demand using small machines. This gives you cost efficiencies by using smaller machines, as well as availing of the per-minute billing that Azure offers.
Crazy-big machines should be a rarity in the cloud, for genuinely crazy-big workloads that justify the costs. As you can see above, those costs can be high. But for nerds like me, accruing those costs can be fun — while it lasts.