Options for Load Balancing Services in Azure
In this article, I will explain the options you have for load balancing applications or services in Azure.
Why Load Balance?
There are several reasons for deploying a load balancer in Microsoft Azure, and they aren’t always about balancing load. I’ll explain a few of them here.
If you want high availability, then you will typically need more than one machine, perhaps even many machines spread across data centers or Azure regions. Load balancing allows stateless services, such as web servers, to be aggregated and presented to clients as a single unit; if one machine fails, the client is redirected to another that is still responding to the load balancer’s probe.
Tied into this is the ability to scale out services. The cloud is designed to add performance by adding more machines: if you need more RAM/CPU, add another machine. If the service is abstracted by a load balancer, then capacity can be added or removed without the client needing to make any changes.
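The probe-and-redirect behavior described above can be sketched in a few lines. This is a toy model, not the Azure Load Balancer itself: the class and backend names are hypothetical, and a real probe is a periodic TCP or HTTP check rather than a method call.

```python
class ToyLoadBalancer:
    """Toy model: clients see one endpoint while backends come and go."""

    def __init__(self, backends):
        self.backends = backends  # backend name -> healthy flag
        self._i = 0               # round-robin position

    def probe(self, name, healthy):
        # A probe marks a backend up or down; unhealthy members are
        # skipped for new traffic rather than removed from the pool.
        self.backends[name] = healthy

    def route(self):
        healthy = [n for n, ok in self.backends.items() if ok]
        if not healthy:
            raise RuntimeError("no healthy backends")
        choice = healthy[self._i % len(healthy)]
        self._i += 1
        return choice

lb = ToyLoadBalancer({"web-1": True, "web-2": True})
lb.probe("web-2", False)           # web-2 stops answering its probe
print(lb.route(), lb.route())      # all traffic now lands on web-1
```

The point of the sketch is the client’s view: it keeps calling the same frontend, and failover or scale-out happens entirely behind it.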
A load balancer can do other cool things too, including but not limited to:
- SSL offload
- Content or domain redirection
- Security functions
- Geo-fault tolerance
Azure Load Balancer
Azure includes a load-balancing service called the Load Balancer. This is a simple, commonly used Layer-4 (TCP or UDP) service. A probe tests whether the members of a backend pool are responsive; if so, traffic is directed to one of the running members. A simple client affinity method can be enabled (for example, to keep sending a shopper to the same web server), based on either client IP address or client IP address plus protocol.
One of the less obvious reasons for using the Azure Load Balancer is to create NAT rules. If you next-next-next your way through creating an Azure virtual machine, then every machine gets its own public IP address. This is wasteful, difficult to manage at scale, and creates multiple entry points, which increases security complexity. Instead, one can deploy a single public IP address with an Azure Load Balancer and create NAT rules for services such as RDP or SSH, much as one might do with an on-premises firewall.
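Conceptually, each inbound NAT rule is just a mapping from a unique port on the shared public IP to a management port on one specific VM. A small sketch (the VM names and port numbers are made up for illustration):

```python
# One public IP; each inbound NAT rule maps a unique frontend port
# to a management port on a specific backend VM.
nat_rules = {
    50001: ("vm-1", 3389),   # RDP to vm-1
    50002: ("vm-2", 3389),   # RDP to vm-2
    50003: ("vm-3", 22),     # SSH to vm-3
}

def translate(frontend_port):
    """Return the (backend VM, backend port) a connection is forwarded to."""
    return nat_rules[frontend_port]

print(translate(50003))  # ('vm-3', 22)
```

So an administrator connects to the public IP on port 50003 and is silently forwarded to SSH on vm-3, while the VMs themselves expose no public addresses.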
There are two kinds of Azure Load Balancer. The free Basic tier load balancer is the one that you will use for simpler deployments that require no more than 100 backend endpoints, don’t need to span availability zones in a region, and have simple networking requirements.
The Standard tier load balancer adds scalability (up to 1000 backend endpoints), is supported for unifying services across availability zones, and offers more complex networking options. For example, HA Ports enables active/active scale-out and high availability for network virtualization appliances.
The other major difference between the Basic and Standard tiers is pricing. The Basic tier load balancer is free but the Standard tier load balancer has a complex consumption-based charge.
Web Application Gateway (WAG)
The WAG is Azure’s native Layer-7 load balancer. Being a Layer-7 solution, it brings application awareness to the table. A backend pool aggregates machines that share the load of delivering a service. A probe optionally tests those machines to see whether they are online. And a listener is created to accept HTTP or HTTPS traffic for a domain. You can have up to 16 of those listeners in a WAG. This means that a single WAG can handle many domains on a single public IP address, including SSL-protected domains.
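The multi-domain behavior comes from Layer-7 awareness: the gateway reads the HTTP Host header and matches it to a listener. A minimal sketch of that dispatch, with hypothetical domain and pool names:

```python
# Each listener matches a domain (the Host header) and forwards to its
# backend pool -- many domains behind one public IP address.
listeners = {
    "shop.example.com": ["shop-1", "shop-2"],
    "blog.example.com": ["blog-1"],
}

def route_request(host):
    """Pick a backend pool by Host header, as a Layer-7 listener would."""
    pool = listeners.get(host)
    if pool is None:
        raise LookupError(f"no listener configured for {host}")
    return pool[0]  # a real gateway balances across the whole pool

print(route_request("blog.example.com"))  # blog-1
```

A Layer-4 load balancer cannot do this, because the Host header lives inside the HTTP payload that Layer 4 never inspects.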
There are many features of the WAG:
- Web Application Firewall: An additional feature (with additional costs) adds Layer-7 security for web services using the open-source OWASP rule set.
- Cookie-based session affinity: This is a more intelligent way of directing a client to the same web server than source IP address.
- SSL offload: Preserve web server CPU for handling requests. The certificate is uploaded to the HTTPS listener in the WAG.
- End-to-end SSL: Any traffic to the web servers can be re-encrypted for complete security.
- URL-based content routing: A large or complex website could span many web farms, with each farm handling a subset of URLs on the site. For example, static content might be on some servers, media on another.
- Multi-site routing: Multiple domains can be hosted on different backend pools and listeners will forward traffic to the correct pool.
- Websocket support
- Health Monitoring
- SSL policy and ciphers: Limit and order the forms of SSL protocol and cipher suites that are used.
- Request redirect: Redirect HTTP traffic to an HTTPS listener when your site only offers SSL services.
- Multi-tenant backend support: Different kinds of services can be supported, such as Azure App Services or virtual machines.
- Advanced diagnostics: Log data is generated and can be analyzed for troubleshooting.
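To show why cookie-based affinity is smarter than source-IP affinity, here is a sketch of the mechanism: the gateway sets an affinity cookie on the first response, and later requests that present the cookie go back to the same server even if the client’s IP changes. Class and cookie names are invented for illustration.

```python
class CookieAffinity:
    """Sketch of cookie-based session affinity at a Layer-7 gateway."""

    def __init__(self, backends):
        self.backends = backends
        self._next = 0  # round-robin position for new sessions

    def handle(self, cookies):
        server = cookies.get("affinity")
        if server not in self.backends:
            # New session: pick a server and pin it via a cookie
            # (a real gateway injects this with a Set-Cookie header).
            server = self.backends[self._next % len(self.backends)]
            self._next += 1
            cookies["affinity"] = server
        return server

gw = CookieAffinity(["web-1", "web-2"])
jar = {}                       # the client's cookie jar
first = gw.handle(jar)
second = gw.handle(jar)
assert first == second         # the cookie pins the whole session
```

Because the session identity travels in the cookie rather than being inferred from the source address, clients behind a shared NAT or a changing mobile IP still stick to the right server.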
There are three major differences between the WAG and the Azure Load Balancer worth highlighting:
- The Azure Load Balancer handles any kind of TCP or UDP traffic. The WAG only understands HTTP or HTTPS traffic.
- The Azure Load Balancer is intended for resources on a virtual network, such as virtual machines. The WAG supports virtual machines, but with a PowerShell deployment it can be used with Azure App Services too.
- A WAG is deployed as one or more instances (preferably 2 or more in production). Adding more instances offers high availability and greater web load balancing scale. Sometimes the load balancer needs to scale out too!
Third-Party Load Balancing
Microsoft’s load-balancing options offer enough functionality for a large percentage of deployments. You might find yourself using a mix of the Azure Load Balancer and the WAG. But there are times when a third option will enter the mix or be your solution of choice. Third-party load balancers from the likes of Kemp, Citrix, and F5 (and more) are available in the Azure Marketplace, making them easy to deploy and supported in Azure. One can deploy these Linux virtual machines into a virtual network to add extra Layer-7 functionality that Microsoft doesn’t offer. The funny bit is that you’ll probably deploy two of these load balancers and then need a Standard tier Azure Load Balancer (HA Ports) in front of them, to unify the pair as a single load balancer for external clients.