Design Considerations for Azure Web Apps
In this post, I will discuss some things you should consider when designing an Azure App Service deployment.
Public or Private?
I think this is the most critical question you should start with. Will the application be published publicly or privately? Note that security via authentication is still an option for public sites. Until recently, all App Services on an Azure App Service Plan were published on the Internet in a shared, multi-tenant environment. Today, there is an Isolated App Service Plan tier, which allows you to create a private App Service Environment (ASE) on a virtual network. This allows you to control network security, enable direct site-to-site VPN/ExpressRoute (WAN) connections, and allow App Services to interact with virtual machines at NIC performance speeds instead of point-to-site VPN (gateway) speeds. An ASE can be shared publicly using a virtual IP address (VIP, an Azure public IP address) or privately using an Azure internal load balancer.
Which Pricing Tier?
Do you want a Free, Shared, Basic, Standard, Premium, or Isolated deployment? Your choice is a balance of price, performance, capacities, SLA, and features. A Free plan has the obvious benefit of being free but is quite restrictive in functionality. Those restrictions might be irrelevant for a test and dev environment where you want costs to be minimized. It is possible to have a Free App Service Plan running alongside another App Service Plan in a higher tier. Code can be produced and tested at reduced cost on the Free tier and then migrated (directly or indirectly) to the production app or a pre-production slot on the paid-for App Service Plan.
Linux or Windows Server?
I’ll try not to get into any Seahawk versus Penguin battles here. You will choose Linux over Windows if:
- You prefer (Debian) Linux over Windows Server
- Apache is your choice of web server over IIS
- You want to code in Ruby, which is only available on Linux at the moment
- Docker containers are your preferred method of delivering services
- You want the possibility of customizing the frameworks that you are coding on
If you want to use non-container deployments or if you want to use the other kinds of App Services, then you must deploy on a Windows App Service Plan. But as you’ll read later, you don’t have to restrict any single application to a single App Service Plan.
Code Deployment
How will you produce code and release it into Azure? Are you using traditional, cloud, or DevOps practices? You can use a number of systems to put your code into an App Service, including FTP or code management systems such as GitHub or Visual Studio Team Services (VSTS).
Maybe you want to use a DevOps approach. Much of the DevOps tooling resides in the world of open-source products; one example that supports App Services is Jenkins.
Then you have to decide what you are releasing code into. Will you create a deployment slot for your production App Service (strongly recommended)? With this system, you publish code into a pre-production slot or one of a chain of test slots. VSTS can be used to stress test the code, and then you can validate that things work correctly. Once ready, a simple click of the Swap button swaps the deployment slot with the next slot up or with the production system. This method minimizes risk to production and allows you to validate code changes before customers see them.
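Conceptually, the swap is an atomic exchange of which build each slot serves, which is why the old production build remains available for an instant rollback. Here is a minimal Python sketch of that semantics; the class, slot names, and build labels are invented for illustration and are not an Azure API:

```python
# Illustrative sketch of deployment-slot swap semantics (not an Azure API):
# a swap atomically exchanges which build each slot serves, so production
# never serves a half-deployed app and rollback is just another swap.

class AppService:
    """Hypothetical model of an App Service with named deployment slots."""

    def __init__(self):
        # Each slot maps to the build currently deployed in it.
        self.slots = {"production": "v1.0", "staging": None}

    def deploy(self, slot, build):
        self.slots[slot] = build  # push new code to a non-production slot

    def swap(self, source, target="production"):
        # An atomic exchange: the old production build stays in the
        # source slot, ready for an instant swap-back (rollback).
        self.slots[source], self.slots[target] = (
            self.slots[target], self.slots[source])

app = AppService()
app.deploy("staging", "v1.1")   # test v1.1 in the staging slot first
app.swap("staging")             # promote: production now serves v1.1
print(app.slots["production"])  # v1.1
print(app.slots["staging"])     # v1.0 (kept for rollback)
```

Swapping back is simply calling `swap` again, which is what makes slot-based releases so low-risk.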
Scale-Up or Scale-Out?
As I like to say in my training classes, the cloud offers you the options to run an army of ants or a squad of giants. Each instance (virtual machine) in an App Service Plan is of a virtual machine series (processor) and size (cores, memory, and disk capacity). If you run monolithic code that doesn’t scale out (more machines), then you need big virtual machines. But you’ll find that cost-effectiveness and performance are best achieved with code that can scale out across a larger number of smaller instances.
Should you enable auto-scaling? Auto-scaling allows you to create rules to control how and when Azure will assign more instances to your App Service Plan when demand increases or remove instances when demand subsides. With auto-scaling, you avoid the dreaded trade-off between over-provisioning (paying for idle capacity) and under-provisioning (poor performance under load). The cost of IT becomes a “fixed” percentage of revenue, where costs increase and decrease in line with operations.
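As a rough sketch of the kind of rule an auto-scale setting evaluates on each interval (the thresholds and bounds below are assumptions for illustration, not Azure's actual defaults):

```python
# Minimal sketch of an auto-scale rule: scale out when average CPU is
# high, scale in when it is low, always staying within the plan's
# minimum and maximum instance counts. All thresholds are illustrative.

def autoscale(instances, avg_cpu, min_instances=2, max_instances=10,
              scale_out_above=70, scale_in_below=30):
    """Return the new instance count for one evaluation interval."""
    if avg_cpu > scale_out_above and instances < max_instances:
        return instances + 1   # demand is up: add an instance
    if avg_cpu < scale_in_below and instances > min_instances:
        return instances - 1   # demand subsided: remove an instance
    return instances           # within the comfortable band: no change

print(autoscale(3, avg_cpu=85))  # 4
print(autoscale(3, avg_cpu=20))  # 2
print(autoscale(3, avg_cpu=50))  # 3
```

Note the gap between the scale-out and scale-in thresholds: without it, an instance count hovering near a single threshold would "flap" up and down on every interval.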
One Region or Many?
Where in the world will you deploy your App Service Plan? Normal best practice is to place your application as close as possible to your customers. You’re not restricted to one region, though! This decision also contributes to regional fault tolerance and disaster recovery.
You can deploy your application in multiple regions. For example, I can deploy my system in:
- West US
- East US
- North Europe
- West Europe
Then I can use Azure Traffic Manager to unify the deployments under one address/URL. Traffic Manager routing methods will direct clients based on:
- Performance (lowest latency between client and application)
- Weighted (load distribution across endpoints)
- Priority (failover ordering)
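As an illustration of the priority (failover) method, here is a small Python sketch: clients are directed to the healthy endpoint with the best (lowest) priority value. The endpoint names and health states are invented for the example:

```python
# Illustrative sketch of priority (failover) routing: route to the
# healthy endpooint with the lowest priority number; fall through to the
# next priority when a region is unhealthy. Data below is made up.

def pick_endpoint(endpoints):
    """endpoints: list of (name, priority, healthy) tuples."""
    healthy = [e for e in endpoints if e[2]]
    if not healthy:
        return None                      # total outage: nothing to route to
    return min(healthy, key=lambda e: e[1])[0]

endpoints = [
    ("west-us", 1, False),       # primary region, currently down
    ("east-us", 2, True),        # first failover target
    ("north-europe", 3, True),   # second failover target
]
print(pick_endpoint(endpoints))  # east-us
```

In the real service, health is determined by Traffic Manager's endpoint monitoring probes, and the "routing" happens at the DNS level rather than per request.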
If there are any databases, then you’ll have to figure out how they’ll be fragmented or unified behind a multi-region App Service deployment.
Web Content Caching
An application doesn’t have to reside in every Azure region to perform well. Using geo-caching, you can increase the performance of your App Services for remote clients. Azure offers three CDN (content delivery network) plans: one from Akamai and two from Verizon. You are not restricted to Azure’s own CDN; I’m increasingly hearing customers bring up services, such as Cloudflare, that can offer CDN and security services before Internet traffic reaches your application.
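The effect of an edge cache can be sketched as follows; this is an illustrative model of what a CDN node does for your app, not any provider's actual implementation:

```python
# Rough sketch of an edge cache (CDN node): serve repeat requests
# locally and only call back to the distant origin (the App Service)
# on a miss or after the content's TTL expires. Names are invented.

import time

class EdgeCache:
    def __init__(self, origin_fetch, ttl_seconds=60):
        self.origin_fetch = origin_fetch   # slow call back to the origin
        self.ttl = ttl_seconds
        self.store = {}                    # path -> (content, fetched_at)

    def get(self, path):
        entry = self.store.get(path)
        if entry and time.monotonic() - entry[1] < self.ttl:
            return entry[0]                # cache hit: no trip to origin
        content = self.origin_fetch(path)  # cache miss: fetch and store
        self.store[path] = (content, time.monotonic())
        return content

origin_calls = []
def origin(path):
    origin_calls.append(path)              # count trips back to the origin
    return f"<html>{path}</html>"

cdn = EdgeCache(origin)
cdn.get("/home")
cdn.get("/home")            # served from the edge; origin not called again
print(len(origin_calls))    # 1
```

The win for remote clients is that the second request never crosses the ocean: it is answered by the nearest edge node.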
Layer-7 Load Balancing and Security
You’ll get Layer-4 security and load balancing with an ASE (via hidden front ends that might be running Application Request Routing, or ARR). If you want load balancing at Layer 7, then the Web Application Gateway (WAG) can provide it for App Services. You can also enable the Web Application Firewall (WAF) for additional application-layer security, such as protection against malformed requests and SQL injection attacks.
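To give a feel for what a Layer-7 firewall rule does, here is a deliberately naive Python sketch of pattern-matching a request's query string; real WAF rule sets (such as the OWASP core rules) are far more sophisticated than this:

```python
# Deliberately naive illustration of a WAF-style Layer-7 rule: inspect
# the query string and reject requests matching known-bad SQL injection
# patterns. Real rule sets are much larger and smarter than this sketch.

import re

SQLI_PATTERNS = [
    re.compile(r"('|%27)\s*(or|and)\s", re.IGNORECASE),  # ' OR 1=1 style
    re.compile(r"union\s+select", re.IGNORECASE),        # UNION-based probe
    re.compile(r";\s*drop\s+table", re.IGNORECASE),      # stacked query
]

def allow_request(query_string):
    """Return False if the query string matches a known-bad pattern."""
    return not any(p.search(query_string) for p in SQLI_PATTERNS)

print(allow_request("id=42&sort=name"))  # True  (clean request passes)
print(allow_request("id=1' OR 1=1 --"))  # False (classic injection blocked)
```

The value of doing this at the gateway is that malicious requests are dropped before they ever consume App Service Plan resources.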
Multiple Apps Hosting
An App Service Plan can host multiple App Services; do you want to do that? If the App Services are lightweight, then you can run many of them (each web app in its own application pool) on your “farm”. Each App Service will consume resources from the App Service Plan instances. Things get more complicated if an App Service will be resource intensive. In one Microsoft reference architecture, the web app runs in one App Service Plan and the background tasks (Logic Apps, WebJobs, and API apps) run in a second App Service Plan. This isolates the resources of the front end and the back end, and if everything is sized correctly, it will probably result in the same total number of App Service Plan instances (and costs). If your back-end tasks won’t compromise the performance of the web app on smaller-sized instances, then you can probably merge them into a single tier.
Database Performance
Database performance will make or break a customer-facing or line-of-business application. You can increase the size (DTUs) of a database, but a query still has to be run and will always incur a certain amount of latency. What if the results of that query could be cached? Azure offers Redis Cache to improve database performance; it comes at a cost, but caching is often more successful at improving performance than increasing database sizes.
This list of considerations is far from complete, but it will give you a starting point. Review the capabilities of App Services and compare them against your requirements. Remember that services can span App Service Plans, tiers, platforms, and regions. Caching is good because it delivers more performance per unit of cost than adding more back-end capacity. And I cannot stress enough that you should consider auto-scaling for any application that will face variable levels of demand.