Last Update: Sep 04, 2024 | Published: Feb 25, 2019
In this post, I will show you how to architect an Azure Firewall deployment where a centralized firewall inspects traffic flowing across a VPN connection before it reaches the Azure virtual network(s) or returns to on-premises.
The premise of this design is a simple and probably familiar one: anything outside of the “virtual data center” that you deploy in Azure virtual networks will not be trusted – and that includes the networks in your office that will be connected via a VPN (or ExpressRoute) connection.
In summary, the design will feature a hub virtual network that hosts shared services. In this case, the shared services will be:
- A virtual network gateway, terminating the site-to-site VPN connection to the office
- The Azure Firewall, which will inspect all traffic entering or leaving the virtual data center
All traffic coming from the office, over the VPN connection, will be routed through the Azure Firewall before it can be forwarded to applications, which are hosted in spoke virtual networks. Data from the applications to the office network(s) will route via the Azure Firewall, and then to the gateway, which will tunnel the traffic across the VPN connection.
This is a relatively simple virtual network with two subnets:
- GatewaySubnet, the dedicated subnet that the virtual network gateway requires
- AzureFirewallSubnet, the dedicated subnet that the Azure Firewall requires
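If you prefer to script the deployment, the hub virtual network and its two subnets could be created with the azure-mgmt-network Python SDK. This is a minimal sketch, assuming hypothetical names (hub-rg, hub-vnet), a hypothetical region, and the example address prefixes used throughout this post:

```python
# Minimal sketch: create the hub VNet with the two required subnets.
# Resource group, VNet name, and region are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.virtual_networks.begin_create_or_update(
    "hub-rg",
    "hub-vnet",
    {
        "location": "westeurope",
        "address_space": {"address_prefixes": ["10.0.0.0/16"]},
        "subnets": [
            # These two subnet names are mandated by Azure for the
            # gateway and the firewall respectively.
            {"name": "GatewaySubnet", "address_prefix": "10.0.0.0/24"},
            {"name": "AzureFirewallSubnet", "address_prefix": "10.0.1.0/24"},
        ],
    },
).result()
```

The gateway and the firewall are then deployed into those subnets; with the prefixes above, the firewall’s first available internal IP address will typically be 10.0.1.4, which is the address used in the routes below.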
A route table will be created and associated with the GatewaySubnet subnet. A route for the address space(s) used by the application virtual networks will send that traffic via the internal IP address of the Azure Firewall. Let’s say that you will use the following example addresses:
- Hub virtual network: 10.0.0.0/16, with the Azure Firewall’s internal IP address at 10.0.1.4
- Spoke (application) virtual networks: 10.1.0.0/16 and 10.2.0.0/16
- On-premises network(s): 192.168.1.0/24
The user-defined route that you will create will be as follows:
- Name: SpokeTraffic (or any name you prefer)
- Address prefix: 10.1.0.0/16 (add a second route for 10.2.0.0/16)
- Next hop type: Virtual appliance
- Next hop IP address: 10.0.1.4 (the internal IP address of the Azure Firewall)
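As a sketch, the same route table could be built with the Python SDK; the table and route names are assumptions of mine, while 10.0.1.4 is the firewall address from the example above:

```python
# Sketch: route table for GatewaySubnet, steering spoke-bound traffic
# through the Azure Firewall. All resource names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

route_table = client.route_tables.begin_create_or_update(
    "hub-rg",
    "gateway-subnet-routes",
    {
        "location": "westeurope",
        "routes": [
            # One route per spoke address space; the next hop is the
            # firewall's internal IP address.
            {
                "name": "spoke1-via-firewall",
                "address_prefix": "10.1.0.0/16",
                "next_hop_type": "VirtualAppliance",
                "next_hop_ip_address": "10.0.1.4",
            },
            {
                "name": "spoke2-via-firewall",
                "address_prefix": "10.2.0.0/16",
                "next_hop_type": "VirtualAppliance",
                "next_hop_ip_address": "10.0.1.4",
            },
        ],
    },
).result()

# Associate the route table with GatewaySubnet; updating a subnet
# means resubmitting its address prefix.
client.subnets.begin_create_or_update(
    "hub-rg",
    "hub-vnet",
    "GatewaySubnet",
    {"address_prefix": "10.0.0.0/24", "route_table": {"id": route_table.id}},
).result()
```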
Note that the local network gateway definition that you use for the VPN connection will create system-managed routes for returning traffic back across the VPN tunnel to the office.
Each application or service is deployed into a dedicated virtual network with one or more subnets. Remember that each subnet has its own effective route table, which, by default, contains only system-managed routes.
VNet peering will be used to share the resources of the hub virtual network with the spoke VNets that contain the applications. There is always a pair of configurations between peered virtual networks. In this case:
- The peering from the hub to each spoke will enable Allow Gateway Transit, offering the hub’s VPN gateway to the spoke
- The peering from each spoke to the hub will enable Use Remote Gateways, so the spoke routes on-premises traffic via the hub’s gateway
Both directions should also enable Allow Forwarded Traffic, so that traffic relayed through the Azure Firewall is accepted.
Note that Use Remote Gateways can only be enabled in the spoke VNets once the hub gateway is completely deployed and in a running state. Attempting to configure this setting while the gateway is still being created will fail because the route cannot be propagated to the gateway.
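Here is a sketch of the peering pair with the Python SDK, again with placeholder resource group and VNet names; note that the spoke side is configured last, once the gateway is running:

```python
# Sketch: hub <-> spoke peering pair. Resource groups and VNet names
# are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

hub_vnet = client.virtual_networks.get("hub-rg", "hub-vnet")
spoke_vnet = client.virtual_networks.get("spoke1-rg", "spoke1-vnet")

# Hub -> spoke: offer the hub's gateway and accept firewall-forwarded traffic.
client.virtual_network_peerings.begin_create_or_update(
    "hub-rg", "hub-vnet", "hub-to-spoke1",
    {
        "remote_virtual_network": {"id": spoke_vnet.id},
        "allow_gateway_transit": True,
        "allow_forwarded_traffic": True,
    },
).result()

# Spoke -> hub: only set use_remote_gateways once the hub gateway is
# completely deployed and running.
client.virtual_network_peerings.begin_create_or_update(
    "spoke1-rg", "spoke1-vnet", "spoke1-to-hub",
    {
        "remote_virtual_network": {"id": hub_vnet.id},
        "use_remote_gateways": True,
        "allow_forwarded_traffic": True,
    },
).result()
```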
By default, each subnet in those VNets will use an automatically propagated system route: send all traffic destined for on-premises straight to the gateway, across the peered connection. However, we want to take control of this routing and force traffic from the spokes to pass through the Azure Firewall before reaching the gateway.
To accomplish this, you will create and associate a route table resource for each spoke subnet that must communicate with on-premises networks. If 192.168.1.0/24 was the address space of the on-premises network(s) and 10.0.1.4 was the IP address of the Azure Firewall, the route in these tables would be:
- Name: OnPremisesTraffic (or any name you prefer)
- Address prefix: 192.168.1.0/24
- Next hop type: Virtual appliance
- Next hop IP address: 10.0.1.4
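Scripted with the Python SDK, the spoke route table and its subnet association might look like this; the spoke names and the subnet prefix are assumptions for illustration:

```python
# Sketch: spoke route table that forces on-premises-bound traffic
# through the firewall. Names and the subnet prefix are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

spoke_routes = client.route_tables.begin_create_or_update(
    "spoke1-rg",
    "spoke1-routes",
    {
        "location": "westeurope",
        "routes": [
            {
                "name": "onprem-via-firewall",
                "address_prefix": "192.168.1.0/24",
                "next_hop_type": "VirtualAppliance",
                "next_hop_ip_address": "10.0.1.4",
            }
        ],
    },
).result()

# Repeat this association for every spoke subnet that must talk to
# on-premises networks.
client.subnets.begin_create_or_update(
    "spoke1-rg",
    "spoke1-vnet",
    "app-subnet",
    {"address_prefix": "10.1.0.0/24", "route_table": {"id": spoke_routes.id}},
).result()
```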
Now the Azure Firewall is in control of everything flowing between on-premises and your applications running in Azure virtual networks. You can create Network Rule Collections to allow or deny (everything is denied by default) traffic flows as required.
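For illustration, here is a sketch that appends a Network Rule Collection to an existing firewall with the Python SDK. It assumes the classic rule-collection model rather than Firewall Policy, and the rule itself, allowing RDP from the office into a spoke, is purely an example:

```python
# Sketch: add a classic network rule collection allowing RDP from the
# on-premises network to a spoke. Names and the rule are examples only.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

firewall = client.azure_firewalls.get("hub-rg", "hub-firewall")
firewall.network_rule_collections = (firewall.network_rule_collections or []) + [
    {
        "name": "onprem-to-spokes",
        "priority": 200,
        "action": {"type": "Allow"},
        "rules": [
            {
                "name": "allow-rdp",
                "protocols": ["TCP"],
                "source_addresses": ["192.168.1.0/24"],
                "destination_addresses": ["10.1.0.0/16"],
                "destination_ports": ["3389"],
            }
        ],
    }
]

# Resubmit the full firewall resource with the new collection appended.
client.azure_firewalls.begin_create_or_update(
    "hub-rg", "hub-firewall", firewall
).result()
```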
On-premises is now considered external to and untrusted by your “virtual data center” in Azure – and that means you’ve created a form of isolation between malware on your user device networks and the valuable services and data in The Cloud.