Deploy Active Directory and Certificate Services Using Azure Resource Manager

Last Update: Sep 24, 2024 | Published: Jan 05, 2018

In this article, I’ll discuss how I deployed an Active Directory (AD) forest with 2 domain controllers, and a member server running certificate services, in Microsoft Azure. In the first part, I’ll walk you through the JSON Azure Resource Manager (ARM) templates used to provision the three virtual machines (VMs) and required infrastructure. I’ll show you how to add a PowerShell Desired State Configuration (DSC) resource to the template for deploying AD Certificate Services (ADCS) in Windows Server. And finally, I’ll show you how to use Visual Studio to provision the solution in Azure.

Azure Resource Manager (ARM) is a deployment model that allows organizations to provision Azure resources using JavaScript Object Notation files (JSON). Microsoft has a collection of templates in JSON format that can be used to provision Azure resources. If you need a primer on using Visual Studio and JSON templates to provision Azure resources, read Microsoft Azure: Use Visual Studio to Deploy a Virtual Machine on Petri.

The solution is based on two JSON templates, which I downloaded from Microsoft’s Quickstart gallery and PowerShell Desired State Configuration (DSC). The first template deploys two domain controllers in a new Active Directory forest with a high availability configuration. The second template deploys a virtual machine and joins it to the domain. Finally, I added some additional PowerShell DSC code to install Active Directory Certificate Services (ADCS) on the member server.

For a primer on working with JSON templates and Azure resources, see Aidan Finn’s articles on Petri.

If you don’t want to use readymade templates as the basis for your own project, Azure lets you download the JSON code required to deploy resources. For more information, see Creating JSON Templates From Azure Resource Groups on Petri.

Import the Project into Visual Studio

Before I start describing how the solution works, you can download the entire project (Petri_ADLAB5.zip) here and import it into Visual Studio (VS). If you don’t have a Visual Studio license, install the Community edition with the Azure SDK.

To import the project template into VS, you’ll need to put it in the My Exported Templates folder in Documents\Visual Studio 2017. Open the template by pressing CTRL+SHIFT+N in VS. In the New Project dialog, expand Installed on the left, and then click DeploymentProject. You can then select the project template and click OK.

On the right of VS, you will see the Solution Explorer pane. If you can’t see it, press CTRL+ALT+L. Here are all the project files and folders. Double-click azuredeploy.json. This is the main deployment code. The file will open in the central pane. On the left you should see the JSON Outline pane. If you can’t see it, click View > Other Windows > JSON Outline. In the JSON Outline pane you can quickly jump between different parts of the code. Expand Resources to see a list of Azure resources that the template provisions.

Infrastructure-as-Code Azure Resource Manager JSON template in Visual Studio (Image Credit: Russell Smith)

Quickstart Templates and Other Files

The first template I used was Create a New AD Domain with 2 Domain Controllers. It provisions two domain controllers with load-balanced public IP addresses. The second template, Join a VM to an Existing Domain, provisions a third VM and joins it to the domain. Each template also includes parameters in azuredeploy.parameters.json. Just as I did with the azuredeploy.json files, I combined the contents of these files. There is one more key file: Deploy-AzureResourceGroup.ps1. Visual Studio automatically generates this file. It is used to provision the Resource Group for the resources you will provision in Azure and it sets the artifacts path. The file can be edited if necessary. We’ll learn more about artifacts in the third part of this series.

Managing Dependencies

In this project, I combined the two templates and added PowerShell DSC code to deploy ADCS. But in retrospect, it might have been better to link the second template to the first. There are already several nested (linked) templates in the project. For more details on using linked templates, see Microsoft Azure: Using Linked ARM Templates on Petri.

When orchestrating the deployment of a complex application, it’s important to note that the order in which code appears in the template doesn’t necessarily reflect the order in which it is executed. The order of execution is vital because a second domain controller can’t be installed until the first is provisioned, a member server until at least one domain controller is available, and ADCS can’t be installed until the member server has been joined to the domain.

The code looks complex because you need to deploy the infrastructure to support the virtual machines, not just the VMs themselves. For example, a virtual network (VNet), network interfaces (NICs), and a load balancer are required.

Resource Manager determines the order in which code should be executed using the dependsOn parameter. For example, I wanted to deploy the second domain controller only after the first had been provisioned. You can see in the code that the forest isn’t created until the virtual machine on which the first domain controller will be deployed has been provisioned.

          "dependsOn": [
            "[resourceId('Microsoft.Compute/virtualMachines', variables('adPDCVMName'))]"
          ],

The dependsOn parameter requires a resourceId. But as shown in the code above, it’s possible to construct the resourceId on the fly from a variable. The two VMs hosting the domain controllers can be provisioned in parallel. But we must wait for the first DC to come online before we can configure Active Directory Domain Services (ADDS) on the second domain controller.

      "dependsOn": [
        "[concat('Microsoft.Compute/virtualMachines/',variables('adBDCVMName'),'/extensions/PrepareBDC')]",
        "Microsoft.Resources/deployments/UpdateBDCNIC"
      ],

The code above waits for the network interface card (NIC) that’s attached to the second domain controller to be updated with the IP address information of the first DC. In turn, UpdateBDCNIC waits for UpdateVNetDNS1, which waits for the forest to be installed on the first DC.

      "dependsOn": [
        "[concat('Microsoft.Compute/virtualMachines/', variables('adPDCVMName'),'/extensions/CreateADForest')]"
      ],

The ADDS server role is installed and the server promoted to the first domain controller in a new forest using PowerShell DSC. The CreateADForest resource calls a PowerShell DSC script (CreateADPDC.ps1) that uses Microsoft’s Active Directory DSC resources. If you open CreateADPDC.ps1 in VS, you’ll see that the file contains a set of declarations about how Windows Server should be configured. The xADDomain resource is used to determine how the ADDS bits should be installed. For more information on working with PowerShell DSC, see How Do I Create a Desired State Configuration? on Petri.

        xADDomain FirstDS
        {
            DomainName = $DomainName
            DomainAdministratorCredential = $DomainCreds
            SafemodeAdministratorPassword = $DomainCreds
            DatabasePath = "F:\NTDS"
            LogPath = "F:\NTDS"
            SysvolPath = "F:\SYSVOL"
            DependsOn = @("[WindowsFeature]ADDSInstall", "[xDisk]ADDataDisk")
        }
Infrastructure-as-Code PowerShell DSC file in Visual Studio (Image Credit: Russell Smith)


The DNS server address of the VNet is updated after the installation of the first domain controller with the domain controller’s IP address because it also hosts DNS. As you can see above, not only can we wait for a resource to be deployed but also for Azure VM extensions to complete.

The concat string function combines multiple string values or multiple arrays and returns a concatenated string or array. You can see it used in the previous two code examples, where dependsOn waits for a VM and for PowerShell DSC to complete on the VM. For a complete list of available string functions, see String functions for Azure Resource Manager templates on Microsoft’s website.
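As a minimal illustration of how concat builds a resource path from a variable (the names vmName and vmExtensionPath here are hypothetical, not taken from the project):

```json
{
  "variables": {
    "vmName": "dc01",
    "vmExtensionPath": "[concat('Microsoft.Compute/virtualMachines/', variables('vmName'), '/extensions/CreateADForest')]"
  }
}
```

At deployment time, vmExtensionPath would resolve to Microsoft.Compute/virtualMachines/dc01/extensions/CreateADForest, which is exactly the form dependsOn expects for a VM extension.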

Once the forest has been configured, PowerShell DSC is used to promote the second VM to a domain controller (PrepareADBDC.ps1 and CreateADBDC.ps1). After the third VM has been provisioned, the provisionvm1.ps1 (later renamed to PKI.ps1) script uses the PowerShell DSC xAdcsDeployment resource to install ADCS and configure it as a root certification authority (CA). Because I combined two templates for this solution, provisioning of the certificate services VM appears as [dnsLabelPrefix] in the JSON Outline panel in Visual Studio. As you can see in the Solution Explorer panel in VS, all the PowerShell DSC scripts must be uploaded to Azure artifact storage as zipped files.

Unlike the two domain controllers, where AD is configured using PowerShell DSC scripts, the member server is joined to the domain using the domain join ARM extension. Virtual machine extensions are small applications used for post-deployment configuration and automation tasks. For more information on the domain join extension, see Microsoft’s website here. Once the VM has been joined to the domain, ADCS is configured using PowerShell DSC.
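For reference, a domain join extension resource typically looks something like the sketch below. This follows the pattern used in Microsoft’s Quickstart templates; the parameter names (dnsLabelPrefix, domainToJoin, and so on) are assumptions and may differ from those used in this project.

```json
{
  "apiVersion": "2015-06-15",
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(parameters('dnsLabelPrefix'), '/joindomain')]",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[resourceId('Microsoft.Compute/virtualMachines', parameters('dnsLabelPrefix'))]"
  ],
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "JsonADDomainExtension",
    "typeHandlerVersion": "1.3",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "Name": "[parameters('domainToJoin')]",
      "User": "[concat(parameters('domainToJoin'), '\\', parameters('domainUsername'))]",
      "Restart": "true",
      "Options": "[parameters('domainJoinOptions')]"
    },
    "protectedSettings": {
      "Password": "[parameters('domainPassword')]"
    }
  }
}
```

Note that the password sits in protectedSettings, which keeps the credential out of the deployment history and logs.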

Reviewing the Situation

There’s no doubt that Infrastructure-as-Code can be complicated. While it is not compulsory to understand exactly how each template works and orchestrates the provisioning process, if you want to deploy complex apps or combine existing templates, an understanding of this process can help you troubleshoot any issues you might come across.

But we still need to install and configure AD Certificate Services (ADCS) on the member server.

Add a Resource

Active Directory is configured on the two domain controllers using PowerShell DSC. DSC is Microsoft’s management platform based on PowerShell. It allows DevOps and system administrators to manage IT and development infrastructure using code. DSC uses a declarative model that lets you state how you’d like servers to be configured without having to worry about the ‘mechanics’. For more information on working with PowerShell DSC, see How Do I Create a Desired State Configuration? on Petri.

The first step is to add a PowerShell DSC resource to the third VM. Remember that this VM is a member of our domain.

  • Make sure that the azuredeploy.json file is open in the central pane.
  • Locate the VM in the JSON Outline panel on the left, right-click and select Add New Resource from the menu.
  • In the Add Resource dialog, give the new resource a name. I called it provisionvm.
  • Select a VM from the Virtual machine drop-down menu. In this example, the VM resource is called [dnsLabelPrefix].
  • Click Add.
Add a PowerShell DSC resource to a project in Visual Studio (Image Credit: Russell Smith)

The azuredeploy.json template will be updated and the code that was added highlighted in the center panel. We need to make sure that the PowerShell DSC code will run only after the VM has been joined to the domain. In this case, I want to install certificate services as an enterprise root certification authority (CA), i.e. a CA that is integrated with Active Directory, so we need to wait until the server is not only provisioned but also joined to the domain. Let’s modify the dependsOn parameter to make sure the PowerShell DSC resource isn’t provisioned until after the domain join operation. By default, dependsOn for the resource looks like this:

          "dependsOn": [
            "[resourceId('Microsoft.Compute/virtualMachines', parameters('PKI'))]"

But we need to change it to this:

          "dependsOn": [
            "[resourceId('Microsoft.Compute/virtualMachines', parameters('PKI'))]",
            "[concat('Microsoft.Compute/virtualMachines/', parameters('PKI'),'/extensions/joindomain')]"

There are two more parameters that we need to modify. modulesUrl should be set to a variable name:

"modulesUrl": "[variables('pkiTemplateUri')]",

pkiTemplateUri is defined in the list of variables at the top of the azuredeploy.json and is set to the following path:

"pkiTemplateUri": "[concat(parameters('_artifactsLocation'),'/windows-powershell-dsc/DSC/provisionvm.ps1.zip', parameters('_artifactsLocationSasToken'))]",

Finally, configurationFunction should be set as follows:

"configurationFunction": "[variables('pkiConfigurationFunction')]",

And the pkiConfigurationFunction variable set like this:

"pkiConfigurationFunction": "provisionvm.ps1\\rootca",

Where provisionvm.ps1 is the name of the PowerShell DSC source code and rootca is the configuration name in the file that we want to call. Note that in the final version of the project, I changed the name of provisionvm.ps1 to PKI.ps1.

PowerShell DSC

Now let’s create the PowerShell DSC code that will deploy ADCS on the virtual machine. I wrote the code that follows and it is based on examples by Microsoft that show how to install Windows features and use the xAdcsDeployment resource. When you add a PowerShell DSC resource to the project, VS automatically creates a PowerShell file for you. You’ll be able to see it in Solution Explorer in the DSC folder and it will have the same name as you gave the resource that was added in the previous steps. There’s some sample code in the file that VS creates. You can safely delete it or use it as the basis for your own code.

Configure Active Directory Certificate Services Using PowerShell DSC (Image Credit: Russell Smith)

The first step is to import two required DSC resources, xAdcsDeployment and PSDesiredStateConfiguration, using the Import-DscResource cmdlet. Then the ADCS bits are installed on the server. Because DSC uses a declarative syntax, we specify just that the component should be ‘Present’ and PowerShell will work out the rest. The same method is used to install the ADCS management tools. Finally, the xAdcsDeployment resource is used to configure ADCS once it is installed.

Configuration rootca
{
    param
    (
        [Parameter(Mandatory)]
        [String]$DomainName,

        [Parameter(Mandatory)]
        [System.Management.Automation.PSCredential]$Admincreds
    )

    Import-DscResource -ModuleName xAdcsDeployment, PSDesiredStateConfiguration

    Node localhost
    {
        # Install the ADCS Certificate Authority role
        WindowsFeature ADCSCA
        {
            Name = 'ADCS-Cert-Authority'
            Ensure = 'Present'
        }

        # Configure the CA as an Enterprise Root CA
        xADCSCertificationAuthority ConfigCA
        {
            Ensure = 'Present'
            # Credential = $LocalAdminCredential
            CAType = 'EnterpriseRootCA'
            CACommonName = $Node.CACommonName
            CADistinguishedNameSuffix = $Node.CADistinguishedNameSuffix
            ValidityPeriod = 'Years'
            ValidityPeriodUnits = 20
            CryptoProviderName = 'RSA#Microsoft Software Key Storage Provider'
            HashAlgorithmName = 'SHA256'
            KeyLength = 4096
            DependsOn = '[WindowsFeature]ADCSCA'
        }

        # Install the ADCS management tools
        WindowsFeature RSAT-ADCS
        {
            Ensure = 'Present'
            Name = 'RSAT-ADCS'
            DependsOn = '[WindowsFeature]ADCSCA'
        }
        WindowsFeature RSAT-ADCS-Mgmt
        {
            Ensure = 'Present'
            Name = 'RSAT-ADCS-Mgmt'
            DependsOn = '[WindowsFeature]ADCSCA'
        }
    }
}

The configuration is called rootca. You can choose any name you like. Notice that the code block that installs ADCS is called ADCSCA. Again, you can choose whatever name you want. In the code, you can see that DSC uses a parameter called DependsOn. This allows us to say that we won’t configure the CA until the ADCS server role is installed on the server.

But the real problems are only just about to start: getting VS to successfully provision the resources in Azure is the hard part. The final step is to provision the resources, which you can do directly from Visual Studio.

Prerequisites

In part one, I showed you how to import the project template into Visual Studio. But before you can use it to provision resources in Azure, there are several components that need to be in place. If you haven’t already got an Azure subscription, sign up for a free trial here.

You’ll also need the latest version of PowerShell, which is part of the Windows Management Framework (WMF). If you have Windows 10, the latest version of WMF should be installed on your device. If you are using an earlier version of Windows, you can download WMF 5.1 from Microsoft’s website here. You will also need Microsoft Azure PowerShell installed. I recommend that you use Microsoft’s Web Platform Installer to get the Azure PowerShell cmdlets.

Because we’re using PowerShell Desired State Configuration (DSC) as part of the project, you’ll need to install the following modules on your PC:

  • xActiveDirectory
  • xAdcsDeployment
  • xDisk
  • xNetworking
  • xPendingReboot
  • xStorage

To install a module, open Windows PowerShell and use the Install-Module cmdlet as shown below:

Install-Module -Name xActiveDirectory -Scope CurrentUser
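If you’d rather install all six modules in one go, a simple loop works (this assumes the PowerShell Gallery is reachable from your PC; -Force suppresses the confirmation prompts):

```powershell
# Install all of the DSC resource modules the project depends on
$modules = 'xActiveDirectory', 'xAdcsDeployment', 'xDisk',
           'xNetworking', 'xPendingReboot', 'xStorage'
foreach ($module in $modules) {
    Install-Module -Name $module -Scope CurrentUser -Force
}
```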

Deploy the Project

Now we are ready to deploy the project. Follow the instructions to provision the resources from VS.

  • In the Solution Explorer panel, right click the project name and select Deploy > New from the menu.
  • In the Deploy to Resource Group dialog, select an account that you will use to connect to Azure in the first dropdown menu. If no accounts appear, click Add an account and follow the onscreen instructions to add a new account.
  • Select an Azure subscription from the Subscriptions menu.
  • Click the Resource group menu and select <Create New…>.
  • In the Create Resource Group dialog, enter a name for the new resource group and then select a region.


I recommend setting the name of the resource group to ActiveDirectory. If you want to call it something different, it is best to modify the $ResourceGroupName string in the parameters section of Deploy-AzureResourceGroup.ps1.
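For context, the relevant part of the generated script looks something like this simplified excerpt (the real file declares several additional parameters, and the default shown for $ResourceGroupLocation is just an example):

```powershell
# Simplified excerpt from the top of Deploy-AzureResourceGroup.ps1
param(
    [string] $ResourceGroupLocation = 'West US',
    [string] $ResourceGroupName = 'ActiveDirectory'
)
```

Changing the $ResourceGroupName default here keeps the script and the name you chose in the Create Resource Group dialog in sync.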


Deploy the solution in Visual Studio 2017 (Image Credit: Russell Smith)
  • Select azuredeploy.json from the Deployment template menu.
  • Select azuredeploy.parameters.json from the Template parameters file menu.
  • Select a storage account from the Artifacts storage account menu. If no storage accounts exist in the subscription, the deployment script will automatically create a new storage account.

Artifacts are items, like PowerShell DSC files, that virtual machines must access as part of the provisioning process. The artifacts storage account should be in the same region as the deployed resources to avoid access token expiry errors like the one shown below.

Deployment sometimes stops with the following error:

The access token expiry UTC time ’12/19/2017 1:23:07 PM’ is earlier than current UTC time ’12/19/2017 1:23:09 PM’.

I created the artifacts storage account manually and put it in its own resource group. This allows me to delete the deployed resources by deleting the ActiveDirectory resource group without losing the artifacts storage account. If you decide to create a storage account manually, you can use the New-AzureRmResourceGroup and New-AzureRmStorageAccount PowerShell cmdlets as shown below.

New-AzureRmResourceGroup -Name 'ARM_Deploy_Staging' -Location 'West US'

New-AzureRmStorageAccount -ResourceGroupName 'ARM_Deploy_Staging' -AccountName 'armdeploypetri' -Location 'West US' -SkuName 'Standard_LRS'
  • Click Deploy.
  • Before provisioning starts, there is the opportunity to edit deployment parameters. Check that you are happy with the parameter values and click Save.

The password for both the adadmin and vmadmin accounts is Password12341234. You can change this if you want. The _artifactsLocation and _artifactsLocationSasToken parameters are automatically generated by default. But I found that the _artifactsLocation parameter value should be set manually. You can get the URL for your storage account in the Azure management portal. Make sure that there is no trailing slash at the end of the URL.

Set parameter values before deployment (Image Credit: Russell Smith)

Right:

https://armdeploypetri.blob.core.windows.net

Wrong:

https://armdeploypetri.blob.core.windows.net/

Artifacts storage account in Azure (Image Credit: Russell Smith)

Deployment will now start, and you can monitor its progress in the Output panel. If you can’t see the Output panel, press CTRL+ALT+O. The deployment can take a long time, anything from 30 minutes to an hour, so be patient.

Notes From the Field

If you observe the notes I made about working with PowerShell DSC and JSON templates in the previous two parts of this series, your resources should provision without any errors. However, the deployment might still fail with an error message like this:

Error: Code=InvalidContentLink; Message=Unable to download deployment content from ‘https://************.blob.core.windows.net/windows-powershell-dsc/nestedtemplates/vnet.json.’

The deployment validation failed

To solve this problem, make sure that all the .json and .ps1 files in the project have the Build Action set to Content and Copy to Output Directory set to Copy always. Right click each file in Solution Explorer and select Properties from the menu to access the Property Pages dialog.

Setting properties on files in Visual Studio 2017 (Image Credit: Russell Smith)

Most other problems I encountered were with PowerShell DSC. My original script used a separate file for the $ConfigData section. This is not supported by Deploy-AzureResourceGroup.ps1, which is an automatically generated script that manages deployment and uploads artifacts to Azure storage. Moving the $ConfigData into my PowerShell DSC script also didn’t help.

The solution was to use the same method for passing parameters as the DSC scripts used to provision Active Directory on the VMs. A parameters section is required at the top of the DSC script and I added the DomainName and AdminCreds parameters to the part of the azuredeploy.json template that runs DSC for the certification authority configuration. The values for username and password are taken from variables at the top of the template (adminUserName and adminPassword). Note that AdminPassword is in the protectedSettings section. There’s a reference to it in the settings section: “Password”: “PrivateSettingsRef:AdminPassword”.

           "settings": {
              "modulesUrl": "[variables('pkiTemplateUri')]",
              "sasToken": "",
              "configurationFunction": "[variables('pkiConfigurationFunction')]",
              "properties": {
                "DomainName": "[parameters('domainName')]",
                "AdminCreds": {
                  "UserName": "[parameters('adminUserName')]",
                  "Password": "PrivateSettingsRef:AdminPassword"
                }
              }
            },
            "protectedSettings": {
              "Items": {
                "AdminPassword": "[parameters('adminPassword')]"
              }
            }

File path locations are also important, especially if you decide to tidy up a project by moving files to different locations. For example, I moved PKI.ps1 from the root of the project to the DSC folder so that it was located with the three other PowerShell DSC scripts. But this also required me to update the pkiTemplateUri variable in azuredeploy.json to reflect the new file location. windows-powershell-dsc is a container in Azure artifact blob storage to which DSC archives generated by Deploy-AzureResourceGroup.ps1 are uploaded from the DSC project folder.
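Assuming the rename to PKI.ps1 and the move into the DSC folder described above, the updated variable should look something like this (the exact artifact path is my assumption, based on the project’s existing convention):

```json
"pkiTemplateUri": "[concat(parameters('_artifactsLocation'), '/windows-powershell-dsc/DSC/PKI.ps1.zip', parameters('_artifactsLocationSasToken'))]"
```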

Testing PowerShell DSC configuration in a virtual machine (Image Credit: Russell Smith)

To test the deployment, you should make sure that your PowerShell DSC script runs correctly on a server before trying to deploy it remotely from VS because it’s much faster to carry out the testing locally. I commented out two lines of code in the PowerShell DSC script (PKI.ps1) that you can use to generate a MOF file and run the configuration locally. I also commented out the $ConfigData section because I’m using the param section at the top of the script to pass those values from the JSON template. If you want to test the script locally on the device, you need to uncomment the $ConfigData section and comment the param section.

rootca -ConfigurationData $ConfigData 
Start-DscConfiguration -ComputerName localhost -Wait -Force -Path C:\temp\rootca -Verbose

To use this code, create a directory called temp in the root directory of the server where you want to test the configuration. The PowerShell DSC script should be placed in the temp directory. Open a PowerShell prompt, change the working directory to temp (cd C:\temp), and uncomment the two lines of code shown above and the $ConfigData section in PKI.ps1 by removing the hashes. Then run all the code in the file. The Local Configuration Manager will start the configuration and you can check to see if it works. You can also use Test-DscConfiguration to check that all the components are deployed as expected:

Test-DscConfiguration -ComputerName localhost

If you need to add new or existing files to the project, right-click a folder in Solution Explorer and select Add > New Item or Add > Existing Item from the menu. If you are adding a new item, select PowerShell Script Data File in the Add New Item dialog, change the name if needed in the Name field, and click Add. Don’t forget to set the Copy to Output Directory and Build Action properties for the new file.

A successful deployment in Visual Studio (Image Credit: Russell Smith)

Azure ARM templates are supposed to be idempotent; that is, if a resource has already been deployed, ARM will not redeploy it if it is still configured as set out in the JSON template. So, you should be able to redeploy the project in VS, and provisioning will pick up where it left off without redeploying existing resources. But in practice, I found that I needed to delete the resource group containing the resources before I could redeploy the project without receiving an error. It might depend on what stage of the process the deployment stops at.

Template deployment returned the following errors:

Resource Microsoft.Compute/virtualMachines ‘anprefix5’ failed with message ‘{
  “error”: {
    “code”: “PropertyChangeNotAllowed”,
    “target”: “osDisk.vhd.uri”,
    “message”: “Changing property ‘osDisk.vhd.uri’ is not allowed.”
  }
}’

When a DSC script has finished running successfully, you’ll see a message in the Output panel in Visual Studio:

09:43:42 – VERBOSE: 9:43:42 AM – Resource Microsoft.Resources/deployments ‘UpdateBDCNIC’ provisioning status is succeeded

You won’t always see this message for the last DSC script to run. But that doesn’t mean that it hasn’t run successfully. The last DSC script in the template provisions certificate services on the third VM. So, log on after deployment and check to see if AD Certificate Services appears on the dashboard in Server Manager. You might need to wait a few minutes for the DSC script to complete.

Checking that Certificate Services has deployed in the VM (Image Credit: Russell Smith)


In this article, I showed you how to use a set of Azure JSON templates and PowerShell DSC to deploy an Active Directory forest with two domain controllers and a third server as an Enterprise Root Certification Authority.
