Enrolling Terraform Deployed AVD Session Hosts into Intune

Background / Requirements:

This post describes a problem my team recently faced when enrolling Terraform-deployed AVD session hosts into Intune.

Below is a summary of the high-level requirements for the wider AVD deployment.

  • Deploying AVD programmatically using Terraform through Azure DevOps Pipelines
  • Personal host pool only
  • All session hosts deployed directly from an Azure Marketplace Windows 10 Multi-session image (no custom images)
  • All session hosts are to be Azure AD joined only
  • All session hosts are to be enrolled in Intune for MDM (including app deployment)

Problem

The deployed session hosts would join Azure AD without issue; however, they would not enrol in Intune.

Solution

The solution was simple in hindsight, but admittedly took some head-scratching to get there.

To get to the solution, we deployed a session host manually from the Azure portal and compared the resultant JSON from the Overview pane of the virtual machine to that of a session host deployed using Terraform.

In comparing the JSON output we found that the VM extension used for AAD Login for Windows had an additional settings block defined for MDM.

We updated the Terraform code block for the same VM extension to include the missing settings block and redeployed the session hosts. Thankfully, each session host auto-enrolled in Intune!

resource "azurerm_virtual_machine_extension" "AADLoginForWindows" {
  depends_on = [
    azurerm_virtual_machine_extension.registersessionhost,
  ]

  name                       = "AADLoginForWindows"
  virtual_machine_id         = azurerm_windows_virtual_machine.vm.id
  publisher                  = "Microsoft.Azure.ActiveDirectory"
  type                       = "AADLoginForWindows"
  type_handler_version       = "1.0"
  auto_upgrade_minor_version = true

  # The mdmId settings block below is what triggers Intune (MDM) enrolment
  settings = <<SETTINGS
    {
        "mdmId": "0000000a-0000-0000-c000-000000000000"
    }
SETTINGS
}

Notable thanks to Chris Aitken, my AVD and DevOps SME, for his efforts and the hours sitting on Teams calls to get this fixed!

If you have any queries or questions, please reach out on Twitter or LinkedIn.

Thanks!

Learning Terraform > WVD-as-a-Module

Learning Terraform Series
01. Deploying WVD
02. Remote State
03. WVD-as-a-Module [This Post]

In this third post in my Learning Terraform series I’ll explore the concept of Modules.

What is a Module?

“With Terraform, you can put your code inside of a Terraform module and reuse that module in multiple places throughout your code. Instead of having the same code copy/pasted in the staging and production environments, you’ll be able to have both environments reuse code from the same module.

This is a big deal. Modules are the key ingredient to writing reusable, maintainable, and testable Terraform code. Once you start using them, there’s no going back. You’ll start building everything as a module, creating a library of modules to share within your company, start leveraging modules you find online, and start thinking of your entire infrastructure as a collection of reusable modules.”

Source: https://blog.gruntwork.io/how-to-create-reusable-infrastructure-with-terraform-modules-25526d65f73d

Learning about Modules has completely changed how I approach Terraform. Rather than thinking of every Terraform file as a standalone entity, I now look at which common elements I can turn into a module.

I’ve used Windows Virtual Desktop (WVD) as a common theme for learning Terraform in my previous posts, and it fits extremely well into the model of a Module because the architecture of WVD is static; that is to say, the relationship between a Workspace, Application Group and Host Pool doesn’t change.

The Anatomy of a Module

The beauty of a Module is in its simplicity.

In short, any Terraform file is pretty much a module by default.

There is no discernible difference between the syntax and structure of a standard configuration file and that of a module, other than that when calling a module you pass in all unique resource values from the main configuration file rather than from a variables file.
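To make that concrete, here is a minimal sketch of a module call (the folder and variable names here are illustrative, not from my repo): the root main.tf references the module by its folder path and passes the values in as arguments.

```hcl
# Root main.tf calls the module; values are passed in as arguments
module "example" {
  source = "../Modules/Example-Module" # path to the module's folder

  # These map to variable blocks declared inside the module
  rgname = "My-Resource-Group"
  region = "West US 2"
}
```

Everything inside the module folder is just ordinary Terraform code, which is exactly the point.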

I’ve tried to show this in the figure below. In a standard Terraform configuration you would create a folder for your code and store the main and variables files within it. The main.tf file contains the Terraform provider (Azure, AWS etc.) and the resources to create, and you can pass in values from a variables.tf file in the same folder.

Figure 1 – Standard Terraform Configuration Architecture

When calling or referencing a module however you would specify the variable values within the main configuration file, shown below in blue.

The module would commonly reside in its own folder structure, a central module library perhaps, the structure of which is identical to a standard Terraform configuration.

The biggest notable difference, and this will become evident in the code, is that the module’s variables file (shown in green) doesn’t contain any default values, as those are passed in from outside.
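The difference is easiest to see side by side; a sketch with an illustrative variable name (the two blocks would live in different files, shown together here only for comparison):

```hcl
# In a standalone configuration's variables.tf, the variable
# typically carries a default value:
variable "rgname" {
  description = "Resource Group Name"
  default     = "My-Resource-Group"
}

# In a module's variables file there is no default; the caller
# must supply the value when the module is called:
variable "rgname" {
  description = "Resource Group Name"
  type        = string
}
```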

Figure 2 – Terraform Module Architecture

WVD-as-a-Module

Building on the code from my previous posts, I’ve now converted the code to deploy WVD into a reusable module.

This code is available from my GitHub repo, here

Firstly, the below code is the main.tf that will call the module; in figure 2 above this is the script in blue.

# Get AzureRM Terraform Provider
provider "azurerm" {
  version = "2.31.1" #Required for WVD
  features {}
}

# Remote State, replace with your resource group, storage account and container name
terraform {
  backend "azurerm" {
    storage_account_name = "vfftfstateusw2"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
    resource_group_name  = "VFF-USE-RG-WVD-REMOTE"
  }
}

# Create resource group
resource "azurerm_resource_group" "default" {
  name     = "VFF-USW-RG-WVD-FromMod"
  location = "West US 2"
}

# Call WVD-as-a-Module and pass in variables
module "WVD-as-a-Module" {
  source                         = "../Modules/WVD-as-a-Module"
  rgname                         = azurerm_resource_group.default.name
  region                         = azurerm_resource_group.default.location
  pooledhpname                   = "VFF-WUS-TFRM-Mod"
  pooledhpfriendlyname           = "VFF Pooled Host Pool"
  pooledhpdescription            = "VFF Pooled Host Pool"
  pooledhpremoteappname          = "VFF-WUS-TFRM-Mod-RA"
  pooledhpremoteappfriendlyname  = "VFF Pooled Host Pool Remote Apps"
  pooledhpremoteappdescription   = "VFF Pooled Host Pool Remote Apps"
  pooledhpdesktopappname         = "VFF-WUS-TFRM-Mod-DT"
  pooledhpdesktopappfriendlyname = "VFF Pooled Host Pool Remote Apps"
  pooledhpdesktopappdescription  = "VFF Pooled Host Pool Remote Apps"
  workspace                      = "VFF-Terraform-Wkspc-Mod"
  workspacefriendlyname          = "VFF-Terraform-Workspace"
  workspacedesc                  = "VFF-Terraform-Workspace"
  pooledhpmaxsessions            = 50
}

This next code is the WVD-as-a-Module main configuration file; in figure 2 this is shown in orange.

Note, I recommend creating a new folder structure for your modules.

terraform {
  required_version = ">=0.12"
}

# Create "Pooled" WVD Host Pool
resource "azurerm_virtual_desktop_host_pool" "pooleddepthfirst" {
  location                 = var.region
  resource_group_name      = var.rgname
  name                     = var.pooledhpname
  friendly_name            = var.pooledhpfriendlyname
  description              = var.pooledhpdescription
  type                     = "Pooled"
  maximum_sessions_allowed = var.pooledhpmaxsessions
  load_balancer_type       = "DepthFirst"
}

#Create RemoteApp Application Group
resource "azurerm_virtual_desktop_application_group" "pooledremoteapp" {
  name                = var.pooledhpremoteappname
  location            = var.region
  resource_group_name = var.rgname
  type                = "RemoteApp"
  host_pool_id        = azurerm_virtual_desktop_host_pool.pooleddepthfirst.id
  friendly_name       = var.pooledhpremoteappfriendlyname
  description         = var.pooledhpremoteappdescription
}

#Create Desktop Application Group
resource "azurerm_virtual_desktop_application_group" "pooleddesktopapp" {
  name                = var.pooledhpdesktopappname
  location            = var.region
  resource_group_name = var.rgname
  type                = "Desktop"
  host_pool_id        = azurerm_virtual_desktop_host_pool.pooleddepthfirst.id
  friendly_name       = var.pooledhpdesktopappfriendlyname
  description         = var.pooledhpdesktopappdescription
}

# Create Workspace
resource "azurerm_virtual_desktop_workspace" "workspace" {
  name                = var.workspace
  location            = var.region
  resource_group_name = var.rgname
  friendly_name       = var.workspacefriendlyname
  description         = var.workspacedesc
}

# Associate RemoteApp Application Group with Workspace
resource "azurerm_virtual_desktop_workspace_application_group_association" "workspaceremoteapp" {
  workspace_id         = azurerm_virtual_desktop_workspace.workspace.id
  application_group_id = azurerm_virtual_desktop_application_group.pooledremoteapp.id
}

# Associate Desktop Application Group with Workspace
resource "azurerm_virtual_desktop_workspace_application_group_association" "workspacedesktop" {
  workspace_id         = azurerm_virtual_desktop_workspace.workspace.id
  application_group_id = azurerm_virtual_desktop_application_group.pooleddesktopapp.id
}

Lastly, this is the associated variables file for the module; this is shown in green in figure 2.

variable "rgname" {
  description = "Resource Group Name"
  type        = string
}

variable "region" {
  description = "Region"
  type        = string
}

variable "pooledhpname" {
  description = "Pooled Host Pool Name"
  type        = string
}

variable "pooledhpmaxsessions" {
  description = "Max sessions per pooled host"
  type        = number
}

variable "pooledhpfriendlyname" {
  description = "Pooled Host Pool Friendly Name"
  type        = string
}

variable "pooledhpdescription" {
  description = "Pooled Host Pool Description"
  type        = string
}

variable "pooledhpremoteappname" {
  description = "Pooled Host Pool RemoteApp App Group Name"
  type        = string
}

variable "pooledhpremoteappfriendlyname" {
  description = "Pooled Host Pool RemoteApp App Group Friendly Name"
  type        = string
}

variable "pooledhpremoteappdescription" {
  description = "Pooled Host Pool RemoteApp App Group Description"
  type        = string
}

variable "pooledhpdesktopappname" {
  description = "Pooled Host Pool Desktop App Group Name"
  type        = string
}

variable "pooledhpdesktopappfriendlyname" {
  description = "Pooled Host Pool Desktop App Group Friendly Name"
  type        = string
}

variable "pooledhpdesktopappdescription" {
  description = "Pooled Host Pool Desktop App Group Description"
  type        = string
}

variable "workspace" {
  description = "WVD Workspace Name"
  type        = string
}

variable "workspacefriendlyname" {
  description = "WVD Workspace Friendly Name"
  type        = string
}

variable "workspacedesc" {
  description = "WVD Workspace Description"
  type        = string
}

This code is available from my GitHub repo, here

If you have any questions or queries please get in touch on Twitter

Thanks.

Learning Terraform > Remote State

Learning Terraform Series
01. Deploying WVD
02. Remote State [This Post]
03. WVD-as-a-Module

This is the second article in a series I’m enjoying writing on my journey to learn Terraform. In this post I’m going to cover the concept of State within Terraform and, more importantly, why its location should be carefully considered if you’re using Terraform in a production environment.

If you’re just starting out with Terraform and Infrastructure as Code, it might be worth spending a few minutes reading my post in which I cover the fundamentals – you can find that post here.

So, what is State and why is it so important?

Terraform keeps a detailed record of everything it creates: every network, subnet, VM, everything!  That way, if you need to update a particular infrastructure, for example to change a VM size or deploy a new subnet, Terraform knows exactly which resources it previously provisioned, and therefore which it has to destroy in order to recreate.

Remember, Terraform is a declarative language; it only knows the desired end-state. That is, if you need to change the size of a Terraform-created VM from Ds4_v4 to Ds8_v4, Terraform will not simply scale it as you would manually from the Azure Portal, via PowerShell or from the Azure CLI; instead, depending on whether the provider supports updating that attribute in place, it may destroy the VM and redeploy it from scratch using the new size.
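If a replacement would be destructive (a production VM, a database), Terraform has a built-in safety net. This isn’t part of the original configuration in this post, just a sketch of the lifecycle meta-argument:

```hcl
resource "azurerm_windows_virtual_machine" "vm" {
  # ... the usual VM arguments go here ...

  lifecycle {
    # terraform plan/apply will error out rather than destroy
    # and recreate this resource as part of a change
    prevent_destroy = true
  }
}
```

Running terraform plan also flags any replacement clearly before you apply, so it is always worth reading the plan output.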

This detailed historical record is known as the Terraform State.

Terraform records its State in a custom JSON format in the same folder the main Terraform files are executed from, saving it as terraform.tfstate.

This file contains values that map the Terraform resources declared in your configuration files (e.g. main.tf) to the resultant resources created in Azure.

Below is an example of a subset of JSON from a terraform.tfstate file, showing the Azure Resource Group created by Terraform:

{
   "version": 4,
   "terraform_version": "0.13.5",
   "serial": 28,
   "lineage": "6xxxxxx2e-cxx0-7xx9-e68e-ecxxxxxxx50",
   "outputs": {},
   "resources": [
     {
       "mode": "managed",
       "type": "azurerm_resource_group",
       "name": "default",
       "provider": "provider[\"registry.terraform.io/hashicorp/azurerm\"]",
       "instances": [
         {
           "schema_version": 0,
           "attributes": {
             "id": "/subscriptions/xxxxxxxxx/resourceGroups/VFF-WUS-RG-TFWVD",
             "location": "westus2",
             "name": "VFF-WUS-RG-TFWVD",
             "tags": {},
             "timeouts": null
           },
           "private": "xxxxxx"
         }
       ]
     }
   ]
}

One of the key benefits of Terraform, and Infrastructure as Code in general, is the ability to control how an infrastructure changes over time, also known as configuration drift.

As mentioned earlier, Terraform is only concerned with the desired end-state, and by using the State file Terraform can compare what should be deployed with what is deployed.

For example, if through Terraform you deploy 10 VMs and an over-eager penny-pinching admin deletes 2 without your knowledge, you can use Terraform Plan, which uses the State file to compare previously provisioned resources against the resources actually running live, to quickly and easily remediate the infrastructure and return it to the desired end-state.

Remember, every time you run Terraform Plan, it will fetch the latest status of deployed resources in Azure and compare that to what is in your Terraform configuration to determine what changes need to be applied. In other words, the output of the plan command is the difference between the code in your configuration files and the infrastructure deployed in Azure as discovered via the state file.

Needless to say, the Terraform State file is hugely important and should be rigorously protected, as mismanagement such as overwriting, corruption or loss of this file can be disastrous, especially in a production environment.

That said, if like me you’re running Terraform from your personal computer against a Dev/Test subscription in Azure for learning purposes, you’re not going to be distraught if you accidentally overwrite the state file in a moment of copy-and-paste madness.

But what if you’re working collaboratively as part of a wider DevOps team, especially in a Dev/Test environment where you’re consistently and dynamically iterating the infrastructure? You’re working on one set of functionality while a colleague works on something else within the same Terraform configuration; this is where it gets tricky.

We’ve all been in those situations, before the days of modern SharePoint and OneDrive, where you’ve opened a Word document from a mapped drive to a server share to find someone else already had it open. Too impatient to wait for them to close it, you create a copy on your desktop, modify it and copy it back to the mapped drive, overwriting previous copies and awakening a fury in your usually sedate colleague the likes of which you’ve never seen before. Well, imagine that same scenario with a shared set of Terraform files, including the State file!

There are options to host the files in shared version-controlled locations, but none are ideal, as the very nature of those systems will have you working on an offline, locked or checked-out copy at the same time as a colleague, and you’ll be playing beat-the-clock as to who checks their copy back in last.

Thankfully, Terraform offers a more elegant solution to this problem: its built-in support for remote backends.

A Terraform backend determines how Terraform stores state.

The default backend is the local backend which stores the state file on your local disk.

Remote backends however allow you to store the state file in a remote shared storage location, in the case of this example, an Azure Storage account.

Using an Azure Storage Account to store the State file solves several issues; I’ll cover a few of the main ones below.

Resiliency > Azure Storage Accounts can utilise the plethora of replication and resiliency benefits of Azure, such as LRS, ZRS and GRS.

Locking > Blob Storage natively supports locking (via blob leases). This means that when Terraform is either reading or writing the state file it locks it, which mitigates the chances of two people writing to the same file at the same time.

Encryption > Blob Storage Containers are encrypted at rest by default.

Versioning > Versioning can be enabled to retain a history of file versions.

Access Control > Azure Network Security including Private Link, Azure RBAC, Access Keys and Shared Access Signatures can be used to secure access to the State file to only authorised users and networks.

Cheap > Even the most complex State files are rarely significantly large so the cost of storing them in Azure is minimal.
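Several of the benefits above (replication, versioning) can themselves be codified in Terraform. Below is a hedged sketch using the azurerm provider; the names are illustrative, and note that blob versioning support in the provider may require a newer azurerm version than the 2.31.1 pinned elsewhere in this post:

```hcl
# Storage account to hold the remote state, with geo-redundant
# replication and blob versioning enabled
resource "azurerm_storage_account" "tfstate" {
  name                     = "examplestatestorage"
  resource_group_name      = "Example-RG"
  location                 = "West US 2"
  account_tier             = "Standard"
  account_replication_type = "GRS" # geo-redundant copies of the state

  blob_properties {
    versioning_enabled = true # keep a history of state file versions
  }
}

# Container the terraform.tfstate blob will live in
resource "azurerm_storage_container" "tfstate" {
  name                 = "tfstate"
  storage_account_name = azurerm_storage_account.tfstate.name
}
```

As noted later in this post, it is best to keep this configuration separate from the main configuration so a terraform destroy can never take the state storage down with it.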

To summarise, storing State files in a local backend (your own computer’s local disk) is fine for single-user dev/test projects, but it is not suitable for projects involving a team, nor for production environments. Instead, it is recommended to store the file on a durable, resilient, enterprise-scale storage solution, such as an Azure Storage Account.

So, let’s look at how to configure your Terraform configuration files to use a Remote Backend.

First, you’re going to need a Storage Account to store the State file; you can provision this manually, via PowerShell or using Terraform.

Note, I’d recommend creating this outside of the main Terraform configuration file so you don’t subsequently delete it when you use Terraform Destroy to remove your provisioned resources.

To create a Storage Account using the Azure CLI, execute the below script from the Azure Cloud Shell or locally; you should already have the Az CLI tools installed as they’re a pre-req of Terraform.

This code is also available on my GitHub, here.

$RESOURCE_GROUP_NAME  = "Resource-Group-To-Host-Storage-Account"
$STORAGE_ACCOUNT_NAME = "Storage-Account-Name"
$CONTAINER_NAME       = "Container-Name"

# Create resource group
az group create --name $RESOURCE_GROUP_NAME --location "West Europe"

# Create storage account
az storage account create --resource-group $RESOURCE_GROUP_NAME --name $STORAGE_ACCOUNT_NAME --sku Standard_LRS --encryption-services blob

# Get storage account key
$ACCOUNT_KEY = $(az storage account keys list --resource-group $RESOURCE_GROUP_NAME --account-name $STORAGE_ACCOUNT_NAME --query [0].value -o tsv)

# Create blob container
az storage container create --name $CONTAINER_NAME --account-name $STORAGE_ACCOUNT_NAME --account-key $ACCOUNT_KEY

Write-Output "storage_account_name: $STORAGE_ACCOUNT_NAME"
Write-Output "container_name: $CONTAINER_NAME"
Write-Output "access_key: $ACCOUNT_KEY"

Note, there are different methods to authenticate to Azure when using Remote Backends; you can use secrets stored in a Key Vault, Service Principals or Managed Identities, to name a few. For this example, however, I’ll use the Azure CLI and authenticate manually using az login.

To configure Terraform to store the State file in your Storage Account, you need to add a specific block to your Terraform configuration with the following syntax:

terraform {
  backend "azurerm" {
    [CONFIG HERE…]
  }
}

To extend this further, I’ve taken the Terraform configuration scripts written for my previous post on deploying WVD and updated them to store the resultant State file in an Azure Storage Account.

This code is also available on my GitHub, here.

main.tf
# Get AzureRM Terraform Provider
provider "azurerm" {
  version = "2.31.1" #Required for WVD
  features {}
}

terraform {
  backend "azurerm" {
    storage_account_name = "vffwvdtfstate"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
    resource_group_name  = "VFF-USE-RG-WVD-REMOTE"
  }
}

# Create "Pooled" WVD Host Pool
resource "azurerm_virtual_desktop_host_pool" "pooleddepthfirst" {
  location                 = var.region
  resource_group_name      = var.rgname
  name                     = var.pooledhpname
  friendly_name            = var.pooledhpfriendlyname
  description              = var.pooledhpdescription
  type                     = "Pooled"
  maximum_sessions_allowed = 50
  load_balancer_type       = "DepthFirst"
}

# Create RemoteApp Application Group
resource "azurerm_virtual_desktop_application_group" "pooledremoteapp" {
  name                = var.pooledhpremoteappname
  location            = var.region
  resource_group_name = var.rgname
  type                = "RemoteApp"
  host_pool_id        = azurerm_virtual_desktop_host_pool.pooleddepthfirst.id
  friendly_name       = var.pooledhpremoteappfriendlyname
  description         = var.pooledhpremoteappdescription
}

# Create Desktop Application Group
resource "azurerm_virtual_desktop_application_group" "pooleddesktopapp" {
  name                = var.pooledhpdesktopappname
  location            = var.region
  resource_group_name = var.rgname
  type                = "Desktop"
  host_pool_id        = azurerm_virtual_desktop_host_pool.pooleddepthfirst.id
  friendly_name       = var.pooledhpdesktopappfriendlyname
  description         = var.pooledhpdesktopappdescription
}

# Create Workspace
resource "azurerm_virtual_desktop_workspace" "workspace" {
  name                = var.workspace
  location            = var.region
  resource_group_name = var.rgname
  friendly_name       = var.workspacefriendlyname
  description         = var.workspacedesc
}

# Associate RemoteApp Application Group with Workspace
resource "azurerm_virtual_desktop_workspace_application_group_association" "workspaceremoteapp" {
  workspace_id         = azurerm_virtual_desktop_workspace.workspace.id
  application_group_id = azurerm_virtual_desktop_application_group.pooledremoteapp.id
}

# Associate Desktop Application Group with Workspace
resource "azurerm_virtual_desktop_workspace_application_group_association" "workspacedesktop" {
  workspace_id         = azurerm_virtual_desktop_workspace.workspace.id
  application_group_id = azurerm_virtual_desktop_application_group.pooleddesktopapp.id
}
variables.tf

variable "rgname" {
  description = "Resource Group Name"
  default     = "VFF-USE-RG-WVD-REMOTE"
}

variable "region" {
  description = "Region"
  default     = "West US 2"
}

variable "pooledhpname" {
  description = "Pooled Host Pool Name"
  default     = "VFF-WUS-TFRM-Pooled"
}

variable "pooledhpfriendlyname" {
  description = "Pooled Host Pool Friendly Name"
  default     = "VFF Pooled Host Pool"
}

variable "pooledhpdescription" {
  description = "Pooled Host Pool Description"
  default     = "VFF Pooled Host Pool"
}

variable "pooledhpremoteappname" {
  description = "Pooled Host Pool RemoteApp App Group Name"
  default     = "VFF-WUS-TFRM-Pooled-RA"
}

variable "pooledhpremoteappfriendlyname" {
  description = "Pooled Host Pool RemoteApp App Group Friendly Name"
  default     = "VFF Pooled Host Pool Remote Apps"
}

variable "pooledhpremoteappdescription" {
  description = "Pooled Host Pool RemoteApp App Group Description"
  default     = "VFF Pooled Host Pool Remote Apps"
}

variable "pooledhpdesktopappname" {
  description = "Pooled Host Pool Desktop App Group Name"
  default     = "VFF-WUS-TFRM-Pooled-DT"
}

variable "pooledhpdesktopappfriendlyname" {
  description = "Pooled Host Pool Desktop App Group Friendly Name"
  default     = "VFF Pooled Host Pool Remote Apps"
}

variable "pooledhpdesktopappdescription" {
  description = "Pooled Host Pool Desktop App Group Description"
  default     = "VFF Pooled Host Pool Remote Apps"
}

variable "workspace" {
  description = "WVD Workspace Name"
  default     = "VFF-Terraform-Workspace"
}

variable "workspacefriendlyname" {
  description = "WVD Workspace Friendly Name"
  default     = "VFF-Terraform-Workspace"
}

variable "workspacedesc" {
  description = "WVD Workspace Description"
  default     = "VFF-Terraform-Workspace"
}

If you have any questions or queries please get in touch on Twitter

Thanks.

Learning Terraform > Deploying WVD

Learning Terraform Series
01. Deploying WVD [This Post]
02. Remote State
03. WVD-as-a-Module

I’ve heard Terraform mentioned numerous times in various tech circles over the past year but always chose to give it a wide berth. I knew it was the latest tool in the growing Infrastructure as Code space, but as my own coding experience had never spanned beyond the Windows command line and PowerShell, the idea of learning what I incorrectly assumed would be a wildly ARM-like beast of a language conjured up memories of the time I accidentally walked into the wrong lecture hall at Teesside University in the early 2000s and sat through a mind-blowing and somewhat scarring lecture on C# when I should have been a few doors down learning the ins and outs of TCP/IP.

Anyway, cut to the modern day and, through the dark art of YouTube algorithms, I’m presented with a video by the highly respected John Savill on ‘Using Terraform with Azure’ which instantly dispels all my fears. Terraform was not the wildly ARM-like beast I thought it was; in fact, it was the opposite.

Sidebar: Around the same time I saw tweets from Neil McLoughlin and Jen Sheerin, both sharing experiences with learning Terraform. Neil recommended a great book which I went on to buy for myself (Terraform: Up & Running: Writing Infrastructure as Code by Yevgeniy Brikman), and Jen shared a great article detailing how she had used Terraform to deploy WVD, just as I went on to do and which this post centres around, so thanks to both.

So, what is Terraform and why is it worth your time?

Actually, before delving into that, let me set the scene and explain the concept of Infrastructure as Code for those just starting their journey, as understanding this is key to understanding Terraform.

Since the dawn of modern IT, sysadmins have been deploying servers, storage and networks (commonly referred to as infrastructure in sysadmin parlance) in data centres around the world. This was often slow, expensive and prone to errors and configuration drift; deploying 2 servers with the same application identically is pretty straightforward, however multiply that by 2000 and you’re sure to end up with differing configurations and all the pain and sleepless nights that come with it.

Spring forward in time and throw in the virtualisation revolution, which rearchitected the servers, storage and networks we used to deploy as physical devices in cold data centres, abstracting their dependency on physical hardware and refactoring them to be purely “software-defined”. This opened the gates and spawned the concept of Infrastructure as Code: being able to define and provision a full infrastructure architecture purely from lines of code and, not only that, being able to quickly and easily identify and remediate configuration drift.

So, looping back to the original question of what is Terraform?

Terraform is an open-source tool developed by HashiCorp that allows you to provision Infrastructure as Code, and it is very good at it.

Provisioning versus Configuration Management

A quick note on the subtle differences between provisioning and configuration management tools and why they work best when combined.

Terraform is a provisioning tool; it is used to provision the infrastructure, such as the servers you would need to host an application. Strictly speaking, it does not deploy the application. I say strictly speaking as Terraform can deploy custom server images which contain pre-installed applications, but that is just semantics.

Products like Chef, Puppet and Ansible are configuration management tools; they install and manage software on existing servers, but they don’t provision the servers themselves.

So it’s easy to grasp why companies will often combine a provisioning tool such as Terraform with a configuration management tool such as Chef, Puppet or Ansible to give them end-to-end control of the infrastructure and application stacks.

Terraform 101 – Deploying WVD

I won’t delve any deeper into Terraform as a coding language in this post as, to be honest, I’m still very much a beginner, and I recommend you grab the book I mentioned earlier as well as watching John’s video above. That said, as I get more proficient I will start sharing more tips and tricks, but for now what follows in this article should be seen as a 101, a Hello World for Terraform.

The below scripts will deploy a single WVD host pool of the pooled variety, create and associate RemoteApp and Desktop application groups, and finally create and associate a WVD Workspace.

What it won’t do as yet, and where I’m hoping to take this, is deploy WVD Session Hosts (ideally from a Shared Image Gallery) and manage user assignment.

Following Terraform standards there are two scripts: a main.tf, which contains the Terraform code for the resources to be deployed, and a variables.tf, which contains all the referenced variables.

I’ve written this so that the main.tf should remain pretty much untouched, with all values such as resource names referenced from the variables.tf file.

Update: This code is also available on my GitHub, here.

main.tf

# Get AzureRM Terraform Provider
provider "azurerm" {
  version = "2.31.1" #Required for WVD
  features {}
}

# Create Resource Group - This will host all subsequent deployed resources
resource "azurerm_resource_group" "default" {
  name     = var.rgname
  location = var.region
}

# Create "Pooled" WVD Host Pool
resource "azurerm_virtual_desktop_host_pool" "pooleddepthfirst" {
  location                 = var.region
  resource_group_name      = azurerm_resource_group.default.name
  name                     = var.pooledhpname
  friendly_name            = var.pooledhpfriendlyname
  description              = var.pooledhpdescription
  type                     = "Pooled"
  maximum_sessions_allowed = 50
  load_balancer_type       = "DepthFirst"
}

# Create RemoteApp Application Group
resource "azurerm_virtual_desktop_application_group" "pooledremoteapp" {
  name                = var.pooledhpremoteappname
  location            = var.region
  resource_group_name = azurerm_resource_group.default.name
  type                = "RemoteApp"
  host_pool_id        = azurerm_virtual_desktop_host_pool.pooleddepthfirst.id
  friendly_name       = var.pooledhpremoteappfriendlyname
  description         = var.pooledhpremoteappdescription
}

# Create Desktop Application Group
resource "azurerm_virtual_desktop_application_group" "pooleddesktopapp" {
  name                = var.pooledhpdesktopappname
  location            = var.region
  resource_group_name = azurerm_resource_group.default.name
  type                = "Desktop"
  host_pool_id        = azurerm_virtual_desktop_host_pool.pooleddepthfirst.id
  friendly_name       = var.pooledhpdesktopappfriendlyname
  description         = var.pooledhpdesktopappdescription
}

# Create Workspace
resource "azurerm_virtual_desktop_workspace" "workspace" {
  name                = var.workspace
  location            = var.region
  resource_group_name = azurerm_resource_group.default.name
  friendly_name       = var.workspacefriendlyname
  description         = var.workspacedesc
}

# Associate RemoteApp Application Group with Workspace
resource "azurerm_virtual_desktop_workspace_application_group_association" "workspaceremoteapp" {
  workspace_id         = azurerm_virtual_desktop_workspace.workspace.id
  application_group_id = azurerm_virtual_desktop_application_group.pooledremoteapp.id
}

# Associate Desktop Application Group with Workspace
resource "azurerm_virtual_desktop_workspace_application_group_association" "workspacedesktop" {
  workspace_id         = azurerm_virtual_desktop_workspace.workspace.id
  application_group_id = azurerm_virtual_desktop_application_group.pooleddesktopapp.id
}

variables.tf

variable "rgname" {
  description = "Resource Group Name"
  default     = "VFF-WUS-RG-TFWVD"
}

variable "region" {
  description = "Region"
  default     = "West US 2"
}

variable "pooledhpname" {
  description = "Pooled Host Pool Name"
  default     = "VFF-WUS-TF-Pooled"
}

variable "pooledhpfriendlyname" {
  description = "Pooled Host Pool Friendly Name"
  default     = "VFF Pooled Host Pool"
}

variable "pooledhpdescription" {
  description = "Pooled Host Pool Description"
  default     = "VFF Pooled Host Pool"
}

variable "pooledhpremoteappname" {
  description = "Pooled Host Pool RemoteApp App Group Name"
  default     = "VFF-WUS-TF-Pooled-RA"
}

variable "pooledhpremoteappfriendlyname" {
  description = "Pooled Host Pool RemoteApp App Group Friendly Name"
  default     = "VFF Pooled Host Pool Remote Apps"
}

variable "pooledhpremoteappdescription" {
  description = "Pooled Host Pool RemoteApp App Group Description"
  default     = "VFF Pooled Host Pool Remote Apps"
}

variable "pooledhpdesktopappname" {
  description = "Pooled Host Pool Desktop App Group Name"
  default     = "VFF-WUS-TF-Pooled-DT"
}

variable "pooledhpdesktopappfriendlyname" {
  description = "Pooled Host Pool Desktop App Group Friendly Name"
  default     = "VFF Pooled Host Pool Remote Apps"
}

variable "pooledhpdesktopappdescription" {
  description = "Pooled Host Pool Desktop App Group Description"
  default     = "VFF Pooled Host Pool Remote Apps"
}

variable "workspace" {
  description = "WVD Workspace Name"
  default     = "VFF-Terraform-Workspace"
}

variable "workspacefriendlyname" {
  description = "WVD Workspace Friendly Name"
  default     = "VFF-Terraform-Workspace"
}

variable "workspacedesc" {
  description = "WVD Workspace Description"
  default     = "VFF-Terraform-Workspace"
}

If you’re intending to run this code yourself you’ll need to prepare your environment, again following the instructions in John’s video above: download and copy the Terraform executable to your local device, install the Az PowerShell module and connect to your Azure account.

I’d also recommend VSCode as a code editor if you’re not already using it.

Once all of this is in place you can run the below commands to initialise, plan and apply Terraform, then sit back and watch it go!

terraform fmt  #formats all .tf files in this folder to the canonical style

terraform init
terraform plan
terraform apply -auto-approve

What you will end up with should look like the below:

If you have any questions or queries please get in touch on Twitter

Thanks.