Learning Terraform Series
01. Deploying WVD
02. Remote State [This Post]
03. WVD-as-a-Module
This is the second article in a series I’m enjoying writing on my journey to learn Terraform. In this post I’m going to cover the concept of State within Terraform and, more importantly, why its location should be carefully considered if you’re using Terraform in a production environment.
If you’re just starting out with Terraform and Infrastructure as Code, it might be worth spending a few minutes reading my post in which I cover the fundamentals – you can find that post here.
So, what is State and why is it so important?
Terraform keeps a detailed record of everything it creates: every network, subnet, VM, everything! That way, if you need to update your infrastructure, for example to change a VM size or deploy a new subnet, Terraform knows exactly which resources it previously provisioned, and therefore which it needs to change, destroy or recreate.
Remember, Terraform is declarative; it only knows desired end-state. If you change the size of a Terraform-created VM from Ds4_v4 to Ds8_v4, Terraform will reconcile the deployed resource with the new configuration rather than simply applying an edit the way you would from the Azure Portal, PowerShell or the Azure CLI. Depending on the provider and the attribute being changed, that reconciliation may be an in-place update or, for attributes that cannot be changed after creation, a full destroy and redeploy from scratch; Terraform Plan will tell you which before anything happens.
This detailed historical record is known as the Terraform State.
By default, Terraform records its State in a custom JSON format in the same folder the main Terraform files are executed from, saving it as terraform.tfstate.
This file contains values that map the Terraform resources declared in your configuration files (e.g. main.tf) to the resulting resources created in Azure.
Below is an example subset of JSON from a terraform.tfstate file, showing the Azure Resource Group created by Terraform:
{
"version": 4,
"terraform_version": "0.13.5",
"serial": 28,
"lineage": "6xxxxxx2e-cxx0-7xx9-e68e-ecxxxxxxx50",
"outputs": {},
"resources": [
{
"mode": "managed",
"type": "azurerm_resource_group",
"name": "default",
"provider": "provider[\"registry.terraform.io/hashicorp/azurerm\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"id": "/subscriptions/xxxxxxxxx/resourceGroups/VFF-WUS-RG-TFWVD",
"location": "westus2",
"name": "VFF-WUS-RG-TFWVD",
"tags": {},
"timeouts": null
},
"private": "xxxxxx"
}
]
}
]
}
One of the key benefits of Terraform, and Infrastructure as Code in general, is the ability to control how an infrastructure changes over time, and to detect when it changes outside of your control, also known as configuration drift.
As mentioned earlier, Terraform is only concerned with desired end-state, and by using the State file Terraform can compare what should be deployed with what actually is deployed.
For example, if you deploy 10 VMs through Terraform and an over-eager, penny-pinching admin deletes 2 without your knowledge, you can use Terraform Plan, which uses the State file to compare previously provisioned resources against those actually running, to quickly and easily remediate the infrastructure and return it to the desired end-state.
Remember, every time you run Terraform Plan, it fetches the latest status of the deployed resources in Azure and compares that against your Terraform configuration to determine which changes need to be applied. In other words, the output of the plan command is the difference between the code in your configuration files and the infrastructure deployed in Azure, as discovered via the state file.
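To make that drift-detection workflow concrete, here is a sketch of the commands involved; the resource name in the sample output is hypothetical and the output is abbreviated.

```shell
# Preview the delta between configuration, state and live Azure resources.
# Terraform refreshes its view of Azure, then diffs it against the .tf files.
terraform plan

# If a VM tracked in state was deleted outside Terraform, the plan proposes
# recreating it, with a summary along these lines:
#   + azurerm_windows_virtual_machine.session_host[2] will be created
#   Plan: 2 to add, 0 to change, 0 to destroy.

# Apply the plan to return the environment to the desired end-state.
terraform apply
```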
Needless to say, the Terraform State file is hugely important and should be rigorously protected; mismanagement, such as overwriting, corruption or loss of this file, can be disastrous, especially in a production environment.
That said, if like me you’re running Terraform from your personal computer against a Dev/Test subscription in Azure for learning purposes, you’re not going to be distraught if you accidentally overwrite the state file in a moment of copy-and-paste madness.
But what if you’re working collaboratively as part of a wider DevOps team, especially in a Dev/Test environment where you’re constantly and dynamically iterating on the infrastructure? You’re working on one set of functionality and your colleague is working on something else, but within the same Terraform configuration. This is where it gets tricky.
We’ve all been in those situations, before the days of modern SharePoint and OneDrive, where you’ve opened a Word document from a mapped drive to a server share only to find someone else has it open. But you’re too impatient to wait for them to close it, so you create a copy on your desktop, modify it, and copy it back to the mapped drive, overwriting previous copies and awakening a fury in your usually sedate colleague the likes of which you’ve never seen. Well, imagine that same scenario with a shared set of Terraform files, including the State file!
There are options to host the files in shared, version-controlled locations, but none are ideal: the very nature of those systems has you working on an offline, locked or checked-out copy at the same time as a colleague, and you’ll be playing beat-the-clock over who checks their copy back in last.
Thankfully, Terraform offers a more elegant solution to this problem: built-in support for remote backends.
A Terraform backend determines how Terraform stores state.
The default backend is the local backend which stores the state file on your local disk.
Remote backends however allow you to store the state file in a remote shared storage location, in the case of this example, an Azure Storage account.
Using an Azure Storage Account to store the State file solves several issues; I’ll cover a few of the main ones below.
Resiliency > Azure Storage Accounts can utilise the plethora of replication and resiliency options in Azure, such as LRS, ZRS and GRS.
Locking > Blob Storage natively supports locking via blob leases; when Terraform reads from or writes to the state file it locks the blob, which mitigates the chance of two people writing to the same file at the same time.
Encryption > Blob Storage Containers are encrypted at rest by default.
Versioning > Versioning can be enabled to retain a history of file versions.
Access Control > Azure Network Security including Private Link, Azure RBAC, Access Keys and Shared Access Signatures can be used to secure access to the State file to only authorised users and networks.
Cheap > Even the most complex State files are rarely large, so the cost of storing them in Azure is minimal.
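As a quick sketch of the versioning point above, blob versioning can be switched on for an existing account with a single Azure CLI call; the resource group and account names below are placeholders.

```shell
# Enable blob versioning so every write to terraform.tfstate
# keeps the previous version recoverable
az storage account blob-service-properties update --resource-group "Resource-Group-To-Host-Storage-Account" --account-name "storageaccountname" --enable-versioning true
```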
To summarise, storing State files in a local backend (your own personal computer’s local disk) is fine for single-user dev/test projects, but it is not suitable for projects involving a team, nor for production environments. Instead, it is recommended to store the file on a durable, resilient, enterprise-scale storage solution, such as an Azure Storage Account.
So, let’s look at how to configure your Terraform configuration files to use a remote backend.
First, you’re going to need a Storage Account to store the State file; you can provision this manually, via PowerShell, the Azure CLI or using Terraform.
Note, I’d recommend creating this outside of your main Terraform configuration so you don’t subsequently delete it when you use Terraform Destroy to remove your provisioned resources.
To create a Storage Account using the Azure CLI, run the script below from Azure Cloud Shell or locally; you should already have the Az CLI tools installed as they’re a prerequisite of Terraform.
This code is also available on my GitHub, here.
# Variables - note Storage Account names must be 3-24 lowercase letters
# and numbers only, and globally unique; container names must be lowercase
$RESOURCE_GROUP_NAME = "Resource-Group-To-Host-Storage-Account"
$STORAGE_ACCOUNT_NAME = "storageaccountname"
$CONTAINER_NAME = "container-name"

# Create resource group
az group create --name $RESOURCE_GROUP_NAME --location "West Europe"

# Create storage account
az storage account create --resource-group $RESOURCE_GROUP_NAME --name $STORAGE_ACCOUNT_NAME --sku Standard_LRS --encryption-services blob

# Get storage account key
$ACCOUNT_KEY = $(az storage account keys list --resource-group $RESOURCE_GROUP_NAME --account-name $STORAGE_ACCOUNT_NAME --query [0].value -o tsv)

# Create blob container
az storage container create --name $CONTAINER_NAME --account-name $STORAGE_ACCOUNT_NAME --account-key $ACCOUNT_KEY

Write-Output "storage_account_name: $STORAGE_ACCOUNT_NAME"
Write-Output "container_name: $CONTAINER_NAME"
Write-Output "access_key: $ACCOUNT_KEY"
Note, there are different methods to authenticate to Azure when using remote backends; you can use secrets stored in a Key Vault, Service Principals or Managed Identities, to name a few. However, for this example I’ll use the Azure CLI and authenticate manually using Az Login.
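As an aside, if you’d rather keep the storage access key out of your configuration files entirely, the azurerm backend can also pick it up from the ARM_ACCESS_KEY environment variable; the variable names below are the placeholders from the earlier script.

```shell
# Supply the backend access key out-of-band via an environment variable,
# fetched at run time rather than stored in the .tf files (PowerShell syntax)
$env:ARM_ACCESS_KEY = $(az storage account keys list --resource-group $RESOURCE_GROUP_NAME --account-name $STORAGE_ACCOUNT_NAME --query [0].value -o tsv)
```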
To configure Terraform to store the State file in your Storage Account, you need to add a backend block to your Terraform configuration with the following syntax:
terraform {
  backend "azurerm" {
    [CONFIG HERE…]
  }
}
To extend this further, I’ve taken the Terraform configuration written for my previous post on deploying WVD and updated it to store the resultant State file in an Azure Storage Account.
This code is also available on my GitHub, here.
# main.tf
# Get AzureRM Terraform Provider
provider "azurerm" {
version = "2.31.1" #Required for WVD
features {}
}
terraform {
backend "azurerm" {
storage_account_name = "vffwvdtfstate"
container_name = "tfstate"
key = "terraform.tfstate"
resource_group_name = "VFF-USE-RG-WVD-REMOTE"
}
}
# Create "Pooled" WVD Host Pool
resource "azurerm_virtual_desktop_host_pool" "pooleddepthfirst" {
location = var.region
resource_group_name = var.rgname
name = var.pooledhpname
friendly_name = var.pooledhpfriendlyname
description = var.pooledhpdescription
type = "Pooled"
maximum_sessions_allowed = 50
load_balancer_type = "DepthFirst"
}
# Create RemoteApp Application Group
resource "azurerm_virtual_desktop_application_group" "pooledremoteapp" {
name = var.pooledhpremoteappname
location = var.region
resource_group_name = var.rgname
type = "RemoteApp"
host_pool_id = azurerm_virtual_desktop_host_pool.pooleddepthfirst.id
friendly_name = var.pooledhpremoteappfriendlyname
description = var.pooledhpremoteappdescription
}
# Create Desktop Application Group
resource "azurerm_virtual_desktop_application_group" "pooleddesktopapp" {
name = var.pooledhpdesktopappname
location = var.region
resource_group_name = var.rgname
type = "Desktop"
host_pool_id = azurerm_virtual_desktop_host_pool.pooleddepthfirst.id
friendly_name = var.pooledhpdesktopappfriendlyname
description = var.pooledhpdesktopappdescription
}
# Create Workspace
resource "azurerm_virtual_desktop_workspace" "workspace" {
name = var.workspace
location = var.region
resource_group_name = var.rgname
friendly_name = var.workspacefriendlyname
description = var.workspacedesc
}
# Associate RemoteApp Application Group with Workspace
resource "azurerm_virtual_desktop_workspace_application_group_association" "workspaceremoteapp" {
workspace_id = azurerm_virtual_desktop_workspace.workspace.id
application_group_id = azurerm_virtual_desktop_application_group.pooledremoteapp.id
}
# Associate Desktop Application Group with Workspace
resource "azurerm_virtual_desktop_workspace_application_group_association" "workspacedesktop" {
workspace_id = azurerm_virtual_desktop_workspace.workspace.id
application_group_id = azurerm_virtual_desktop_application_group.pooleddesktopapp.id
}
# variables.tf
variable "rgname" {
description = "Resource Group Name"
default = "VFF-USE-RG-WVD-REMOTE"
}
variable "region" {
description = "Region"
default = "West US 2"
}
variable "pooledhpname" {
description = "Pooled Host Pool Name"
default = "VFF-WUS-TFRM-Pooled"
}
variable "pooledhpfriendlyname" {
description = "Pooled Host Pool Friendly Name"
default = "VFF Pooled Host Pool"
}
variable "pooledhpdescription" {
description = "Pooled Host Pool Description"
default = "VFF Pooled Host Pool"
}
variable "pooledhpremoteappname" {
description = "Pooled Host Pool RemoteApp App Group Name"
default = "VFF-WUS-TFRM-Pooled-RA"
}
variable "pooledhpremoteappfriendlyname" {
description = "Pooled Host Pool RemoteApp App Group Friendly Name"
default = "VFF Pooled Host Pool Remote Apps"
}
variable "pooledhpremoteappdescription" {
description = "Pooled Host Pool RemoteApp App Group Description"
default = "VFF Pooled Host Pool Remote Apps"
}
variable "pooledhpdesktopappname" {
description = "Pooled Host Pool Desktop App Group Name"
default = "VFF-WUS-TFRM-Pooled-DT"
}
variable "pooledhpdesktopappfriendlyname" {
description = "Pooled Host Pool Desktop App Group Friendly Name"
default = "VFF Pooled Host Pool Desktop"
}
variable "pooledhpdesktopappdescription" {
description = "Pooled Host Pool Desktop App Group Description"
default = "VFF Pooled Host Pool Desktop"
}
variable "workspace" {
description = "WVD Workspace Name"
default = "VFF-Terraform-Workspace"
}
variable "workspacefriendlyname" {
description = "WVD Workspace Friendly Name"
default = "VFF-Terraform-Workspace"
}
variable "workspacedesc" {
description = "WVD Workspace Description"
default = "VFF-Terraform-Workspace"
}
If you have any questions or queries, please get in touch on Twitter.
Thanks.