Azure Bastion: Connecting to a VM over RDP in [a tad over] 200 Crappy Words

Preface: Why only 200 crappy words?

In its Basic SKU, Azure Bastion provides the ability to securely connect to VMs running in Azure via the portal, without the need for a public IP address. I say “in its Basic SKU” because Microsoft has recently released a more feature-rich variant in the form of the Standard SKU.

The Standard SKU adds several new capabilities, including native client support.

This post will focus on that capability – the ability to connect using the native Windows RDP client (MSTSC).

If you are going to follow along, you will first need to deploy a Bastion, ensuring it’s of the Standard SKU – click here for the KB describing the process in more detail.
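If you’d rather script the Bastion deployment than use the portal, the below Azure CLI sketch shows the general shape of the command – all names are placeholders, and it assumes your virtual network already contains an AzureBastionSubnet and a Standard SKU public IP:

#Deploy a Standard SKU Bastion (placeholder names)
az network bastion create --name "<BastionName>" --resource-group "<ResourceGroupName>" --vnet-name "<VNetName>" --public-ip-address "<PublicIpName>" --location "<Region>" --sku Standard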

You’ll also need to install the Azure CLI if you haven’t already; if you have it installed, I’d suggest upgrading to the latest version to ensure you have the latest command libraries available – AZ CLI KB here.

After deploying a new Standard SKU Bastion, and before attempting to connect to a VM, you must enable Native Client Support. To do that, navigate to your Bastion in the Azure portal, select Configuration from the navigation pane and click “Native Client Support” – note that this took a few minutes to process.

Once you have completed the above you’re ready to connect to a VM, thankfully there are only a few simple AZ CLI commands to execute:

#The three lines below authenticate you to Azure and select the appropriate subscription which hosts the Bastion and target VM

az login
az account list
az account set --subscription "<subscription ID>"

#The line below initiates the Bastion session to the target VM

az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>"
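If you don’t have the target VM’s resource ID to hand, the below will return it – the resource group and VM names are placeholders:

#The line below returns the resource ID of the target VM
az vm show --resource-group "<ResourceGroupName>" --name "<VMName>" --query id --output tsv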

You’ll be challenged for credentials, as expected with any RDP connection, and once successfully authenticated you’ll be presented with your remote session!

Word Count: 304! (Minus code excerpts)

Azure VM Trusted Launch in 200 Crappy Words

Preface: Why only 200 crappy words?

Trusted Launch provides the ability to improve the security of Generation 2 Azure VMs (as they use UEFI instead of BIOS, thus supporting Secure Boot) and supports several VM SKUs and variants of Windows and Linux.

Trusted Launch provides the capability to deploy Azure VMs with verified boot loaders, OS kernels, and drivers, as well as the ability to validate the integrity of a VM’s entire boot chain to ensure no rootkits have been maliciously inserted.

Trusted Launch is made up of several technologies that can be enabled independently, they are:

  • Secure Boot, which protects against the installation of malware-based rootkits and bootkits by ensuring that only signed operating systems and drivers can boot
  • vTPM, a virtualized version of a hardware Trusted Platform Module compliant with the TPM 2.0 spec, which enables attestation by measuring the entire boot chain of your VM, including UEFI, the OS and drivers
  • Virtualization Based Security, a secure and isolated region of memory that Windows uses to run various security solutions with increased protection against vulnerabilities and malicious exploits

There is no additional cost or performance overhead to using Trusted Launch!

Trusted Launch can be used for Azure Virtual Desktop session host VMs.
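If you want to try this out, below is a minimal Azure CLI sketch of deploying a VM with Trusted Launch, Secure Boot and vTPM enabled – all names are placeholders and the image URN must reference a Generation 2 image:

#Deploy a Trusted Launch VM (placeholder names; use a Generation 2 image URN)
az vm create --resource-group "<ResourceGroupName>" --name "<VMName>" --image "<Gen2ImageURN>" --admin-username "<AdminUser>" --security-type TrustedLaunch --enable-secure-boot true --enable-vtpm true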

Word Count: 200

After Action Report: AVD Azure AD Join Checklist in 200 Crappy Words

Preface: Why only 200 crappy words?

I’m seeing a growing number of AVD deployments going AAD-only, thus decoupling from the technical debt of legacy Active Directory, and in many cases opting for Intune over on-premises SCCM etc. for device management.

Thankfully, Microsoft makes the process of deploying AAD-only AVD session hosts pretty straightforward; however, the below calls out a few lessons from the field to be mindful of…

  1. Microsoft only supports a limited number of use cases for AAD-joined VMs
  2. There are additional RBAC roles (Virtual Machine User Login and Virtual Machine Administrator Login) to be considered for AAD joined VMs; correct placement of these roles is key to ensuring ease of management at scale (see the role-assignment sketch after this list)
  3. If the device a user is connecting to an AAD joined AVD VM from is not joined to the same Azure AD as AVD, add targetisaadjoined:i:1 as a custom RDP property to the host pool
  4. If you’re deploying Windows 10 version 1511 or earlier you’ll need to enable the PKU2U protocol; this is enabled by default in version 1607 and later
  5. When configuring Conditional Access for AAD joined AVD, disable legacy per-user MFA and exclude the Azure Windows VM Sign-In app
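For point 2, a minimal Azure CLI sketch of assigning one of these roles is below – the assignee and scope values are placeholders, and assigning at the resource group hosting the session hosts (rather than per VM) is generally easier to manage at scale:

#Assign the Virtual Machine User Login role at resource group scope (placeholder values)
az role assignment create --assignee "<UserOrGroupObjectId>" --role "Virtual Machine User Login" --scope "/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroupName>"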
Word Count: 196

200 Crappy Words

As a short exercise to help me fight procrastination, taking inspiration from the below excerpt from Mark Manson’s brilliant book “The Subtle Art of Not Giving a F*ck” (which I highly recommend), I’m going to attempt to write and share a daily blog about a given subject matter in Azure in 200 crappy words! Word economy is going to be key. Wish me luck.

“I recently heard a story about a novelist who had written over 70 novels. Someone asked him how he was able to write so consistently and remain inspired and motivated every day, as writers are notorious for procrastination and for fighting through bouts of “writer’s block”. The novelist said, “200 crappy words per day, that’s it.” The idea is that if he forced himself to write 200 crappy words, more often than not, the act of writing would inspire him and before he knew it he’d have thousands down on the page.”

Mark Manson

Author’s Note: It took me 56 minutes to write this intro, during which time I swapped between Mac and Windows laptops, went from my standing to sitting desk and switched from Word to OneNote.

Word Count: 192

Enrolling Terraform Deployed AVD Session Hosts into Intune

Background / Requirements:

This post will describe the recent problem my team faced with enrolling Terraform deployed AVD session hosts into Intune.

Below is a summary of the high-level requirements for the wider AVD deployment.

  • Deploying AVD programmatically using Terraform through Azure DevOps Pipelines
  • Personal host pool only
  • All session hosts deployed directly from an Azure Marketplace Windows 10 Multi-session image (no custom images)
  • All session hosts are to be Azure AD joined only
  • All session hosts are to be enrolled in Intune for MDM (including app deployment)

Problem

The deployed session hosts would join Azure AD without issue; however, they would not enrol in Intune.

Solution

The solution was simple in hindsight, however, admittedly took some head-scratching to get there.

To get to the solution we deployed a session host manually from the Azure portal and compared the resultant JSON from the Overview pane of the virtual machine, see below, to that of a session host deployed using Terraform.

In comparing the JSON output we found that the VM Extension used for the AAD Login for Windows had an additional setting block defined for MDM.
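If you’d rather make the same comparison from the command line, the below returns the extension JSON for a given VM – the resource group and VM names are placeholders:

#The line below returns the AADLoginForWindows extension JSON for a VM
az vm extension show --resource-group "<ResourceGroupName>" --vm-name "<VMName>" --name "AADLoginForWindows"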

We updated the Terraform code block for the same VM Extension to include the missing settings block and redeployed the session hosts; thankfully, each session host auto-enrolled in Intune!

resource "azurerm_virtual_machine_extension" "AADLoginForWindows" {
    depends_on  = [
    azurerm_virtual_machine_extension.registersessionhost,
    ]
  name                 = "AADLoginForWindows"
  virtual_machine_id   = azurerm_windows_virtual_machine.vm.id
  publisher            = "Microsoft.Azure.ActiveDirectory"
  type                 = "AADLoginForWindows"
  type_handler_version = "1.0"
  auto_upgrade_minor_version = true

  settings = <<SETTINGS
    {
        "mdmId" : "0000000a-0000-0000-c000-000000000000"
    }
SETTINGS
}

Notable thanks to Chris Aitken, my AVD and DevOps SME, for his efforts and the hours sitting on Teams calls to get this fixed!

If you have any queries or questions, please reach out on Twitter or LinkedIn.

Thanks!

Learning Terraform > WVD-as-a-Module

Learning Terraform Series
01. Deploying WVD
02. Remote State
03. WVD-as-a-Module [This Post]

In this third post in my Learning Terraform series I’ll explore the concept of Modules.

What is a Module?

“With Terraform, you can put your code inside of a Terraform module and reuse that module in multiple places throughout your code. Instead of having the same code copy/pasted in the staging and production environments, you’ll be able to have both environments reuse code from the same module.

This is a big deal. Modules are the key ingredient to writing reusable, maintainable, and testable Terraform code. Once you start using them, there’s no going back. You’ll start building everything as a module, creating a library of modules to share within your company, start leveraging modules you find online, and start thinking of your entire infrastructure as a collection of reusable modules.”

Source: https://blog.gruntwork.io/how-to-create-reusable-infrastructure-with-terraform-modules-25526d65f73d

Learning about Modules has completely changed how I approach Terraform; now, rather than thinking of every Terraform file as a standalone entity, I’m instead looking at what common elements I can make into a module.

I’ve used Windows Virtual Desktop (WVD) as a common theme for learning Terraform in my previous posts, and this fits extremely well into the model of a Module as the architecture of WVD is static, that is to say, the relationship between a Workspace, Application Group and Host Pool doesn’t change.

The Anatomy of a Module

The beauty of a Module is in its simplicity.

In short, any Terraform file is pretty much a module by default.

There is no discernible difference between the syntax and structure of a standard configuration file and that of a module, other than that when calling a module you pass in all unique resource values from the main configuration file rather than a variables file.

I’ve tried to show this in the figure below. In a standard Terraform configuration you would create a folder for your code and store the main and variables files within it. The main.tf file contains the Terraform provider (Azure, AWS etc.) and the resources to create, and you can pass in values from a variables.tf file in the same folder.

Figure 1 – Standard Terraform Configuration Architecture

When calling or referencing a module however you would specify the variable values within the main configuration file, shown below in blue.

The module would commonly reside in its own folder structure, a central module library perhaps, the structure of which is identical to a standard Terraform configuration.

The biggest notable difference, and this will become evident in the code, is that the module’s variables file (shown in green) doesn’t contain any default values, as those are passed in from outside.

Figure 2 – Terraform Module Architecture

WVD-as-a-Module

Building on from the code in my previous posts I’ve now converted the code to deploy WVD into a reusable module.

This code is available from my GitHub repo, here

Firstly, the below code is the main.tf that will call the module; in figure 2 above this is the script in blue.

# Get AzureRM Terraform Provider
provider "azurerm" {
  version = "2.31.1" #Required for WVD
  features {}
}

# Remote State, replace with your resource group, storage account and container name
terraform {
  backend "azurerm" {
    storage_account_name = "vfftfstateusw2"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
    resource_group_name  = "VFF-USE-RG-WVD-REMOTE"
  }
}

# Create resource group
resource "azurerm_resource_group" "default" {
name     = "VFF-USW-RG-WVD-FromMod"
location = "West US 2"
}

# Call WVD-as-a-Module and pass in variables
module "WVD-as-a-Module" {
  source                         = "../Modules/WVD-as-a-Module"
  rgname                         = azurerm_resource_group.default.name
  region                         = azurerm_resource_group.default.location
  pooledhpname                   = "VFF-WUS-TFRM-Mod"
  pooledhpfriendlyname           = "VFF Pooled Host Pool"
  pooledhpdescription            = "VFF Pooled Host Pool"
  pooledhpremoteappname          = "VFF-WUS-TFRM-Mod-RA"
  pooledhpremoteappfriendlyname  = "VFF Pooled Host Pool Remote Apps"
  pooledhpremoteappdescription   = "VFF Pooled Host Pool Remote Apps"
  pooledhpdesktopappname         = "VFF-WUS-TFRM-Mod-DT"
  pooledhpdesktopappfriendlyname = "VFF Pooled Host Pool Remote Apps"
  pooledhpdesktopappdescription  = "VFF Pooled Host Pool Remote Apps"
  workspace                      = "VFF-Terraform-Wkspc-Mod"
  workspacefriendlyname          = "VFF-Terraform-Workspace"
  workspacedesc                  = "VFF-Terraform-Workspace"
  pooledhpmaxsessions            = 50
}

This next code is the WVD-as-a-Module main configuration file; in figure 2 this is shown in orange.

Note, I recommend creating a new folder structure for your modules.

terraform {
  required_version = ">=0.12"
}

# Create "Pooled" WVD Host Pool
resource "azurerm_virtual_desktop_host_pool" "pooleddepthfirst" {
  location                 = var.region
  resource_group_name      = var.rgname
  name                     = var.pooledhpname
  friendly_name            = var.pooledhpfriendlyname
  description              = var.pooledhpdescription
  type                     = "Pooled"
  maximum_sessions_allowed = var.pooledhpmaxsessions
  load_balancer_type       = "DepthFirst"
}

#Create RemoteApp Application Group
resource "azurerm_virtual_desktop_application_group" "pooledremoteapp" {
  name                = var.pooledhpremoteappname
  location            = var.region
  resource_group_name = var.rgname
  type                = "RemoteApp"
  host_pool_id        = azurerm_virtual_desktop_host_pool.pooleddepthfirst.id
  friendly_name       = var.pooledhpremoteappfriendlyname
  description         = var.pooledhpremoteappdescription
}

#Create Desktop Application Group
resource "azurerm_virtual_desktop_application_group" "pooleddesktopapp" {
  name                = var.pooledhpdesktopappname
  location            = var.region
  resource_group_name = var.rgname
  type                = "Desktop"
  host_pool_id        = azurerm_virtual_desktop_host_pool.pooleddepthfirst.id
  friendly_name       = var.pooledhpdesktopappfriendlyname
  description         = var.pooledhpdesktopappdescription
}

# Create Workspace
resource "azurerm_virtual_desktop_workspace" "workspace" {
  name                = var.workspace
  location            = var.region
  resource_group_name = var.rgname
  friendly_name       = var.workspacefriendlyname
  description         = var.workspacedesc
}

# Associate RemoteApp Application Group with Workspace
resource "azurerm_virtual_desktop_workspace_application_group_association" "workspaceremoteapp" {
  workspace_id         = azurerm_virtual_desktop_workspace.workspace.id
  application_group_id = azurerm_virtual_desktop_application_group.pooledremoteapp.id
}

# Associate Desktop Application Group with Workspace
resource "azurerm_virtual_desktop_workspace_application_group_association" "workspacedesktop" {
  workspace_id         = azurerm_virtual_desktop_workspace.workspace.id
  application_group_id = azurerm_virtual_desktop_application_group.pooleddesktopapp.id
}

Lastly, this is the associated variables file for the module; this is shown in green in figure 2.

variable "rgname" {
  description = "Resource Group Name"
  type        = string
}

variable "region" {
  description = "Region"
  type        = string
}

variable "pooledhpname" {
  description = "Pooled Host Pool Name"
  type        = string
}

variable "pooledhpmaxsessions" {
  description = "Max sessions per pooled host"
  type        = number
}

variable "pooledhpfriendlyname" {
  description = "Pooled Host Pool Friendly Name"
  type        = string
}

variable "pooledhpdescription" {
  description = "Pooled Host Pool Description"
  type        = string
}

variable "pooledhpremoteappname" {
  description = "Pooled Host Pool RemoteApp App Group Name"
  type        = string
}

variable "pooledhpremoteappfriendlyname" {
  description = "Pooled Host Pool RemoteApp App Group Friendly Name"
  type        = string
}

variable "pooledhpremoteappdescription" {
  description = "Pooled Host Pool RemoteApp App Group Description"
  type        = string
}

variable "pooledhpdesktopappname" {
  description = "Pooled Host Pool Desktop App Group Friendly Name"
  type        = string
}

variable "pooledhpdesktopappfriendlyname" {
  description = "Pooled Host Pool Desktop App Group Friendly Name"
  type        = string
}

variable "pooledhpdesktopappdescription" {
  description = "Pooled Host Pool Desktop App Group Description"
  type        = string
}

variable "workspace" {
  description = "WVD Workspace Name"
  type        = string
}

variable "workspacefriendlyname" {
  description = "WVD Workspace Friendly Name"
  type        = string
}

variable "workspacedesc" {
  description = "WVD Workspace Description"
  type        = string
}

This code is available from my GitHub repo, here

If you have any questions or queries please get in touch on Twitter

Thanks.

Learning Terraform > Remote State

Learning Terraform Series
01. Deploying WVD
02. Remote State [This Post]
03. WVD-as-a-Module

This is the second article in a series I’m enjoying writing on my journey to learn Terraform. In this post I’m going to cover the concept of State within Terraform and, more importantly, why its location should be carefully considered if you’re using Terraform in a production environment.

If you’re just starting out with Terraform and Infrastructure as Code, it might be worth spending a few minutes reading my post in which I cover the fundamentals – you can find that post here.

So, what is State and why is it so important?

Terraform keeps a detailed record of everything it creates: every network, subnet, VM – everything! That way, if you need to update a particular piece of infrastructure – for example, change a VM size or deploy a new subnet – Terraform knows exactly which resources it previously provisioned, and therefore which it has to destroy in order to recreate.

Remember, Terraform is a declarative language; it only knows the desired end-state. That is, if you need to change the size of a Terraform-created VM from Ds4_v4 to Ds8_v4, Terraform will not simply scale it as you would manually from the Azure portal, via PowerShell or from the Azure CLI; instead it will destroy it and redeploy it from scratch using the new size.

This detailed historical record is known as the Terraform State.

Terraform records its State in a custom JSON format in the same folder the main Terraform files are executed from, saving it as terraform.tfstate.

This file contains values that map the Terraform resources declared in your configuration files (eg. main.tf) to the resultant resources it created in Azure.

Below is an example of a subset of JSON from a terraform.tfstate file showing the Azure Resource Group created by Terraform

{
   "version": 4,
   "terraform_version": "0.13.5",
   "serial": 28,
   "lineage": "6xxxxxx2e-cxx0-7xx9-e68e-ecxxxxxxx50",
   "outputs": {},
   "resources": [
     {
       "mode": "managed",
       "type": "azurerm_resource_group",
       "name": "default",
       "provider": "provider[\"registry.terraform.io/hashicorp/azurerm\"]",
       "instances": [
         {
           "schema_version": 0,
           "attributes": {
             "id": "/subscriptions/xxxxxxxxx/resourceGroups/VFF-WUS-RG-TFWVD",
             "location": "westus2",
             "name": "VFF-WUS-RG-TFWVD",
             "tags": {},
             "timeouts": null
           },
           "private": "xxxxxx"
         }
       ]
         }
       ]
     }
   ]
}

One of the key benefits of Terraform, and Infrastructure as Code in general, is the ability to control how an infrastructure changes over time and to detect uncontrolled change, also known as configuration drift.

As mentioned earlier, Terraform is only concerned with the desired end-state, and using the State file Terraform can compare what should be deployed with what is deployed.

For example, if through Terraform you deploy 10 VMs and an over-eager penny-pinching admin deletes 2 without your knowledge, you can use Terraform Plan – which uses the State file to compare previously provisioned resources against those running live – to quickly and easily remediate the infrastructure and return it to the desired end-state.

Remember, every time you run Terraform Plan, it will fetch the latest status of deployed resources in Azure and compare that to what is in your Terraform configuration to determine what changes need to be applied. In other words, the output of the plan command is the difference between the code in your configuration files and the infrastructure deployed in Azure as discovered via the state file.

Needless to say, the Terraform State file is hugely important and should be vigorously protected as mismanagement, such as overwriting, corruption or loss of this file can be disastrous, especially in a production environment.

That said, if like me you’re running Terraform from your personal computer in a Dev/Test subscription in Azure for learning purposes, you’re not going to be distraught if you accidentally overwrite the state file through a moment of copy-and-paste madness.

But what if you’re working collaboratively as part of a wider DevOps team, especially in a Dev/Test environment where you’re constantly and dynamically iterating the infrastructure? You’re working on one set of functionality and your colleague is working on something else, but within the same Terraform configuration – this is where it gets tricky.

We’ve all been in those situations, before the days of modern SharePoint and OneDrive, where you’ve opened a Word document from a mapped drive to a server share only to find someone else has it open, but you’re too impatient to wait for that person to close it, so you create a copy on your desktop, modify it and copy it back into the mapped drive, overwriting previous copies and thus awakening a fury in your usually sedate colleague the likes of which you’ve never seen before – well, imagine that same scenario with a shared set of Terraform files, including the State file!

There are options to host the files in shared version-controlled locations, but none are ideal, as the very nature of those systems will have you working on an offline, locked or checked-out copy at the same time as a colleague, and you’ll be playing beat-the-clock as to who checks their copy back in last.

Thankfully Terraform does offer a more elegant solution to this problem, that is its built-in support for remote backends.

A Terraform backend determines how Terraform stores state.

The default backend is the local backend which stores the state file on your local disk.

Remote backends however allow you to store the state file in a remote shared storage location, in the case of this example, an Azure Storage account.

Using an Azure Storage Account to store the State file solves several issues; I’ll cover a few of the main ones below.

Resiliency > Azure Storage Accounts can utilise the plethora of replication and resiliency benefits of Azure, such as LRS, ZRS and GRS.

Locking > Blob Storage Containers natively support locking; this means that when Terraform is either reading or writing to the state file it locks it, which mitigates the chances of two people writing to the same file at the same time.

Encryption > Blob Storage Containers are encrypted at rest by default.

Versioning > Versioning can be enabled to retain a history of file versions.

Access Control > Azure Network Security including Private Link, Azure RBAC, Access Keys and Shared Access Signatures can be used to secure access to the State file to only authorised users and networks.

Cheap > Even the most complex State files are rarely large, so the cost of storing them in Azure is minimal.
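As a quick example of the Versioning point above, the below Azure CLI sketch enables blob versioning on the Storage Account hosting the State file – the names are placeholders:

#Enable blob versioning on the State file Storage Account (placeholder names)
az storage account blob-service-properties update --resource-group "<ResourceGroupName>" --account-name "<StorageAccountName>" --enable-versioning true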

To summarise, storing State files in a local backend (your own personal computer’s local disk) is fine for single-user dev/test projects but is not suitable for projects involving a team, nor is it suitable for production environments – instead, it’s recommended to store the file on a durable, resilient, enterprise-scale storage solution, such as an Azure Storage Account.

So, let’s look at how to configure your Terraform configuration files to use a Remote Backend.

First you’re going to need a Storage Account to store the State File, you can provision this manually, via PowerShell or using Terraform.

Note, I’d recommend creating this outside of the main Terraform configuration file so you don’t subsequently delete it when you use Terraform Destroy to remove your provisioned resources.

To create a Storage Account using the Azure CLI, execute the below script from the Azure Cloud Shell or locally, as you should already have the Az CLI tools installed given they’re a pre-req of Terraform.

This code is also available on my GitHub, here.

$RESOURCE_GROUP_NAME  = "Resource-Group-To-Host-Storage-Account"
$STORAGE_ACCOUNT_NAME = "Storage-Account-Name" #must be 3-24 lowercase letters and numbers
$CONTAINER_NAME       = "Container-Name"

#Create resource group
az group create --name $RESOURCE_GROUP_NAME --location "West Europe"

#Create storage account
az storage account create --resource-group $RESOURCE_GROUP_NAME --name $STORAGE_ACCOUNT_NAME --sku Standard_LRS --encryption-services blob

#Get storage account key
$ACCOUNT_KEY = $(az storage account keys list --resource-group $RESOURCE_GROUP_NAME --account-name $STORAGE_ACCOUNT_NAME --query [0].value -o tsv)

#Create blob container
az storage container create --name $CONTAINER_NAME --account-name $STORAGE_ACCOUNT_NAME --account-key $ACCOUNT_KEY

Write-Output "storage_account_name: $STORAGE_ACCOUNT_NAME"
Write-Output "container_name: $CONTAINER_NAME"
Write-Output "access_key: $ACCOUNT_KEY"

Note, there are different methods to authenticate to Azure when using Remote Backends – you can use secrets stored in a Key Vault, Service Principals and Managed Identities to name a few – however, for this example I’ll use the Azure CLI and authenticate manually using az login.

To configure Terraform to store the State file in your Storage Account you need to add a specific block to your Terraform configuration with the following syntax:

terraform {
  backend "azurerm" {
    [CONFIG HERE…]
  }
}

To extend this further I’ve taken the Terraform configuration scripts written for my previous post on deploying WVD and updated them to store the resultant State file in an Azure Storage Account.

This code is also available on my GitHub, here.

main.tf

# Get AzureRM Terraform Provider
provider "azurerm" {
  version = "2.31.1" #Required for WVD
  features {}
}

# Remote State
terraform {
  backend "azurerm" {
    storage_account_name = "vffwvdtfstate"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
    resource_group_name  = "VFF-USE-RG-WVD-REMOTE"
  }
}

# Create "Pooled" WVD Host Pool
resource "azurerm_virtual_desktop_host_pool" "pooleddepthfirst" {
  location                 = var.region
  resource_group_name      = var.rgname
  name                     = var.pooledhpname
  friendly_name            = var.pooledhpfriendlyname
  description              = var.pooledhpdescription
  type                     = "Pooled"
  maximum_sessions_allowed = 50
  load_balancer_type       = "DepthFirst"
}

# Create RemoteApp Application Group
resource "azurerm_virtual_desktop_application_group" "pooledremoteapp" {
  name                = var.pooledhpremoteappname
  location            = var.region
  resource_group_name = var.rgname
  type                = "RemoteApp"
  host_pool_id        = azurerm_virtual_desktop_host_pool.pooleddepthfirst.id
  friendly_name       = var.pooledhpremoteappfriendlyname
  description         = var.pooledhpremoteappdescription
}

# Create Desktop Application Group
resource "azurerm_virtual_desktop_application_group" "pooleddesktopapp" {
  name                = var.pooledhpdesktopappname
  location            = var.region
  resource_group_name = var.rgname
  type                = "Desktop"
  host_pool_id        = azurerm_virtual_desktop_host_pool.pooleddepthfirst.id
  friendly_name       = var.pooledhpdesktopappfriendlyname
  description         = var.pooledhpdesktopappdescription
}

# Create Workspace
resource "azurerm_virtual_desktop_workspace" "workspace" {
  name                = var.workspace
  location            = var.region
  resource_group_name = var.rgname
  friendly_name       = var.workspacefriendlyname
  description         = var.workspacedesc
}

# Associate RemoteApp Application Group with Workspace
resource "azurerm_virtual_desktop_workspace_application_group_association" "workspaceremoteapp" {
  workspace_id         = azurerm_virtual_desktop_workspace.workspace.id
  application_group_id = azurerm_virtual_desktop_application_group.pooledremoteapp.id
}

# Associate Desktop Application Group with Workspace
resource "azurerm_virtual_desktop_workspace_application_group_association" "workspacedesktop" {
  workspace_id         = azurerm_virtual_desktop_workspace.workspace.id
  application_group_id = azurerm_virtual_desktop_application_group.pooleddesktopapp.id
}
variables.tf

variable "rgname" {
  description = "Resource Group Name"
  default     = "VFF-USE-RG-WVD-REMOTE"
}

variable "region" {
  description = "Region"
  default     = "West US 2"
}

variable "pooledhpname" {
  description = "Pooled Host Pool Name"
  default     = "VFF-WUS-TFRM-Pooled"
}

variable "pooledhpfriendlyname" {
  description = "Pooled Host Pool Friendly Name"
  default     = "VFF Pooled Host Pool"
}

variable "pooledhpdescription" {
  description = "Pooled Host Pool Description"
  default     = "VFF Pooled Host Pool"
}

variable "pooledhpremoteappname" {
  description = "Pooled Host Pool RemoteApp App Group Name"
  default     = "VFF-WUS-TFRM-Pooled-RA"
}

variable "pooledhpremoteappfriendlyname" {
  description = "Pooled Host Pool RemoteApp App Group Friendly Name"
  default     = "VFF Pooled Host Pool Remote Apps"
}

variable "pooledhpremoteappdescription" {
  description = "Pooled Host Pool RemoteApp App Group Description"
  default     = "VFF Pooled Host Pool Remote Apps"
}

variable "pooledhpdesktopappname" {
  description = "Pooled Host Pool Desktop App Group Friendly Name"
  default     = "VFF-WUS-TFRM-Pooled-DT"
}

variable "pooledhpdesktopappfriendlyname" {
  description = "Pooled Host Pool Desktop App Group Friendly Name"
  default     = "VFF Pooled Host Pool Remote Apps"
}

variable "pooledhpdesktopappdescription" {
  description = "Pooled Host Pool Desktop App Group Description"
  default     = "VFF Pooled Host Pool Remote Apps"
}

variable "workspace" {
  description = "WVD Workspace Name"
  default     = "VFF-Terraform-Workspace"
}

variable "workspacefriendlyname" {
  description = "WVD Workspace Friendly Name"
  default     = "VFF-Terraform-Workspace"
}

variable "workspacedesc" {
  description = "WVD Workspace Description"
  default     = "VFF-Terraform-Workspace"
}

If you have any questions or queries please get in touch on Twitter

Thanks.

Learning Terraform > Deploying WVD

Learning Terraform Series
01. Deploying WVD [This Post]
02. Remote State
03. WVD-as-a-Module

I’ve heard Terraform mentioned numerous times in various tech circles over the past year but always chose to give it a wide berth. I knew it was the latest tool in the growing Infrastructure as Code space, but as my own coding experience had never spanned outside of the Windows command line and PowerShell, the idea of learning what I incorrectly assumed would be a wildly ARM-like beast of a language conjured up memories of the time I accidentally walked into the wrong lecture hall at Teesside University in the early 2000s and sat through a mind-blowing and somewhat scarring lecture on C# when I should have been a few doors down learning the ins-and-outs of TCP/IP.

Anyway, cut to the modern day and, through the dark art of YouTube algorithms, I’m presented with a video by the highly respected John Savill on ‘Using Terraform with Azure’ which instantly dispels all my fears – Terraform was not the wildly ARM-like beast I thought it was; in fact it was the opposite.

Sidebar: Around the same time I saw tweets from Neil McLoughlin and Jen Sheerin, both sharing experiences of learning Terraform. Neil recommended a great book which I went on to buy for myself (Terraform: Up & Running: Writing Infrastructure as Code by Yevgeniy Brikman) and Jen shared a great article detailing how she had used Terraform to deploy WVD – just as I went on to do, and which this post centres around – so thanks to both.

So, what is Terraform and why is it worth your time?

Actually, before delving into that let me just set the scene and explain the concept of Infrastructure as Code for those just starting their journey, as understanding this is key to understanding Terraform.

Since the dawn of modern IT there have been sysadmins deploying servers, storage and networks (commonly referred to as infrastructure in sysadmin parlance) in data centres around the world. This was often slow, expensive and prone to errors and configuration drift; that is, deploying 2 servers with the same application identically is pretty straightforward, however multiply that by 2000 and you’re sure to end up with differing configurations and all the pain and sleepless nights that come with it.

Spring forward in time and throw in the virtualisation revolution, which rearchitected the servers, storage and networks we used to deploy as physical devices in cold data centres, abstracting their dependency on physical hardware and refactoring them to be purely “software-defined”. This opened the gates and spawned the concept of Infrastructure as Code: being able to define and provision a full infrastructure architecture purely from lines of code and, not only that, being able to quickly and easily identify and remediate configuration drift.

So, looping back to the original question of what is Terraform?

Terraform is an open-source tool developed by HashiCorp that allows you to provision Infrastructure as Code, and it is very good at it.

Provisioning versus Configuration Management

A quick note on the subtle differences between provisioning and configuration management tools and why they work best when combined.

Terraform is a provisioning tool; it is used to provision the infrastructure, such as the servers you would need to host an application – strictly speaking it does not deploy the application. I say strictly speaking as Terraform can deploy custom server images which contain pre-installed applications, but that is just semantics.

Products like Chef, Puppet and Ansible are configuration management tools, they install and manage software on existing servers, they don’t provision the servers themselves.

So, it’s easy to grasp why companies will often combine a provisioning tool such as Terraform with a configuration management tool such as Chef, Puppet or Ansible to give them end-to-end control of the infrastructure and application stacks.

Terraform 101 – Deploying WVD

I won’t delve any deeper into Terraform as a coding language in this post as, to be honest, I’m still very much a beginner – I recommend you grab the book I mentioned earlier as well as watching John’s video above. That said, as I get more proficient I will start sharing more tips and tricks, but for now what follows in this article should be seen as a 101, a Hello World for Terraform.

The below scripts will deploy a single WVD host pool of the pooled variant, create and associate RemoteApp and Desktop application groups, and finally create and associate a WVD Workspace.

What it won’t do as yet, and where I’m hoping to take this, is deploy WVD Session Hosts (ideally from a Shared Image Gallery) and manage user assignment.

Following Terraform standards there are two scripts, a main.tf which contains the Terraform code for the resources to be deployed and a variables.tf which contains all the referenced variables.

I’ve written this so that the main.tf should remain pretty much untouched and all values such as resource names are referenced from the variables.tf file.

Update: This code is also available on my GitHub, here.

main.tf

# Get AzureRM Terraform Provider
provider "azurerm" {
  version = "2.31.1" #Required for WVD
  features {}
}

# Create Resource Group - This will host all subsequent deployed resources
resource "azurerm_resource_group" "default" {
  name     = var.rgname
  location = var.region
}

# Create "Pooled" WVD Host Pool
resource "azurerm_virtual_desktop_host_pool" "pooleddepthfirst" {
  location                 = var.region
  resource_group_name      = azurerm_resource_group.default.name
  name                     = var.pooledhpname
  friendly_name            = var.pooledhpfriendlyname
  description              = var.pooledhpdescription
  type                     = "Pooled"
  maximum_sessions_allowed = 50
  load_balancer_type       = "DepthFirst"
}

# Create RemoteApp Application Group
resource "azurerm_virtual_desktop_application_group" "pooledremoteapp" {
  name                = var.pooledhpremoteappname
  location            = var.region
  resource_group_name = azurerm_resource_group.default.name
  type                = "RemoteApp"
  host_pool_id        = azurerm_virtual_desktop_host_pool.pooleddepthfirst.id
  friendly_name       = var.pooledhpremoteappfriendlyname
  description         = var.pooledhpremoteappdescription
}

# Create Desktop Application Group
resource "azurerm_virtual_desktop_application_group" "pooleddesktopapp" {
  name                = var.pooledhpdesktopappname
  location            = var.region
  resource_group_name = azurerm_resource_group.default.name
  type                = "Desktop"
  host_pool_id        = azurerm_virtual_desktop_host_pool.pooleddepthfirst.id
  friendly_name       = var.pooledhpdesktopappfriendlyname
  description         = var.pooledhpdesktopappdescription
}

# Create Workspace
resource "azurerm_virtual_desktop_workspace" "workspace" {
  name                = var.workspace
  location            = var.region
  resource_group_name = azurerm_resource_group.default.name
  friendly_name       = var.workspacefriendlyname
  description         = var.workspacedesc
}

# Associate RemoteApp Application Group with Workspace
resource "azurerm_virtual_desktop_workspace_application_group_association" "workspaceremoteapp" {
  workspace_id         = azurerm_virtual_desktop_workspace.workspace.id
  application_group_id = azurerm_virtual_desktop_application_group.pooledremoteapp.id
}

# Associate Desktop Application Group with Workspace
resource "azurerm_virtual_desktop_workspace_application_group_association" "workspacedesktop" {
  workspace_id         = azurerm_virtual_desktop_workspace.workspace.id
  application_group_id = azurerm_virtual_desktop_application_group.pooleddesktopapp.id
}

variables.tf

variable "rgname" {
  description = "Resource Group Name"
  default     = "VFF-WUS-RG-TFWVD"
}

variable "region" {
  description = "Region"
  default     = "West US 2"
}

variable "pooledhpname" {
  description = "Pooled Host Pool Name"
  default     = "VFF-WUS-TF-Pooled"
}

variable "pooledhpfriendlyname" {
  description = "Pooled Host Pool Friendly Name"
  default     = "VFF Pooled Host Pool"
}

variable "pooledhpdescription" {
  description = "Pooled Host Pool Description"
  default     = "VFF Pooled Host Pool"
}

variable "pooledhpremoteappname" {
  description = "Pooled Host Pool RemoteApp App Group Name"
  default     = "VFF-WUS-TF-Pooled-RA"
}

variable "pooledhpremoteappfriendlyname" {
  description = "Pooled Host Pool RemoteApp App Group Friendly Name"
  default     = "VFF Pooled Host Pool Remote Apps"
}

variable "pooledhpremoteappdescription" {
  description = "Pooled Host Pool RemoteApp App Group Description"
  default     = "VFF Pooled Host Pool Remote Apps"
}

variable "pooledhpdesktopappname" {
  description = "Pooled Host Pool Desktop App Group Friendly Name"
  default     = "VFF-WUS-TF-Pooled-DT"
}

variable "pooledhpdesktopappfriendlyname" {
  description = "Pooled Host Pool Desktop App Group Friendly Name"
  default     = "VFF Pooled Host Pool Remote Apps"
}

variable "pooledhpdesktopappdescription" {
  description = "Pooled Host Pool Desktop App Group Description"
  default     = "VFF Pooled Host Pool Remote Apps"
}

variable "workspace" {
  description = "WVD Workspace Name"
  default     = "VFF-Terraform-Workspace"
}

variable "workspacefriendlyname" {
  description = "WVD Workspace Friendly Name"
  default     = "VFF-Terraform-Workspace"
}

variable "workspacedesc" {
  description = "WVD Workspace Description"
  default     = "VFF-Terraform-Workspace"
}

If you’re intending to run this code yourself you’ll need to prepare your environment, again following the instructions in John’s video above, that is, download and copy the Terraform executable to your local device, install the AZ PowerShell module and connect to your Azure account.

I’d also recommend VSCode as a code editor if you’re not already using it.

Once all of this is in place you can run the below commands to initialise, plan and apply Terraform, then sit back and watch it go!

terraform fmt  #formats all .tf files in this folder so they're consistently laid out

terraform init
terraform plan
terraform apply -auto-approve

What you will end up with should look like the below:

If you have any questions or queries please get in touch on Twitter

Thanks.

WVD > Outlook running on Windows 10 Multi-Session displays “Need Password”

This post describes an issue that under certain circumstances can affect MS Outlook running on Windows 10 Multi-Session.

The issue manifests itself as MS Outlook constantly prompting the user for credentials; even after entering the correct username and password the prompt loops, which gives the user the same experience as if they’d entered a wrong password.

Note, this was not all users on every WVD session host all of the time; it was certain users, intermittently, on differing hosts.

WVD-Outlook-Creds

I too did the usual dance of looking to diagnose issues with the user’s credentials, such as changing the password, forcing AD Connect syncs, checking issues with Basic versus Modern Auth, AAD App Passwords, disabling MFA – nothing worked.

Just to set the scene and give a bit more background info, this particular customer uses ADFS to proxy authentication from Azure AD to Active Directory; they use Windows 10 Multi-Session (Build 1909) and MS Office 365 (Build 2004).

I read through dozens of articles with one consistent theme of disabling the use of Web Account Manager – the below paragraph from Duo Support added great context:

“By default, Microsoft Office 365 ProPlus (2016 version) uses Azure Active Directory Authentication Library (ADAL) framework-based authentication. Starting in build 16.0.7967, Office uses Web Account Manager (WAM) for sign-in workflows on Windows builds that are later than 15000 (Windows 10, version 1703, build 15063.138). There are generally two problems we see WAM causing:

Users unable to authenticate (particularly after a password reset)

WAM introduces new requirements for Identity Providers (IdP) used to federate Office 365 (O365) logins. When a Windows 10 workstation is joined to an on-premise Active Directory, WAM/O365 requires the IdP to support the WS-Trust protocol.  When a user’s access/refresh tokens become invalid, such as after a password reset, the WAM framework tries to re-authenticate the user. The expected end-user experience is a popup window showing the login page of the IdP asking the user to re-authenticate. When the IdP is the DAG, this process will fail causing the user to be unable to re-connect to O365 with applications such as Microsoft Outlook. The user will see the authentication window open briefly then immediately close while Outlook continues to show the message Need Password.”

The article suggests disabling WAM using the below registry key:

HKEY_CURRENT_USER\Software\Microsoft\Office\16.0\Common\Identity
Create Dword Value DISABLEAADWAM
Set a value of 1
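If you’d rather script the change than use regedit, a quick PowerShell sketch of the same setting is below – run it in the context of the affected user as it writes to HKCU:

#Create the Identity key if required and set DISABLEAADWAM to 1
New-Item -Path "HKCU:\Software\Microsoft\Office\16.0\Common\Identity" -Force | Out-Null
New-ItemProperty -Path "HKCU:\Software\Microsoft\Office\16.0\Common\Identity" -Name "DISABLEAADWAM" -PropertyType DWord -Value 1 -Force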

Now, this did resolve the issue; however, I always felt it was a workaround rather than a fix, as I was mindful of the effect disabling this service could have on other dependent applications and Windows services.

That’s when a colleague shared this post from Pieter Wegleven, WVD Product Manager at Microsoft.

Pieter’s post states that the issue could be caused when session hosts are registered in the same Azure AD tenancy as they are domain-joined, that is, domain joined to the same AD that syncs to the AAD in which they are registered – this is caused when a user selects the “use this account everywhere” prompt from an Office app which can be done by standard (non-admin) users.

I checked this in the customer’s AAD (Azure Portal > Azure Active Directory > Devices) and found that to be the case – see below. Note how a single session host VM is registered multiple times with different owners; these are all the same users who initially reported the issue with Outlook.

WVD-Office-Reg
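If you’d rather query for these duplicates than eyeball the portal, below is a rough PowerShell sketch using the AzureAD module – the session host name prefix is a placeholder you’d adjust to match your environment:

#List devices registered in AAD more than once (adjust the name prefix)
Connect-AzureAD
Get-AzureADDevice -All $true | Where-Object { $_.DisplayName -like "<SessionHostPrefix>*" } | Group-Object DisplayName | Where-Object { $_.Count -gt 1 } | Select-Object Name, Count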

Pieter’s post advises that Microsoft are “making changes to the Windows 10 multi-session image in the Azure gallery to prevent users from registering VMs”, however in the immediate term it suggests two methods to resolve this issue.

First, and the Microsoft-preferred method, is to configure Hybrid Azure AD join, which would allow VMs to be joined to AAD rather than registered.

Below is a quick reminder on the differences between an AAD registered device versus joined.

Azure AD Registered > Devices that are Azure AD registered are typically personally owned or mobile devices, and are signed in with a personal Microsoft account or another local account.

Azure AD Joined > Devices that are Azure AD joined are owned by an organisation, and are signed in with an Azure AD account belonging to that organisation.

The second option is to prevent the VM from registering in Azure AD. This is the option I opted for on this occasion, namely because shifting to a hybrid Azure AD was a significant design decision presenting a change in architecture for the customer, and not something they would opt into quickly, nor should they.

Preventing VMs from registering in Azure AD can be achieved by adding the below registry key:

HKLM\SOFTWARE\Policies\Microsoft\Windows\WorkplaceJoin
Create Dword Value BlockAADWorkplaceJoin
Set a value of 1
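Again, if scripting is preferred, a PowerShell sketch of the same change is below – run it elevated as it writes to HKLM:

#Create the WorkplaceJoin key if required and set BlockAADWorkplaceJoin to 1
New-Item -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WorkplaceJoin" -Force | Out-Null
New-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WorkplaceJoin" -Name "BlockAADWorkplaceJoin" -PropertyType DWord -Value 1 -Force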

Note, you can add the above registry key to machines that have previously registered in AAD; you would simply have to delete the stale device objects from AAD afterwards.

I hope you found this post useful. If you’re interested in engaging with others using and learning more about WVD, please consider following the WVD Community on Twitter and joining the Slack channel, both run by Neil and Stefan, and both great sources of info!

Thanks.

Troubleshooting connectivity from the RD Client to WVD Part 2 – Log Analytics

A few weeks ago I introduced an issue I was working on with a customer who was reporting intermittent issues connecting to Windows Virtual Desktop from the RD Client on their corporate devices, you can read that initial post here.

The focus of that post was doing some preliminary checks, namely comparing connectivity from the RD Client against the web interface at the time the user was reporting the issue. This would help narrow down whether the problem was environmental – as in, within the boundary of the user’s device, the customer VPN etc – as a successful connection from the web interface would go some way to proving the health of the WVD control plane, host pool and session host.

A quick recap on activities and results since that initial post…

Tests confirmed that the user could indeed connect using the web interface at the exact same time the RD Client would not connect. Note, the user in question was no longer working from home at the time; he was in the corporate office connected to the LAN.

Also note that at the time the user could not connect I confirmed that other users were connected to the same WVD resource, and I myself could connect using the RD Client.

We then worked with the customers’ network and security engineer (who maintains their Cisco Umbrella deployment) to systematically walk through the network topology to ensure traffic from that source IP was successfully traversing each node without error.

We also used Wireshark to capture traffic from the user’s device; however, I’ll save that particular deep-dive for a follow-up post as I’m still analysing the results. What I’m hoping to do is compare the resultant capture with the WVD connectivity flow (from part 1) and correlate each step to a particular section of the capture to hopefully show which step is having problems – this is still work in progress.


The below is an example Wireshark capture I used with the customer as a comparison. I started the capture and then opened the RD Client – as mentioned in part 1, the RD Client will always refresh your subscriptions on start-up. This is shown from lines 203 to 205, in which the initial connection is made after the RDWEB.WVD.MICROSOFT.COM hostname is resolved in DNS. From line 206 a secure connection is initiated to the WVD control plane, which validates the TLS certificate against the GlobalSign certificate authority at lines 217 to 221 – this is one of the reasons internet access is required for WVD, as the certificate authority is accessed via its public endpoint.

WVD-WireShark-Capture

I’m hoping to have this analysis and deep-dive completed soon; I’ll document it in a follow-up post after I’ve had it reviewed and verified for accuracy by the WVD Community and Microsoft WVD Global Black Belts.

Back to the troubleshooting…

Frustratingly, during the analysis of traffic logs on the customer’s perimeter firewall, and having made zero changes, the RD Client was then able to connect to WVD!

This now has us, me and the customer’s tech team, scratching our heads – are we now looking for something dynamic in an otherwise static local architecture that under certain conditions causes these connectivity problems, such as asymmetric routing? Again, more on that in a later post as I’m working with the customer and their other supporting partners to build a diagram of their holistic network architecture.

After a productive conversation with the customer’s aligned Microsoft DSE it was agreed that, whilst we wait for the issue to reoccur, we would configure the WVD tenant to send logs to an Azure Log Analytics workspace to see if that could shed any light on the situation.

Setting up a new Log Analytics workspace is very straightforward: from the search bar in the Azure portal enter Log Analytics and, from the results, select Log Analytics Workspaces.

WVD-Log-Anal-Setup-3

Select Add.

WVD-Log-Anal-Setup-4

Select an appropriate Subscription and Resource Group, then provide a suitable name for the Workspace and a region to host it, ideally use the same region as the WVD session hosts are deployed.

WVD-Log-Anal-Setup-1

Click ‘Next: Pricing Tier’ to proceed.

Select a pricing tier, details on pricing tiers can be found here.

WVD-Log-Anal-Setup-2

Proceed through the next two screens, entering appropriate tags, and finally select Review + Create to create the Log Analytics workspace.
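If you prefer the CLI to the portal, the below sketch creates an equivalent workspace – all names are placeholders:

#Create a Log Analytics workspace (placeholder names)
az monitor log-analytics workspace create --resource-group "<ResourceGroupName>" --workspace-name "<WorkspaceName>" --location "<Region>"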

Once created, the Log Analytics workspace will display a Workspace ID; note this down as you’ll need it later.

Note, the next steps to connect the WVD tenant apply to the Fall 2019 release only, the setup for Spring 2020 release differs.

Binding your WVD tenant is quick and straightforward following this Microsoft article.

Before you start you’ll need the below info:

* WVD tenant name
* Azure subscription ID
* Log Analytics Workspace ID (available from Overview pane on the workspace itself)
* Log Analytics Primary Key (available from Advanced Settings > Connected Sources)

Once you have the above, open PowerShell and run the below commands, substituting your own values for the placeholders:

Add-RdsAccount -DeploymentUrl "https://rdbroker.wvd.microsoft.com"
Set-RdsTenant -Name "<TenantName>" -AzureSubscriptionId "<SubscriptionId>" -LogAnalyticsWorkspaceId "<WorkspaceId>" -LogAnalyticsPrimaryKey "<PrimaryKey>"

This will bind your WVD tenant to the Log Analytics workspace.

Be aware it can take up to an hour before log data will start to appear so be patient.

Note, connecting the WVD tenant to the Log Analytics Workspace sends log data at the tenant level only, such as user-driven connection events; it will not send VM or guest OS performance stats – for that you must install the Windows agent on the VM following this article.

WVD creates the three below custom logs in the workspace.

WVD-Log-Anal-Work-1

Like any log management solution, the key is often knowing what you’re looking for and how to find it; Log Analytics is no different. Microsoft provides two great example queries to get you started, here.

To query the workspace, select Logs under General; this will open a new pane for the query editor.

I adapted one of Microsoft’s example queries to show just failed connections from the RD Client, where ClientType_s == “com.microsoft.rdc.windows.msrdc.x64” denotes connections from the RD Client – see below:

WVDActivityV1_CL
| where Type_s == "Connection"
| where Outcome_s == "Failure"
| where ClientType_s == "com.microsoft.rdc.windows.msrdc.x64"
| join kind=leftouter
(
    WVDErrorV1_CL
    | summarize Errors = makelist(pack('Time', Time_t, 'Code', ErrorCode_s, 'CodeSymbolic', ErrorCodeSymbolic_s, 'Message', ErrorMessage_s, 'ReportedBy', ReportedBy_s, 'Internal', ErrorInternal_s)) by ActivityId_g
)
on $left.Id_g == $right.ActivityId_g
| join kind=leftouter
(
    WVDCheckpointV1_CL
    | summarize Checkpoints = makelist(pack('Time', Time_t, 'ReportedBy', ReportedBy_s, 'Name', Name_s, 'Parameters', Parameters_s)) by ActivityId_g
)
on $left.Id_g == $right.ActivityId_g
| project-away ActivityId_g, ActivityId_g1

Select a time range, highlight the entire query (Control+A) and click Run.

The results will be displayed as below; this shows two failed attempts from the RD Client in the past 24 hours. It also shows the user is running Windows 10 build 1809 (17763) and version 1.2.945.0 of the RD Client – both extremely useful pieces of information when analysing data and looking for trends.

WVD-Log-Anal-Work-2

Drilling down into each of those failed connections though is where the gold is!

On the left-most side of each log is an arrow to expand it; the first tier provides a summary of the connection.

WVD-Log-Anal-Work-3

After which is a subset of the log displaying any errors; you can see from the below that this session failed with error code 2055, SSL_ERR_LOGON_FAILURE.

Now, I’m not going to profess to know exactly what that particular error is caused by, but it is certainly enough to either focus further troubleshooting or raise a support request with Microsoft.

WVD-Log-Anal-Work-4

Lastly, the final section of the log is extremely interesting in that it walks you through the checkpoints of the connection flow; you can view the result of each checkpoint.

WVD-Log-Anal-Work-5

If you’re interested in engaging with others using and learning more about WVD, please consider following the WVD Community on Twitter and joining the Slack channel, both run by Neil and Stefan, and both great sources of info!

Thanks.