QuickFix > Azure VM Run Command (A Life Saver)

As a long-time SysAdmin managing estates of hundreds, if not thousands, of servers, especially in the pre-virtualisation days, it wasn’t uncommon to look at your taskbar and see dozens of open RDP sessions: one for every ticket you’d dealt with that day, or that week if you weren’t the most disciplined at logging off and became ‘that guy’ who hogged all the sessions. You know who you are 🙂

But what if RDP wasn’t available, or the session simply wouldn’t connect? What then?

Thankfully, in those days and even now, we typically had the safety net of an out-of-band management console such as Dell iDRAC or HP iLO. But if that was unavailable too, we’d have to resort to remote execution tools such as PsExec to get us out of dodge.

This can be a common scenario with server workloads running in public clouds too. The options for remote connectivity and management are greatly improving, negating the need to use protocols such as RDP as the primary means of connecting to a server GUI (Azure Bastion, for example), but what if you come across that same scenario: a total lack of GUI access on a VM that has no public-facing interfaces?

I had that exact scenario this week: I spun up a Windows 10 Azure VM, nonchalantly enrolled it into Azure AD, restarted it and could no longer connect via RDP.

As it happens, the Intune Configuration Policies had taken effect and removed the standard local administrator account, but for some reason failed to create the new admin account, meaning I was locked out.

Now, in hindsight, I could have just as easily deleted the VM and recreated it after I’d rectified the issue with the policies, but instead, I used the hidden gem that is the Run Command to fix it.

What is the Run Command?

“The Run Command feature uses the virtual machine (VM) agent to run PowerShell scripts within an Azure Windows VM. You can use these scripts for general machine or application management. They can help you to quickly diagnose and remediate VM access and network issues and get the VM back to a good state”

In short, it’s a quick and easy way to access an administrative console on a Windows VM. Even if you do have remote connectivity into the GUI, if the task you need to run or the question you need to answer can be handled from the command line, it’s often quicker to use the Run Command.

The Run Command is found under Operations.

WVD-RunCommand

Microsoft provides a number of handy preset commands for performing common tasks, such as running IPConfig or enabling the local admin account; however, if you simply need access to the command line there is the top option, RunPowerShellScript.

WVD-RunCommand-Menu

This presents a text box in which to enter commands; click Run to execute them.

Note, commands are run as NT Authority\System. 

WVD-RunCommand-WhoAmI
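
As an aside, you don’t have to use the portal at all; the same feature can be driven from the Az PowerShell module. The below is a minimal sketch (the resource group, VM name and script path are placeholders, and it assumes you’ve already signed in with Connect-AzAccount):

# Run a local PowerShell script against the VM via the Run Command feature
Invoke-AzVMRunCommand -ResourceGroupName "MyResourceGroup" -VMName "MyVM" -CommandId "RunPowerShellScript" -ScriptPath ".\FixTempAdmin.ps1"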

It was using this option that I resolved my connectivity issue. I used the below mix of net commands and PowerShell to create a temporary admin account to allow me to log on to the VM.

net user /add TempAdmin 7ABLT*C#4!Lz
Set-LocalUser -Name TempAdmin -PasswordNeverExpires $True
Set-LocalUser -Name TempAdmin -Description "Azure Run Command Created Temp Admin User"
net localgroup Administrators /add TempAdmin

The Output dialog shows the results of the script.

WVD-RunCommand-Fix
Lastly, for the wider Windows Virtual Desktop community, especially those managing Fall 2019 workloads, I’ve found it far quicker to run QWINSTA from the Run Command on a WVD VM to report on session states than it is to launch PowerShell, connect to Azure AD, connect to RDWeb and query the host pool, with the obvious limitation that you’re only querying a single host.

WVD-RunCommand-QWINSTA

Using Intune to remotely install Powershell modules on enrolled devices

A few weeks ago I shared a post detailing how you could write the resultant output of an Intune-pushed PowerShell script to Azure Tables (you can read that post here). The use case that drove that post was a customer asking for explicit evidence that a particular Microsoft hotfix had been installed on all devices in their estate.

The main function of that script used the Az module to connect to the Azure table and write the data. However, I made what was, in hindsight, a pretty significant oversight: I assumed, and therefore didn’t check, that the Az module had been installed and imported. This meant the script failed on the majority of users’ devices, as they didn’t have the module installed.

Thank you to Nathan Cook, who commented on that post asking that very question and made me realise the mistake; I’ve now added an addendum to that post advising as much.

Intune-InstallPSModules-Comment

To that end, see the below script from Nickolaj Andersen’s GitHub repo, which I’ve adapted to suit being deployed from Intune.

This script can either be used at the start of an individual script to check for the presence of any required modules or deployed separately, say as part of an Autopilot post-deployment sequence to push out commonly used modules.

#Start logging
Start-Transcript -Path "C:\Logs\InstallAzNew - $(((Get-Date).ToUniversalTime()).ToString("yyyyMMddThhmmssZ")).log" -Force

# Determine if the Az module needs to be installed
try {
    Write-Host "Attempting to locate Az module"
    $AzModule = Get-InstalledModule -Name Az -ErrorAction Stop -Verbose:$false
    if ($AzModule -ne $null) {
        Write-Host "Az module detected, checking for latest version"
        $LatestModuleVersion = (Find-Module -Name Az -ErrorAction Stop -Verbose:$false).Version
        if ($LatestModuleVersion -gt $AzModule.Version) {
            Write-Host "Latest version of Az module is not installed, attempting to install: $($LatestModuleVersion.ToString())"
            $UpdateModuleInvocation = Update-Module -Name Az -Scope CurrentUser -Force -ErrorAction Stop -Confirm:$false -Verbose:$false
        }
    }
}
catch [System.Exception] {
    Write-Host "Unable to detect Az module, attempting to install from PSGallery"
    try {
        # Install NuGet package provider
        $PackageProvider = Install-PackageProvider -Name NuGet -Force -Verbose:$false

        # Install Az module
        Install-Module -Name Az -Scope AllUsers -Force -ErrorAction Stop -Confirm:$false -Verbose:$false
        Write-Host "Successfully installed Az"
    }
    catch [System.Exception] {
        Write-Host "An error occurred while attempting to install Az module. Error message: $($_.Exception.Message)" ; break
    }
}

# Stop Logging
Stop-Transcript

This script should be run as the logged-on user; ensure this is set when creating the task in Intune, as below.

Intune-InstallPSModules-RunAsUser

Note, if you do use this script to deploy the entire Az module and not a subset such as Az.Network, be aware that it is pretty big and may take a while to download depending on environmental factors, such as available bandwidth etc.
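
If you only need cmdlets from one area, you can swap the module name in the script above for the specific sub-module, which keeps the download far smaller, for example:

# Install just the Az.Network sub-module rather than the full Az roll-up
Install-Module -Name Az.Network -Scope AllUsers -Force -ErrorAction Stop -Confirm:$false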

The below screenshot shows the output of the transcript log written to the local device; note it shows that neither the Az module nor the NuGet package provider was installed, so both were pulled from the PSGallery and installed.

Intune-InstallPSModules-Log

For confidence during testing, you can see the script installing the many Az sub-modules into C:\Program Files\WindowsPowerShell\Modules

Intune-InstallPSModules-ModulesDir

A few quick tips for troubleshooting, not just for this script but for any you deploy via Intune.

1 > Don’t Wait

The default refresh and pull cycle of Intune (think GP refresh time for AD GPOs) is 60 minutes, but during development you’re going to want to push that script out fast. There are several ways to force a sync between Windows 10 and Intune; the quickest is to restart the Microsoft Intune Management Extension service, which forces an immediate check-in.
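
From an elevated PowerShell prompt that’s a one-liner (the underlying service name should be IntuneManagementExtension):

# Restart the Intune Management Extension service to force an immediate check-in
Restart-Service -Name IntuneManagementExtension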

Intune-InstallPSModules-Service

2 > Run Script Locally

This may sound like an obvious one, but if you’re running a script like the one above to install a certain module, you’ll want to keep the testing environment the same. That is, you don’t want to test it on a different build of Windows 10, or somewhere you have full admin rights, as this will increase the likelihood of false positives.

To that end, Intune caches and executes a local copy of the script in C:\Program Files (x86)\Microsoft Intune Management Extension\Policies\Scripts – run that as the locally logged on user, maybe add the -WhatIf switch to simulate the results.

Note, the scripts won’t have the same friendly and informative name you saved them with; instead, they’re named after the GUID of the task in Azure.

Intune-InstallPSModules-SavedDir

3 > In The Registry We Trust

If you don’t subscribe to the practice of using Start and Stop-Transcript for logging you can use the registry to get the results of the script.

The key is HKLM\SOFTWARE\Microsoft\IntuneManagementExtension\Policies

Again, as with the locally cached script, the key adopts the name of the task GUID, from there you can view the Result and ResultDetails values.
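
A rough sketch of pulling those values back with PowerShell (the key layout is a user GUID containing a key per script GUID, so the recurse-and-read approach below simply dumps everything):

# Dump the Result and ResultDetails values for every cached script policy
Get-ChildItem -Path 'HKLM:\SOFTWARE\Microsoft\IntuneManagementExtension\Policies' -Recurse | ForEach-Object {
    Get-ItemProperty -Path $_.PSPath | Select-Object PSChildName, Result, ResultDetails
}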

This is handy for development; however, I strongly suggest writing the output to verbose logs for production.

Intune-InstallPSModules-Registry

Troubleshooting connectivity from the RD Client to WVD Part 1 – Getting Started

This week I’ve been working with a customer that is experiencing intermittent issues connecting into Windows Virtual Desktop from the RD Client installed on their corporate Windows 10 devices, and I was asked to formulate a list of troubleshooting steps that their IT team could follow to help find and resolve the root cause.

I reached out to Jim Moyle, our aligned WVD Global Black Belt for his thoughts, and in this post, I want to share those initial troubleshooting steps, along with the rationale for each, put them out to the wider WVD community for feedback and over the coming weeks update this article with the identified root cause and the steps taken to resolve the issue.

For info and early clarification, this customer is using the Fall 2019 release of Windows Virtual Desktop.

So, what is the problem?

As introduced, the customer is reporting that certain users, not all, are intermittently unable to connect to their WVD resource using the RD Client installed on their corporate Windows 10 device. Furthermore, they never receive any errors (such as resources not being available), nor have they indicated that the connection attempt times out (I’ll double-check this and update if required); it simply doesn’t connect.

The below screenshot was provided by an affected user; it shows the RD Client attempting to connect to a selected remote desktop.

WVD-Connection-Issue

Before delving any deeper and starting to define the tests to be undertaken, let’s quickly recap how a user connects to WVD. That way, once we do start defining a particular test, we better understand the rationale behind it, that is, what we are trying to prove or disprove.

The diagram below shows the WVD connection flow.

WVD-Traffic-Flow

Step 1 > The user launches the RD Client, which connects to Azure AD; the user signs in, and Azure AD returns a token.
Step 2 > The RD Client uses the previously generated token and authenticates to Web Access, the Broker then queries the database to determine the resources (Remote Apps and Desktops) that the user is assigned to.
Step 3 > The user selects a resource (Remote App or Desktop) and the RD Client connects to the Gateway.
Step 4 > Finally, the Broker orchestrates the connection from the WVD instance (the Azure VM) to the Gateway (aka Reverse Connect).

Note, on start-up the RD Client will always refresh your feed, as below; this is the RD Client running through steps 1 and 2 of the above connection flow.

WVD-Refreshing-Feed

Now, let’s quickly cover some basic assumptions before we get into the testing, again I’ll update these as I speak with the customer IT team and understand more of the nuances of the issue.

Assumption 1 > WVD is a global service; as such, for resilience, it operates many instances of the WVD control plane (the backend services, such as Web Access, Broker and Gateway, shown in the connection flow diagram) in each region. However, based on availability at the time, the control plane managing your users’ connections into WVD may not be running in the same region as the WVD VMs themselves. Azure uses its Front Door (and Traffic Manager) service to provide a resilient and optimised connection to the control plane – let’s assume a control plane is always up and available.

However, if we wanted to verify that the control plane you’re using is healthy, we could use the below PowerShell commands.

# Import Fall 2019 WVD module
Import-Module -Name Microsoft.RDInfra.RDPowerShell

# Connect to WVD
Add-RdsAccount -DeploymentUrl "https://rdbroker.wvd.microsoft.com"

# Get control plane info
Invoke-RestMethod -Uri "https://rdweb.wvd.microsoft.com/api/health"

The below shows the results of the script; note the service is reporting as healthy and, more importantly, the Region URL shows the actual control plane you’re using.

WVD-Get-Control-Plane

Assumption 2 > This is only affecting certain users; others can successfully connect to the same WVD resource at the same time as those who cannot, indicating that the WVD VMs themselves are healthy.

Again, if we want to verify the health of the WVD session hosts within a given host pool we could use the below Powershell commands.

# Import Fall 2019 WVD module
Import-Module -Name Microsoft.RDInfra.RDPowerShell

# Connect to WVD
Add-RdsAccount -DeploymentUrl "https://rdbroker.wvd.microsoft.com"

# Get Session Host Status
Get-RdsSessionHost -TenantName WVD-Tenant-Name -HostPoolName WVD-Host-Pool-Name | Select SessionHostName, Status

The results will be shown as below.

WVD-Session-Health

Assumption 3 > This is only affecting users on their corporate devices, I’ll double-check this with the customer IT team and update if needed.

Assumption 4 > The customer has all the WVD backend services opened and available through the corporate firewall and web proxies.  I know they use Cisco Umbrella for web proxy services, so something to be mindful of.

Assumption 5 > The user sees the same issue if they are on their corporate LAN, VPN or connected to the open internet from their home broadband as the majority of their staff are working from home.

So, what tests are we going to run when a user reports they cannot connect and why?

Test 1 > Can the user access the same WVD resource from the HTML5 interface at https://aka.ms/wvdweb?

Why? We need to narrow down whether the issue is only with the connection initiated from the RD Client; testing from the WVD web interface should help prove this.  If the user is able to authenticate to the web interface, see all of their assigned WVD resources and then successfully log on to a desktop, it proves the issue is not with the control plane, their assignment or the WVD session host.

Test 2 > Can the user resolve the WVD Global URL in DNS?

Why? I’m almost certain that whatever the root cause is, it will be environmental, that is, something occurring at that exact time on that device that is hindering the connection attempt. This is a very simple test to ensure that the device is able to resolve the WVD Global URL that is used to forward to the regional control plane instance and initiate the connections.

C:\Users\DeanLawrence>nslookup rdweb.wvd.microsoft.com
Server:  SkyRouter.Home
Address:  192.168.0.1

Non-authoritative answer:
Name:    waws-prod-db3-3dec2181.cloudapp.net
Address:  137.116.248.148
Aliases:  rdweb.wvd.microsoft.com
          rdweb-prod-geo.trafficmanager.net
          mrs-neur1c100-rdweb-prod.wvd-ase-neur1c100-prod.p.azurewebsites.net
          waws-prod-db3-3dec2181.sip.p.azurewebsites.windows.net
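
If you’d rather stay in PowerShell for the lookup, Resolve-DnsName returns the same information:

Resolve-DnsName -Name rdweb.wvd.microsoft.com -Type A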

We could build on that test slightly using the Test-NetConnection cmdlet to test connectivity to the regional URL identified earlier over HTTP.

PS C:\Users\DeanLawrence> Test-NetConnection rdweb-neu-r1.wvd.microsoft.com -CommonTCPPort HTTP -InformationLevel Detailed

ComputerName : rdweb-neu-r1.wvd.microsoft.com
RemoteAddress : 137.116.248.148
RemotePort : 80
NameResolutionResults : 137.116.248.148
MatchingIPsecRules : 
NetworkIsolationContext : Internet
IsAdmin : False
InterfaceAlias : WiFi
SourceAddress : 192.168.0.124
NetRoute (NextHop) : 192.168.0.1
TcpTestSucceeded : True

Test 3 > Clear RD Client Subscriptions and Re-Subscribe

Why? As mentioned earlier, on start-up the RD Client performs a refresh of the feed, which then caches the subscription details in the registry.  This test looks at whether there could be an issue with either the cached settings or the authentication token. You can unsubscribe from the RD Client itself by clicking the three-dot menu next to the tenant name and selecting Unsubscribe, or by running the below from a command prompt.

msrdcw.exe /reset /f

Well, that’s it for now (16/05/20). If you have any suggestions please send them my way on Twitter; I’ll update this post as soon as I’ve had a chance to work with the customer’s IT team and investigate these issues more closely.

Thanks.

Part 2 in this blog series is up now, click here.

QuickFix > Starting and Stopping Azure VMs using a PowerShell Menu

This is another quick fix to help automate a somewhat monotonous task; this time it’s using PowerShell to build a simple menu that can start and stop a set of Azure virtual machines using a Service Principal.

The script is pretty straightforward and can easily be adapted to add additional VMs should you need to power up, or down, a larger set of virtual machines.

For ease, and to keep this as hands-off as possible, this script uses a Service Principal to authenticate to Azure AD and set the VM power state, saving you from having to manually enter credentials and respond to MFA challenges.

I’ve borrowed and adapted the PowerShell commands directly from this Microsoft Windows Virtual Desktop article, which can be used to create a Service Principal if you don’t have one created already.

# This script creates a Service Principal within Azure AD

# Install and import AzureAD PS Module
Install-Module AzureAD
Import-Module AzureAD

# Connect to Azure AD with a Global Admin account.
$aadContext = Connect-AzureAD

# Create Service Principal
$svcPrincipal = New-AzureADApplication -AvailableToOtherTenants $true -DisplayName "VM Power Menu Service Principal"

# Get Service Principal ID and Key 
$svcPrincipalCreds = New-AzureADApplicationPasswordCredential -ObjectId $svcPrincipal.ObjectId

# Return Service Principal App ID
$svcPrincipal.AppId

# Return Service Principal Secret Key
$svcPrincipalCreds.Value

# Return Azure AD Tenant ID
$aadContext.TenantId.Guid

Before proceeding, please ensure that the newly created Service Principal has adequate permissions to start and stop the required virtual machines; this can be set directly on the VMs themselves or on the Resource Group.

Note, if you’re setting these permissions in a production environment, or one that is particularly security-sensitive, please be mindful of the principle of least privilege; that is, avoid using elevated roles such as Owner or Contributor if a lesser role, such as Virtual Machine Contributor, would suffice.
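
A hedged sketch of scripting that role assignment with the Az module (it assumes you’re already connected with Connect-AzAccount as an account able to assign roles, that a service principal object exists for the app registration, and the resource group name is a placeholder):

# Grant the Service Principal the Virtual Machine Contributor role on the hosting resource group
New-AzRoleAssignment -ApplicationId $svcPrincipal.AppId -RoleDefinitionName "Virtual Machine Contributor" -ResourceGroupName "Resource-Group-Hosting-VMs-Here"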

Once the Service Principal has been created, copy and paste the below into your code editor of choice. You will need to set the initial variables in the header; you can use the resultant output from the script above for the Azure AD Tenant ID and Service Principal details. You will also need to populate the two Azure VM variables and the associated hosting Resource Group.

Import-Module Az.Compute

$AADTenant = "Azure-AD-Tenant-ID-Here"
$SPAppID = "Service-Principal-App-ID-Here"
$SPAppSecret = "Service-Principal-Secret-Key-Here"
$WVDVM1 = "First-VM-To-Start-Here"
$WVDVM2 = "Second-VM-To-Start-Here"
$AzRG = "Resource-Group-Hosting-VMs-Here"
$passwd = ConvertTo-SecureString $SPAppSecret -AsPlainText -Force
$pscredential = New-Object System.Management.Automation.PSCredential($SPAppID, $passwd)

Connect-AzAccount -ServicePrincipal -Credential $pscredential -Tenant $AADTenant

function Show-Menu
{
     param (
           [string]$Title = 'Start-Stop WVD Lab'
     )
     cls
     Write-Host "================ $Title ================"
     Write-Host "1: Press '1' to start WVD."
     Write-Host "2: Press '2' to stop WVD."
     Write-Host "Q: Press 'Q' to quit."
}
do
{
     Show-Menu
     $selection = Read-Host "Please make a selection"
     switch ($selection)
     {
           '1' {
                cls
                'Starting WVD'
                    Start-AzVM -Name $WVDVM1 -ResourceGroupName $AzRG
                    Start-AzVM -Name $WVDVM2 -ResourceGroupName $AzRG
           } '2' {
                cls
                'Stopping WVD'
                    Stop-AzVM -Name $WVDVM1 -ResourceGroupName $AzRG -Force
                    Stop-AzVM -Name $WVDVM2 -ResourceGroupName $AzRG -Force
           } 'q' {
                return
           }
     }
     #pause
}
until ($selection -eq 'q')

When executed the script presents a simple menu with 3 choices as below.

WVD-Start-Script

Note, after executing option 1 or 2 the script will not exit and close; instead, it will await a response of Q to quit. This was done intentionally as a gentle reminder to the user to stop the VMs they’d originally started.

Lastly, as a matter of good practice, for lab environments that don’t need to run around the clock, for subscriptions that use free credit, or if you’re just trying to keep costs down, I’d always recommend setting an Auto-Shutdown time on non-production VMs as a backup.

Writing the results from an Intune executed Powershell script to Azure Table Storage

Edit 23/05/20: The script in this post omits a check for the presence of the PowerShell modules required to run certain cmdlets; please see the new accompanying post for details on how to check for, and remotely install, the required modules using Intune.

Anyone who has worked as a SysAdmin over the years managing either server or client estates, or both, and likely remotely, will have undoubtedly had to hack up a script of varying complexity to push out some sort of workaround or fix, be it adding a registry key or deleting old profiles to recoup some disk space.

That was all straightforward in ‘traditional’ Windows-based server-client environments, where connectivity to the source was over a reliable LAN or WAN and the hardest part was remembering the correct syntax for PsExec. But that is no longer the case in the more modern workplace, where devices are Azure AD joined and use the internet as their default communication medium.

This is where Intune (Endpoint Manager) comes in. The ability to remotely execute PowerShell scripts on enrolled devices isn’t new, and I’ve used it increasingly over the months to bridge a few small gaps, usually where a particular policy setting I needed wasn’t natively available, such as setting ‘password does not expire’ on a local account that was created using an Intune policy, but I’ll cover that end-to-end in another post as it’s handy to know.

So, as mentioned, the actual pushing out of a script with Intune is pretty straightforward; centrally capturing the output isn’t. The Intune portal provides a very basic reporting function that displays whether the script deployed successfully or not, that is to say, whether it exited with the defined exit code, but you do not get anything remotely verbose in the portal; for example, it does not capture any directed output.

So, in a decentralised environment with no common LAN or server infrastructure to write results to, what are the options?

I did once experiment, unsuccessfully, with writing to Azure Blob Storage, and have contemplated something similar using Azure Files and mapping drives within the script. But for now, after consulting the MDM community on Twitter, I came across a great article by Travis Roberts on writing the output of a PowerShell script to Azure Table Storage.

Note, the script that follows is directly adapted from Travis Roberts’ original blog post, so please check that out. Thanks, Travis.

Right, so let’s quickly cover the actual use case here: why do I need to write a PowerShell script and push it out via Intune, and why am I so concerned with capturing the results?

The answer is that I was contacted by a customer we had deployed Intune for as part of a wider Microsoft 365 adoption, asking whether we could use Intune to quickly report on whether a particular Microsoft hotfix had been installed on all of their Windows 10 devices in response to a recent threat – the short answer was no, we couldn’t. Yes, we were using Intune to define Windows Update settings, but again the reporting was not detailed enough.

The result was the below. When executed, this script searches for the presence of a defined hotfix by the KB it was wrapped in and writes the result to a very simple Table in Azure, which I could monitor and report from using Storage Explorer.

In readiness to utilise this script, you must have already created a Table within an Azure Storage Account and generated a SAS key valid for a defined window.

I’ve used a generic Partition Key of KBCheck to satisfy the data integrity constraints.
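
If you haven’t got those prerequisites in place yet, the below is a rough sketch using the Az.Storage module (the account name, key, table name, permissions and expiry window are all placeholders/assumptions, adjust to suit):

# Create the table and a time-limited account SAS token scoped to the Table service
$ctx = New-AzStorageContext -StorageAccountName 'StorageAccountName' -StorageAccountKey 'StorageAccountKeyHere'
New-AzStorageTable -Name 'TableName' -Context $ctx
New-AzStorageAccountSASToken -Service Table -ResourceType Service,Container,Object -Permission "rau" -ExpiryTime (Get-Date).AddDays(30) -Context $ctx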

# Step 1, use Start-Transcript to capture the execution to a text file and set variables for connecting to Azure Table

Start-Transcript -Path "C:\TempLogs\KB4541338Check-$(((get-date).ToUniversalTime()).ToString("yyyyMMddThhmmssZ")).log"

$storageAccountName = 'StorageAccountName'
$tableName = 'TableName'
$sasToken = 'SASKeyHere' 
$dateTime = get-date
$partitionKey = 'KBCheck'

# Step 2, Connect to Azure Table Storage
$storageCtx = New-AzureStorageContext -StorageAccountName $storageAccountName -SasToken $sasToken
$table = (Get-AzureStorageTable -Name $tableName -Context $storageCtx).CloudTable

# Step 3, Check for presence of hotfix
Write-Host "Checking for KB4541338"

if (Get-HotFix -Id KB4541338 -ErrorAction SilentlyContinue) {
    $PatchCheck = "Installed"
    Write-Host "KB4541338 Installed"
}
else {
    $PatchCheck = "Not Installed"
    Write-Host "KB4541338 Missing"
}
# Step 4, Write data to Table Storage and end transcript.

Write-Host "Writing to Azure Table $table"

Add-StorageTableRow -table $table -partitionKey $partitionKey -rowKey ([guid]::NewGuid().tostring()) -property @{
'LocalHostname' = $env:computername
'PatchStatus' = $PatchCheck
} | Out-Null

Stop-Transcript

The output of the script when viewed in Azure Storage Explorer is very simple: it shows the device hostname and either ‘Installed’ or ‘Not Installed’ depending on whether the KB was found.

This script can easily be used as a framework for capturing other simple text-based outputs using Intune and Powershell, simply update the body of the script in step 3.
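
For example, here’s a quick, purely illustrative swap of step 3 that records whether the Windows Update service is running instead of checking for a hotfix (everything else, including the $PatchCheck variable written in step 4, stays the same):

# Step 3 (alternative), Check whether the Windows Update service is running
Write-Host "Checking Windows Update service state"

if ((Get-Service -Name wuauserv).Status -eq 'Running') {
    $PatchCheck = "Running"
    Write-Host "Windows Update service running"
}
else {
    $PatchCheck = "Not Running"
    Write-Host "Windows Update service not running"
}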

Thanks again to Travis Roberts for the original script.

QuickFix > Set Working Directory on WVD RemoteApp

This one might be old news to some, but I thought I’d share it as it got me out of a jam earlier this week.

I needed to create a RemoteApp in WVD (Spring Release) for a proof of concept I was working on, however, this application required a working directory to be set for it to launch correctly.

I’ll stress this was a quick fix and there may be a more elegant way to achieve the same result but I was against the clock at the time.

The quick fix was to create a batch file locally on the WVD instance with the below syntax:

start /d c:\App-Working-Directory-Here c:\Executable-To-Launch-Here.exe

eg.

start /d c:\EstGrp\CM\830\CM\USER c:\EstGrp\CM\BINW\cmw32.exe

Once the batch file is saved, from the WVD management UI create a new RemoteApp in the relevant Application Group and enter the path to the batch file in the Application Path, then continue to enter the Display Name and set the Icon Path to the path of the executable as normal:

WVDv2-Batch

As expected, when you launch the application from the RD Client or WVD web interface (from the correct URL, thanks to Tom and Marcel 🙂), the batch file will launch, albeit briefly, and then launch the required app.

Note, the same could be achieved using PowerShell with the below syntax:

Start-Process -FilePath C:\EstGrp\CM\BINW\cmw32.exe -WorkingDirectory C:\EstGrp\CM\830\CM\USER

If you have been in a similar situation, needing to set the Working Directory or some other attribute, and have a more elegant solution, please feel free to share it, either in the comments below or on Twitter.

WVD Spring Release > What’s New?

This post will focus on the changes made to the Windows Virtual Desktop (WVD) service by way of the Spring Release that went into Public Preview on Thursday 30th April 2020.

For ease, I’ll refer to the pre Spring Release WVD as version 1, and post Spring Release WVD as version 2.

Firstly, I’ve been a massive fan of WVD from the early days and was lucky enough to attend the Microsoft Airlift event in Seattle back in late September 2019 in which Microsoft officially announced its general availability – since then I’ve spent a considerable amount of time working with WVD, deploying it not only for internal use but for a number of large organisations, all with global workforces, who were looking to take advantage of its capabilities.

Now, I’m not going to say WVD version 1 was perfect, far from it; however, I was mindful that it was a first-generation product, and Microsoft made it clear that their initial efforts were focused on building the multi-session capabilities into Windows 10, and that additional functionality and supporting services, such as the much-requested native Azure portal-based management interface, would follow soon.

Like many others in a similar position, I am extremely grateful to Marcel Meurer for producing his fantastic WVD Admin tool, this has saved me several hundred hours over the past months – to be honest, even with the newly released management interface (which I’ll cover later) I’m going to be hard pushed to not use WVD Admin going forward, it’s truly that good.

I’ve kept an eager eye on the WVD roadmap for months now, hoping that the product team would make the new management UI available sooner rather than later; apologies to the guys at Microsoft who I’ve pestered no end, especially Jim and Tom 🙂

So on Thursday 30th April Microsoft made the announcement that the Spring Release of WVD had moved out of private, and into public preview!

All I was expecting was the aforementioned new management interface, what we received, however, was essentially an entirely new WVD architecture!

My initial thought around the re-architecting was: why? Why did WVD, a product less than 12 months old, need such an overhaul? Having now spent a few days digging into version 2 it’s evidently clear; in hindsight, it’s likely that I’d used WVD version 1 so much I’d become very robotic in the way I managed some of its less-than-elegant nuances. That said, I’ve written close to 20 individual PowerShell scripts in that time to address those nuances, so that should have given me some food for thought…

Anyway, what’s changed and why is such good news for those deploying, managing and using WVD going forward?

Firstly, WVD is now a first-party Azure service. For those who have deployed version 1, that means no more going through the steps of registering the app service in Azure, granting consent (twice!) and running lines of PowerShell to create a tenant; in fact, Microsoft has done away with the concept of a tenant altogether!

Christiaan Brinkhoff provided a great diagram, below, showing how the WVD model has evolved in version 2; note the lack of tenant group and tenant.

wvd-v2-arch

In version 2 Microsoft has removed the hard dependency between the host pool and the app or desktop groups. Each component of the WVD architecture, that is, the workspace, host pool and app group, is now modular, in line with Azure Resource Manager (ARM), and can be linked together to present resources to end users (and groups – the Spring Release also brought this long-awaited functionality!).

Christiaan’s latest blog post on the Spring Update is essential reading, so please check it out. Freek Berson‘s post on LinkedIn is brilliant too, not to forget Dean Cefola‘s Azure Academy YouTube channel – these guys provide an immense amount of content, information and inspiration for the wider WVD, EUC and Azure tech communities.

In place of the deprecated Tenant is now the Workspace. A Workspace is a collection of application and desktop resources presented to the end user; in the screenshot below, Core Apps, Finance Team and Tech Team are all Workspaces that the test user is assigned to.

wvd-v2-workspace-2

Where this differs from version 1 is that in version 2 a single host pool can be associated to an application group and a desktop group at the same time, whereas in version 1 if you wanted to present applications and a desktop to the same user you would have had to deploy two separate host pools and associated virtual machines.

The relationship between components in the version 2 architecture is shown below.

wvd-v2-basic

As mentioned previously the Spring Release also brought the ability to use Azure AD groups (sync’d from AD) to assign users to WVD resources.

Users or groups are assigned access to the application or desktop group, those are associated with a host pool (and its underpinning session hosts, not shown in the diagram), and it is the host pool that is assigned to the Workspace.

Even better than that, a single Workspace can be made up of multiple host pools!

Say you create a Workspace for the Graphic Design team at Company X, and amongst the standard productivity tools the team use a resource-intensive application, so you spin up an F-series VM or similar and install the application on it. You want that application presented to the team in the same Workspace as their other apps, but you don’t want to pay for all F-series VMs – with WVD version 2 you can spin up a host pool of cheaper general-purpose session hosts for MS Office etc. and present the collective application stack to the team from a single Workspace, as below.

wvd-v2-basic2

Note, multiple app groups can also be assigned to a single host pool.

So that concludes this article on the Spring Release of WVD; however, I’ll share more as I dig further into version 2.

Lastly, for those wondering, as I did, how to officially migrate your version 1 WVD workloads to WVD version 2, the answer is to sit tight for now; Microsoft will be releasing tooling soon to uplift non-ARM deployments into ARM and version 2, as per Christiaan’s tweet below.

WVD-v2-Tweet

…that said, using version 1.5.0.0 of WVD Admin it is quick and easy to use existing VM images to deploy into version 2 host pools, just follow this post.

Bulk Updating UPNs

As part of a recent project I had to migrate 1500 user objects between two AD domains and then sync them with Azure AD – I’ll cover that entire project and its challenges, of which there were a handful, in another post.

As part of that migration, I needed to bulk update the UPNs of the newly created accounts to align with the custom domain name configured in Azure AD. That begs the question: why didn’t I just stand up the new AD domain with the appropriate suffix in the first instance? The honest answer is that this project was extremely fast-moving and we had to make decisions on the fly whilst we were working on the strategic architecture; again, this will become clearer when I cover the project in more depth.

Anyway, with 1500 user objects to update I certainly wasn’t doing that manually; I used the below script to target all users in a specific OU and change their UPN.

Note, if you are going to use this, please ensure you have added the new UPN suffix in AD Domains and Trusts first; details on how to do that are here.
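
If you’d rather add the suffix from PowerShell than through the AD Domains and Trusts GUI, something like the below should do it (the suffix is a placeholder; run it as an account with rights to modify the forest):

# Add the new UPN suffix to the forest
Import-Module ActiveDirectory
Get-ADForest | Set-ADForest -UPNSuffixes @{Add="newsuffix.co.uk"}

With the suffix in place, the bulk update script below does the rest.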

Import-Module ActiveDirectory

$oldSuffix = "oldsuffix.local"
$newSuffix = "newsuffix.co.uk"
$ou = "OU=New-Accounts,DC=oldsuffix,DC=local"

Get-ADUser -SearchBase $ou -Filter * | ForEach-Object {
    $newUpn = $_.UserPrincipalName.Replace($oldSuffix,$newSuffix)
    $_ | Set-ADUser -UserPrincipalName $newUpn
}
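
If you want a dry run first, the below optional sketch (using the same variables as above) lists what each UPN would become without changing anything:

# Preview the old and new UPNs before committing
Get-ADUser -SearchBase $ou -Filter * | Select-Object SamAccountName, UserPrincipalName, @{Name='NewUPN';Expression={ $_.UserPrincipalName.Replace($oldSuffix,$newSuffix) }}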

Understanding Intune Policies

This blog post will address, and hopefully demystify, a topic I struggled with when first starting out with Intune, or Endpoint Manager to use its new moniker: the difference between Configuration, Compliance and Security Policies, and in which scenarios to use them. So, let’s dig into it; I’ll cover each policy type in turn, in an order that should hopefully help tie the relationship between the policies together.

Intune-Policies

Configuration Policies

The best way to think of a Configuration Policy is as Intune’s implementation of Group Policy; in fact, Microsoft has engineered Configuration Policies in such a way as to allow you to import and utilise ADMX files in the same way you would with a traditional Group Policy Object.

Intune-Config

Configuration Policies are therefore what you would use to apply predefined settings to a user or device, such as defining a set homepage or other browser settings in IE and Edge (and even Chrome and other browsers, but that’s for another blog!), or enforcing a custom desktop wallpaper or lock-screen behaviour in Windows 10. Like Group Policy Objects, Configuration Policies can be applied to a targeted set of users or devices using groups within Azure AD.

Intune-Config2

Security Policies

Security Policies, or Security Baselines as they are interchangeably referred to, are pre-configured Windows settings that help you apply a known group of settings and default values recommended by Microsoft. That is to say, when you create a security baseline, you’re creating a template that consists of hundreds of individual Configuration Policies.

Intune-SecurityBaseline

Microsoft routinely releases a new Security Baseline which is a thorough pre-defined set of policies covering all facets of the target technology, such as Windows 10, that can be quickly and easily deployed to secure your environment.

Note, Security Baselines are exhaustive and I would advise caution over applying them without careful testing; they are, however, extremely useful for quickly locking down an environment to a given standard.

Compliance Policies

Compliance Policies are used to evaluate a device’s compliance against a pre-defined baseline, such as the requirement for a device to be encrypted or to be within a defined minimum OS version.

Intune-Compliance2

Compliance Policies are a good tool for alerting on configuration drift, and when deployed alongside Conditional Access Policies can control what a device can and cannot access should it be deemed non-compliant; for example, non-compliant devices can be blocked from accessing corporately owned data.

Intune-Compliance

Summary

Each policy type, when deployed correctly on its own, can add great value in securing a plethora of OS and device types. However, when configured and deployed together, they can not only enforce an entire collection of settings championed by Microsoft, but also provide the assurance that, should a device fall foul of the required compliance baseline, that device and its user would not be able to access corporate resources and inadvertently open the company up to malicious exploit.

Finally, I’d highly recommend following Intune Training on YouTube where Steve and Adam (and others) share some great content on all things Intune.

Intune-Training

I also maintain a List on Twitter for the key folk I follow in the MDM space, feel free to follow that here.