Azure DevOps Dataverse solution deployment - Solution upgrade vs solution import

What's the difference and how can you automate doing the right thing?


Setting the scene

In a lot of guides to setting up Azure DevOps pipelines to deploy Dataverse solutions, you’ll see a simple “Import Solution” step added using the Power Platform Build Tools “Import Solution” task, with its settings left at their defaults.

What does this do?

It imports the solution using the default mode. In this default mode everything works just fine if the solution doesn’t already exist in the target environment, but what happens if the solution is already present?

If the solution is already present, this task uses the Update method by default. Here’s how Microsoft describe that in the Dataverse docs:

Update
 This option replaces your solution with this version. Components that aren’t in the newer solution won’t be deleted and remain in the system. Be aware that the source and destination environment may differ if components were deleted in the source environment.

Note that these docs state that Upgrade is the default, but that’s only the case in the maker portal - for the Build Tools task, the default is Update.

So here we can see the problem in an upgrade scenario - where the newer version of the solution no longer contains something the older version did (because we deleted it during development), that component will not be deleted by the deployment!

Here’s an example:

| Component | Solution v1.0 | Solution v2.0 |
| --- | --- | --- |
| Cloud Flow 1 | Present | Present |
| Cloud Flow 2 | Present | - (no longer present) |

We had 2 flows in v1.0, but we deleted one of them in v2.0.

Here’s what happens in a target environment where v1.0 and then v2.0 are deployed using Update:

| Component | Environment after deployment of Solution v1.0 | Environment after deployment of Solution v2.0 using Update |
| --- | --- | --- |
| Cloud Flow 1 | Present | Present |
| Cloud Flow 2 | Present | Still present, but should not be! |

The environments differing like this can cause all sorts of issues and differences in behaviour. For example, old processes and business rules that no longer apply continue to be executed. Furthermore, each environment might even vary depending on the exact sequence of versions that have been deployed. So obviously, we don’t want that!

Side note:

You’ll sometimes see teams who understand that this is the case and, rather than deleting components during development, use a convention to rename and ‘disarm’ them (e.g. remove all the steps in a Cloud Flow and rename it to ‘OBSOLETE…’). Historically, this was the only easy option, but it leaves behind a mess. We can do better!

So what is the solution?

The Power Platform Build Tools “Import Solution” task has a parameter we can set, labelled as:

Stage and Upgrade - Import the managed solution and immediately apply it as an upgrade.

So we just tick this box and we’re done?

Unfortunately not. If you enable this option, the task will no longer work when the target environment doesn’t already have a version of the solution installed. Now upgrading works correctly, but an initial install doesn’t!

So what are the options?

  1. We could edit the pipeline every time we need to change this.
    That’s not good practice if you’ve spent days testing: changing anything in your deployment process after you’ve tested it adds risk.
  2. We could try to make this parameter dynamic, so it’s on when needed and off when not.
    This is the best option, because our pipeline will always do the right thing.

How do we make it dynamic?

PowerShell to the rescue!

We can retrieve the record from the solution table by uniquename and see if the solution is already present in the current environment:

# Always use this in PS scripts! It makes errors stop the script instead of being silently ignored.
$ErrorActionPreference="Stop"

Install-Module -Scope CurrentUser Rnwood.Dataverse.Data.PowerShell

# This will work for running the script interactively. See post below for how to use in AzDO Pipelines.
# https://rnwood.co.uk/posts/getting-started-with-power-shell-in-az-do-for-power-platform/
$connection = Get-DataverseConnection -url https://someenv.crm11.dynamics.com -interactive

# Get the details of the solution by uniquename if it's already installed, or $null
$installedsolution = Get-DataverseRecord -connection $connection -TableName solution -filter @{uniquename="SomeSolution"}

# We need to use solution upgrade if solution is already installed
$usesolutionupgrade = $installedsolution -ne $null
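
It’s also worth logging which path was taken so it shows up in the pipeline output. A small optional addition (these Write-Host lines are my own, not part of the module):

# Optional: log the decision so it is visible in the AzDO job output
if ($usesolutionupgrade) {
   Write-Host "Solution is already installed - will import using Stage and Upgrade"
} else {
   Write-Host "Solution is not installed yet - will import using the default mode"
}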

Once we have the $usesolutionupgrade flag, we can then use the pac CLI to do the right thing:

if ($usesolutionupgrade) {
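   # Solution already exists: stage the new version and apply it as an upgrade, removing components that are no longer in the solution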
   pac solution import --async --path somesolution.zip --activate-plugins --publish-changes --stage-and-upgrade
} else {
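   # First-time install: a plain import is all that's needed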
   pac solution import --async --path somesolution.zip --activate-plugins --publish-changes 
}

Finally, every time you run a native command (an .exe) in PS, you are responsible for checking whether it was successful:

if ($LASTEXITCODE -ne 0) {
   throw "Solution import failed"
}

This will ensure that the script stops and reports an error to whatever is running it - AzDO in this case.
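
Putting it all together, here’s a minimal sketch of the whole script under the same assumptions as the snippets above (the environment URL, solution unique name and zip path are placeholders that you’d normally pass in as parameters or pipeline variables rather than hard-coding):

# Stop on any error
$ErrorActionPreference="Stop"

Install-Module -Scope CurrentUser Rnwood.Dataverse.Data.PowerShell

# Interactive auth shown here - see the post linked above for using this in AzDO pipelines
$connection = Get-DataverseConnection -url https://someenv.crm11.dynamics.com -interactive

# Does the target environment already contain this solution?
$installedsolution = Get-DataverseRecord -connection $connection -TableName solution -filter @{uniquename="SomeSolution"}
$usesolutionupgrade = $installedsolution -ne $null

if ($usesolutionupgrade) {
   pac solution import --async --path somesolution.zip --activate-plugins --publish-changes --stage-and-upgrade
} else {
   pac solution import --async --path somesolution.zip --activate-plugins --publish-changes
}

# pac is a native command, so we must check its exit code ourselves
if ($LASTEXITCODE -ne 0) {
   throw "Solution import failed"
}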

Footnote: Why not do this in the Pipeline steps directly?

To my knowledge, it’s not possible to do this using the standard Power Platform Build Tools tasks. There is no task that can determine whether a solution is already installed in the target environment.

It is likely possible with third-party tasks. But consider that this is unlikely to be the only condition or piece of extra logic you’re going to need. Using a PS script allows you to express a wide range of such logic easily.
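
For example, here’s a sketch of one such extra rule. It is entirely hypothetical, including the variable names: TARGET_IS_PROD and PROD_DEPLOY_APPROVED would be pipeline variables you define yourself, not anything provided by the Build Tools or the pac CLI.

# Hypothetical extra rule: refuse to deploy to production without an explicit approval variable
if ($env:TARGET_IS_PROD -eq "true" -and $env:PROD_DEPLOY_APPROVED -ne "true") {
   throw "Refusing to deploy to production without PROD_DEPLOY_APPROVED set to 'true'"
}

Because it’s all just PowerShell, rules like this are only a few lines each and live alongside the import logic in the same script.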