Power Platform AzDO Pipelines - Export, Build, Import and Deploy

Setting up complete Azure DevOps pipelines for exporting, building, importing, and deploying Power Platform solutions across multiple environments

This article builds on the foundations covered in Getting started with PowerShell in AzDO for Power Platform. Make sure you’ve read that first to understand how to connect to your environments using service connections.

Now that you’ve got the basics of running PowerShell in Azure DevOps pipelines and connecting to your Dataverse environments, it’s time to put together a complete ALM (Application Lifecycle Management) solution. We’ll be using the Rnwood.Dataverse.Data.PowerShell module, which provides comprehensive cmdlets for solution management as well as data operations and the other building blocks we need for a powerful end-to-end process.

Beyond the Basics

Most Power Platform ALM examples you’ll find online cover only the fundamentals and have significant gaps that will cause problems in real projects. This article builds a production-ready foundation that addresses these common shortcomings:

| Common Issue | Basic Examples | This Article |
| --- | --- | --- |
| Import vs Upgrade | Use import, which leaves orphaned components | Use staged upgrades that cleanly remove deleted components |
| Multiple Solutions | Handle one solution at a time | Support multiple solutions with correct dependency ordering |
| Environment Variables | Hardcode values or store in source control | Pull from Azure DevOps variable libraries per environment |
| Connection References | Manual configuration after deployment | Automatic mapping from variable libraries |
| Branching | Assume a single main branch only | Support feature branches and release branches with separate environments |
| Version Numbers | No versioning or manual updates | Automatic version incrementing with branch-based base versions |
| Deleted Components | PAC CLI unpack doesn’t detect deletions | Clean folder before unpack so git detects removed files |
| Deployment Scripts | Inline YAML that drifts from tested builds | Scripts included in build artifacts for reproducibility |

By the end of this article, you’ll have pipelines that handle these scenarios correctly, avoiding the painful surprises that come from simpler approaches.

Reminder - Why Azure DevOps instead of Power Platform Pipelines? If you’re wondering why we’re using Azure DevOps Pipelines rather than the built-in Power Platform Pipelines, it’s because Power Platform Pipelines has significant limitations for professional projects - it can only handle one solution at a time, can’t manage non-solution-aware components, lacks source control integration, and offers limited extensibility. Read more about why Power Platform Pipelines isn’t powerful enough for many pro projects.

Reminder - Why PowerShell? You might be wondering why we use PowerShell scripts rather than just the Power Platform Build Tools tasks or PAC CLI directly. While those tools are great for simple scenarios, PowerShell gives us the flexibility to express complex logic, handle multiple solutions with dependencies, and automate tasks that would otherwise require manual intervention. Learn more about why PowerShell is essential for Power Platform automation.

The Big Picture

We’re going to cover four key pipelines:

  1. Export Pipeline - Export and unpack solutions from your development environment into source control
  2. Build Pipeline - Automatically triggered when changes are committed, packing solutions into deployable assets
  3. Import Pipeline - Import solutions from source control into your development environment (the reverse of export)
  4. Deploy Pipeline - Deploy your solutions across multiple environments with staged approvals

Before diving into the details, let’s visualise how these four pipelines work together:

flowchart LR
    subgraph DEV["Development Environment"]
        D1[("Dataverse")]
    end
    
    subgraph EXPORT["Export Pipeline"]
        E1[Export Solutions]
        E2[Unpack to Files]
        E3[Commit to Git]
    end
    
    subgraph GIT["Source Control"]
        G1[("Git Repository")]
    end
    
    subgraph IMPORT["Import Pipeline"]
        I1[Pack Solutions]
        I2[Import to Dev]
    end
    
    subgraph BUILD["Build Pipeline"]
        B1[Pack Solutions]
        B2[Include Scripts]
        B3[Publish Artifacts]
    end
    
    subgraph ARTIFACTS["Build Artifacts"]
        A1[("Solution ZIPs + Deploy Scripts")]
    end
    
    subgraph DEPLOY["Deploy Pipeline"]
        direction TB
        T[Deploy to Test]
        T1[("Test DB")]
        U[Deploy to UAT]
        U1[("UAT DB")]
        P[Deploy to Prod]
        P1[("Production DB")]
        T --> T1
        T --> U
        U --> U1
        U --> P
        P --> P1
    end
    
    D1 --> E1 --> E2 --> E3 --> G1
    G1 --> I1 --> I2 --> D1
    G1 -->|commit triggers| B1 --> B2 --> B3 --> A1
    A1 --> T

And here’s the deployment flow within each environment:

flowchart TD
    subgraph STAGE["Stage Phase (dependency order)"]
        S1[Stage CoreSolution]
        S2[Stage ExtensionSolution]
        S3[Stage IntegrationSolution]
    end
    
    subgraph UPGRADE["Upgrade Phase (reverse order)"]
        U1[Upgrade IntegrationSolution]
        U2[Upgrade ExtensionSolution]
        U3[Upgrade CoreSolution]
    end
    
    subgraph POST["Post-Deployment"]
        P1[Publish Customisations]
        P2[Activate Processes]
    end
    
    S1 --> S2 --> S3 --> U1 --> U2 --> U3 --> P1 --> P2

Why Four Separate Pipelines?

You might wonder: why not just have a single pipeline that exports, builds, and deploys all in one go? There are several important reasons for separating these concerns:

Separation of Concerns

Each pipeline has a distinct purpose and trigger:

| Pipeline | When it runs | Who triggers it | Purpose |
| --- | --- | --- | --- |
| Export | On demand | Developer | Capture changes from dev into source control |
| Build | Automatically | Source control commits | Create deployable artifacts |
| Import | On demand | Developer | Import changes from source control into dev |
| Deploy | Automatically or on demand | Build completion or release manager | Deploy to target environments |

Why Do We Need an Import Pipeline?

The import pipeline is essential for several scenarios:

  • Branch switching - When you switch to a different branch to work on a new feature, you need to get your dev environment in sync with that branch’s version of the solutions
  • Merge conflict resolution - After resolving merge conflicts in source control, you need to import the merged changes back to your dev environment before continuing work
  • Team synchronisation - When another team member’s changes are merged into your branch, you need to import them to continue development
  • Recovery - If someone accidentally breaks the dev environment, you can import a known-good state from source control

Without an import pipeline, developers would have to manually import solutions, which is error-prone and time-consuming.

Version Control Benefits

By separating export from deployment:

  • Reviewable changes - The export pipeline commits changes to source control, allowing you to see exactly what changed before you deploy it, and to look back later at exactly what changed, when, and why
  • Rollback capability - You can deploy any previous build artifact, not just the latest export
  • Audit trail - Clear separation between “what changed” (commits) and “what was deployed” (releases)

Build Once, Deploy Many

The build pipeline produces artifacts that can be deployed to multiple environments:

  • No need to re-export or re-pack for each environment
  • Guarantees the exact same artifact is deployed everywhere
  • Reduces the risk of “it worked in test but not in prod”

Independent Scaling

Different teams and processes can own different pipelines:

  • Export - Owned by developers, run whenever they complete work
  • Build - Fully automated, runs on every commit
  • Import - Owned by developers, run when syncing from source control
  • Deploy - Controlled by release managers with approval gates

Branching Strategy

Starting Simple: If you’re just getting started, you can use just the main branch with a single development environment. Everything in this article works with that simpler setup - just ignore the feature and release branch sections until you need them. The multi-branch strategies described below are for teams that need parallel development or long-term support of multiple versions.

For more complex projects, you may need a branching strategy that supports parallel development and maintenance of released versions. Here’s a recommended approach:

gitGraph
    commit id: "Initial"
    commit id: "Feature A"
    branch feature/widgets
    commit id: "Add widget component"
    commit id: "Widget styling"
    checkout main
    commit id: "Feature B"
    merge feature/widgets id: "Merge widgets"
    commit id: "Feature C"
    branch release/1.0
    commit id: "1.0 Release prep"
    checkout main
    commit id: "Feature D (v2.0 work)"
    checkout release/1.0
    commit id: "1.0 Hotfix"
    checkout main
    commit id: "Feature E"

Branch Types and Their Purposes

| Branch Type | Pattern | Purpose | Dev Environment | Test Environment | Deploys To |
| --- | --- | --- | --- | --- | --- |
| Main | main | Active development for next major release | Dev-Main | Test-Main (recommended) or shared Test | Test → UAT → Prod |
| Feature | feature/{name} | Isolated development of new features | Dev-{name} | Test-{name} (optional) | Usually dev only |
| Release | release/{version} | Maintenance of released versions | Dev-release-{version} | Test-release-{version} (recommended) or shared Test | Test → UAT → Prod |

Per-Branch Test Environments (Recommended): While you can share a single Test environment across all branches, having separate test environments per branch (e.g., Test-Main, Test-release-1.0) is strongly recommended for teams working on multiple releases. This prevents conflicts where a build from main overwrites a build from release/1.0 that’s still being tested, and allows parallel testing of different versions. Start with shared environments if you’re new to this, then add per-branch test environments as you scale.

Which Environments Go With Which Branch?

Understanding which environments are used by which branches is key to managing your ALM process:

flowchart TB
    subgraph BRANCHES["Branches"]
        MAIN["main<br/>(BaseVersion = 2.0)"]
        FEAT["feature/widgets"]
        REL["release/1.0<br/>(BaseVersion = 1.0)"]
    end
    
    subgraph DEV_ENVS["Development Environments"]
        DEV_MAIN["Dev-Main<br/>(Environment-Dev-Main)"]
        DEV_WIDGETS["Dev-widgets<br/>(Environment-Dev-widgets)"]
        DEV_REL["Dev-release-1.0<br/>(Environment-Dev-release-1.0)"]
    end
    
    subgraph TEST_ENVS["Test Environments (Per-Branch Recommended)"]
        TEST_MAIN["Test-Main<br/>(Environment-Test-Main)"]
        TEST_REL["Test-release-1.0<br/>(Environment-Test-release-1.0)"]
    end
    
    subgraph SHARED_ENVS["Shared Environments"]
        UAT["UAT<br/>(Environment-UAT)"]
        PROD["Prod<br/>(Environment-Prod)"]
    end
    
    MAIN -->|Export/Import| DEV_MAIN
    FEAT -->|Export/Import| DEV_WIDGETS
    REL -->|Export/Import| DEV_REL
    
    MAIN -->|Build & Deploy| TEST_MAIN
    REL -->|Build & Deploy| TEST_REL
    TEST_MAIN --> UAT
    TEST_REL --> UAT
    UAT --> PROD

Development environments are specific to each branch - the environment name is derived from the branch:

  • main branch → Dev-Main environment → Environment-Dev-Main variable group
  • feature/widgets branch → Dev-widgets environment → Environment-Dev-widgets variable group
  • release/1.0 branch → Dev-release-1.0 environment → Environment-Dev-release-1.0 variable group

Test environments are recommended to be per-branch for active releases:

  • main branch → Test-Main environment → Environment-Test-Main variable group
  • release/1.0 branch → Test-release-1.0 environment → Environment-Test-release-1.0 variable group
  • This allows parallel testing of builds from different branches without conflicts

UAT and Production environments are typically shared across all branches:

  • Both main and release/* branches deploy to the same UAT → Prod chain
  • The version number distinguishes which release the build came from (e.g., 2.0.456 from main vs 1.0.789 from release/1.0)
  • At any given time, only one version should be promoted through UAT to Production

Feature Branches

Feature branches (feature/{name}) are used for developing new features in isolation:

  1. Create the branch - Branch from main when starting work on a new feature
  2. Create a dev environment - Each feature branch gets its own Dataverse environment (e.g., Dev-widgets for feature/widgets)
  3. Work in isolation - Export and import against the feature’s dev environment
  4. Merge to main - When the feature is complete, merge back to main
  5. Clean up - Delete the feature branch and optionally the dev environment

Feature branches are useful when:

  • Multiple developers need to work on different features simultaneously
  • A feature requires significant changes that could disrupt other development
  • You want to review and test changes before they reach the main branch

Release Branches

Release branches (release/{version}) are used for maintaining released versions:

  1. Create when releasing - When you’re ready to release version 1.0, create release/1.0 from main
  2. Continue development on main - Bump the BaseVersion on main to 2.0 for the next release
  3. Hotfixes on release - Bug fixes for 1.0 are made on release/1.0 and deployed from there
  4. Cherry-pick if needed - Important fixes may be cherry-picked back to main

Release branches are useful when:

  • You need to support multiple versions simultaneously (e.g., 1.0 for current customers while developing 2.0)
  • Hotfixes need to be deployed without including unfinished 2.0 features
  • You want clear separation between maintenance work and new development

Simple vs Advanced: Start Where You Are

| Scenario | Recommended Approach | Dev Environment(s) |
| --- | --- | --- |
| Solo developer, single version | Just use main branch | Dev-Main |
| Small team, single version | Just use main, share dev environment or use feature branches | Dev-Main (shared) or Dev-{feature} per developer |
| Multiple parallel features | Feature branches with separate dev environments per feature | Dev-Main + Dev-{feature} per feature |
| Released product with ongoing development | Release branches for maintenance, main for next version | Dev-Main + Dev-release-{version} per release |

You can always evolve your branching strategy as your project grows - the pipelines in this article support all of these approaches.

Setting Up Variable Libraries

Before we dive into the pipelines, we need to set up variable libraries in Azure DevOps. These will store environment-specific configuration like URLs and credentials, so we can keep our pipeline definitions clean and reusable.

In Azure DevOps, go to Pipelines > Library and create a variable group for each environment:

| Variable Group Name | Variables |
| --- | --- |
| Environment-Dev-Main | EnvironmentUrl, plus environment variables and connection refs |
| Environment-Test | EnvironmentUrl, plus environment variables and connection refs |
| Environment-UAT | EnvironmentUrl, plus environment variables and connection refs |
| Environment-Prod | EnvironmentUrl, plus environment variables and connection refs |

For example, the Environment-Test group might contain:

  • EnvironmentUrl = https://myorg-test.crm11.dynamics.com
  • ENVVAR_new_apiurl = https://api.test.example.com (prefixed with ENVVAR_)
  • ENVVAR_new_apikey = test-api-key-12345 (🔒 mark as secret!)
  • CONNREF_new_sharepointconnection = 12345678-1234-1234-1234-123456789012 (prefixed with CONNREF_)

Naming Convention: We use uppercase prefixes ENVVAR_ and CONNREF_ followed by the schema name. Azure DevOps converts all variable names to uppercase in the environment, so using uppercase in the library makes the pattern clearer.

Tip: Using variable libraries means you only need to update URLs and configuration in one place when environments change, rather than hunting through multiple pipeline files.

Security note: Mark secret values as secret variables in Azure DevOps. For production environments, consider linking your variable group to Azure Key Vault for enhanced security.

Branch-Based Naming Convention for Dev Environments

For pipelines that work with development environments (Export and Import), we use a branch-based naming convention to automatically determine which environment to use. This is especially useful when you have multiple development branches, each with its own Dataverse environment.

The convention works like this:

| Branch Name | Environment Name | Variable Group | Service Connection |
| --- | --- | --- | --- |
| main | Dev-Main | Environment-Dev-Main | Dev-Main Environment Connection |
| feature/widgets | Dev-Widgets | Environment-Dev-Widgets | Dev-Widgets Environment Connection |
| feature/mobile | Dev-Mobile | Environment-Dev-Mobile | Dev-Mobile Environment Connection |

The scripts derive the environment name from the branch by:

  1. Taking the last segment of the branch name (e.g., widgets from feature/widgets, main from main)
  2. Prefixing with Dev- (e.g., Dev-Widgets, Dev-Main)

All branches use Environment-{EnvironmentName} for their variable group.
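
To make the rule concrete, here’s a minimal PowerShell sketch of the derivation. The Get-DevEnvironmentName helper is hypothetical and purely for illustration - the pipelines later in this article achieve the same thing with YAML template expressions:

# Hypothetical helper illustrating the branch-to-environment naming convention
function Get-DevEnvironmentName {
    param([string]$BranchName)

    # Take the last segment of the branch name ('feature/widgets' -> 'widgets', 'main' -> 'main')
    $segment = ($BranchName -split '/')[-1]

    # Capitalise the first letter and prefix with 'Dev-'
    $segment = $segment.Substring(0, 1).ToUpper() + $segment.Substring(1)
    return "Dev-$segment"
}

Get-DevEnvironmentName 'main'              # Dev-Main
Get-DevEnvironmentName 'feature/widgets'   # Dev-Widgets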

This approach has several benefits:

  • Self-service branching - Developers can create a new feature branch and matching environment without modifying pipeline files
  • Isolation - Each feature branch can have its own dev environment, preventing conflicts between parallel development streams
  • Convention over configuration - No need to update the pipeline every time you add a new development environment
  • Consistency - The naming pattern is predictable, making it easy to understand which environment a pipeline run will target

Setting up a new feature environment: When creating a new feature branch:

  1. Create a new Dataverse environment for the feature
  2. Create a variable group named Environment-Dev-{FeatureName}
  3. Create a service connection named Dev-{FeatureName} Environment Connection
  4. The export and import pipelines will automatically use the correct environment when run on that branch

Version Variable Libraries for Build Pipelines

In addition to environment-specific variable groups, we also use branch-based variable groups for build versioning. Each branch can have its own base version number, stored in a variable library that the build pipeline references.

| Branch Name | Variable Group | Variables |
| --- | --- | --- |
| main | Branch-Main | BaseVersion = 2.0 |
| release/1.0 | Branch-release-1.0 | BaseVersion = 1.0 |
| release/1.1 | Branch-release-1.1 | BaseVersion = 1.1 |

The build pipeline dynamically determines the variable group name from the branch name (replacing / with -), then uses the BaseVersion variable in that group to construct the full version number:

Build Version = $(BaseVersion).$(Build.BuildId)

For example:

  • A build on main with BaseVersion = 2.0 and Build ID 456 → version 2.0.456
  • A build on release/1.0 with BaseVersion = 1.0 and Build ID 457 → version 1.0.457

The main branch has a higher base version (2.0) because it represents the next major release being actively developed. Release branches (1.0, 1.1) have lower versions because they maintain older, stable releases with bug fixes only.

This approach has several benefits:

  • Semantic versioning by branch - Major/minor versions are tied to release branches, while the build ID provides uniqueness
  • No code changes for version bumps - When you create a new release branch, just create a matching variable group with the new base version
  • Visible in pipeline names - Both build and deploy pipelines incorporate the version number in their run names, making it easy to identify which version is currently executing
  • Consistent numbering - The same base version applies to all builds from that branch

Setting up a new release branch: When creating release/1.0:

  1. Create a variable group named Branch-release-1.0
  2. Add a variable BaseVersion with value 1.0
  3. All builds from that branch will now use 1.0.{BuildId} as their version

The Export Pipeline

The export pipeline is typically run manually when you want to capture the current state of your development environment into source control. It exports one or more solutions and unpacks them into a folder structure that’s friendly for source control.

Why Unpack Solutions?

When you export a solution as a .zip file, it’s essentially a binary blob - you can’t see what changed between versions. By unpacking the solution into its component files (XML, JavaScript, etc.), you get:

  • Readable diffs - See exactly what changed in each commit
  • Merge conflict resolution - Handle conflicts at the component level
  • Code reviews - Review changes before they’re merged

Automatic Version Incrementing

When you export a solution that has changes, the script automatically increments the solution’s version number. This happens in two places:

  1. In the Dataverse environment - The solution version is incremented before export, so the environment reflects the new version
  2. In the unpacked solution files - The exported files contain the new version number

This ensures that:

  • Deployments are trackable - You can see which version is deployed to each environment
  • Upgrades work correctly - Dataverse requires a higher version number to perform an upgrade
  • History is preserved - Each export creates a new version, making it easy to trace changes

The version format follows semantic versioning: Major.Minor.Build.Revision (e.g., 1.0.0.5 becomes 1.0.0.6). The script increments the revision number (the last segment) for each export with changes.

The Export Script

First, create the export script at scripts/export.ps1. This keeps the logic in a maintainable PowerShell script:

scripts/export.ps1
param(
    [Parameter(Mandatory=$true)]
    [string]$EnvironmentUrl,
    [Parameter(Mandatory=$true)]
    [string]$ClientId,
    [Parameter(Mandatory=$true)]
    [string]$ClientSecret,
    [Parameter(Mandatory=$true)]
    [string]$OutputPath,
    [Parameter(Mandatory=$true)]
    [string]$SourcesPath
)

$ErrorActionPreference = "Stop"

# Install the PowerShell module
Install-Module -Name Rnwood.Dataverse.Data.PowerShell -Force -Scope CurrentUser

# Connect to Dataverse and set as default
Get-DataverseConnection `
    -url $EnvironmentUrl `
    -ClientId $ClientId `
    -ClientSecret $ClientSecret `
    -SetAsDefault

# Define solutions to export
$solutions = @(
    @{ Name = 'CoreSolution'; Folder = 'solutions/CoreSolution' },
    @{ Name = 'ExtensionSolution'; Folder = 'solutions/ExtensionSolution' },
    @{ Name = 'IntegrationSolution'; Folder = 'solutions/IntegrationSolution' }
)

foreach ($solution in $solutions) {
    Write-Host "Exporting $($solution.Name)..."

    # Export solution to a temp zip file
    $tempZip = "$OutputPath/$($solution.Name)_temp.zip"
    Export-DataverseSolution `
        -SolutionName $solution.Name `
        -OutFile $tempZip

    # Clear the solution folder first to remove deleted components
    # (PAC unpack only overwrites - it doesn't remove files that no longer exist)
    $solutionFolder = "$SourcesPath/$($solution.Folder)"
    if (Test-Path $solutionFolder) {
        Remove-Item -Path $solutionFolder -Recurse -Force
    }

    # Unpack to the solution folder
    pac solution unpack `
        --zipfile $tempZip `
        --folder $solutionFolder `
        --packagetype Both `
        --allowWrite true

    # Use git to check if there are any changes (including deleted files)
    Push-Location $SourcesPath
    $gitStatus = git status --porcelain $($solution.Folder)
    $hasChanges = $null -ne $gitStatus -and $gitStatus.Length -gt 0
    Pop-Location

    if ($hasChanges) {
        Write-Host "Changes detected in $($solution.Name), incrementing version..."

        # Get current solution record from Dataverse
        $solutionRecord = Get-DataverseRecord -TableName solution -Filter @{
            uniquename = $solution.Name
        } | Select-Object -First 1

        if (-not $solutionRecord) {
            Write-Host "Warning: Solution $($solution.Name) not found in Dataverse, skipping version increment"
            continue
        }

        # Parse and increment the version (format: Major.Minor.Build.Revision)
        $versionParts = $solutionRecord.version -split '\.'

        # Ensure we have at least 4 parts (pad with zeros if needed)
        while ($versionParts.Count -lt 4) {
            $versionParts += '0'
        }

        $versionParts[3] = [int]$versionParts[3] + 1
        $newVersion = $versionParts -join '.'

        Write-Host "Updating version from $($solutionRecord.version) to $newVersion"

        # Update the solution version in Dataverse
        $solutionRecord | Set-DataverseRecord -Values @{ version = $newVersion }

        # Re-export with the new version
        Export-DataverseSolution `
            -SolutionName $solution.Name `
            -OutFile "$OutputPath/$($solution.Name).zip"

        Write-Host "Unpacking $($solution.Name) with new version..."

        # Clear and unpack the solution with new version
        Remove-Item -Path $solutionFolder -Recurse -Force
        pac solution unpack `
            --zipfile "$OutputPath/$($solution.Name).zip" `
            --folder $solutionFolder `
            --packagetype Both `
            --allowWrite true

        Write-Host "$($solution.Name) exported with version $newVersion"
    } else {
        Write-Host "No changes detected in $($solution.Name), skipping..."
    }

    # Clean up temp files
    Remove-Item -Path $tempZip -Force -ErrorAction SilentlyContinue
}

Write-Host "Export complete!"

The Pipeline Definition

Create a file called export-pipeline.yml in your repository.

Creating the Pipeline in Azure DevOps: After creating the YAML file, you need to add it as a pipeline in Azure DevOps. Go to Pipelines > New Pipeline, select your repository, then choose Existing Azure Pipelines YAML file and select your YAML file. See Microsoft’s documentation for detailed instructions.

trigger: none # Manual trigger only

pool:
  vmImage: 'ubuntu-latest'

variables:
  # Derive environment name from branch (main -> Dev-Main, feature/widgets -> Dev-Widgets)
  environmentName: Dev-${{ replace(replace(variables['Build.SourceBranchName'], 'feature/', ''), '/', '-') }}

stages:
- stage: Export
  displayName: 'Export Solutions'
  variables:
  - group: Environment-$(environmentName)
  jobs:
  - job: ExportSolutions
    displayName: 'Export and Unpack Solutions'
    steps:
    - checkout: self
      persistCredentials: true

    - task: microsoft-IsvExpTools.PowerPlatform-BuildTools.tool-installer.PowerPlatformToolInstaller@2
      displayName: 'Install Power Platform Build Tools'

    - task: microsoft-IsvExpTools.PowerPlatform-BuildTools.set-connection-variables.PowerPlatformSetConnectionVariables@2
      displayName: 'Set Connection Variables'
      name: connectionVariables
      inputs:
        authenticationType: 'PowerPlatformSPN'
        PowerPlatformSPN: '$(environmentName) Environment Connection'

    - pwsh: |
        & "$(Build.SourcesDirectory)/scripts/export.ps1" `
          -EnvironmentUrl "$(EnvironmentUrl)" `
          -ClientId "$(connectionVariables.BuildTools.ApplicationId)" `
          -ClientSecret "$(connectionVariables.BuildTools.ClientSecret)" `
          -OutputPath "$(Build.ArtifactStagingDirectory)" `
          -SourcesPath "$(Build.SourcesDirectory)"
      displayName: 'Run Export Script'

    - pwsh: |
        git config user.email "pipeline@azuredevops.com"
        git config user.name "Azure DevOps Pipeline"
        git add -A

        # Check if there are changes to commit
        $changes = git status --porcelain
        if ($changes) {
          git commit -m "Export solutions from $(environmentName) environment (version incremented)"
          git push origin HEAD:$(Build.SourceBranchName)
          Write-Host "Changes committed and pushed"
        } else {
          Write-Host "No changes to commit"
        }
      displayName: 'Commit and Push Changes'
      workingDirectory: $(Build.SourcesDirectory)

Key Points

  • Automatic Version Increment - If the exported solution has changes compared to the current source, the script increments the version number in both Dataverse and the exported files.
  • Git-Based Change Detection - The script uses git status to detect actual changes after unpacking, which is simpler and more reliable than file hash comparison. Git already knows how to compare files and handle line endings, timestamps, etc.
  • Clean Unpack - The solution folder is cleared before unpacking to ensure deleted components are removed from source control. PAC CLI’s unpack only overwrites existing files - it doesn’t remove files that no longer exist in the solution.
  • Branch-Based Environment Selection - The pipeline automatically determines which environment to use based on the branch name. Running on main targets Dev-Main; running on feature/widgets targets Dev-Widgets.
  • PowerShell Script - The export logic is in scripts/export.ps1, keeping the pipeline YAML clean and the logic maintainable.
  • Service Connection Authentication - We use the Power Platform Build Tools PowerPlatformSetConnectionVariables task to extract credentials from the AzDO Service Connection, just like in the first article. This keeps credentials secure in the service connection rather than scattered across variable libraries.
  • PAC CLI for Unpacking - The PAC CLI is used for unpacking as it handles the complex solution structure.
  • Automatic Commit - The pipeline commits and pushes the unpacked solutions back to the branch it was run from, including the new version numbers.

Note: Make sure the build service account has permission to push to your repository. In Azure DevOps, you may need to grant “Contribute” permission to the project’s Build Service account.

The Build Pipeline

The build pipeline triggers automatically when changes are pushed to your repository. It packs the solution source files back into .zip files ready for deployment.

Note: The build pipeline uses PAC CLI for packing solutions. While Rnwood.Dataverse.Data.PowerShell is excellent for Dataverse operations, PAC CLI is the standard tool for packing/unpacking solution files locally without connecting to an environment.

Build Versioning and Source Tagging

Each build is assigned a unique version number in the format $(BaseVersion).$(Build.BuildId) (e.g., 1.0.123 when the base version is 1.0). This version number is:

  • Displayed in the build name - Making it easy to identify which build you’re looking at
  • Saved to a file in artifacts - So the deploy pipeline can read and display it
  • Used to tag the source commit - Creating a permanent link between the built artifact and the exact source code it was built from

Why a build version number rather than solution version numbers?

You might wonder why we use a single build version rather than the individual solution version numbers. There are several reasons:

  • Multiple solutions - A build typically contains several solutions, each with its own version. Using the build number gives us a single identifier for the entire release.
  • Other artifacts - Builds may include data files, scripts, and other non-solution artifacts that don’t have version numbers.
  • Selective deployment - In advanced scenarios, you might skip deploying unchanged solutions while still deploying other artifacts. The build version tracks the overall release, not individual components.
  • Simplicity - One version number is easier to communicate (“deploy build 1.0.456”) than listing all solution versions.

The individual solution versions still exist and are important for Dataverse’s internal upgrade tracking - they just aren’t the primary identifier for your CI/CD pipeline.

Tagging Source Commits

When a build completes, we tag the source commit with the build version (e.g., v1.0.123). This has several benefits:

  • Traceability - Given any deployed version, you can find exactly which commit it was built from
  • Reproducibility - You can check out the tagged commit to investigate issues or rebuild if needed
  • Release notes - Git tags make it easy to generate changelogs between versions (see the sketch after this list)
  • Rollback reference - When rolling back, you know exactly which code state you’re returning to
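
For example, once builds have been tagged you can draft release notes by listing the commits between two tags. A minimal sketch (the tag names here are hypothetical):

# List the commits between two tagged builds to draft a changelog
# (v1.0.122 and v1.0.123 are hypothetical tag names)
git log v1.0.122..v1.0.123 --oneline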

The Build Script

Create the build script at scripts/build.ps1:

scripts/build.ps1
param(
    [Parameter(Mandatory=$true)]
    [string]$SourcesPath,
    [Parameter(Mandatory=$true)]
    [string]$OutputPath,
    [Parameter(Mandatory=$true)]
    [string]$BuildVersion
)

$ErrorActionPreference = "Stop"

Write-Host "Building version: $BuildVersion"

# Install PAC CLI
dotnet tool install --global Microsoft.PowerApps.CLI.Tool

# Define solutions to pack
$solutions = @(
    @{ Name = 'CoreSolution'; Folder = 'solutions/CoreSolution' },
    @{ Name = 'ExtensionSolution'; Folder = 'solutions/ExtensionSolution' },
    @{ Name = 'IntegrationSolution'; Folder = 'solutions/IntegrationSolution' }
)

foreach ($solution in $solutions) {
    Write-Host "Packing $($solution.Name) (Unmanaged)..."
    pac solution pack `
        --zipfile "$OutputPath/$($solution.Name).zip" `
        --folder "$SourcesPath/$($solution.Folder)" `
        --packagetype Unmanaged

    Write-Host "Packing $($solution.Name) (Managed)..."
    pac solution pack `
        --zipfile "$OutputPath/$($solution.Name)_managed.zip" `
        --folder "$SourcesPath/$($solution.Folder)" `
        --packagetype Managed
}

# Save build version to a file so deploy pipeline can read it
$BuildVersion | Out-File -FilePath "$OutputPath/build-version.txt" -NoNewline

Write-Host "All solutions packed successfully! Build version: $BuildVersion"

The Pipeline Definition

Create a file called build-pipeline.yml:

trigger:
  branches:
    include:
    - main
    - release/*
  paths:
    include:
    - solutions/**
    - scripts/**

pool:
  vmImage: 'ubuntu-latest'

# Derive variable group name from branch (e.g., release/2.0 -> Branch-release-2.0)
variables:
  versionGroupName: Branch-${{ replace(replace(variables['Build.SourceBranch'], 'refs/heads/', ''), '/', '-') }}

# Build name incorporates the base version from the variable library
# BaseVersion comes from the variable group (e.g., "1.0" or "2.0")
name: '$(BaseVersion).$(Build.BuildId)'

stages:
- stage: Build
  displayName: 'Build Solutions'
  variables:
  - group: $(versionGroupName)
  jobs:
  - job: PackSolutions
    displayName: 'Pack Solutions - $(Build.BuildNumber)'
    steps:
    - checkout: self
      persistCredentials: true

    - pwsh: |
        & "$(Build.SourcesDirectory)/scripts/build.ps1" `
          -SourcesPath "$(Build.SourcesDirectory)" `
          -OutputPath "$(Build.ArtifactStagingDirectory)" `
          -BuildVersion "$(Build.BuildNumber)"
      displayName: 'Run Build Script'

    - task: CopyFiles@2
      displayName: 'Copy Deployment Scripts to Staging'
      inputs:
        SourceFolder: '$(Build.SourcesDirectory)/scripts'
        Contents: '**'
        TargetFolder: '$(Build.ArtifactStagingDirectory)/scripts'

    - task: PublishBuildArtifacts@1
      displayName: 'Publish Solution Artifacts'
      inputs:
        PathtoPublish: '$(Build.ArtifactStagingDirectory)'
        ArtifactName: 'solutions'
        publishLocation: 'Container'

    # Tag the source commit with the build version
    - pwsh: |
        git config user.email "pipeline@azuredevops.com"
        git config user.name "Azure DevOps Pipeline"
        $tagName = "v$(Build.BuildNumber)"

        # Check if tag already exists
        $existingTag = git tag -l $tagName
        if ($existingTag) {
          Write-Host "Tag $tagName already exists, skipping..."
        } else {
          git tag -a $tagName -m "Build $(Build.BuildNumber)"
          git push origin $tagName
          if ($LASTEXITCODE -ne 0) {
            Write-Host "Warning: Failed to push tag, but continuing..."
          } else {
            Write-Host "Successfully created and pushed tag $tagName"
          }
        }
      displayName: 'Tag Source Commit'
      continueOnError: true

Key Points

  • Branch-Based Base Version - The pipeline uses a variable group based on the branch name (e.g., Branch-Main for main, Branch-release-1.0 for release/1.0). The BaseVersion variable in that group (e.g., 2.0 for main, 1.0 for release) provides the major.minor portion of the version.
  • Version in Pipeline Name - The name: property sets the build number to $(BaseVersion).$(Build.BuildId), so builds are easily identifiable (e.g., 2.0.456 from main, 1.0.457 from release). The job name also includes the version.
  • Version in Artifacts - The build script saves the version to build-version.txt in the artifacts, so the deploy pipeline can display it.
  • Source Tagging - After publishing artifacts, the pipeline creates a git tag (e.g., v2.0.456) pointing to the exact commit that was built. The step handles duplicate tags gracefully.
  • PowerShell Script - The build logic is in scripts/build.ps1, keeping the pipeline YAML clean and the logic maintainable.
  • Trigger on Changes - The pipeline triggers when changes are pushed to main or release/* branches, but only if files in the solutions/ or scripts/ folder have changed.
  • PAC CLI for Packing - The PAC CLI is used for packing solutions as it works locally without needing a Dataverse connection.
  • Both Managed and Unmanaged - We pack both versions. Unmanaged is typically used for dev environments, while managed is used for test and production.
  • Deployment Scripts Included - The deployment scripts from scripts/ are copied into the build artifacts alongside the solution ZIPs.
  • Publish Artifacts - The packed solutions and deployment scripts are published as build artifacts, ready to be consumed by the deploy pipeline.

The Import Pipeline

The import pipeline is the reverse of the export pipeline - it takes solutions from source control and imports them into your development environment. This is essential for keeping your dev environment in sync with the codebase.

Why Do We Need an Import Pipeline?

When working with Power Platform solutions in a team, you’ll frequently need to import changes from source control:

  • Switching branches - When you switch to work on a different feature branch, your dev environment needs to match that branch’s state
  • After merging - When you merge changes from another branch, you need to import the merged result
  • Team synchronisation - When teammates commit changes, you need to pull and import them before continuing your own work
  • Environment recovery - If your dev environment gets corrupted or accidentally modified, you can restore it from source control

Without a pipeline for this, developers would have to manually pack and import solutions, which is tedious and error-prone.

The Import Script

Create the import script at scripts/import.ps1:

scripts/import.ps1
param(
    [Parameter(Mandatory=$true)]
    [string]$EnvironmentUrl,
    [Parameter(Mandatory=$true)]
    [string]$ClientId,
    [Parameter(Mandatory=$true)]
    [string]$ClientSecret,
    [Parameter(Mandatory=$true)]
    [string]$SourcesPath,
    [Parameter(Mandatory=$true)]
    [string]$TempPath
)

$ErrorActionPreference = "Stop"

# Install the PowerShell module and PAC CLI
Install-Module -Name Rnwood.Dataverse.Data.PowerShell -Force -Scope CurrentUser
dotnet tool install --global Microsoft.PowerApps.CLI.Tool

# Connect to Dataverse and set as default
Get-DataverseConnection `
    -url $EnvironmentUrl `
    -ClientId $ClientId `
    -ClientSecret $ClientSecret `
    -SetAsDefault

# Define solutions to import (in dependency order)
$solutions = @(
    @{ Name = 'CoreSolution'; Folder = 'solutions/CoreSolution' },
    @{ Name = 'ExtensionSolution'; Folder = 'solutions/ExtensionSolution' },
    @{ Name = 'IntegrationSolution'; Folder = 'solutions/IntegrationSolution' }
)

# Build environment variables hashtable from prefixed environment variables
$envVars = @{}
Get-ChildItem env: | Where-Object { $_.Name -like 'ENVVAR_*' } | ForEach-Object {
    $schemaName = $_.Name -replace '^ENVVAR_', ''
    $envVars[$schemaName] = $_.Value
    Write-Host "Environment variable: $schemaName"
}

# Build connection references hashtable from prefixed environment variables
$connRefs = @{}
Get-ChildItem env: | Where-Object { $_.Name -like 'CONNREF_*' } | ForEach-Object {
    $schemaName = $_.Name -replace '^CONNREF_', ''
    $connRefs[$schemaName] = $_.Value
    Write-Host "Connection reference: $schemaName"
}

foreach ($solution in $solutions) {
    Write-Host "Packing $($solution.Name) from source..."
    $zipFile = "$TempPath/$($solution.Name).zip"

    # Pack the solution from source files (unmanaged for dev)
    pac solution pack `
        --zipfile $zipFile `
        --folder "$SourcesPath/$($solution.Folder)" `
        --packagetype Unmanaged

    Write-Host "Importing $($solution.Name)..."

    # Import to dev environment (unmanaged)
    Import-DataverseSolution `
        -InFile $zipFile `
        -EnvironmentVariables $envVars `
        -ConnectionReferences $connRefs `
        -Verbose
}

# Publish all customisations
Write-Host "Publishing customisations..."
Publish-DataverseCustomizations

Write-Host "Import complete!"

The Pipeline Definition

Create a file called import-pipeline.yml:

trigger: none # Manual trigger only

pool:
  vmImage: 'ubuntu-latest'

variables:
  # Derive environment name from branch (main -> Dev-Main, feature/widgets -> Dev-Widgets)
  environmentName: Dev-${{ replace(replace(variables['Build.SourceBranchName'], 'feature/', ''), '/', '-') }}

stages:
- stage: Import
  displayName: 'Import Solutions'
  variables:
  - group: Environment-$(environmentName)
  jobs:
  - job: ImportSolutions
    displayName: 'Import Solutions to $(environmentName)'
    steps:
    - checkout: self

    - task: microsoft-IsvExpTools.PowerPlatform-BuildTools.tool-installer.PowerPlatformToolInstaller@2
      displayName: 'Install Power Platform Build Tools'

    - task: microsoft-IsvExpTools.PowerPlatform-BuildTools.set-connection-variables.PowerPlatformSetConnectionVariables@2
      displayName: 'Set Connection Variables'
      name: connectionVariables
      inputs:
        authenticationType: 'PowerPlatformSPN'
        PowerPlatformSPN: '$(environmentName) Environment Connection'

    - pwsh: |
        & "$(Build.SourcesDirectory)/scripts/import.ps1" `
          -EnvironmentUrl "$(EnvironmentUrl)" `
          -ClientId "$(connectionVariables.BuildTools.ApplicationId)" `
          -ClientSecret "$(connectionVariables.BuildTools.ClientSecret)" `
          -SourcesPath "$(Build.SourcesDirectory)" `
          -TempPath "$(Build.ArtifactStagingDirectory)"
      displayName: 'Run Import Script'

Key Points

  • Branch-Based Environment Selection - Like the export pipeline, this automatically targets the correct environment based on the branch name.
  • PowerShell Script - The import logic is in scripts/import.ps1, keeping the pipeline YAML clean.
  • Unmanaged Import - We import as unmanaged to the dev environment, allowing further development.
  • Environment Variables and Connection References - These are automatically collected from the variable library and applied during import.
  • Manual Trigger - Developers run this when they need to sync their dev environment with source control.

The Deploy Pipeline

The deploy pipeline takes the build artifacts and deploys them across your environments. Each environment is a separate stage with its own approval gates, and the stages run in the order they are defined.

Running Deployment Scripts from Artifacts

A key design principle here is that the deployment scripts are included in the build artifacts and run from there, rather than being read from the repository at deployment time. This approach has several important benefits:

  • Versioned deployment logic - The deployment script is versioned alongside the solutions it deploys. When you look at build #123, you know exactly what deployment logic will be used.
  • Reproducibility - You can redeploy any historical build and it will use the same deployment script that was tested with that build, not whatever is currently in the repository.
  • Consistency - If you fix a bug in your deployment script, older builds still use their original script, which is important if you need to roll back.
  • No drift - There’s no risk of the deployment script in the repository drifting out of sync with what was actually tested in the build.

YAML Templates vs Scripts in Artifacts: You might wonder why we don’t use YAML templates (like templates/deploy-environment.yml). The problem is that YAML templates are resolved at pipeline compile time from the repository, not from artifacts. This means if you trigger a deployment of build #100 after the repository has changed, you’d get the new template code but the old solution artifacts - a mismatch that can cause subtle bugs.

The Deployment Script

First, create the deployment script at scripts/deploy.ps1. This script will be included in the build artifacts and called by the deploy pipeline:

scripts/deploy.ps1
param(
    [Parameter(Mandatory=$true)]
    [string]$EnvironmentUrl,
    [Parameter(Mandatory=$true)]
    [string]$ClientId,
    [Parameter(Mandatory=$true)]
    [string]$ClientSecret,
    [Parameter(Mandatory=$true)]
    [string]$SolutionsPath,
    [Parameter(Mandatory=$true)]
    [string]$EnvironmentName
)

$ErrorActionPreference = "Stop"

# Install the PowerShell module
Install-Module -Name Rnwood.Dataverse.Data.PowerShell -Force -Scope CurrentUser

# Connect and set as default
Get-DataverseConnection `
    -url $EnvironmentUrl `
    -ClientId $ClientId `
    -ClientSecret $ClientSecret `
    -SetAsDefault

# Solutions in dependency order (base solutions first)
$solutions = @('CoreSolution', 'ExtensionSolution', 'IntegrationSolution')

# Build environment variables hashtable from prefixed environment variables
$envVars = @{}
Get-ChildItem env: | Where-Object { $_.Name -like 'ENVVAR_*' } | ForEach-Object {
    $schemaName = $_.Name -replace '^ENVVAR_', ''
    $envVars[$schemaName] = $_.Value
    Write-Host "Environment variable: $schemaName"
}

# Build connection references hashtable from prefixed environment variables
$connRefs = @{}
Get-ChildItem env: | Where-Object { $_.Name -like 'CONNREF_*' } | ForEach-Object {
    $schemaName = $_.Name -replace '^CONNREF_', ''
    $connRefs[$schemaName] = $_.Value
    Write-Host "Connection reference: $schemaName"
}

# STEP 1: Stage all solutions as holding solutions (in dependency order)
Write-Host "=== STAGING PHASE ==="
foreach ($solution in $solutions) {
    Write-Host "Staging $solution..."
    Import-DataverseSolution `
        -InFile "$SolutionsPath/${solution}_managed.zip" `
        -Mode HoldingSolution `
        -EnvironmentVariables $envVars `
        -ConnectionReferences $connRefs `
        -Verbose
}

# STEP 2: Apply upgrades in REVERSE dependency order
Write-Host "=== UPGRADE PHASE ==="
$reverseSolutions = $solutions[($solutions.Count - 1)..0]
foreach ($solution in $reverseSolutions) {
    Write-Host "Upgrading $solution..."
    Import-DataverseSolution `
        -InFile "$SolutionsPath/${solution}_managed.zip" `
        -Mode StageAndUpgrade `
        -Verbose
}

# Publish all customisations
Write-Host "Publishing customisations..."
Publish-DataverseCustomizations

# Activate any workflows/flows that should be active
Write-Host "Checking process activation..."
Get-DataverseRecord -TableName workflow -Filter @{
    statecode = 0
    "category:In" = @(0, 5, 6) # Workflows, Business Rules, Modern Flows
} | ForEach-Object {
    Write-Host "Activating: $($_.name)"
    $_ | Set-DataverseRecord -Values @{ statecode = 1 }
}

Write-Host "Deployment to $EnvironmentName complete!"

The Pipeline Definition

Now create the main pipeline file deploy-pipeline.yml. The pipeline still uses YAML templates for the stage structure (to avoid repetition), but the core deployment logic runs from the script in the artifacts:

First, create the template at templates/deploy-environment.yml:

templates/deploy-environment.yml
parameters:
- name: environmentName
  type: string
- name: dependsOn
  type: string
  default: ''

stages:
- stage: DeployTo${{ parameters.environmentName }}
  displayName: 'Deploy to ${{ parameters.environmentName }}'
  ${{ if ne(parameters.dependsOn, '') }}:
    dependsOn: DeployTo${{ parameters.dependsOn }}
  variables:
  # Convention-based variable group: Environment-{EnvironmentName}
  - group: Environment-${{ parameters.environmentName }}
  jobs:
  - deployment: Deploy${{ parameters.environmentName }}
    displayName: 'Deploy to ${{ parameters.environmentName }} Environment'
    environment: '${{ parameters.environmentName }}'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: DownloadPipelineArtifact@2
            displayName: 'Download Solution Artifacts'
            inputs:
              buildType: 'specific'
              project: '$(System.TeamProjectId)'
              definition: '$(resources.pipeline.build.pipelineID)'
              buildVersionToDownload: 'latest'
              artifactName: 'solutions'
              targetPath: '$(Pipeline.Workspace)/solutions'

          # Display the build version being deployed
          - pwsh: |
              $version = Get-Content "$(Pipeline.Workspace)/solutions/build-version.txt"
              Write-Host "##[section]Deploying Build Version: $version"
              Write-Host "##vso[build.addbuildtag]$version"
            displayName: 'Display Build Version'

          - task: microsoft-IsvExpTools.PowerPlatform-BuildTools.tool-installer.PowerPlatformToolInstaller@2
            displayName: 'Install Power Platform Build Tools'

          - task: microsoft-IsvExpTools.PowerPlatform-BuildTools.set-connection-variables.PowerPlatformSetConnectionVariables@2
            displayName: 'Set Connection Variables'
            name: connectionVariables
            inputs:
              authenticationType: 'PowerPlatformSPN'
              # Convention-based service connection: {EnvironmentName} Environment Connection
              PowerPlatformSPN: '${{ parameters.environmentName }} Environment Connection'

          - pwsh: |
              # Run the deployment script FROM THE ARTIFACTS
              & "$(Pipeline.Workspace)/solutions/scripts/deploy.ps1" `
                -EnvironmentUrl "$(EnvironmentUrl)" `
                -ClientId "$(connectionVariables.BuildTools.ApplicationId)" `
                -ClientSecret "$(connectionVariables.BuildTools.ClientSecret)" `
                -SolutionsPath "$(Pipeline.Workspace)/solutions" `
                -EnvironmentName "${{ parameters.environmentName }}"
            displayName: 'Run Deployment Script from Artifacts'

Then create the main pipeline file deploy-pipeline.yml:

trigger: none # Triggered by build completion or manually

resources:
  pipelines:
  - pipeline: build
    source: 'Build Pipeline'
    trigger:
      branches:
        include:
        - main
        - release/*

pool:
  vmImage: 'ubuntu-latest'

# Deploy pipeline name incorporates the build version being deployed
# This is set dynamically in the first step after downloading artifacts
name: 'Deploy-$(resources.pipeline.build.runName)'

stages:
# Deploy to Test (no dependencies)
- template: templates/deploy-environment.yml
  parameters:
    environmentName: Test

# Deploy to UAT (depends on Test)
- template: templates/deploy-environment.yml
  parameters:
    environmentName: UAT
    dependsOn: Test

# Deploy to Production (depends on UAT)
- template: templates/deploy-environment.yml
  parameters:
    environmentName: Prod
    dependsOn: UAT

Notice how clean this is - we only specify the environment name and dependencies. The template automatically derives the variable group (Environment-{EnvironmentName}) and service connection ({EnvironmentName} Environment Connection) from the environment name.

Compare this to having the full deployment steps repeated three times! The template approach keeps the main pipeline file focused on what environments to deploy to, while the deploy.ps1 script in the artifacts handles the actual deployment logic.

Convention-Based Resource Naming

The deploy template now uses the same convention-based approach we use for dev environments. Given an environment name, it automatically determines:

| Resource Type | Naming Convention | Example for “Test” |
| --- | --- | --- |
| Variable Group | Environment-{EnvironmentName} | Environment-Test |
| Service Connection | {EnvironmentName} Environment Connection | Test Environment Connection |
| AzDO Environment | {EnvironmentName} | Test |

This approach has several benefits:

  • Less configuration - You only specify the environment name, not three separate resource names
  • Consistency - All environments follow the same naming pattern, reducing confusion
  • Fewer mistakes - No risk of typos in service connection or variable group names
  • Easier onboarding - New team members can predict resource names

Adding a new environment: To add a new environment (e.g., “Staging”):

  1. Create a variable group named Environment-Staging
  2. Create a service connection named Staging Environment Connection
  3. Create an AzDO environment named Staging
  4. Add a new template call with environmentName: Staging

Why This Approach Works

The key insight is the separation between:

  1. YAML templates (templates/deploy-environment.yml) - These define the pipeline structure: stages, jobs, approval gates, and how to download artifacts. Changes here affect the pipeline flow but not what the deployment actually does.
  2. Deployment script (scripts/deploy.ps1) - This contains the actual deployment logic: which solutions to import, in what order, how to handle environment variables, etc. This is versioned with the build artifacts.

This means if you need to change how deployments work (e.g., add a new solution, change the import order), you modify scripts/deploy.ps1 and commit it. The next build will include the updated script in its artifacts, and deployments of that build will use the new logic.

Template Parameters Explained

The template takes two parameters:

| Parameter | Purpose |
| --- | --- |
| environmentName | Environment name used for display, convention-based resource names, and passed to the deployment script |
| dependsOn | (Optional) Name of the environment that must complete first (e.g., “Test” for UAT) |

Pipeline Run Name

Notice the name: property in the deploy pipeline:

name: 'Deploy-$(resources.pipeline.build.runName)'

This sets the deploy pipeline run name to include the build version it’s deploying (e.g., Deploy-2.0.456). Combined with the build pipeline name that incorporates the version, this means:

  • Build pipeline run: 2.0.456 (from main) or 1.0.457 (from a release branch)
  • Deploy pipeline run: Deploy-2.0.456

This makes it immediately clear which version is being deployed when you look at the pipeline runs list in Azure DevOps.

Understanding the Import Process

Automatic Install vs Upgrade

The Import-DataverseSolution cmdlet is intelligent about how it imports solutions:

  • New Installation - If the solution doesn’t exist in the target environment, it performs a simple import
  • Upgrade (Managed) - If the solution already exists and is managed, it automatically uses stage-and-upgrade
  • Upgrade (Unmanaged) - If the solution exists and is unmanaged, it imports over the top

This means you don’t need separate logic for first-time deployments vs updates - the cmdlet handles it automatically.
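
If you want to see what the cmdlet will find before it decides, you can query the target environment’s solution table with the same cmdlets used elsewhere in this article. A minimal sketch, purely illustrative - Import-DataverseSolution performs this check for you, and for an existing solution its managed/unmanaged state determines whether it upgrades or imports over the top:

# Check whether a solution already exists in the target environment
$existing = Get-DataverseRecord -TableName solution -Filter @{
    uniquename = 'CoreSolution'
} | Select-Object -First 1

if ($existing) {
    Write-Host "CoreSolution is already installed at version $($existing.version) (managed: $($existing.ismanaged))"
} else {
    Write-Host "CoreSolution is not installed yet - the import will be a new installation"
}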

Why is upgrade important? Using a simple import (update) instead of upgrade can leave behind deleted components in your target environments, causing inconsistencies and unexpected behaviour. Learn more about solution upgrade vs import.

Why Stage First, Then Upgrade in Reverse Order?

When you have multiple solutions with dependencies, the upgrade order matters. Here’s why we stage all solutions first, then apply upgrades in reverse dependency order:

  1. Stage Phase (dependency order) - Import all solutions as “holding” solutions. This stages the new versions without removing anything yet. We do this in dependency order (base solutions first) so all dependencies are available.
  2. Upgrade Phase (reverse order) - Apply the upgrades starting from the most dependent solution (IntegrationSolution) and working back to the base (CoreSolution). This ensures that when we remove old components from a base solution, dependent solutions have already been updated to not rely on them.

Example: If IntegrationSolution uses a workflow from CoreSolution, and you’re removing that workflow in the new version of CoreSolution, you need to:

  1. First update IntegrationSolution to stop using the workflow
  2. Then upgrade CoreSolution to remove the workflow

If you did it the other way around, the upgrade would fail because IntegrationSolution still references the workflow.

Key Points

  • Deployment Script in Artifacts - The core deployment logic lives in scripts/deploy.ps1 and is included in the build artifacts, ensuring version consistency
  • Default Connection - By using -SetAsDefault on Get-DataverseConnection, we don’t need to pass -Connection to every cmdlet
  • Service Connection Authentication - We use the Power Platform Build Tools tasks to securely extract credentials from AzDO Service Connections, which are then passed to the deployment script
  • Environment Variables from Library - Variables prefixed with ENVVAR_ in the library are automatically collected and passed to the import
  • Connection References from Library - Variables prefixed with CONNREF_ in the library are automatically collected and passed to the import
  • Process Activation - After import, we query the workflow table for draft processes and activate them
  • Publish-DataverseCustomizations - Ensures all customisations are published and active

Setting Up Environments and Approvals

To add approval gates to your deployments:

  1. Go to Pipelines > Environments in Azure DevOps
  2. Create environments matching those referenced in your pipeline (Test, UAT, Prod)
  3. Click on an environment and select Approvals and checks
  4. Add Approvals and specify who needs to approve deployments to that environment

This gives you a controlled release process where, for example:

  • Test deployments might require no approval (automated)
  • UAT deployments might require approval from a test lead
  • Production deployments might require approval from multiple stakeholders

Activating Processes After Deployment

Workflows, business rules, and cloud flows (modern flows) are stored in the workflow table in Dataverse. After importing a solution, these processes may be in a draft state and need to be activated.

The deployment script includes this step:

# Activate any workflows/flows that should be active
Write-Host "Checking process activation..."
Get-DataverseRecord -TableName workflow -Filter @{
    statecode = 0
    "category:In" = @(0, 5, 6) # Workflows, Business Rules, Modern Flows
} | ForEach-Object {
    Write-Host "Activating: $($_.name)"
    $_ | Set-DataverseRecord -Values @{ statecode = 1 }
}

This uses the PowerShell pipeline pattern we learned in the first article:

  1. Get-DataverseRecord queries for all draft processes, filtering by category
  2. Set-DataverseRecord updates each record to set statecode = 1 (activated)

Note: You might want to be more selective about which processes to activate, for example only those owned by your own solutions, so you don’t accidentally re-activate something that was deliberately turned off. Filtering by solution keeps this easy to maintain.
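
As an illustration, here’s a hedged sketch that only activates processes belonging to one solution by walking the solutioncomponent table (component type 29 is Workflow). The property names and filter shape are assumptions based on the patterns above - verify them against the module documentation.

```powershell
# Sketch: activate only the draft processes that belong to one solution.
# Property names ($solution.Id, $_.objectid) are assumptions - check the module docs.
$solutionName = "CoreSolution"

# Find the solution row so we can look up its components.
$solution = Get-DataverseRecord -TableName solution -Filter @{ uniquename = $solutionName }

# Component type 29 = Workflow; objectid is the id of the workflow row.
$processIds = Get-DataverseRecord -TableName solutioncomponent -Filter @{
    solutionid    = $solution.Id
    componenttype = 29
} | ForEach-Object { $_.objectid }

# Activate any of those processes that are still in draft.
Get-DataverseRecord -TableName workflow -Filter @{
    statecode       = 0
    "workflowid:In" = $processIds
} | ForEach-Object {
    Write-Host "Activating: $($_.name)"
    $_ | Set-DataverseRecord -Values @{ statecode = 1 }
}
```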

Setting Environment Variables and Connection References

The deployment script automatically collects environment-specific values from the variable library using prefixes:

How It Works

  1. In your variable library, add variables with these prefixes:

    • ENVVAR_new_apiurl → The environment variable value for new_apiurl
    • CONNREF_new_sharepointconnection → The connection reference value (connection ID) for new_sharepointconnection
  2. The script collects these at runtime:

```powershell
# Collect environment variables from prefixed library variables
$envVars = @{}
Get-ChildItem env: | Where-Object { $_.Name -like 'ENVVAR_*' } | ForEach-Object {
    $schemaName = $_.Name -replace '^ENVVAR_', ''
    $envVars[$schemaName] = $_.Value
}

# Collect connection references from prefixed library variables
$connRefs = @{}
Get-ChildItem env: | Where-Object { $_.Name -like 'CONNREF_*' } | ForEach-Object {
    $schemaName = $_.Name -replace '^CONNREF_', ''
    $connRefs[$schemaName] = $_.Value
}
```

  3. These are passed to Import-DataverseSolution:

```powershell
Import-DataverseSolution `
    -InFile "solution.zip" `
    -EnvironmentVariables $envVars `
    -ConnectionReferences $connRefs
```
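
You can try the collection logic locally before relying on the pipeline: set a couple of prefixed variables in your own session and run the same snippet (the values below are made up):

```powershell
# Simulate what Azure DevOps would inject from the variable library.
$env:ENVVAR_new_apiurl = "https://api.example.com"
$env:CONNREF_new_sharepointconnection = "00000000-0000-0000-0000-000000000000"

# Same collection logic as above.
$envVars = @{}
Get-ChildItem env: | Where-Object { $_.Name -like 'ENVVAR_*' } | ForEach-Object {
    $envVars[$_.Name -replace '^ENVVAR_', ''] = $_.Value
}

$envVars   # now contains new_apiurl = https://api.example.com
```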

Benefits

  • No code changes for new variables - Just add them to the library with the right prefix
  • Environment-specific values - Each environment’s library has its own values
  • Secure handling - Secret values stay in the variable library and are marked as secrets

Summary

You now have a fairly complete ALM setup for your Power Platform solutions using the Rnwood.Dataverse.Data.PowerShell module:

| Pipeline | Trigger | Purpose |
| --- | --- | --- |
| Export | Manual | Capture development environment changes into source control (with auto-versioning) |
| Build | Automatic (on commit) | Pack solutions, create versioned artifacts, and tag source commit |
| Import | Manual | Import solutions from source control into your dev environment |
| Deploy | Automatic/Manual | Deploy through Test → UAT → Production using scripts from the build artifacts |

The key benefits of this approach:

  • Source control - Full history of what changed and why
  • Automation - Reduce manual errors and save time
  • Consistency - Same process for every deployment
  • Traceability - Know exactly what version is in each environment
  • Build versioning - Each build has a unique version number displayed in deployments
  • Branch-based versioning - Base version comes from a variable library per branch, so main uses 2.0.x (next major release) while release/1.0 uses 1.0.x (maintenance releases)
  • Version in pipeline names - Both build and deploy pipelines show the version in their run names for easy identification
  • Source tagging - Build versions are tagged in git for easy traceability
  • Git-based change detection - Export uses git status to detect changes, which is simpler and more reliable than file hash comparison
  • Automatic solution versioning - Export increments solution versions when changes are detected
  • Gates and approvals - Controlled releases to production
  • Intelligent upgrades - Stage first, upgrade in reverse dependency order
  • Automatic configuration - Environment variables and connection references from library
  • Process activation - Workflows and flows activated automatically
  • Versioned deployment logic - Deployment scripts are included in build artifacts, ensuring the exact same script that was tested with a build is used to deploy it
  • Convention-based naming - All environments use consistent naming patterns for variable groups, service connections, and environments
  • Branch-based environments - Dev pipelines automatically target the correct environment based on branch name
  • PowerShell scripts - Core logic is in maintainable scripts, not inline YAML
  • Per-branch test environments - Recommended to avoid conflicts when testing multiple releases in parallel

Usage Guide

This section explains how to use the pipelines in your day-to-day workflow. Start with the basic workflow and add the advanced steps as your project grows.

Basic Daily Workflow (Just main Branch)

If you’re starting simple with just the main branch, here’s your daily workflow:

Making Changes

  1. Make your changes in the Power Apps maker portal, connected to your Dev-Main environment
  2. Run the Export pipeline on the main branch when you’re ready to save your work
  3. Review the commit in Azure DevOps - the export pipeline will have committed the unpacked solution files
  4. Build triggers automatically - the build pipeline runs when changes are pushed to main
  5. Deploy when ready - approve deployments through Test → UAT → Prod

Syncing from Source Control

When you need to get the latest changes (e.g., a teammate made changes):

  1. Pull the latest code to your local machine or let the pipeline do it
  2. Run the Import pipeline on the main branch
  3. Continue development - your dev environment now matches source control

Advanced: Working with Feature Branches

When to use feature branches: Feature branches are useful when multiple people need to work on different features simultaneously, or when a feature is experimental and shouldn’t disrupt main development. Skip this section if you’re working solo or on a small team with one active workstream.

Starting a New Feature

  1. Create the branch in Azure DevOps:

    • Go to Repos > Branches
    • Click New branch
    • Name it feature/widgets and base it on main
  2. Create the environment resources in Azure DevOps:

    • Dataverse environment: Dev-widgets
    • Variable group: Environment-Dev-widgets (with EnvironmentUrl and any env vars)
    • Service connection: Dev-widgets Environment Connection
  3. Run the Import pipeline on the feature/widgets branch to initialise the dev environment with the current state from main

  4. Develop your feature - export and import on the feature branch to save/load changes

Merging a Feature

  1. Ensure all changes are exported - run Export on the feature branch
  2. Create a Pull Request in Azure DevOps:
    • Go to Repos > Pull requests > New pull request
    • Select feature/widgets → main
    • Add reviewers and complete the review process
  3. Complete the merge using the Azure DevOps web UI
  4. Sync the main dev environment - run Import on main to get the merged changes into Dev-Main
  5. Clean up (optional) - delete the feature branch and dev environment from the Branches page

Resolving Merge Conflicts

If the PR has merge conflicts, you can resolve them directly in Azure DevOps:

  1. Open the PR and click on the Conflicts tab
  2. Resolve each conflict using the built-in merge editor - Azure DevOps shows both versions side-by-side and lets you choose which changes to keep
  3. Complete the merge once all conflicts are resolved
  4. Run Import on main to sync Dev-Main with the merged result
  5. Verify in the maker portal that the solution looks correct
  6. Run Export on main if you made any fixes in the portal

Tip: For complex solution conflicts, the Pull Request Merge Conflict Extension provides enhanced conflict resolution capabilities directly in Azure DevOps.

Advanced: Working with Release Branches

When to use release branches: Release branches are for when you need to maintain a released version (e.g., patch bugs in v1.0) while continuing development on the next version (v2.0). Skip this section if you only have one version in production.

Creating a Release

  1. Create the release branch in Azure DevOps:

    • Go to Repos > Branches
    • Click New branch
    • Name it release/1.0 and base it on main
  2. Create the branch variable group in Azure DevOps:

    • Go to Pipelines > Library
    • Create variable group: Branch-release-1.0 (or script it - see the command-line sketch after this list)
    • Add variable: BaseVersion = 1.0
  3. Bump the main branch version:

    • Update Branch-Main variable group: BaseVersion = 2.0
  4. Create the environment resources for the release branch:

    • Dataverse environment: Dev-release-1.0
    • Variable group: Environment-Dev-release-1.0
    • Service connection: Dev-release-1.0 Environment Connection
    • (Recommended) Test environment: Test-release-1.0 with corresponding variable group and service connection
  5. Run Import on release/1.0 to initialise the release dev environment
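
If you’d rather script step 2 than click through the Library UI, the Azure DevOps CLI can create the variable group. A sketch, assuming the azure-devops CLI extension is installed and signed in; substitute your own organisation and project names:

```powershell
# One-time setup: az extension add --name azure-devops
az pipelines variable-group create `
    --name "Branch-release-1.0" `
    --variables BaseVersion=1.0 `
    --organization "https://dev.azure.com/YourOrg" `
    --project "YourProject"
```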

Hotfixing a Released Version

  1. Switch to the release branch when running pipelines - select release/1.0 from the branch dropdown
  2. Run Import on release/1.0 to sync your release dev environment
  3. Make the fix in the Power Apps maker portal
  4. Run Export on release/1.0 to capture the fix
  5. Deploy - builds from release/1.0 will have version 1.0.x and deploy through Test → UAT → Prod
  6. Cherry-pick to main if the fix also applies to the next version:
    • Create a PR from release/1.0 to main containing just the fix commit
    • Or use the Azure DevOps Cherry-pick button on the commit in the Commits view
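
If you prefer the command line to the Cherry-pick button, a local cherry-pick looks like this (the commit SHA is a placeholder; if main has branch policies, push a topic branch and raise a PR instead of pushing directly):

```powershell
# Bring the hotfix commit from release/1.0 onto main locally, then push.
$hotfixCommit = "abc1234"   # placeholder - the SHA of the fix commit on release/1.0
git fetch origin
git checkout main
git pull origin main
git cherry-pick $hotfixCommit
git push origin main
```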

Quick Reference

| Task | Pipeline | Branch | Notes |
| --- | --- | --- | --- |
| Save my changes | Export | Current branch | Commits to the branch you’re on |
| Build for testing | (Automatic) | main or release/* | Triggers on push |
| Sync from teammates | Import | Current branch | Updates dev env from source |
| Start new feature | Import | feature/* | After creating branch and resources |
| Finish feature | PR + Import | main | Merge PR, then import to main dev |
| Release a version | Create branch | release/* | Then update main’s BaseVersion |
| Hotfix old version | Export/Deploy | release/* | Cherry-pick to main if needed |

Troubleshooting

| Issue | Cause | Solution |
| --- | --- | --- |
| Import fails with “solution dependency” | Solutions imported in wrong order | Check the $solutions array in the scripts - base solutions first |
| Export shows no changes | No actual changes, or unpack location wrong | Check the solution folder path matches |
| Deploy shows old version | Pipeline cached or wrong build | Check the build artifact being deployed |
| Feature branch has wrong solution state | Didn’t import before starting | Run Import on the feature branch first |
| Merge conflict in Solution.xml | Two people changed the version number | Keep the higher version, or resolve manually |

What’s Next?

Now that you have the basics in place, you can add almost any custom steps your solution needs, for instance managing important configuration data. See the versioning config data article for details.

Find out more about what you can do in the Rnwood.Dataverse.Data.PowerShell documentation.

I’ll post some more examples in future.

Happy automating!