Setting up complete Azure DevOps pipelines for exporting, building, importing, and deploying Power Platform solutions across multiple environments
This article builds on the foundations covered in Getting started with PowerShell in AzDO for Power Platform. Make sure you’ve read that first to understand how to connect to your environments using service connections.
Now that you’ve got the basics of running PowerShell in Azure DevOps pipelines and connecting to your Dataverse environments, it’s time to put together a complete ALM (Application Lifecycle Management) solution. We’ll be using the Rnwood.Dataverse.Data.PowerShell module, which provides comprehensive cmdlets for solution management, data, and the other pieces we need to build a powerful end-to-end process.
Most Power Platform ALM examples you’ll find online cover only the fundamentals and have significant gaps that will cause problems in real projects. This article builds a production-ready foundation that addresses these common shortcomings:
| Common Issue | Basic Examples | This Article |
|---|---|---|
| Import vs Upgrade | Use import which leaves orphaned components | Use staged upgrades that cleanly remove deleted components |
| Multiple Solutions | Handle one solution at a time | Support multiple solutions with correct dependency ordering |
| Environment Variables | Hardcode values or store in source control | Pull from Azure DevOps variable libraries per environment |
| Connection References | Manual configuration after deployment | Automatic mapping from variable libraries |
| Branching | Assume single main branch only | Support feature branches and release branches with separate environments |
| Version Numbers | No versioning or manual updates | Automatic version incrementing with branch-based base versions |
| Deleted Components | PAC CLI unpack doesn’t detect deletions | Clean folder before unpack so git detects removed files |
| Deployment Scripts | Inline YAML that drifts from tested builds | Scripts included in build artifacts for reproducibility |
By the end of this article, you’ll have pipelines that handle these scenarios correctly, avoiding the painful surprises that come from simpler approaches.
Reminder - Why Azure DevOps instead of Power Platform Pipelines? If you’re wondering why we’re using Azure DevOps Pipelines rather than the built-in Power Platform Pipelines, it’s because PPP has significant limitations for pro projects - it can only handle one solution at a time, can’t manage non-solution-aware components, lacks source control integration, and offers limited extensibility. Read more about why Power Platform Pipelines isn’t powerful enough for many pro projects.
Reminder Why PowerShell? You might be wondering why we use PowerShell scripts rather than just the Power Platform Build Tools tasks or PAC CLI directly. While those tools are great for simple scenarios, PowerShell gives us the flexibility to express complex logic, handle multiple solutions with dependencies, and automate tasks that would otherwise require manual intervention. Learn more about why PowerShell is essential for Power Platform automation.
We’re going to cover four key pipelines: Export, Build, Import, and Deploy.
Before diving into the details, let’s visualise how these four pipelines work together:
```mermaid
flowchart LR
subgraph DEV["Development Environment"]
D1[("Dataverse")]
end
subgraph EXPORT["Export Pipeline"]
E1[Export Solutions]
E2[Unpack to Files]
E3[Commit to Git]
end
subgraph GIT["Source Control"]
G1[("Git Repository")]
end
subgraph IMPORT["Import Pipeline"]
I1[Pack Solutions]
I2[Import to Dev]
end
subgraph BUILD["Build Pipeline"]
B1[Pack Solutions]
B2[Include Scripts]
B3[Publish Artifacts]
end
subgraph ARTIFACTS["Build Artifacts"]
A1[("Solution ZIPs + Deploy Scripts")]
end
subgraph DEPLOY["Deploy Pipeline"]
direction TB
T[Deploy to Test]
T1[("Test DB")]
U[Deploy to UAT]
U1[("UAT DB")]
P[Deploy to Prod]
P1[("Production DB")]
T --> T1
T --> U
U --> U1
U --> P
P --> P1
end
D1 --> E1 --> E2 --> E3 --> G1
G1 --> I1 --> I2 --> D1
G1 -->|commit triggers| B1 --> B2 --> B3 --> A1
A1 --> T
```
And here’s the deployment flow within each environment:
```mermaid
flowchart TD
subgraph STAGE["Stage Phase (dependency order)"]
S1[Stage CoreSolution]
S2[Stage ExtensionSolution]
S3[Stage IntegrationSolution]
end
subgraph UPGRADE["Upgrade Phase (reverse order)"]
U1[Upgrade IntegrationSolution]
U2[Upgrade ExtensionSolution]
U3[Upgrade CoreSolution]
end
subgraph POST["Post-Deployment"]
P1[Publish Customisations]
P2[Activate Processes]
end
S1 --> S2 --> S3 --> U1 --> U2 --> U3 --> P1 --> P2
```
You might wonder: why not just have a single pipeline that exports, builds, and deploys all in one go? There are several important reasons for separating these concerns:
Each pipeline has a distinct purpose and trigger:
| Pipeline | When it runs | Who triggers it | Purpose |
|---|---|---|---|
| Export | On demand | Developer | Capture changes from dev into source control |
| Build | Automatically | Source control commits | Create deployable artifacts |
| Import | On demand | Developer | Import changes from source control into dev |
| Deploy | Automatically or on demand | Build completion or release manager | Deploy to target environments |
The import pipeline is essential for several scenarios:
Without an import pipeline, developers would have to manually import solutions, which is error-prone and time-consuming.
By separating export from deployment:
The build pipeline produces artifacts that can be deployed to multiple environments:
Different teams and processes can own different pipelines:
Starting Simple: If you’re just getting started, you can use just the main branch with a single development environment. Everything in this article works with that simpler setup - just ignore the feature and release branch sections until you need them. The multi-branch strategies described below are for teams that need parallel development or long-term support of multiple versions.
For more complex projects, you may need a branching strategy that supports parallel development and maintenance of released versions. Here’s a recommended approach:
```mermaid
gitGraph
commit id: "Initial"
commit id: "Feature A"
branch feature/widgets
commit id: "Add widget component"
commit id: "Widget styling"
checkout main
commit id: "Feature B"
merge feature/widgets id: "Merge widgets"
commit id: "Feature C"
branch release/1.0
commit id: "1.0 Release prep"
checkout main
commit id: "Feature D (v2.0 work)"
checkout release/1.0
commit id: "1.0 Hotfix"
checkout main
commit id: "Feature E"
| Branch Type | Pattern | Purpose | Dev Environment | Test Environment | Deploys To |
|---|---|---|---|---|---|
| Main | main | Active development for next major release | Dev-Main | Test-Main (recommended) or shared Test | Test → UAT → Prod |
| Feature | feature/{name} | Isolated development of new features | Dev-{name} | Test-{name} (optional) | Usually dev only |
| Release | release/{version} | Maintenance of released versions | Dev-release-{version} | Test-release-{version} (recommended) or shared Test | Test → UAT → Prod |
Per-Branch Test Environments (Recommended): While you can share a single Test environment across all branches, having separate test environments per branch (e.g., Test-Main, Test-release-1.0) is strongly recommended for teams working on multiple releases. This prevents conflicts where a build from main overwrites a build from release/1.0 that’s still being tested, and allows parallel testing of different versions. Start with shared environments if you’re new to this, then add per-branch test environments as you scale.
Understanding which environments are used by which branches is key to managing your ALM process:
```mermaid
flowchart TB
subgraph BRANCHES["Branches"]
MAIN["main<br/>(BaseVersion = 2.0)"]
FEAT["feature/widgets"]
REL["release/1.0<br/>(BaseVersion = 1.0)"]
end
subgraph DEV_ENVS["Development Environments"]
DEV_MAIN["Dev-Main<br/>(Environment-Dev-Main)"]
DEV_WIDGETS["Dev-widgets<br/>(Environment-Dev-widgets)"]
DEV_REL["Dev-release-1.0<br/>(Environment-Dev-release-1.0)"]
end
subgraph TEST_ENVS["Test Environments (Per-Branch Recommended)"]
TEST_MAIN["Test-Main<br/>(Environment-Test-Main)"]
TEST_REL["Test-release-1.0<br/>(Environment-Test-release-1.0)"]
end
subgraph SHARED_ENVS["Shared Environments"]
UAT["UAT<br/>(Environment-UAT)"]
PROD["Prod<br/>(Environment-Prod)"]
end
MAIN -->|Export/Import| DEV_MAIN
FEAT -->|Export/Import| DEV_WIDGETS
REL -->|Export/Import| DEV_REL
MAIN -->|Build & Deploy| TEST_MAIN
REL -->|Build & Deploy| TEST_REL
TEST_MAIN --> UAT
TEST_REL --> UAT
UAT --> PROD
```
Development environments are specific to each branch - the environment name is derived from the branch:
- `main` branch → Dev-Main environment → `Environment-Dev-Main` variable group
- `feature/widgets` branch → Dev-widgets environment → `Environment-Dev-widgets` variable group
- `release/1.0` branch → Dev-release-1.0 environment → `Environment-Dev-release-1.0` variable group

Test environments are recommended to be per-branch for active releases:
- `main` branch → Test-Main environment → `Environment-Test-Main` variable group
- `release/1.0` branch → Test-release-1.0 environment → `Environment-Test-release-1.0` variable group

UAT and Production environments are typically shared across all branches:
- `main` and `release/*` branches deploy to the same UAT → Prod chain
- Build version numbers make it clear which branch a deployment came from (e.g., 2.0.456 from `main` vs 1.0.789 from `release/1.0`)

Feature branches (`feature/{name}`) are used for developing new features in isolation:
- Created from `main` when starting work on a new feature
- Each has its own dev environment (e.g., Dev-widgets for `feature/widgets`)
- Merged back to `main` when the feature is complete

Feature branches are useful when multiple people need to work on different features simultaneously, or when a feature is experimental and shouldn’t disrupt main development.
Release branches (release/{version}) are used for maintaining released versions:
- Created by branching `release/1.0` from `main` at the point of release
- When you create one, bump `BaseVersion` on `main` to 2.0 for the next release
- Bug fixes are committed to `release/1.0` and deployed from there
- Fixes can be merged back to `main` where they also apply

Release branches are useful when you need to maintain a released version (fixing bugs in production) while development of the next version continues on `main`.
| Scenario | Recommended Approach | Dev Environment(s) |
|---|---|---|
| Solo developer, single version | Just use main branch | Dev-Main |
| Small team, single version | Just use main, share dev environment or use feature branches | Dev-Main (shared) or Dev-{feature} per developer |
| Multiple parallel features | Feature branches with separate dev environments per feature | Dev-Main + Dev-{feature} per feature |
| Released product with ongoing development | Release branches for maintenance, main for next version | Dev-Main + Dev-release-{version} per release |
You can always evolve your branching strategy as your project grows - the pipelines in this article support all of these approaches.
Before we dive into the pipelines, we need to set up variable libraries in Azure DevOps. These will store environment-specific configuration like URLs and credentials, so we can keep our pipeline definitions clean and reusable.
In Azure DevOps, go to Pipelines > Library and create a variable group for each environment:
| Variable Group Name | Variables |
|---|---|
| Environment-Dev-Main | EnvironmentUrl, plus environment variables and connection refs |
| Environment-Test | EnvironmentUrl, plus environment variables and connection refs |
| Environment-UAT | EnvironmentUrl, plus environment variables and connection refs |
| Environment-Prod | EnvironmentUrl, plus environment variables and connection refs |
For example, the Environment-Test group might contain:
- `EnvironmentUrl` = `https://myorg-test.crm11.dynamics.com`
- `ENVVAR_new_apiurl` = `https://api.test.example.com` (prefixed with `ENVVAR_`)
- `ENVVAR_new_apikey` = `test-api-key-12345` (🔒 mark as secret!)
- `CONNREF_new_sharepointconnection` = `12345678-1234-1234-1234-123456789012` (prefixed with `CONNREF_`)

Naming Convention: We use uppercase prefixes `ENVVAR_` and `CONNREF_` followed by the schema name. Azure DevOps converts all variable names to uppercase in the environment, so using uppercase in the library makes the pattern clearer.
Tip: Using variable libraries means you only need to update URLs and configuration in one place when environments change, rather than hunting through multiple pipeline files.
Security note: Mark secret values as secret variables in Azure DevOps. For production environments, consider linking your variable group to Azure Key Vault for enhanced security.
For pipelines that work with development environments (Export and Import), we use a branch-based naming convention to automatically determine which environment to use. This is especially useful when you have multiple development branches, each with its own Dataverse environment.
The convention works like this:
| Branch Name | Environment Name | Variable Group | Service Connection |
|---|---|---|---|
main | Dev-Main | Environment-Dev-Main | Dev-Main Environment Connection |
feature/widgets | Dev-Widgets | Environment-Dev-Widgets | Dev-Widgets Environment Connection |
feature/mobile | Dev-Mobile | Environment-Dev-Mobile | Dev-Mobile Environment Connection |
The scripts derive the environment name from the branch by:
- Taking the branch name with the `feature/` prefix removed (e.g., `widgets` from `feature/widgets`, `main` from `main`)
- Prefixing it with `Dev-` (e.g., Dev-Widgets, Dev-Main)

All branches use `Environment-{EnvironmentName}` for their variable group.
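If you prefer to see the convention as code, here’s a minimal PowerShell sketch of the same derivation the pipelines perform, assuming the branch’s short name is available in `$env:BUILD_SOURCEBRANCHNAME` (as it is inside an Azure DevOps job):

```powershell
# Derive convention-based resource names from the branch short name.
# Mirrors the template expression used in the pipeline YAML later in this article.
$branch = $env:BUILD_SOURCEBRANCHNAME                     # e.g. 'main' or 'widgets'
$branch = ($branch -replace '^feature/', '') -replace '/', '-'

$environmentName   = "Dev-$branch"                        # e.g. Dev-widgets
$variableGroup     = "Environment-$environmentName"       # e.g. Environment-Dev-widgets
$serviceConnection = "$environmentName Environment Connection"

Write-Host "Environment: $environmentName"
Write-Host "Variable group: $variableGroup"
Write-Host "Service connection: $serviceConnection"
```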
This approach has several benefits:
Setting up a new feature environment: When creating a new feature branch:
- Create a variable group called `Environment-Dev-{FeatureName}`
- Create a service connection called `Dev-{FeatureName} Environment Connection`

In addition to environment-specific variable groups, we also use branch-based variable groups for build versioning. Each branch can have its own base version number, stored in a variable library that the build pipeline references.
| Branch Name | Variable Group | Variables |
|---|---|---|
main | Branch-Main | BaseVersion = 2.0 |
release/1.0 | Branch-release-1.0 | BaseVersion = 1.0 |
release/1.1 | Branch-release-1.1 | BaseVersion = 1.1 |
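You can create these groups in the Azure DevOps UI, or - if you prefer the command line - with the Azure DevOps CLI. A sketch (the organisation and project names here are placeholders):

```powershell
# Requires the Azure CLI with the Azure DevOps extension (az extension add --name azure-devops)
az pipelines variable-group create `
    --name "Branch-release-1.0" `
    --variables BaseVersion=1.0 `
    --organization "https://dev.azure.com/yourorg" `
    --project "YourProject" `
    --authorize true
```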
The build pipeline dynamically determines the variable group name from the branch name (replacing / with -), then uses the BaseVersion variable in that group to construct the full version number:
Build Version = `$(BaseVersion).$(Build.BuildId)`

For example:
- `main` with BaseVersion = 2.0 and Build ID 456 → version 2.0.456
- `release/1.0` with BaseVersion = 1.0 and Build ID 457 → version 1.0.457

The main branch has a higher base version (2.0) because it represents the next major release being actively developed. Release branches (1.0, 1.1) have lower versions because they maintain older, stable releases with bug fixes only.
This approach has several benefits:
Setting up a new release branch: When creating release/1.0:
- Create a variable group called `Branch-release-1.0`
- Add a variable `BaseVersion` with value `1.0`
- Builds from that branch will then use `1.0.{BuildId}` as their version

The export pipeline is typically run manually when you want to capture the current state of your development environment into source control. It exports one or more solutions and unpacks them into a folder structure that’s friendly for source control.
When you export a solution as a .zip file, it’s essentially a binary blob - you can’t see what changed between versions. By unpacking the solution into its component files (XML, JavaScript, etc.), you get readable diffs, meaningful code reviews, and a proper history of what changed and when.
When you export a solution that has changes, the script automatically increments the solution’s version number. This happens in two places: the version is updated on the solution record in Dataverse, and the solution is then re-exported so the unpacked files committed to source control contain the new version.
This ensures that the version in source control always matches the development environment, and that every exported set of changes gets a distinct version number.
The version format follows semantic versioning: Major.Minor.Build.Revision (e.g., 1.0.0.5 → 1.0.0.6). The script increments the revision number (the last segment) for each export with changes.
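As a quick illustration of the increment logic used by the export script (a standalone sketch, not part of the pipeline):

```powershell
# Bump the revision segment of a Dataverse solution version (Major.Minor.Build.Revision)
$currentVersion = '1.0.0.5'
$parts = $currentVersion -split '\.'
$parts[3] = [int]$parts[3] + 1
$newVersion = $parts -join '.'    # -> 1.0.0.6
Write-Host "New version: $newVersion"
```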
First, create the export script at scripts/export.ps1. This keeps the logic in a maintainable PowerShell script:
```powershell
param(
    [Parameter(Mandatory=$true)]
    [string]$EnvironmentUrl,

    [Parameter(Mandatory=$true)]
    [string]$ClientId,

    [Parameter(Mandatory=$true)]
    [string]$ClientSecret,

    [Parameter(Mandatory=$true)]
    [string]$OutputPath,

    [Parameter(Mandatory=$true)]
    [string]$SourcesPath
)

$ErrorActionPreference = "Stop"

# Install the PowerShell module
Install-Module -Name Rnwood.Dataverse.Data.PowerShell -Force -Scope CurrentUser

# Connect to Dataverse and set as default
Get-DataverseConnection `
    -url $EnvironmentUrl `
    -ClientId $ClientId `
    -ClientSecret $ClientSecret `
    -SetAsDefault

# Define solutions to export
$solutions = @(
    @{ Name = 'CoreSolution'; Folder = 'solutions/CoreSolution' },
    @{ Name = 'ExtensionSolution'; Folder = 'solutions/ExtensionSolution' },
    @{ Name = 'IntegrationSolution'; Folder = 'solutions/IntegrationSolution' }
)

foreach ($solution in $solutions) {
    Write-Host "Exporting $($solution.Name)..."

    # Export solution to a temp zip file
    $tempZip = "$OutputPath/$($solution.Name)_temp.zip"
    Export-DataverseSolution `
        -SolutionName $solution.Name `
        -OutFile $tempZip

    # Clear the solution folder first to remove deleted components
    # (PAC unpack only overwrites - it doesn't remove files that no longer exist)
    $solutionFolder = "$SourcesPath/$($solution.Folder)"
    if (Test-Path $solutionFolder) {
        Remove-Item -Path $solutionFolder -Recurse -Force
    }

    # Unpack to the solution folder
    pac solution unpack `
        --zipfile $tempZip `
        --folder $solutionFolder `
        --packagetype Both `
        --allowWrite true

    # Use git to check if there are any changes (including deleted files)
    Push-Location $SourcesPath
    $gitStatus = git status --porcelain $($solution.Folder)
    $hasChanges = $null -ne $gitStatus -and $gitStatus.Length -gt 0
    Pop-Location

    if ($hasChanges) {
        Write-Host "Changes detected in $($solution.Name), incrementing version..."

        # Get current solution record from Dataverse
        $solutionRecord = Get-DataverseRecord -TableName solution -Filter @{ uniquename = $solution.Name } | Select-Object -First 1

        if (-not $solutionRecord) {
            Write-Host "Warning: Solution $($solution.Name) not found in Dataverse, skipping version increment"
            continue
        }

        # Parse and increment the version (format: Major.Minor.Build.Revision)
        $versionParts = $solutionRecord.version -split '\.'

        # Ensure we have at least 4 parts (pad with zeros if needed)
        while ($versionParts.Count -lt 4) {
            $versionParts += '0'
        }

        $versionParts[3] = [int]$versionParts[3] + 1
        $newVersion = $versionParts -join '.'

        Write-Host "Updating version from $($solutionRecord.version) to $newVersion"

        # Update the solution version in Dataverse
        $solutionRecord | Set-DataverseRecord -Values @{ version = $newVersion }

        # Re-export with the new version
        Export-DataverseSolution `
            -SolutionName $solution.Name `
            -OutFile "$OutputPath/$($solution.Name).zip"

        Write-Host "Unpacking $($solution.Name) with new version..."

        # Clear and unpack the solution with new version
        Remove-Item -Path $solutionFolder -Recurse -Force
        pac solution unpack `
            --zipfile "$OutputPath/$($solution.Name).zip" `
            --folder $solutionFolder `
            --packagetype Both `
            --allowWrite true

        Write-Host "$($solution.Name) exported with version $newVersion"
    } else {
        Write-Host "No changes detected in $($solution.Name), skipping..."
    }

    # Clean up temp files
    Remove-Item -Path $tempZip -Force -ErrorAction SilentlyContinue
}

Write-Host "Export complete!"
```

Create a file called `export-pipeline.yml` in your repository.
Creating the Pipeline in Azure DevOps: After creating the YAML file, you need to add it as a pipeline in Azure DevOps. Go to Pipelines > New Pipeline , select your repository, then choose Existing Azure Pipelines YAML file and select your YAML file. See Microsoft’s documentation for detailed instructions.
```yaml
trigger: none # Manual trigger only

pool:
  vmImage: 'ubuntu-latest'

variables:
  # Derive environment name from branch (main -> Dev-Main, feature/widgets -> Dev-Widgets)
  environmentName: Dev-${{ replace(replace(variables['Build.SourceBranchName'], 'feature/', ''), '/', '-') }}

stages:
- stage: Export
  displayName: 'Export Solutions'
  variables:
  - group: Environment-$(environmentName)
  jobs:
  - job: ExportSolutions
    displayName: 'Export and Unpack Solutions'
    steps:
    - checkout: self
      persistCredentials: true

    - task: microsoft-IsvExpTools.PowerPlatform-BuildTools.tool-installer.PowerPlatformToolInstaller@2
      displayName: 'Install Power Platform Build Tools'

    - task: microsoft-IsvExpTools.PowerPlatform-BuildTools.set-connection-variables.PowerPlatformSetConnectionVariables@2
      displayName: 'Set Connection Variables'
      name: connectionVariables
      inputs:
        authenticationType: 'PowerPlatformSPN'
        PowerPlatformSPN: '$(environmentName) Environment Connection'

    - pwsh: |
        & "$(Build.SourcesDirectory)/scripts/export.ps1" `
          -EnvironmentUrl "$(EnvironmentUrl)" `
          -ClientId "$(connectionVariables.BuildTools.ApplicationId)" `
          -ClientSecret "$(connectionVariables.BuildTools.ClientSecret)" `
          -OutputPath "$(Build.ArtifactStagingDirectory)" `
          -SourcesPath "$(Build.SourcesDirectory)"
      displayName: 'Run Export Script'

    - pwsh: |
        git config user.email "pipeline@azuredevops.com"
        git config user.name "Azure DevOps Pipeline"
        git add -A

        # Check if there are changes to commit
        $changes = git status --porcelain
        if ($changes) {
          git commit -m "Export solutions from $(environmentName) environment (version incremented)"
          git push origin HEAD:$(Build.SourceBranchName)
          Write-Host "Changes committed and pushed"
        } else {
          Write-Host "No changes to commit"
        }
      displayName: 'Commit and Push Changes'
      workingDirectory: $(Build.SourcesDirectory)
```

Key points about the export pipeline:

- The script uses `git status` to detect actual changes after unpacking, which is simpler and more reliable than file hash comparison. Git already knows how to compare files and handle line endings, timestamps, etc.
- The environment is derived from the branch: running on `main` uses `Dev-Main`, running on `feature/widgets` uses `Dev-Widgets`.
- The export logic lives in `scripts/export.ps1`, keeping the pipeline YAML clean and the logic maintainable.
- We use the `PowerPlatformSetConnectionVariables` task to extract credentials from the AzDO Service Connection, just like in the first article. This keeps credentials secure in the service connection rather than scattered across variable libraries.

Note: Make sure the build service account has permission to push to your repository. In Azure DevOps, you may need to grant “Contribute” permission to the project’s Build Service account.
The build pipeline triggers automatically when changes are pushed to your repository. It packs the solution source files back into .zip files ready for deployment.
Note: The build pipeline uses PAC CLI for packing solutions. While Rnwood.Dataverse.Data.PowerShell is excellent for Dataverse operations, PAC CLI is the standard tool for packing/unpacking solution files locally without connecting to an environment.
Each build is assigned a unique version number in the format $(BaseVersion).$(Build.BuildId) (e.g., 2.0.456). This version number is used as the pipeline’s build number, written to build-version.txt in the artifacts, used to tag the source commit, and displayed when the build is deployed.
Why a build version number rather than solution version numbers?
You might wonder why we use a single build version rather than the individual solution version numbers. There are several reasons:
The individual solution versions still exist and are important for Dataverse’s internal upgrade tracking - they just aren’t the primary identifier for your CI/CD pipeline.
When a build completes, we tag the source commit with the build version (e.g., v1.0.123). This means you can always get back to the exact source that produced any deployed build, compare what changed between two deployed versions, and branch from a released build if you need a hotfix.
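For example, to inspect the exact source that produced a deployed build (the tag name follows the `v{BuildNumber}` convention used by the build pipeline; `2.0.456` is just an illustrative number):

```powershell
# Fetch the build tags and check out the commit that build 2.0.456 was created from
git fetch --tags
git checkout v2.0.456
```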
Create the build script at scripts/build.ps1:
```powershell
param(
    [Parameter(Mandatory=$true)]
    [string]$SourcesPath,

    [Parameter(Mandatory=$true)]
    [string]$OutputPath,

    [Parameter(Mandatory=$true)]
    [string]$BuildVersion
)

$ErrorActionPreference = "Stop"

Write-Host "Building version: $BuildVersion"

# Install PAC CLI
dotnet tool install --global Microsoft.PowerApps.CLI.Tool

# Define solutions to pack
$solutions = @(
    @{ Name = 'CoreSolution'; Folder = 'solutions/CoreSolution' },
    @{ Name = 'ExtensionSolution'; Folder = 'solutions/ExtensionSolution' },
    @{ Name = 'IntegrationSolution'; Folder = 'solutions/IntegrationSolution' }
)

foreach ($solution in $solutions) {
    Write-Host "Packing $($solution.Name) (Unmanaged)..."
    pac solution pack `
        --zipfile "$OutputPath/$($solution.Name).zip" `
        --folder "$SourcesPath/$($solution.Folder)" `
        --packagetype Unmanaged

    Write-Host "Packing $($solution.Name) (Managed)..."
    pac solution pack `
        --zipfile "$OutputPath/$($solution.Name)_managed.zip" `
        --folder "$SourcesPath/$($solution.Folder)" `
        --packagetype Managed
}

# Save build version to a file so deploy pipeline can read it
$BuildVersion | Out-File -FilePath "$OutputPath/build-version.txt" -NoNewline

Write-Host "All solutions packed successfully! Build version: $BuildVersion"
```

Create a file called `build-pipeline.yml`:
```yaml
trigger:
  branches:
    include:
    - main
    - release/*
  paths:
    include:
    - solutions/**
    - scripts/**

pool:
  vmImage: 'ubuntu-latest'

# Derive variable group name from branch (e.g., release/2.0 -> Branch-release-2.0)
variables:
  versionGroupName: Branch-${{ replace(variables['Build.SourceBranchName'], '/', '-') }}

# Build name incorporates the base version from the variable library
# BaseVersion comes from the variable group (e.g., "1.0" or "2.0")
name: '$(BaseVersion).$(Build.BuildId)'

stages:
- stage: Build
  displayName: 'Build Solutions'
  variables:
  - group: $(versionGroupName)
  jobs:
  - job: PackSolutions
    displayName: 'Pack Solutions - $(Build.BuildNumber)'
    steps:
    - checkout: self
      persistCredentials: true

    - pwsh: |
        & "$(Build.SourcesDirectory)/scripts/build.ps1" `
          -SourcesPath "$(Build.SourcesDirectory)" `
          -OutputPath "$(Build.ArtifactStagingDirectory)" `
          -BuildVersion "$(Build.BuildNumber)"
      displayName: 'Run Build Script'

    - task: CopyFiles@2
      displayName: 'Copy Deployment Scripts to Staging'
      inputs:
        SourceFolder: '$(Build.SourcesDirectory)/scripts'
        Contents: '**'
        TargetFolder: '$(Build.ArtifactStagingDirectory)/scripts'

    - task: PublishBuildArtifacts@1
      displayName: 'Publish Solution Artifacts'
      inputs:
        PathtoPublish: '$(Build.ArtifactStagingDirectory)'
        ArtifactName: 'solutions'
        publishLocation: 'Container'

    # Tag the source commit with the build version
    - pwsh: |
        git config user.email "pipeline@azuredevops.com"
        git config user.name "Azure DevOps Pipeline"

        $tagName = "v$(Build.BuildNumber)"

        # Check if tag already exists
        $existingTag = git tag -l $tagName
        if ($existingTag) {
          Write-Host "Tag $tagName already exists, skipping..."
        } else {
          git tag -a $tagName -m "Build $(Build.BuildNumber)"
          git push origin $tagName
          if ($LASTEXITCODE -ne 0) {
            Write-Host "Warning: Failed to push tag, but continuing..."
          } else {
            Write-Host "Successfully created and pushed tag $tagName"
          }
        }
      displayName: 'Tag Source Commit'
      continueOnError: true
```

Key points about the build pipeline:

- The variable group is derived from the branch name (`Branch-Main` for `main`, `Branch-release-1.0` for `release/1.0`). The `BaseVersion` variable in that group (e.g., `2.0` for main, `1.0` for release) provides the major.minor portion of the version.
- The `name:` property sets the build number to `$(BaseVersion).$(Build.BuildId)`, so builds are easily identifiable (e.g., `2.0.456` from main, `1.0.457` from release). The job name also includes the version.
- The build version is written to `build-version.txt` in the artifacts, so the deploy pipeline can display it.
- Each build tags the source commit (e.g., `v2.0.456`) pointing to the exact commit that was built. The step handles duplicate tags gracefully.
- The packing logic lives in `scripts/build.ps1`, keeping the pipeline YAML clean and the logic maintainable.
- The pipeline triggers on pushes to `main` or `release/*` branches, but only if files in the `solutions/` or `scripts/` folder have changed.
- The deployment scripts in `scripts/` are copied into the build artifacts alongside the solution ZIPs.

The import pipeline is the reverse of the export pipeline - it takes solutions from source control and imports them into your development environment. This is essential for keeping your dev environment in sync with the codebase.
When working with Power Platform solutions in a team, you’ll frequently need to import changes from source control - for example after a teammate’s changes have been merged, when initialising a new feature branch’s dev environment, or when resetting a dev environment to match the codebase.
Without a pipeline for this, developers would have to manually pack and import solutions, which is tedious and error-prone.
Create the import script at scripts/import.ps1:
```powershell
param(
    [Parameter(Mandatory=$true)]
    [string]$EnvironmentUrl,

    [Parameter(Mandatory=$true)]
    [string]$ClientId,

    [Parameter(Mandatory=$true)]
    [string]$ClientSecret,

    [Parameter(Mandatory=$true)]
    [string]$SourcesPath,

    [Parameter(Mandatory=$true)]
    [string]$TempPath
)

$ErrorActionPreference = "Stop"

# Install the PowerShell module and PAC CLI
Install-Module -Name Rnwood.Dataverse.Data.PowerShell -Force -Scope CurrentUser
dotnet tool install --global Microsoft.PowerApps.CLI.Tool

# Connect to Dataverse and set as default
Get-DataverseConnection `
    -url $EnvironmentUrl `
    -ClientId $ClientId `
    -ClientSecret $ClientSecret `
    -SetAsDefault

# Define solutions to import (in dependency order)
$solutions = @(
    @{ Name = 'CoreSolution'; Folder = 'solutions/CoreSolution' },
    @{ Name = 'ExtensionSolution'; Folder = 'solutions/ExtensionSolution' },
    @{ Name = 'IntegrationSolution'; Folder = 'solutions/IntegrationSolution' }
)

# Build environment variables hashtable from prefixed environment variables
$envVars = @{}
Get-ChildItem env: | Where-Object { $_.Name -like 'ENVVAR_*' } | ForEach-Object {
    $schemaName = $_.Name -replace '^ENVVAR_', ''
    $envVars[$schemaName] = $_.Value
    Write-Host "Environment variable: $schemaName"
}

# Build connection references hashtable from prefixed environment variables
$connRefs = @{}
Get-ChildItem env: | Where-Object { $_.Name -like 'CONNREF_*' } | ForEach-Object {
    $schemaName = $_.Name -replace '^CONNREF_', ''
    $connRefs[$schemaName] = $_.Value
    Write-Host "Connection reference: $schemaName"
}

foreach ($solution in $solutions) {
    Write-Host "Packing $($solution.Name) from source..."

    $zipFile = "$TempPath/$($solution.Name).zip"

    # Pack the solution from source files (unmanaged for dev)
    pac solution pack `
        --zipfile $zipFile `
        --folder "$SourcesPath/$($solution.Folder)" `
        --packagetype Unmanaged

    Write-Host "Importing $($solution.Name)..."

    # Import to dev environment (unmanaged)
    Import-DataverseSolution `
        -InFile $zipFile `
        -EnvironmentVariables $envVars `
        -ConnectionReferences $connRefs `
        -Verbose
}

# Publish all customisations
Write-Host "Publishing customisations..."
Publish-DataverseCustomizations

Write-Host "Import complete!"
```

Create a file called `import-pipeline.yml`:
```yaml
trigger: none # Manual trigger only

pool:
  vmImage: 'ubuntu-latest'

variables:
  # Derive environment name from branch (main -> Dev-Main, feature/widgets -> Dev-Widgets)
  environmentName: Dev-${{ replace(replace(variables['Build.SourceBranchName'], 'feature/', ''), '/', '-') }}

stages:
- stage: Import
  displayName: 'Import Solutions'
  variables:
  - group: Environment-$(environmentName)
  jobs:
  - job: ImportSolutions
    displayName: 'Import Solutions to $(environmentName)'
    steps:
    - checkout: self

    - task: microsoft-IsvExpTools.PowerPlatform-BuildTools.tool-installer.PowerPlatformToolInstaller@2
      displayName: 'Install Power Platform Build Tools'

    - task: microsoft-IsvExpTools.PowerPlatform-BuildTools.set-connection-variables.PowerPlatformSetConnectionVariables@2
      displayName: 'Set Connection Variables'
      name: connectionVariables
      inputs:
        authenticationType: 'PowerPlatformSPN'
        PowerPlatformSPN: '$(environmentName) Environment Connection'
- pwsh: | & "$(Build.SourcesDirectory)/scripts/import.ps1" ` -EnvironmentUrl "$(EnvironmentUrl)" ` -ClientId "$(connectionVariables.BuildTools.ApplicationId)" ` -ClientSecret "$(connectionVariables.BuildTools.ClientSecret)" ` -SourcesPath "$(Build.SourcesDirectory)" ` -TempPath "$(Build.ArtifactStagingDirectory)" displayName: 'Run Import Script'scripts/import.ps1, keeping the pipeline YAML clean. The deploy pipeline takes the build artifacts and deploys them across your environments. Each environment is a separate stage with its own approval gates that run in the order they are defined.
A key design principle here is that the deployment scripts are included in the build artifacts and run from there, rather than being read from the repository at deployment time. This has several important benefits - most importantly, version consistency: the scripts that deploy a build are exactly the ones that were committed alongside the solutions in that build.
YAML Templates vs Scripts in Artifacts: You might wonder why we don’t use YAML templates (like templates/deploy-environment.yml). The problem is that YAML templates are resolved at pipeline compile time from the repository, not from artifacts. This means if you trigger a deployment of build #100 after the repository has changed, you’d get the new template code but the old solution artifacts - a mismatch that can cause subtle bugs.
First, create the deployment script at scripts/deploy.ps1. This script will be included in the build artifacts and called by the deploy pipeline:
```powershell
param(
    [Parameter(Mandatory=$true)]
    [string]$EnvironmentUrl,

    [Parameter(Mandatory=$true)]
    [string]$ClientId,

    [Parameter(Mandatory=$true)]
    [string]$ClientSecret,

    [Parameter(Mandatory=$true)]
    [string]$SolutionsPath,

    [Parameter(Mandatory=$true)]
    [string]$EnvironmentName
)

$ErrorActionPreference = "Stop"

# Install the PowerShell module
Install-Module -Name Rnwood.Dataverse.Data.PowerShell -Force -Scope CurrentUser

# Connect and set as default
Get-DataverseConnection `
    -url $EnvironmentUrl `
    -ClientId $ClientId `
    -ClientSecret $ClientSecret `
    -SetAsDefault

# Solutions in dependency order (base solutions first)
$solutions = @('CoreSolution', 'ExtensionSolution', 'IntegrationSolution')

# Build environment variables hashtable from prefixed environment variables
$envVars = @{}
Get-ChildItem env: | Where-Object { $_.Name -like 'ENVVAR_*' } | ForEach-Object {
    $schemaName = $_.Name -replace '^ENVVAR_', ''
    $envVars[$schemaName] = $_.Value
    Write-Host "Environment variable: $schemaName"
}

# Build connection references hashtable from prefixed environment variables
$connRefs = @{}
Get-ChildItem env: | Where-Object { $_.Name -like 'CONNREF_*' } | ForEach-Object {
    $schemaName = $_.Name -replace '^CONNREF_', ''
    $connRefs[$schemaName] = $_.Value
    Write-Host "Connection reference: $schemaName"
}

# STEP 1: Stage all solutions as holding solutions (in dependency order)
Write-Host "=== STAGING PHASE ==="
foreach ($solution in $solutions) {
    Write-Host "Staging $solution..."
    Import-DataverseSolution `
        -InFile "$SolutionsPath/${solution}_managed.zip" `
        -Mode HoldingSolution `
        -EnvironmentVariables $envVars `
        -ConnectionReferences $connRefs `
        -Verbose
}

# STEP 2: Apply upgrades in REVERSE dependency order
Write-Host "=== UPGRADE PHASE ==="
$reverseSolutions = $solutions[($solutions.Count-1)..0]
foreach ($solution in $reverseSolutions) {
    Write-Host "Upgrading $solution..."
    Import-DataverseSolution `
        -InFile "$SolutionsPath/${solution}_managed.zip" `
        -Mode StageAndUpgrade `
        -Verbose
}

# Publish all customisations
Write-Host "Publishing customisations..."
Publish-DataverseCustomizations

# Activate any workflows/flows that should be active
Write-Host "Checking process activation..."
Get-DataverseRecord -TableName workflow -Filter @{
    statecode = 0
    "category:In" = @(0, 5, 6) # Workflows, Business Rules, Modern Flows
} | ForEach-Object {
    Write-Host "Activating: $($_.name)"
    $_ | Set-DataverseRecord -Values @{ statecode = 1 }
}

Write-Host "Deployment to $EnvironmentName complete!"
```

Now create the main pipeline file deploy-pipeline.yml. The pipeline still uses YAML templates for the stage structure (to avoid repetition), but the core deployment logic runs from the script in the artifacts:
First, create the template at templates/deploy-environment.yml:
```yaml
parameters:
- name: environmentName
  type: string
- name: dependsOn
  type: string
  default: ''

stages:
- stage: DeployTo${{ parameters.environmentName }}
  displayName: 'Deploy to ${{ parameters.environmentName }}'
  ${{ if ne(parameters.dependsOn, '') }}:
    dependsOn: DeployTo${{ parameters.dependsOn }}
  variables:
  # Convention-based variable group: Environment-{EnvironmentName}
  - group: Environment-${{ parameters.environmentName }}
  jobs:
  - deployment: Deploy${{ parameters.environmentName }}
    displayName: 'Deploy to ${{ parameters.environmentName }} Environment'
    environment: '${{ parameters.environmentName }}'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: DownloadPipelineArtifact@2
            displayName: 'Download Solution Artifacts'
            inputs:
              buildType: 'specific'
              project: '$(System.TeamProjectId)'
              definition: '$(resources.pipeline.build.pipelineID)'
              buildVersionToDownload: 'latest'
              artifactName: 'solutions'
              targetPath: '$(Pipeline.Workspace)/solutions'

          # Display the build version being deployed
          - pwsh: |
              $version = Get-Content "$(Pipeline.Workspace)/solutions/build-version.txt"
              Write-Host "##[section]Deploying Build Version: $version"
              Write-Host "##vso[build.addbuildtag]$version"
            displayName: 'Display Build Version'

          - task: microsoft-IsvExpTools.PowerPlatform-BuildTools.tool-installer.PowerPlatformToolInstaller@2
            displayName: 'Install Power Platform Build Tools'

          - task: microsoft-IsvExpTools.PowerPlatform-BuildTools.set-connection-variables.PowerPlatformSetConnectionVariables@2
            displayName: 'Set Connection Variables'
            name: connectionVariables
            inputs:
              authenticationType: 'PowerPlatformSPN'
              # Convention-based service connection: {EnvironmentName} Environment Connection
              PowerPlatformSPN: '${{ parameters.environmentName }} Environment Connection'

          - pwsh: |
              # Run the deployment script FROM THE ARTIFACTS
              & "$(Pipeline.Workspace)/solutions/scripts/deploy.ps1" `
                -EnvironmentUrl "$(EnvironmentUrl)" `
                -ClientId "$(connectionVariables.BuildTools.ApplicationId)" `
                -ClientSecret "$(connectionVariables.BuildTools.ClientSecret)" `
                -SolutionsPath "$(Pipeline.Workspace)/solutions" `
                -EnvironmentName "${{ parameters.environmentName }}"
            displayName: 'Run Deployment Script from Artifacts'
```

Then create the main pipeline file `deploy-pipeline.yml`:
```yaml
trigger: none # Triggered by build completion or manually

resources:
  pipelines:
  - pipeline: build
    source: 'Build Pipeline'
    trigger:
      branches:
        include:
        - main
        - release/*

pool:
  vmImage: 'ubuntu-latest'

# Deploy pipeline name incorporates the build version being deployed
# This is set dynamically in the first step after downloading artifacts
name: 'Deploy-$(resources.pipeline.build.runName)'

stages:
# Deploy to Test (no dependencies)
- template: templates/deploy-environment.yml
  parameters:
    environmentName: Test

# Deploy to UAT (depends on Test)
- template: templates/deploy-environment.yml
  parameters:
    environmentName: UAT
    dependsOn: Test

# Deploy to Production (depends on UAT)
- template: templates/deploy-environment.yml
  parameters:
    environmentName: Prod
    dependsOn: UAT
```

Notice how clean this is - we only specify the environment name and dependencies. The template automatically derives the variable group (`Environment-{EnvironmentName}`) and service connection (`{EnvironmentName} Environment Connection`) from the environment name.
Compare this to having the full deployment steps repeated three times! The template approach keeps the main pipeline file focused on what environments to deploy to, while the deploy.ps1 script in the artifacts handles the actual deployment logic.
The deploy template now uses the same convention-based approach we use for dev environments. Given an environment name, it automatically determines:
| Resource Type | Naming Convention | Example for “Test” |
|---|---|---|
| Variable Group | Environment-{EnvironmentName} | Environment-Test |
| Service Connection | {EnvironmentName} Environment Connection | Test Environment Connection |
| AzDO Environment | {EnvironmentName} | Test |
This approach has several benefits:
Adding a new environment: To add a new environment (e.g., “Staging”):
- Create a variable group called `Environment-Staging`
- Create a service connection called `Staging Environment Connection`
- Create an Azure DevOps environment called `Staging` (for approvals)
- Add a new stage to deploy-pipeline.yml using the template with `environmentName: Staging` (see the sketch below)
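For example, the new stage in `deploy-pipeline.yml` might look like this (a sketch - where Staging sits in your chain is up to you):

```yaml
# Deploy to Staging (here placed after Test)
- template: templates/deploy-environment.yml
  parameters:
    environmentName: Staging
    dependsOn: Test
```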
The key insight is the separation between:

- The YAML template (`templates/deploy-environment.yml`) - this defines the pipeline structure: stages, jobs, approval gates, and how to download artifacts. Changes here affect the pipeline flow but not what the deployment actually does.
- The deployment script (`scripts/deploy.ps1`) - this contains the actual deployment logic: which solutions to import, in what order, how to handle environment variables, etc. This is versioned with the build artifacts.

This means if you need to change how deployments work (e.g., add a new solution, change the import order), you modify `scripts/deploy.ps1` and commit it. The next build will include the updated script in its artifacts, and deployments of that build will use the new logic.
The template takes two parameters:
| Parameter | Purpose |
|---|---|
environmentName | Environment name used for display, convention-based resource names, and passed to the deployment script |
dependsOn | (Optional) Name of the environment that must complete first (e.g., “Test” for UAT) |
Notice the name: property in the deploy pipeline:
```yaml
name: 'Deploy-$(resources.pipeline.build.runName)'
```

This sets the deploy pipeline run name to include the build version it’s deploying (e.g., `Deploy-2.0.456`). Combined with the build pipeline name that incorporates the version, this means:
- Build pipeline runs are named `2.0.456` (from main) or `1.0.457` (from a release branch)
- Deploy pipeline runs are named `Deploy-2.0.456`

This makes it immediately clear which version is being deployed when you look at the pipeline runs list in Azure DevOps.
The Import-DataverseSolution cmdlet is intelligent about how it imports solutions: it detects whether the solution already exists in the target and performs either a fresh import or an upgrade accordingly.
This means you don’t need separate logic for first-time deployments vs updates - the cmdlet handles it automatically.
Why is upgrade important? Using a simple import (update) instead of upgrade can leave behind deleted components in your target environments, causing inconsistencies and unexpected behaviour. Learn more about solution upgrade vs import.
When you have multiple solutions with dependencies, the upgrade order matters. Here’s why we stage all solutions first, then apply upgrades in reverse dependency order:
Example: If IntegrationSolution uses a workflow from CoreSolution, and you’re removing that workflow in the new version of CoreSolution, you need to stage the new versions of both solutions first, then upgrade IntegrationSolution (so it no longer references the workflow), and only then upgrade CoreSolution (which deletes the workflow).
If you did it the other way around, the upgrade would fail because IntegrationSolution still references the workflow.
Key points about the deployment script:

- The deployment logic lives in `scripts/deploy.ps1` and is included in the build artifacts, ensuring version consistency
- Because we use `-SetAsDefault` on `Get-DataverseConnection`, we don’t need to pass `-Connection` to every cmdlet
- Variables prefixed with `ENVVAR_` in the library are automatically collected and passed to the import
- Variables prefixed with `CONNREF_` in the library are automatically collected and passed to the import
- The script queries the `workflow` table for draft processes and activates them

To add approval gates to your deployments:
In Azure DevOps, go to Pipelines > Environments, create an environment for each stage (Test, UAT, Prod), and add an approval check to any environment that needs sign-off. This gives you a controlled release process where, for example, deployments to Test run automatically but UAT and Production require an approver to sign off.
Workflows, business rules, and cloud flows (modern flows) are stored in the workflow table in Dataverse. After importing a solution, these processes may be in a draft state and need to be activated.
The deployment script includes this step:
Write-Host "Checking process activation..."
Get-DataverseRecord -TableName workflow -Filter @{ statecode = 0; "category:In" = @(0, 5, 6) # Workflows, Business Rules, Modern Flows} | ForEach-Object { Write-Host "Activating: $($_.name)" $_ | Set-DataverseRecord -Values @{ statecode = 1 }}This uses the PowerShell pipeline pattern we learned in the first article:
- `Get-DataverseRecord` queries for all draft processes, filtering by category
- `Set-DataverseRecord` updates each record to set `statecode = 1` (activated)

Note: You might want to be more selective about which processes to activate. You could filter by solution to make this easy to maintain.
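One simple way to be selective is a naming convention - a sketch assuming your processes share an agreed prefix such as "CORE - " (filtering by true solution membership would instead mean querying the solutioncomponent table):

```powershell
# Only activate draft processes whose names start with the agreed prefix
Get-DataverseRecord -TableName workflow -Filter @{ statecode = 0 } |
    Where-Object { $_.name -like 'CORE - *' } |
    ForEach-Object {
        Write-Host "Activating: $($_.name)"
        $_ | Set-DataverseRecord -Values @{ statecode = 1 }
    }
```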
The deployment script automatically collects environment-specific values from the variable library using prefixes:
In your variable library, add variables with these prefixes:
- `ENVVAR_new_apiurl` → The environment variable value for `new_apiurl`
- `CONNREF_new_sharepointconnection` → The connection reference value (connection ID) for `new_sharepointconnection`

The script collects these at runtime:
```powershell
# Collect environment variables from prefixed library variables
$envVars = @{}
Get-ChildItem env: | Where-Object { $_.Name -like 'ENVVAR_*' } | ForEach-Object {
    $schemaName = $_.Name -replace '^ENVVAR_', ''
    $envVars[$schemaName] = $_.Value
}

# Collect connection references from prefixed library variables
$connRefs = @{}
Get-ChildItem env: | Where-Object { $_.Name -like 'CONNREF_*' } | ForEach-Object {
    $schemaName = $_.Name -replace '^CONNREF_', ''
    $connRefs[$schemaName] = $_.Value
}
```

These are then passed to `Import-DataverseSolution`:

```powershell
Import-DataverseSolution `
    -InFile "solution.zip" `
    -EnvironmentVariables $envVars `
    -ConnectionReferences $connRefs
```

You now have a fairly complete ALM setup for your Power Platform solutions using the Rnwood.Dataverse.Data.PowerShell module:
| Pipeline | Trigger | Purpose |
|---|---|---|
| Export | Manual | Capture development environment changes into source control (with auto-versioning) |
| Build | Automatic (on commit) | Pack solutions, create versioned artifacts, and tag source commit |
| Import | Manual | Import solutions from source control into your dev environment |
| Deploy | Automatic/Manual | Deploy through Test → UAT → Production using scripts from the build artifacts |
The key benefits of this approach:
- Branch-based versioning: `main` uses 2.0.x (next major release) while `release/1.0` uses 1.0.x (maintenance releases)
- Change detection uses `git status`, which is simpler and more reliable than file hash comparison

This section explains how to use the pipelines in your day-to-day workflow. Start with the basic workflow and add the advanced steps as your project grows.
Basic daily workflow (main branch): If you’re starting simple with just the main branch, here’s your daily workflow:
- Make your customisations in the `Dev-Main` environment
- Run the Export pipeline on the `main` branch when you’re ready to save your work
- The commit triggers an automatic build on `main`

When you need to get the latest changes (e.g., a teammate made changes):
- Run the Import pipeline on the `main` branch

When to use feature branches: Feature branches are useful when multiple people need to work on different features simultaneously, or when a feature is experimental and shouldn’t disrupt main development. Skip this section if you’re working solo or on a small team with one active workstream.
Create the branch in Azure DevOps:
- Name it `feature/widgets` and base it on `main`

Create the environment resources in Azure DevOps:
- A development environment `Dev-widgets`
- A variable group `Environment-Dev-widgets` (with `EnvironmentUrl` and any env vars)
- A service connection `Dev-widgets Environment Connection`

Run the Import pipeline on the `feature/widgets` branch to initialise the dev environment with the current state from `main`
Develop your feature - export and import on the feature branch to save/load changes
- When the feature is complete, raise a pull request from `feature/widgets` → `main`
- After the PR is merged, run the Import pipeline on `main` to get the merged changes into `Dev-Main`

If the PR has merge conflicts, you can resolve them directly in Azure DevOps:
- Resolve the conflicts in the PR, complete the merge, then run Import on `main` to update `Dev-Main` with the merged result

Tip: For complex solution conflicts, the Pull Request Merge Conflict Extension provides enhanced conflict resolution capabilities directly in Azure DevOps.
When to use release branches: Release branches are for when you need to maintain a released version (e.g., patch bugs in v1.0) while continuing development on the next version (v2.0). Skip this section if you only have one version in production.
Create the release branch in Azure DevOps:
- Name it `release/1.0` and base it on `main`

Create the branch variable group in Azure DevOps:
- Create a variable group called `Branch-release-1.0`
- Add a variable `BaseVersion = 1.0`

Bump the main branch version:
- In the `Branch-Main` variable group, set `BaseVersion = 2.0`

Create the environment resources for the release branch:
- A development environment `Dev-release-1.0`
- A variable group `Environment-Dev-release-1.0`
- A service connection `Dev-release-1.0 Environment Connection`
- A test environment `Test-release-1.0` with corresponding variable group and service connection (recommended)

Run Import on `release/1.0` to initialise the release dev environment
When you need to fix a bug in the released version:

- Run pipelines with `release/1.0` selected from the branch dropdown
- Run Import on `release/1.0` to sync your release dev environment
- Make the fix, then run Export on `release/1.0` to capture the fix
- Builds from `release/1.0` will have version 1.0.x and deploy through Test → UAT → Prod
- If the fix is also needed in the next version, raise a PR from `release/1.0` to `main` containing just the fix commit

| Task | Pipeline | Branch | Notes |
|---|---|---|---|
| Save my changes | Export | Current branch | Commits to the branch you’re on |
| Build for testing | (Automatic) | main or release/* | Triggers on push |
| Sync from teammates | Import | Current branch | Updates dev env from source |
| Start new feature | Import | feature/* | After creating branch and resources |
| Finish feature | PR + Import | main | Merge PR, then import to main dev |
| Release a version | Create branch | release/* | Then update main’s BaseVersion |
| Hotfix old version | Export/Deploy | release/* | Cherry-pick to main if needed |
| Issue | Cause | Solution |
|---|---|---|
| Import fails with “solution dependency” | Solutions imported in wrong order | Check $solutions array in scripts - base solutions first |
| Export shows no changes | No actual changes, or unpack location wrong | Check the solution folder path matches |
| Deploy shows old version | Pipeline cached or wrong build | Check the build artifact being deployed |
| Feature branch has wrong solution state | Didn’t import before starting | Run Import on the feature branch first |
| Merge conflict in Solution.xml | Two people changed version number | Keep the higher version, or resolve manually |
Now that you have the basics in place, you can add almost any custom steps that your solution needs. For instance, managing important configuration data. See the versioning config data article for details.
Find out more about what you can do in the Rnwood.Dataverse.Data.PowerShell documentation.
I’ll post some more examples in future.
Happy automating!