Bitbucket CI/CD pipelines for Optimizely CMS 12


A big reason behind this post is that this topic, despite being very relevant, has little to no coverage online so far. There are posts confirming it is possible, and generic build pipelines for .NET Core deployments to Azure, but none describing how to build Bitbucket CI/CD pipelines for Optimizely CMS 12 specifically. Because it took several weeks of research and trial and error to get this working, the effort felt worth sharing so others don't have to spend that time.

If you’re reading this, you’ve either already worked with Azure DevOps and its pipelines and need a similar solution in Bitbucket, or you’re brand new to DevOps and starting with Bitbucket. For context, here are some key differences between Azure DevOps and Bitbucket pipelines, for anyone trying to reference Azure DevOps pipelines while building Bitbucket ones:

  1. They clearly differ in syntax – they define triggers, variables and steps differently; they read variable values differently; the way scripts are run varies between the two; and much more (see the sketch after this list).
  2. They differ in how many YAML files make up the full CI/CD pipeline – in Azure DevOps, you can create two separate YAML files, one for build and another for release. In Bitbucket, there is a single YAML file called bitbucket-pipelines.yml.
  3. Bitbucket pipelines are also extremely sensitive to formatting and indentation. You will actually see red squiggly lines in your pipeline code if it's not indented correctly, and Bitbucket provides an online validator to help resolve those kinds of errors before you run the pipelines.
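
To make the first difference concrete, here is a minimal sketch of how the same variable would be read in each system; someVar is just a placeholder name:

# Azure DevOps (azure-pipelines.yml) – macro syntax resolves pipeline variables
steps:
  - script: echo $(someVar)

# Bitbucket (bitbucket-pipelines.yml) – script lines are shell commands
- step:
    script:
      - echo $someVar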

Must-know things for Optimizely deployments

Before we jump into how we did this in Bitbucket, let's look at a few must-knows when building pipelines for Optimizely deployments, regardless of whether they are built in Azure DevOps or Bitbucket:

  1. This needs PowerShell – Optimizely provides a custom PowerShell module called EpiCloud to enable deployment through the Deployment API. It has most scripts pre-created; you just need to call them with the right project-specific parameter values. You can find details on this here.
  2. Code package format – the final code package that you build for deployment needs to follow a specific format, and if it doesn't, you'll either hit errors during deployment or get a faulty deployment that may launch the site with errors. Here are the details on what the package should include, and a sketch of the layout follows below.
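
To illustrate the second must-know (mysite and the version number are hypothetical; this mirrors the zip step later in this post, which places the dotnet publish output at the package root):

mysite.app.1.0.1.nupkg      <- a regular zip archive, named {app}.app.{version}.nupkg
├── mysite.dll              <- dotnet publish output at the package root
├── appsettings.json
└── wwwroot/                <- static assets, including the compiled frontend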

Jump to bitbucket-pipelines.yml

Let's jump straight into how we built our Bitbucket pipeline. I'll explain it step by step, with code, to make it easier:

Docker Image

Deployments are containerized these days, and Docker images are a key component of container-based deployment, providing a reliable and consistent way to package, distribute, and run applications. The image is the first thing you need to define in your bitbucket-pipelines.yml.

In Bitbucket, you would usually start from a template that already has a Docker image on it, but it may not match the framework or version you need. Refer here to find the right Docker image for you. In my case, I used the .NET 6 SDK image for my Optimizely CMS 12 project:

image: mcr.microsoft.com/dotnet/sdk:6.0

Pipeline Trigger

You can create multiple pipelines within the same YAML file and define either branch names or pull requests as their triggers. I needed the build pipeline to run when code is committed to the develop branch, so I used that as my trigger:

pipelines:
  branches: 
    develop:
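
For orientation, here is the overall shape of the file once the image, the trigger and the steps (described next) come together – a minimal sketch with the step bodies elided:

image: mcr.microsoft.com/dotnet/sdk:6.0

pipelines:
  branches:
    develop:
      - step:
          name: Frontend build
          # ...
      - step:
          name: .NET Core build, publish and package
          # ...
      - step:
          name: Upload and deploy to integration
          # ...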

Steps

Each action you want to take as part of the pipeline becomes a step. You would ideally give each step a name and a script. In addition, you can add a Docker image to a step, which is then specific to that step. For example, when I needed to build the frontend piece of my code, which used Node and npm, I added a Node docker image on that step. You can also add caches to a step, which cache dependencies from previous builds to enable faster builds.

Here are the two steps I had – one for the frontend build and another for the Optimizely .NET code build:

- step:
    name: Frontend build
    image: node:16.19.1
    caches:
      - node
    script:
      # Run commands from within the frontend directory where package.json lives
      - pushd frontend-dir
      - npm install
      - npm run build:${projectname}
      # Go back to the previous directory
      - popd
      - mkdir wwwroot
      # Copy FED files from the project's wwwroot folder to the clone dir's wwwroot folder
      - cp -avr ${projectpath}/wwwroot/. wwwroot
    artifacts:
      - wwwroot/**

 

- step:
    name: .NET Core build, publish and package
    caches:
      - dotnetcore
    script:
      # Update package source & install the zip utility
      - apt-get update
      - apt-get install zip -y
      # Create variables
      - export VERSION=1.0.$BITBUCKET_BUILD_NUMBER   # choose your own versioning strategy here
      - export PROJECT_PATH="$BITBUCKET_CLONE_DIR/${projectpath}/${projectname}.csproj"
      - export PUBLISH_LOC=$BITBUCKET_CLONE_DIR/OutputFiles   # new folder to hold the published code
      - export PACKAGE_NAME=${projectname}.app.$VERSION.nupkg   # this needs .app and .{version} in it, per the Optimizely code package format
      # Create deployment folders
      - mkdir -p $PUBLISH_LOC
      - mkdir package
      - mkdir $PUBLISH_LOC/wwwroot
      # Restore project, build, and publish
      - dotnet restore ${solutionName}
      - dotnet build ${solutionName} --no-restore --configuration ${buildConfiguration}
      - dotnet publish $PROJECT_PATH --configuration ${buildConfiguration} --no-build --output $PUBLISH_LOC
      # Copy the wwwroot files into the publish location
      - cp -avr wwwroot/. $PUBLISH_LOC/wwwroot
      # Temporarily run commands from the new directory
      - pushd $PUBLISH_LOC
      # Zip the directory contents into the NUPKG file ($PACKAGE_NAME already ends in .nupkg)
      - zip -r $BITBUCKET_CLONE_DIR/package/$PACKAGE_NAME ./*
    artifacts:
      - package/**

A few elaborations on the above code:

  1. Everything displayed as ${} is a variable defined under Repository Settings in Bitbucket, e.g. ${solutionName}.
  2. Everything displayed as $ is a local variable defined within the pipeline itself with an export script, e.g. $VERSION.
  3. It uses a number of bash commands like pushd, popd, mkdir, cp etc., mostly to aid in working with the files and directories within $BITBUCKET_CLONE_DIR, where the code gets copied for the pipeline. Here's a cheatsheet to get a quick gist of what these do.
  4. We use the apt-get command-line tool to install needed packages, like zip in our case.
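
Because the package format is strict, it can also help to sanity-check the archive before deploying. A hypothetical extra line for the build step's script (zip's -sf flag lists an archive's contents):

# The publish output and wwwroot/ should sit at the root of the package
- zip -sf $BITBUCKET_CLONE_DIR/package/$PACKAGE_NAME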

Artifacts

Steps can be followed by artifacts, which are essentially the end results of that step: if we want to pass generated values or content from one step to another for further processing, we add it to the artifacts – for example, passing a variable value from one step to the next (a sketch of this follows below).

In our case, we ran the frontend build first, which left all the compiled frontend files in the wwwroot folder. So we added that folder to the artifacts of that step, which allowed us to use those files in the next step when building the final package for deployment.
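
Bitbucket doesn't pass environment variables between steps on its own, so the usual pattern for the variable case is to write the value to a file, declare the file as an artifact, and source it in the next step. A minimal sketch (set-env.sh is a hypothetical file name):

- step:
    name: Produce a value
    script:
      - echo "export VERSION=1.0.$BITBUCKET_BUILD_NUMBER" >> set-env.sh
    artifacts:
      - set-env.sh
- step:
    name: Consume the value
    script:
      - source set-env.sh
      - echo $VERSION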

Deployment Step

This is the final piece of our build pipeline, where we pick up the code package generated in the .NET build step and deploy it to the DXP Integration environment. Before we can do this, though, we need to go through some extra steps:

  1. Get Deployment API credentials from the PaaS portal.
  2. Add those values as variables in Bitbucket Repository Settings under Pipelines > Deployments, under the individual environments (the expected names are listed below).
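
For reference, the script later in this post reads three of these values, so the Integration deployment environment needs variables with matching names:

ClientKey      # Deployment API key from the PaaS portal
ClientSecret   # Deployment API secret
ProjectID      # DXP project ID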

The step would look something like this:

- step:
    name: Upload and deploy to integration
    image: mcr.microsoft.com/powershell:latest   # docker image that allows running PowerShell scripts on this step
    deployment: Integration   # must match the environment name under Deployment settings; without it, the step can't read the environment variables ClientKey, ClientSecret and ProjectID
    script:

As you can see, we added a new Docker image here to support running PowerShell scripts on this step. We also added a deployment value to indicate the environment this deployment targets, and to tell the pipeline which environment variables to pull for this step.

Next, we pass these environment variable values to the EpiCloud PowerShell scripts to authenticate and upload the package to the deployment slot location, from where it gets picked up for the actual deployment.
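
One way to wire this into the step's empty script section (a sketch, not the only option) is to save the PowerShell below as deploy.ps1 (a hypothetical name) in the repository, give it a matching param block, and invoke it with pwsh:

# bitbucket-pipelines.yml – the deployment step's script section
script:
  - pwsh -File ./deploy.ps1 -ClientKey "$ClientKey" -ClientSecret "$ClientSecret" -ProjectID "$ProjectID" -ArtifactPath "$BITBUCKET_CLONE_DIR/package"

# deploy.ps1 – first line, so the variables used below resolve as parameters
param($ClientKey, $ClientSecret, $ProjectID, $ArtifactPath)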

#Install the EpiCloud PowerShell module
Set-PSRepository -Name "PSGallery" -InstallationPolicy Trusted
Install-Module EpiCloud -Scope CurrentUser -Repository PSGallery -AllowClobber -MinimumVersion 1.0.0 -Force

#From the artifact path (the package folder produced by the build step), get the nupkg file
$packagePath = Get-ChildItem -Path $ArtifactPath -Filter *.nupkg

#Setting up the object for the Epi deployment. These credential values come from the Deployment API settings in the PaaS portal.
$getEpiDeploymentPackageLocationSplat = @{
    ClientKey = "$ClientKey"
    ClientSecret = "$ClientSecret"
    ProjectId = "$ProjectID"
}

#Generating the Blob storage location URL to upload the package
$packageLocation = Get-EpiDeploymentPackageLocation @getEpiDeploymentPackageLocationSplat

#Uploading the package to the Blob location
Add-EpiDeploymentPackage -SasUrl $packageLocation -Path $packagePath.FullName

#Setting up the object for starting the deployment (TargetEnvironment is assumed to be Integration here, matching the step's deployment environment)
$startEpiDeploymentSplat = @{
    ClientKey = "$ClientKey"
    ClientSecret = "$ClientSecret"
    ProjectId = "$ProjectID"
    DeploymentPackage = $packagePath.Name
    TargetEnvironment = "Integration"
}

#Starting the Deployment
$deploy = Start-EpiDeployment @startEpiDeploymentSplat

$deployId = $deploy | Select -ExpandProperty "id"

#Setting up the object for the EpiServer Deployment Updates
$getEpiDeploymentSplat = @{
    ProjectId = "$ProjectID"
    ClientSecret = "$ClientSecret"
    ClientKey = "$ClientKey"
    Id = "$deployId"
}

#Setting up Variables for progress output
$percentComplete = 0
$currDeploy = Get-EpiDeployment @getEpiDeploymentSplat | Select-Object -First 1
$status = $currDeploy | Select -ExpandProperty "status"
$exit = 0

#(Write-Host is enough here; Bitbucket has no equivalent of Azure DevOps ##vso progress commands)
Write-Host "Percent Complete: $percentComplete%"

#While the exit flag is not true
while($exit -ne 1){

#Get the current Deploy
$currDeploy = Get-EpiDeployment @getEpiDeploymentSplat | Select-Object -First 1

#Set the current Percent and Status
$currPercent = $currDeploy | Select -ExpandProperty "percentComplete"
$status = $currDeploy | Select -ExpandProperty "status"

#If the current percent is not equal to what it was before, send an update
#(This is done this way to prevent a bunch of messages to the screen)
if($currPercent -ne $percentComplete){
    Write-Host "Percent Complete: $currPercent%"
    #Set the overall percent complete variable to the new percent complete
    $percentComplete = $currPercent
}

#If the Percent Complete is equal to 100%, Set the exit flag to true
if($percentComplete -eq 100){
    $exit = 1    
}

#If the status of the deployment is not what it should be for this script, set the exit flag to true
if($status -ne 'InProgress'){
    $exit = 1
}

#Wait 1 second between checks
Start-Sleep -Seconds 1

}

#If the status is set to Failed, throw an error
if($status -eq "Failed"){
    throw "Deployment Failed. Errors: `n$($deploy.deploymentErrors)"
}

As the above deployment step runs, once the deployment process begins, the PaaS portal will start reflecting it as well. If it fails for some reason, the pipeline will also fail with appropriate messaging. If the deployment succeeds, you'll see that in the PaaS portal deployment history and can then test your site on the DXP environment.

Conclusion

The release pipeline still needs to be built; I assume it will be similar to the build pipeline, with just a different branch trigger and a different deployment environment. In Azure DevOps, release pipelines have an additional approval cycle to gate production deployments until the preproduction deployment is verified. That will be the next piece I research for Bitbucket pipelines.

Lastly, I want to thank two fellow OMVPs who helped me navigate some of these unfamiliar paths to reach the end goal – first David Lewis, my mentor here at Perficient, and second Eric Markson, ex-colleague and the person who helped set up the Azure DevOps pipelines before this.

If you have feedback to make this better, or have already figured out the next step for release pipelines and production deployment approvals in Bitbucket, please share your thoughts, comments and concerns below.
