Project 16 → End-to-End Automation with Azure DevOps CI/CD Pipelines + GitOps
This project is about making it easier to build, test, and launch applications using Azure DevOps. Instead of doing everything manually, it sets up an automated process called a CI/CD pipeline. This pipeline takes care of building your code, testing it to make sure everything works, and then deploying it to the environment you choose, like development, testing, or production.
Completion Steps:
Phase 1: Continuous Integration on the Azure DevOps Platform
Step 1: Set up Azure Repos
Step 2: Set up Azure Container Registry
Step 3: Create Azure Pipelines
Step 4: Set up the agent pool
Step 5: Set up pipelines for the voting and worker microservices
Phase 2: Continuous Deployment using GitOps
Step 1: Kubernetes Cluster Creation using Azure Kubernetes Service (AKS)
Step 2: Set up the Azure CLI
Step 3: Install and configure Argo CD
Step 4: Deploy the application on Argo CD
Phase 3: Automating the whole CI/CD flow
Step 1: Create a shell script
Step 2: Add a new stage to the pipeline
Step 1: Set up Azure Repos
1. Go to your Azure portal, search for Azure DevOps organizations, and create a new organization.
2. Click on Azure Repos and then click on the Import option.
3. Paste the GitHub project repo link:
https://github.com/Aakibgithuber/Azure-devops-CI-CD-GitOps.git
4. Provide the GitHub project repo URL and then click on Import.
5. It will fetch all the files and branches from GitHub.
6. Now you have to set the main branch as the default branch.
7. Select Branches from the left menu, then click the three dots on the main branch, where you will find an option to set it as the default branch.
8. We set the main branch as the default because the latest code should always be in the main branch.
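If you prefer the CLI, the default branch can also be set with the Azure DevOps CLI extension. A minimal sketch, assuming the azure-devops extension is installed and that the project and repository are both named voting-app (adjust the names to your setup):
# Install the Azure DevOps extension and point it at your organization and project
az extension add --name azure-devops
az devops configure --defaults organization=https://dev.azure.com/<your-org> project=voting-app
# Set main as the default branch of the imported repository
az repos update --repository voting-app --default-branch main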
Step 2: Set up Azure Container Registry
What is Azure Container Registry → It is a central repository in Azure for storing Docker images, just like Docker Hub.
1. Go to the Azure portal and search for container registries.
2. Create a new container registry.
3. Keep the settings as shown in the image below.
4. Your central repository for Docker images is ready to use.
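The same registry can also be created from the CLI. A hedged sketch, assuming the registry name microcicd and resource group Demo-RG that appear later in this project:
# Create a Basic-tier Azure Container Registry
az acr create --resource-group Demo-RG --name microcicd --sku Basic
# Later, list its repositories to confirm that images were pushed
az acr repository list --name microcicd --output table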
Step 3: Create Azure Pipelines
What is Azure Pipelines → It helps you automatically build, test, and deploy your code, just like Jenkins. It's like a robot that takes your code, checks whether it works correctly, and sends it to where it needs to be (like a website or app store) without you doing it manually every time.
1. Click on the Pipelines option in the left menu.
2. In our case we fetched the code from GitHub and stored it in Azure Repos, so select Azure Repos from the options given below.
3. These are the pre-built templates for your pipeline code; in our case we are using Docker to build and push the image to ACR.
4. Select your subscription. I only have the pay-as-you-go type, so I need to choose that.
5. Select the container registry we created earlier in the blog.
6. It will provide a pre-built pipeline template; now we have to make some changes.
# Docker
# Build and push an image to Azure Container Registry
# https://docs.microsoft.com/azure/devops/pipelines/languages/docker
trigger: # This option triggers the pipeline only when a change is made in the result folder
  paths:
    include:
      - result/*
resources:
- repo: self
variables:
  # Container registry service connection established during pipeline creation
  dockerRegistryServiceConnection: 'e05f07b8-d307-47cb-87d1-c75b657fa09b'
  imageRepository: 'resultapp'
  containerRegistry: 'microcicd.azurecr.io'
  dockerfilePath: '$(Build.SourcesDirectory)/result/Dockerfile'
  tag: '$(Build.BuildId)'
pool:
  name: 'azureagent'
stages:
- stage: Build
  displayName: Build
  jobs:
  - job: Build
    displayName: Build the image
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        containerRegistry: '$(dockerRegistryServiceConnection)'
        repository: '$(imageRepository)'
        command: 'build'
        Dockerfile: 'result/Dockerfile'
        tags: '$(tag)'
- stage: push
  displayName: push
  jobs:
  - job: Build
    displayName: Build the image
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        containerRegistry: '$(dockerRegistryServiceConnection)'
        repository: '$(imageRepository)'
        command: 'push'
        tags: '$(tag)'
1. Trigger
trigger:
  paths:
    include:
      - result/*
- This tells Azure Pipelines to trigger this pipeline only when you make a change to files in the result folder. For example, if we update result/Dockerfile, the pipeline will run. If you change something outside this folder, it won't trigger.
2. resources
resources:
- repo: self
- This tells the pipeline to use the same repository (self) where this YAML file is stored.
3. variables
variables:
  dockerRegistryServiceConnection: 'e05f07b8-d307-47cb-87d1-c75b657fa09b'
  imageRepository: 'resultapp'
  containerRegistry: 'microcicd.azurecr.io'
  dockerfilePath: '$(Build.SourcesDirectory)/result/Dockerfile'
  tag: '$(Build.BuildId)'
- These are reusable values (variables) used later in the pipeline:
  - dockerRegistryServiceConnection: Refers to the Azure service connection that allows the pipeline to communicate with the Azure Container Registry (ACR). This is the unique ID of the service connection.
  - imageRepository: The name of the repository in the container registry where the image will be stored. Here, it's called resultapp.
  - containerRegistry: The URL of our Azure Container Registry (microcicd.azurecr.io).
  - dockerfilePath: The path to the Dockerfile. $(Build.SourcesDirectory) is a predefined variable pointing to the root of the source code.
  - tag: A unique identifier for the Docker image, generated automatically for each pipeline run. $(Build.BuildId) is a unique number provided by Azure Pipelines for each build.
4. pool
pool:
  name: 'azureagent'
- Specifies the pool of agents (virtual machines) that will run the pipeline. Here, we're using a pool named azureagent, which we set up as a self-hosted pool in the next step.
5. stages
This section defines the pipeline's main stages. We have two stages: Build and Push.
This error occurs because the agent pool specified in your pipeline (azureagent) does not exist: there is no agent pool with the name azureagent in our Azure DevOps organization, so we need to set up an agent pool named azureagent.
Step 4: Set up the agent pool
An agent pool in Azure DevOps is like a group of workers (computers or virtual machines) that perform tasks in our pipeline, such as building, testing, or deploying our code.
How It Works:
- When we run a pipeline, it looks for an available “worker” in the agent pool to carry out the tasks we have defined.
- Each worker is called an agent, and it runs one job at a time.
Why It’s Useful:
- Resource Management: It allows us to manage and organize our agents into groups, so we can use them for specific pipelines or projects.
Types of Agent Pools:
1. Microsoft-Hosted Agent Pool:
- Provided by Azure DevOps.
- Includes pre-configured virtual machines (e.g. Windows, Linux, macOS).
- You don't need to maintain or set up anything; just use it.
2. Self-Hosted Agent Pool:
- Machines that you set up and maintain (e.g. your own servers or VMs).
- Useful if you need:
- More customization (e.g., special software or configurations).
- To avoid limitations in Microsoft-hosted agents.
In our case we are going to use a self-hosted agent pool.
First we need to create a virtual machine on which our agent will be hosted.
1. Go to the VM section and click on Create.
2. Now click on the Agent pools option under Project settings.
3. Click on Add pool.
4. Select Self-hosted from the options, provide a name for your agent pool, and tick Grant access permission to all pipelines.
5. You will be provided with certain commands to run on your VM that set up your agent.
6. Go to your VM and copy the public IP.
7. SSH into your VM and run the following commands.
For SSH you need to run the command below (make sure your .pem file is present at that location):
ssh -i <.pem_file> azureuser@<public_ip>
8. Now run the following commands:
wget <paste_the_copied_link>
sudo apt update
9. Now list the files and run the following command to extract the archive:
tar zxvf vsts-agent-linux-x64-4.248.0.tar.gz
This command extracts the contents of the tarball into the current directory.
10. After extracting, you have to run the config.sh script to set up the agent (an unattended alternative to its interactive prompts is sketched after this list):
./config.sh
11. You have to provide the server URL, which you can get from the documentation for self-hosted Linux agents on Azure → documentation link
12. Copy the server URL and paste it there (edit the organization part and provide your DevOps organization name). It will then ask you for a personal access token (PAT), which you need to generate.
13. On the Agents tab, click on Settings > Personal access tokens.
14. Provide a name for your token, select your organization, and grant the token full access.
15. Copy your token and paste it into the terminal.
16. Now enter the name of the agent pool that we created earlier.
17. As you can see, the agent is offline. To make it available for our use we need to install Docker and give it the necessary permissions:
sudo apt install docker.io
sudo usermod -aG docker azureuser
sudo systemctl restart docker
./run.sh
18. Your agent is now online and ready to use.
19. If you go to the Pipelines section, you will see that your pipeline is automatically picked up and triggered by the agent.
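For reference, the interactive prompts of config.sh (server URL, PAT, pool name, agent name) can also be answered non-interactively. This is a hedged sketch based on the flags documented for the self-hosted Linux agent; the organization name and PAT are placeholders:
# Configure the agent with the same answers given to the prompts above
./config.sh --unattended \
  --url https://dev.azure.com/<your-org> \
  --auth pat --token <your-PAT> \
  --pool azureagent \
  --agent $(hostname) \
  --acceptTeeEula
# Optionally run the agent as a systemd service instead of keeping ./run.sh in the foreground
sudo ./svc.sh install && sudo ./svc.sh start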
Now that everything is set up, all you need to do is create two more pipelines, for the voting and worker microservices.
Step 5: Set up pipelines for the voting and worker microservices
- Go to the Pipelines section, select Azure Repos as the code repository, choose the Docker pre-built template, and select your subscription.
Pipeline code for Voting Service
# Docker
# Build and push an image to Azure Container Registry
# https://docs.microsoft.com/azure/devops/pipelines/languages/docker
trigger:
  paths:
    include:
      - vote/*
resources:
- repo: self
variables:
  # Container registry service connection established during pipeline creation
  dockerRegistryServiceConnection: 'e05f07b8-d307-47cb-87d1-c75b657fa09b'
  imageRepository: 'votetapp'
  containerRegistry: 'microcicd.azurecr.io'
  dockerfilePath: '$(Build.SourcesDirectory)/vote/Dockerfile'
  tag: '$(Build.BuildId)'
pool:
  name: 'azureagent'
stages:
- stage: Build
  displayName: Build
  jobs:
  - job: Build
    displayName: Build the image
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        containerRegistry: '$(dockerRegistryServiceConnection)'
        repository: '$(imageRepository)'
        command: 'build'
        Dockerfile: 'vote/Dockerfile'
        tags: '$(tag)'
- stage: push
  displayName: push
  jobs:
  - job: Build
    displayName: Build the image
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        containerRegistry: '$(dockerRegistryServiceConnection)'
        repository: '$(imageRepository)'
        command: 'push'
        tags: '$(tag)'
Pipeline code for Worker Service
# Docker
# Build and push an image to Azure Container Registry
# https://docs.microsoft.com/azure/devops/pipelines/languages/docker
trigger:
  paths:
    include:
      - worker/*
resources:
- repo: self
variables:
  # Container registry service connection established during pipeline creation
  dockerRegistryServiceConnection: 'e05f07b8-d307-47cb-87d1-c75b657fa09b'
  imageRepository: 'workerapp'
  containerRegistry: 'microcicd.azurecr.io'
  dockerfilePath: '$(Build.SourcesDirectory)/worker/Dockerfile'
  tag: '$(Build.BuildId)'
pool:
  name: 'azureagent'
stages:
- stage: Build
  displayName: Build
  jobs:
  - job: Build
    displayName: Build the image
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        containerRegistry: '$(dockerRegistryServiceConnection)'
        repository: '$(imageRepository)'
        command: 'build'
        Dockerfile: 'worker/Dockerfile'
        tags: '$(tag)'
- stage: push
  displayName: push
  jobs:
  - job: Build
    displayName: Build the image
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        containerRegistry: '$(dockerRegistryServiceConnection)'
        repository: '$(imageRepository)'
        command: 'push'
        tags: '$(tag)'
By now you have successfully created 3 pipelines, and all of them successfully build and push Docker images to Azure Container Registry.
Here we complete Phase 1, the CI part on the Azure DevOps platform. Let's move on to the CD part.
Phase 2: Continuous Deployment
Step 1: Kubernetes Cluster Creation using Azure Kubernetes Service (AKS)
1. Go to the Azure portal and search for Kubernetes services.
2. Click on Create a Kubernetes cluster from the dropdown.
3. Keep everything as shown in the image.
4. Click on the node pool to update the number of instances used for autoscaling.
What is a node pool → A node pool in Azure Kubernetes Service (AKS) is simply a group of virtual machines (VMs) that act as the workers for your Kubernetes cluster. These VMs are where your applications run. Each node pool contains multiple VMs called nodes, and each of these nodes has the tools Kubernetes needs to manage and run your applications.
5. Your cluster is now ready to orchestrate.
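If you prefer scripting the cluster creation instead of clicking through the portal, here is a minimal, hedged sketch with the az CLI, reusing the cluster name AzureDevops and resource group Demo-RG that appear later in this project (the autoscaler bounds are illustrative):
# Create a small AKS cluster with the cluster autoscaler enabled
az aks create \
  --resource-group Demo-RG \
  --name AzureDevops \
  --node-count 1 \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 3 \
  --generate-ssh-keys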
Step 2: Set up the Azure CLI
1. Go to your terminal and run the following commands to install the Azure CLI:
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
az --version
2. Now, to log in, run:
az login
3. It will redirect you to the web browser.
4. Click on Continue, select your account, and then go back to your terminal.
5. You are now logged in and can access your Azure account via the CLI:
az account show
Step 3: Install and configure Argo CD
1. On your terminal, run the following command:
az aks get-credentials --name AzureDevops --overwrite-existing --resource-group Demo-RG
This command fetches the access credentials for your Azure Kubernetes Service (AKS) cluster, so you can connect to it and manage it using kubectl.
- --name AzureDevops: This specifies the name of your AKS cluster. In this case, the cluster is named AzureDevops.
- --resource-group Demo-RG: This tells Azure which resource group your cluster is in.
- --overwrite-existing: If your current configuration already has credentials for a cluster with the same name, this option replaces them with the new credentials you're fetching. This avoids confusion or conflicts if you were previously connected to another cluster.
2. Installation of kubectl
Run the following commands to install kubectl on your VM:
sudo apt update
sudo apt install curl -y
curl -LO https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
3. Create a namespace for Argo CD and apply the installation manifest:
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
4. The above command downloads the install.yaml file from the Argo CD GitHub repository.
- The file contains a predefined set of instructions to install Argo CD into your Kubernetes cluster.
- Kubernetes will create all the necessary resources (like Deployments, Services, Pods, etc.) required to run Argo CD in the argocd namespace.
kubectl get pods -n argocd
5. To access the Argo CD UI we need credentials, which are stored as a secret:
kubectl get secrets -n argocd
6. Open the YAML for the admin secret:
kubectl edit secrets argocd-initial-admin-secret -n argocd
7. Copy the password.
8. Initially it is Base64 encoded, so we need to decode it:
# run the following command to decode the secret
echo <encoded_password> | base64 --decode
Base64 encoding is a way to convert data (like text, images, or files) into a format that can be safely transmitted or stored as plain text.
Why it's called "Base64": There are 64 characters in the Base64 encoding scheme (A-Z, a-z, 0-9, +, /). These are used to represent the data.
If you want to encode the word "Cat":
- Convert the text into its binary format: Cat becomes 01000011 01100001 01110100.
- Split it into chunks and encode each part using Base64 rules. The result: Q2F0.
So, the Base64 encoding of "Cat" is Q2F0.
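You can verify this on any terminal (-n keeps the trailing newline out of the encoding):
echo -n "Cat" | base64            # prints Q2F0
echo -n "Q2F0" | base64 --decode  # prints Cat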
9. Copy the decoded password and paste it in notepad
kubectl get svc -n argocd
# This will display a list of all services in the argocd namespace
10. Check out the service for the argocd-server.
11. As we can see, the argocd-server service is exposed as ClusterIP; we need to change this to NodePort.
If you want to open Argo CD's UI in your web browser or interact with its API from outside the cluster, ClusterIP won't work. Switching to NodePort exposes it so you can reach it via <Node IP>:<NodePort>.
For example:
- ClusterIP: Accessible only inside Kubernetes (e.g., http://argocd-server:8080).
- NodePort: Accessible outside (e.g., http://<NodeIP>:32000).
Think of ClusterIP as a private door that can only be opened from inside your house (the cluster). Changing it to NodePort adds a public door that anyone outside your house (the cluster) can use, so you can easily access Argo CD from your browser or tools
12. Run the following command to edit the argocd-server service manifest:
kubectl edit svc argocd-server -n argocd
13. Look for type and change it to NodePort (a non-interactive alternative is sketched at the end of this list).
kubectl get svc -n argocd
14. Go to your Azure portal and type vmss.
15. Click on the agent pool > Networking.
16. Create a new port rule and open port 30645, which is our Argo CD NodePort.
17. Go back to your terminal.
# this command will provide the externalip of node
kubectl get nodes -o wide
18. Copy the external IP, go to your browser, and type http://<public_ip>:<port_no.>
19. Back in the browser, click Advanced and proceed past the warning.
20. The username is admin, and the password is the one you decoded earlier in the blog.
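As an aside, instead of interactively editing the service (steps 12-13), the same change can be made with a single non-interactive command; a small sketch:
# Patch the argocd-server service type from ClusterIP to NodePort
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
# Confirm the assigned NodePort
kubectl get svc argocd-server -n argocd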
Step 4: Deploy the application on Argo CD
1. Connect with Azure Repos
1. First you need to connect the Azure repo which contains the source code.
2. Click on Settings > Repositories.
3. Change the connection method to HTTPS.
4. For the repository URL, go to Azure Repos and click on Clone.
5. Paste it in the URL section.
6. As it is a private repo, we need the token that we generated earlier; paste your token here.
7. We need to change the application refresh timeout to 10 seconds; by default it is 3 minutes. So we need to edit the Argo CD config map and add the application refresh timeout setting:
kubectl edit configmap argocd-cm -n argocd
#Restart the argocd server
kubectl rollout restart deployment argocd-server -n argocd
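What that edit typically looks like: Argo CD reads its default reconciliation interval from the timeout.reconciliation key in the argocd-cm ConfigMap (the key name is taken from the Argo CD documentation, so double-check it against your version). A hedged sketch that sets it without opening an editor, followed by the restart shown above:
# Set the application refresh/reconciliation interval to 10 seconds
kubectl patch configmap argocd-cm -n argocd --type merge -p '{"data": {"timeout.reconciliation": "10s"}}'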
2. Create a new Application
1. Click on the New App button in Argo CD.
2. Keep all the details as shown in the images below.
3. In the Path section provide the manifest location. All the manifests are stored in the k8s-specifications folder.
4. Your application is now deployed on the Azure Kubernetes cluster via Argo CD.
5. Go to your terminal, check the pods and services, and note the ports on which your services are running:
kubectl get pods
kubectl get svc
6. Go to your Azure portal, search for vmss > node pool, and open ports 31000 and 31001, which are for the vote app (a CLI alternative is sketched after the next command).
7. Collect the node external IP, go to your web browser again, and type http://<node_ip>:<port_no.>
kubectl get nodes -o wide
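Opening these NodePorts can also be done from the CLI instead of the portal. A hedged sketch, assuming the ports are blocked by the network security group that AKS created for the node pool (it lives in the MC_* resource group; the names below are placeholders you need to look up):
# Find the NSG created for the AKS node pool
az network nsg list --query "[].{name:name,rg:resourceGroup}" --output table
# Allow inbound traffic to the vote app NodePorts
az network nsg rule create \
  --resource-group <MC_resource_group> \
  --nsg-name <node_nsg_name> \
  --name allow-vote-nodeports \
  --priority 1010 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 31000 31001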
Here we are done with the Continuous Deployment part of our application, using the GitOps approach.
Phase 3: Automating the whole CI/CD flow
Step 1: Create a shell script
1. Go to Azure Repos, click on the three dots, and create a new folder and file.
2. Folder name: scripts, file name: updateK8sManifests.sh
3. Paste the following code into the .sh file:
#!/bin/bash
set -x
# Set the repository URL
REPO_URL="https://<ACCESS-TOKEN>@dev.azure.com/<AZURE-DEVOPS-ORG-NAME>/voting-app/_git/voting-app"
# Clone the git repository into the /tmp directory
git clone "$REPO_URL" /tmp/temp_repo
# Navigate into the cloned repository directory
cd /tmp/temp_repo
# Make changes to the Kubernetes manifest file(s)
# For example, let's say you want to change the image tag in a deployment.yaml file
sed -i "s|image:.*|image: <ACR-REGISTRY-NAME>/$2:$3|g" k8s-specifications/$1-deployment.yaml
# Add the modified files
git add .
# Commit the changes
git commit -m "Update Kubernetes manifest"
# Push the changes back to the repository
git push
# Cleanup: remove the temporary directory
rm -rf /tmp/temp_repo
Don't forget to change the access token, the ACR name, and your organization name.
What Does This Script Do?
- Clones a Git repository from Azure DevOps into a temporary directory.
- Modifies a Kubernetes deployment YAML file to update the image tag.
- Commits and pushes the changes back to the repository.
- Cleans up by deleting the temporary directory.
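For clarity, the script expects three positional arguments. A hypothetical manual invocation (the service name, repository, and tag below are illustrative; the pipeline will pass the real values later):
# $1 = service folder, $2 = image repository in ACR, $3 = image tag (the build ID)
bash scripts/updateK8sManifests.sh vote votetapp 57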
Step 2: Add a new stage to the pipeline
1. Go to the voting pipeline and add a new stage, Update.
What does this stage do → This stage will run the script that we created earlier.
# $1 = vote, $2 = repository name (votetapp), $3 = latest tag of the image; these arguments
# will be used in the script we created, specifically at the line given below
sed -i "s|image:.*|image: <ACR-REGISTRY-NAME>/$2:$3|g" k8s-specifications/$1-deployment.yaml
# The above line will change the image tag to the latest one
2. When the Update stage completes, the changes will be reflected in the vote-deployment.yaml file.
3. Let's make a change in app.py that will trigger a new pipeline run.
4. The Push stage will push the image to ACR with the latest tag.
5. Then the Update stage will run the script, which updates the image name in deployment.yaml with the latest tag.
6. Argo CD watches for changes in the k8s-specifications folder and finds the new image, as the tag was changed to the latest one.
7. This change in the k8s-specifications folder prompts Argo CD to update the application to the new image.
8. But how will these images be pulled from our private container registry? For this we need to create a secret.
9. To resolve this, we create a secret that allows images to be pulled from the private registry.
10. Go to ACR > Access keys, click on Generate, copy the username and password, and go back to your terminal.
11. Copy the command below and fill in the values accordingly:
docker-username == the username of your ACR
docker-password == the password you copied
kubectl create secret docker-registry <secret-name> \
--namespace argocd \
--docker-server=<your-acr-name>.azurecr.io \
--docker-username=<your-acr-username> \
--docker-password=<your-acr-password> \
--docker-email=<your-email>
12. Go to your vote-deployment.yaml and add the following lines:
imagePullSecrets:
- name: acr-secret
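One caveat worth noting: imagePullSecrets are namespace-scoped, so the secret referenced here (acr-secret) must exist in the same namespace as the application pods, not only in argocd. If your app runs in the default namespace, a hedged sketch of creating the same secret there:
kubectl create secret docker-registry acr-secret \
  --namespace default \
  --docker-server=<your-acr-name>.azurecr.io \
  --docker-username=<your-acr-username> \
  --docker-password=<your-acr-password> \
  --docker-email=<your-email>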
13. Now go to vote/app.py again and make a change that will trigger the pipeline once more.
Go to your web browser and check for the change.
Here is your updated application.
With this we have completed the blog. Thanks for reading!