Project 10 → Deployment of the Swiggy Application Using GitHub Actions
Today we are going to deploy the Swiggy application using GitHub Actions as the CI/CD tool, in place of Jenkins or CircleCI. Below are the completion steps for this project.
Completion Steps →
Step 1 → Set up Terraform and configure AWS on your local machine
Step 2 → Building a simple infrastructure from code using Terraform
Step 3 → IAM Role for EC2
Step 4 → Set up GitHub Actions with EC2
Step 5 → Set up SonarQube and Docker Hub for GitHub Actions
Step 6 → Elastic Kubernetes Service (EKS) cluster setup
Step 7 → Build and push the Docker image
Step 8 → Deployment on Kubernetes
Step 9 → Monitoring via Prometheus and Grafana
Step 10 → Destroy all the things
Before doing anything →
- Open your terminal and make a separate folder for Swiggy → mkdir swiggy
- cd swiggy
- Clone the GitHub repo →
git clone https://github.com/Aakibgithuber/deployment-using-github-actions.git
Step 1 → Set up Terraform and configure AWS on your local machine
1. Set up Terraform
To install Terraform, copy and paste the commands below:
sudo su
snap install terraform --classic
which terraform
2. Configure AWS
Create an IAM user →
1. Go to your AWS account and type IAM in the search bar, then click on IAM.
2. Click on Users → Create user.
3. Give a name to your user, tick "Provide user access to the AWS Management Console", and then choose the "I want to create an IAM user" option.
4. Choose a password for your user → click Next.
5. Attach the policies directly to your IAM user → click Next.
Note → I am attaching AdministratorAccess for now, but be careful while attaching policies in your own workspace.
6. Review and click on Create user.
7. Download your password file if it is auto-generated; otherwise it is your choice.
8. Now click on your IAM user → Security credentials.
9. Scroll down to Access keys and create an access key.
10. Choose AWS CLI from the options listed.
11. Click Next and download your CSV file containing the access key and secret key.
12. Go to your terminal and type → aws configure
13. It will now ask for your access key and secret key; open your CSV file, paste them in, and leave everything else at the default.
14. Now you are ready to use AWS from your terminal.
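If everything went well, the prompts and a quick identity check look roughly like this (the key values below are placeholders, not real credentials):
aws configure
# AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
# AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Default region name [None]:      <- press Enter to keep the default
# Default output format [None]:    <- press Enter to keep the default
# Verify the credentials work by asking AWS who you are
aws sts get-caller-identity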
Step 2 → Building a simple infrastructure from code using Terraform
1. Go to the folder → cd Instance-terraform
2. There are three files present: main.tf, script.sh, and provider.tf.
3. Open the file → vim main.tf
4. Change this section → ami = # your AMI ID, key_name = # your key pair, if any
Note → main.tf includes user data that links the script.sh file, which on execution installs Docker and Trivy and starts the SonarQube container on port 9000.
Now run the Terraform commands →
terraform init
terraform validate
terraform plan
terraform apply --auto-approve
(screenshots → output of terraform init, validate, plan, and apply)
Go to your AWS console and check out the EC2 instances.
Here we see the swiggy base server instance created by Terraform with the given configuration.
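For reference, the aws_instance block inside main.tf looks roughly like the sketch below. This is a minimal outline, not the repo's exact file — the AMI ID, instance type, and key pair name are placeholders you must replace:
resource "aws_instance" "swiggy_base" {
  ami           = "ami-xxxxxxxxxxxxxxxxx"   # your AMI ID
  instance_type = "t2.large"                # placeholder size
  key_name      = "your-key-pair"           # your key pair, if any
  user_data     = file("script.sh")         # installs Docker and Trivy, starts SonarQube on port 9000

  tags = {
    Name = "swiggy-base-server"
  }
}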
Step 3 → IAM Role for EC2
Why do we need an IAM role for EC2? → It is used by your EC2 instance to create the EKS cluster and manage the S3 bucket; this IAM role gives your EC2 instance the authority to make changes in your AWS account.
1. Creating the IAM role
- On the search bar, type IAM and open it.
- Click on Roles on the left side.
- Click on Create role and choose EC2 from the dropdown.
- Click on Next.
- Choose AdministratorAccess in the permissions section.
- Click on Next and give a name to your role.
- Click on Create role, and your IAM role is created.
2. Attach the IAM role to your EC2
- Go to the EC2 section.
- Click on Actions → Security → Modify IAM role.
- Choose the role from the dropdown and click on Update IAM role.
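If you prefer the CLI over console clicks, the same role can be created and attached roughly like this — the role, profile, and instance-id names are placeholders; the trust policy simply lets EC2 assume the role:
# trust policy that allows EC2 to assume the role
cat > trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF
aws iam create-role --role-name swiggy-ec2-role --assume-role-policy-document file://trust.json
aws iam attach-role-policy --role-name swiggy-ec2-role --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
aws iam create-instance-profile --instance-profile-name swiggy-ec2-profile
aws iam add-role-to-instance-profile --instance-profile-name swiggy-ec2-profile --role-name swiggy-ec2-role
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=swiggy-ec2-profile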
Why do we need an IAM Role? →
Imagine you have a robot (EC2 instance) that does tasks for you in a big factory (AWS environment). Now, this robot needs to access different rooms (AWS services like S3, DynamoDB, etc.) to perform its tasks.
Here’s where IAM (Identity and Access Management) comes in:
- Robot Needs a Key to Enter Rooms: The IAM Role is like giving your robot a special key. This key allows the robot to enter specific rooms (access certain AWS services). Without this key, the robot can’t get in.
- Different Keys for Different Robots: Each robot (EC2 instance) can have its own key (IAM Role) with specific permissions. So, one robot may have a key to enter the storage room (access S3), while another robot has a key to enter the database room (access DynamoDB).
- No Need for Hardcoding Passwords: Using IAM Roles means you don’t have to hardcode passwords (access credentials) into the robot. It’s like not writing down passwords on the robot itself. The robot just uses its key when needed.
- Easily Change Permissions: If you want to change what a robot can do, you just change the permissions on its key (IAM Role). No need to reprogram the robot or give it a new password; just update the permissions on its key.
- Secure and Controlled Access: IAM Roles help keep things secure. You can control exactly what each robot is allowed to do. This way, if one robot is compromised or needs a different role, you can easily adjust its permissions without affecting others.
Now everything is in place; all we have to do is run a few commands and let our infrastructure run the Swiggy app.
Step 4 → Set up GitHub Actions with EC2
1. Go to your project repo on GitHub → click on Settings → Actions → Runners.
2. GitHub gives you the commands to run on your Linux server.
3. Go to your EC2 instance, connect to it, and run the following commands one by one:
# Create a folder
mkdir actions-runner && cd actions-runner
# Download the latest runner package
curl -o actions-runner-linux-x64-2.313.0.tar.gz -L https://github.com/actions/runner/releases/download/v2.313.0/actions-runner-linux-x64-2.313.0.tar.gz
# Optional: Validate the hash
echo "56910d6628b41f99d9a1c5fe9df54981ad5d8c9e42fc14899dcc177e222e71c4  actions-runner-linux-x64-2.313.0.tar.gz" | shasum -a 256 -c
# Extract the installer
tar xzf ./actions-runner-linux-x64-2.313.0.tar.gz
Configure →
# Create the runner and start the configuration experience
./config.sh --url https://github.com/Aakibgithuber/Swiggy-app-deployment-using-github-actions- --token AYEA4XIU5ILR2XRXF6F2DFLF2GYZW
# Last step, run it!
./run.sh
Note → the URL and token above are specific to my repo; GitHub generates your own pair on the Runners page.
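./run.sh keeps the runner in the foreground, so it stops when you close the terminal. As an alternative, the runner package also ships a svc.sh helper that installs it as a systemd service, so it survives logouts and reboots:
sudo ./svc.sh install
sudo ./svc.sh start
sudo ./svc.sh status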
Step 5 → Set up SonarQube and Docker Hub for GitHub Actions
1. SonarQube setup →
- Copy the public IP of your machine.
- Go to your browser and type → <public_ip>:9000
- The SonarQube window opens; initially, both the username and password are admin.
- Update your password, and the welcome window of SonarQube appears.
2. Set up a project in SonarQube for GitHub Actions →
- Go to your SonarQube server.
- Click on Projects.
- In the name field, type swiggy.
- Click on Set Up.
- Click on the GitHub Actions option.
- Generate a token.
- Copy the token, go to your GitHub repo → Settings → Secrets and variables → Actions → and create a new repository secret.
- Paste the secret name and the token generated above (this becomes SONAR_TOKEN, used by the workflow later).
- Create one more repository secret holding your SonarQube server URL (SONAR_HOST_URL).
- Now go back to SonarQube and click on Continue.
- SonarQube now shows instructions for two files → create them in your GitHub repo with exactly the names and contents it displays (the workflow file under .github/workflows and a sonar-project.properties at the repo root; see the sketch below).
- Commit the changes, then go to your EC2 instance and start the runner with:
./run.sh
- Now go to Actions and see that your build.yaml is automatically triggered.
- Check out the analysis results in SonarQube.
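For reference, a minimal sonar-project.properties looks like the sketch below — the project key must match whatever your own SonarQube instructions show (swiggy here is just the project name we used above):
# sonar-project.properties (repo root) — key must match your SonarQube project
sonar.projectKey=swiggy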
3. Set up Docker Hub for GitHub Actions →
- Go to your Docker Hub account and create a token under My Account → Settings → Security.
- Now go to your GitHub repo again → Settings → Secrets and variables → Actions → and create a new repository secret holding the token.
- Create one more secret holding your Docker Hub username.
Now your GitHub repo is connected to your Docker Hub account.
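If you prefer the terminal over the UI, the same secrets can be added with the GitHub CLI — a quick sketch, assuming the secret names Dockerhub_username and Dockerhub_token that the workflow references later:
# run inside a clone of the repo (requires the gh CLI, authenticated)
gh secret set Dockerhub_username --body "your-dockerhub-username"
gh secret set Dockerhub_token --body "your-dockerhub-access-token"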
Step 6 → Elastic Kubernetes Service (EKS) cluster setup
1. Go to your EC2 instance and run the following command:
aws configure
2. Provide the access key and secret key set up above.
3. Fetch the GitHub repo containing the EKS Terraform folder:
git clone https://github.com/Aakibgithuber/deployment-using-github-actions.git
4. Go to the EKS Terraform folder:
cd deployment-using-github-actions/Eks-terraform
ls
5. Now set up your EKS cluster with the Terraform commands below:
terraform init
terraform validate
terraform plan
terraform apply
6. It takes 5 to 10 minutes to create the cluster.
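Once apply finishes, you can confirm the cluster came up from the same EC2 instance — the cluster name EKS_CLOUD and region ap-south-1 below match what the workflow uses later; adjust them if yours differ:
aws eks list-clusters --region ap-south-1
aws eks describe-cluster --name EKS_CLOUD --region ap-south-1 --query "cluster.status"
# expected output → "ACTIVE"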
Step 7 → Build and push the Docker image
1. Now go to your .github/workflows folder and edit the build.yaml file.
2. Add the following code:
name: Build
on:
  push:
    branches:
      - master
jobs:
  build:
    name: Build
    runs-on: [self-hosted]
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0  # Shallow clones should be disabled for a better relevancy of analysis
      - uses: sonarsource/sonarqube-scan-action@master
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
          SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
      - name: NPM Install
        run: npm install
      - name: Docker build and push
        run: |
          # Build the image, tag it, log in to Docker Hub, and push
          docker build -t swiggy-clone .
          docker tag swiggy-clone aakibkhan1212/swiggy-clone:latest
          docker login -u ${{ secrets.Dockerhub_username }} -p ${{ secrets.Dockerhub_token }}
          docker push aakibkhan1212/swiggy-clone:latest
        env:
          DOCKER_CLI_ACI: 1
Important note → make sure to change the Docker Hub repo name and the tag of your image.
3. Preview and commit the changes.
4. Go to Actions → Build to see the changes.
The Docker image is created and pushed to your Docker Hub account.
Step 8 → Deployment on Kubernetes
1. Go to your repo, edit the build.yaml file, and add the following code:
  deploy:
    needs: build
    runs-on: [self-hosted]
    steps:
      - name: docker pull image
        run: docker pull aakibkhan1212/swiggy-clone:latest
      - name: Image scan
        run: trivy image aakibkhan1212/swiggy-clone:latest > trivyimagedeploy.txt
      - name: Deploy to container
        run: docker run -d --name swiggy-clone -p 3000:3000 aakibkhan1212/swiggy-clone:latest
2. The above code runs the application locally in a container on the runner.
3. To deploy it on the Kubernetes cluster as well, add the following steps to the deploy job in build.yaml:
      - name: Update kubeconfig
        run: aws eks --region ap-south-1 update-kubeconfig --name EKS_CLOUD
      - name: Deploy to kubernetes
        run: kubectl apply -f deployment-service.yml
4. Commit the changes; the workflow will now deploy the image to the EKS cluster.
5. Run the following command:
kubectl get all
6. Copy the LoadBalancer Ingress address and paste it in your browser.
Your Swiggy app is now deployed on the Kubernetes cluster.
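The deployment-service.yml applied above ships with the repo; the sketch below shows what such a manifest typically contains — a Deployment running the image and a LoadBalancer Service exposing port 3000. The names and replica count here are illustrative assumptions, not the repo's exact file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: swiggy-clone
spec:
  replicas: 2
  selector:
    matchLabels:
      app: swiggy-clone
  template:
    metadata:
      labels:
        app: swiggy-clone
    spec:
      containers:
        - name: swiggy-clone
          image: aakibkhan1212/swiggy-clone:latest   # your image
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: swiggy-clone-service
spec:
  type: LoadBalancer   # provisions the load balancer whose address kubectl get all shows
  selector:
    app: swiggy-clone
  ports:
    - port: 3000
      targetPort: 3000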
7. Complete code for the build.yaml file:
name: Build
on:
  push:
    branches:
      - master
jobs:
  build:
    name: Build
    runs-on: [self-hosted]
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0  # Shallow clones should be disabled for a better relevancy of analysis
      - uses: sonarsource/sonarqube-scan-action@master
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
          SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
      - name: NPM Install
        run: npm install
      - name: Docker build and push
        run: |
          # Build the image, tag it, log in to Docker Hub, and push
          docker build -t swiggy-clone .
          docker tag swiggy-clone aakibkhan1212/swiggy-clone:latest
          docker login -u ${{ secrets.Dockerhub_username }} -p ${{ secrets.Dockerhub_token }}
          docker push aakibkhan1212/swiggy-clone:latest
        env:
          DOCKER_CLI_ACI: 1
  deploy:
    needs: build
    runs-on: [self-hosted]
    steps:
      - name: docker pull image
        run: docker pull aakibkhan1212/swiggy-clone:latest
      - name: Image scan
        run: trivy image aakibkhan1212/swiggy-clone:latest > trivyimagedeploy.txt
      - name: Deploy to container
        run: docker run -d --name swiggy-clone1 -p 3000:3000 aakibkhan1212/swiggy-clone:latest
      - name: Update kubeconfig
        run: aws eks --region ap-south-1 update-kubeconfig --name EKS_CLOUD
      - name: Deploy to kubernetes
        run: kubectl apply -f deployment-service.yml
Make changes in the above code according to your own GitHub repo and Docker Hub account.
Step 9 → Monitoring via Prometheus and Grafana
- Prometheus is like a detective that constantly watches your software and gathers data about how it’s performing. It’s good at collecting metrics, like how fast your software is running or how many users are visiting your website.
- Grafana, on the other hand, is like a dashboard designer. It takes all the data collected by Prometheus and turns it into easy-to-read charts and graphs. This helps you see how well your software is doing at a glance and spot any problems quickly.
In other words, Prometheus collects the information, and Grafana makes it look pretty and understandable so you can make decisions about your software. They’re often used together to monitor and manage applications and infrastructure.
1. Set up another server (EC2) for monitoring
- Go to the EC2 console and launch an instance with an Ubuntu base image and t2.medium specs, because the minimum requirements to install Prometheus are:
- 2 CPU cores
- 4 GB of memory
- 20 GB of free disk space
2. Installing Prometheus:
- First, create a dedicated Linux user for Prometheus and download Prometheus:
sudo useradd --system --no-create-home --shell /bin/false prometheus
wget https://github.com/prometheus/prometheus/releases/download/v2.47.1/prometheus-2.47.1.linux-amd64.tar.gz
2. Extract Prometheus files, move them, and create directories:
tar -xvf prometheus-2.47.1.linux-amd64.tar.gz
cd prometheus-2.47.1.linux-amd64/
sudo mkdir -p /data /etc/prometheus
sudo mv prometheus promtool /usr/local/bin/
sudo mv consoles/ console_libraries/ /etc/prometheus/
sudo mv prometheus.yml /etc/prometheus/prometheus.yml
3. Set ownership for the directories (the prometheus user was already created above):
sudo chown -R prometheus:prometheus /etc/prometheus/ /data/
4. Create a systemd unit configuration file for Prometheus:
sudo nano /etc/systemd/system/prometheus.service
Add the following code to the prometheus.service file:
[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=500
StartLimitBurst=5

[Service]
User=prometheus
Group=prometheus
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/data \
  --web.console.templates=/etc/prometheus/consoles \
  --web.console.libraries=/etc/prometheus/console_libraries \
  --web.listen-address=0.0.0.0:9090 \
  --web.enable-lifecycle

[Install]
WantedBy=multi-user.target
Press Ctrl+O to save, then Ctrl+X to exit the file.
Here's an explanation of the key parts of the above file:
- User and Group specify the Linux user and group under which Prometheus will run.
- ExecStart is where you specify the Prometheus binary path, the location of the configuration file (prometheus.yml), the storage directory, and other settings.
- --web.listen-address configures Prometheus to listen on all network interfaces on port 9090.
- --web.enable-lifecycle allows management of Prometheus through API calls.
5. Enable and start Prometheus:
sudo systemctl enable prometheus
sudo systemctl start prometheus
sudo systemctl status prometheus
Now go to the security group of your EC2 instance and open port 9090, on which Prometheus runs.
Go to → http://public_ip:9090 to see the Prometheus web page.
3. Installing Node Exporter:
Node exporter is like a “reporter” tool for Prometheus, which helps collect and provide information about a computer (node) so Prometheus can monitor it. It gathers data about things like CPU usage, memory, disk space, and network activity on that computer.
A Node Port Exporter is a specific kind of Node Exporter that is used to collect information about network ports on a computer. It tells Prometheus which network ports are open and what kind of data is going in and out of those ports. This information is useful for monitoring network-related activities and can help you ensure that your applications and services are running smoothly and securely.
Run the following commands for installation
- Create a system user for Node Exporter and download Node Exporter:
sudo useradd --system --no-create-home --shell /bin/false node_exporter
wget https://github.com/prometheus/node_exporter/releases/download/v1.6.1/node_exporter-1.6.1.linux-amd64.tar.gz
2. Extract Node Exporter files, move the binary, and clean up:
tar -xvf node_exporter-1.6.1.linux-amd64.tar.gz
sudo mv node_exporter-1.6.1.linux-amd64/node_exporter /usr/local/bin/
rm -rf node_exporter*
3. Create a systemd unit configuration file for Node Exporter:
sudo nano /etc/systemd/system/node_exporter.service
Add the following code to the node_exporter.service file:
[Unit]
Description=Node Exporter
After=network.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=default.target
4. Enable and start Node Exporter (the node_exporter user was already created above, so there is no need to add it again):
sudo chown node_exporter:node_exporter /usr/local/bin/node_exporter
sudo systemctl daemon-reload
sudo systemctl start node_exporter
sudo systemctl enable node_exporter
sudo systemctl status node_exporter
The node exporter service is now running.
You can now access Node Exporter metrics for Prometheus at → http://public_ip:9100 (open port 9100 in the security group first).
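Before wiring it into Prometheus, you can also sanity-check the exporter from the instance itself:
# quick local check that metrics are being served on port 9100
curl -s http://localhost:9100/metrics | head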
5. Configure Prometheus Plugin Integration:
1. Go to your EC2 instance and run →
cd /etc/prometheus
2. You have to edit the prometheus.yml file to monitor anything. Add the following jobs under scrape_configs:
scrape_configs:
  - job_name: 'node_exporter'
    static_configs:
      - targets: ['localhost:9100']
  - job_name: 'jenkins'
    metrics_path: '/prometheus'
    static_configs:
      - targets: ['<your-jenkins-ip>:<your-jenkins-port>']
Add the above code with proper indentation as shown, then press Esc followed by :wq to save and exit (if you opened the file in vim).
a. Check the validity of the configuration file →
promtool check config /etc/prometheus/prometheus.yml
Output → SUCCESS
b. Reload the Prometheus configuration without restarting →
curl -X POST http://localhost:9090/-/reload
Go back to your Prometheus tab, click on Status → Targets, and you will see the three targets we entered in the YAML file for monitoring.
(screenshot → Prometheus targets dashboard)
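With the targets up, you can already query metrics in the Prometheus UI — for example, this standard expression (a common formula, not specific to this project) shows CPU usage per instance from the node_exporter data:
100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)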
6. Set up Grafana
Install dependencies:
sudo apt-get update
sudo apt-get install -y apt-transport-https software-properties-common
Add the GPG key for Grafana:
wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -
Add the repository for Grafana stable releases:
echo "deb https://packages.grafana.com/oss/deb stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list
Update the package list, then install and start Grafana:
sudo apt-get update
sudo apt-get -y install grafana
sudo systemctl enable grafana-server
sudo systemctl start grafana-server
sudo systemctl status grafana-server
Now go to your EC2 security group and open port 3000, on which Grafana runs.
Browse to http://public_ip:3000 to access your Grafana web interface.
Username = admin, password = admin
7. Add Prometheus Data Source:
To visualize metrics, you need to add a data source.
Follow these steps:
- Click on the gear icon (⚙️) in the left sidebar to open the “Configuration” menu.
- Select “Data Sources.”
- Click on the “Add data source” button.
- Choose “Prometheus” as the data source type.
- In the "HTTP" section, set the "URL" to http://localhost:9090 (assuming Prometheus is running on the same server).
- Click the "Save & Test" button to ensure the data source is working.
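If you rebuild this monitoring server often, the same data source can also be provisioned from a file instead of the UI — Grafana reads YAML from /etc/grafana/provisioning/datasources/ at startup. A minimal sketch, assuming Prometheus on localhost:
# /etc/grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    isDefault: true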
8. Import a Dashboard
Importing a dashboard in Grafana is like using a ready-made template to quickly create a set of charts and graphs for monitoring your data, without having to build them from scratch.
- Click on the “+” (plus) icon in the left sidebar to open the “Create” menu.
- Select “Dashboard.”
- Click on the “Import” dashboard option.
- Enter the dashboard code you want to import (e.g., code 1860).
- Click the “Load” button.
- Select the data source you added (Prometheus) from the dropdown.
- Click on the “Import” button.
Step 10 → Destroy all the things
1. Go to your EC2 instance (inside the Eks-terraform folder) and run:
terraform destroy
2. It will destroy your Kubernetes cluster and its node group too.
3. Go to your local terminal (inside the Instance-terraform folder) and run terraform destroy again to remove the base server.
4. Everything is destroyed, and your project is also done.
That wraps up the project, in which we used GitHub Actions as the CI/CD tool.
Do follow me on Medium if you liked the blog, don't forget to clap, and check out the other accounts mentioned below.