Project 11 → Deployment of a ChatGPT Clone App on Kubernetes Using Terraform and a Jenkins CI/CD Pipeline

Aakib
18 min read · May 19, 2024


In this project we deploy a ChatGPT clone bot using a DevSecOps approach, combining a number of DevOps tools with AWS cloud servers for deployment. Let's do it!

Completion Steps →

Step 1 → Set up Terraform and configure AWS on your local machine
Step 2 → Build a simple infrastructure from code using Terraform
Step 3 → Set up SonarQube and Jenkins
Step 4 → CI/CD pipeline
Step 5 → Kubernetes cluster creation using Terraform via a Jenkins pipeline
Step 6 → Deployment on Kubernetes
Step 7 → Monitoring via Prometheus and Grafana
Step 8 → Destroy all the infrastructure

Prerequisites

  1. An AWS account
  2. Basic knowledge of Terraform and Jenkins

Before doing anything →

  1. Open your terminal and make a separate folder for ChatGPT → mkdir gpt
  2. cd gpt
  3. Clone the GitHub repo → git clone https://github.com/Aakibgithuber/Chat-gpt-deployment.git

Step 1 → Set up Terraform and configure AWS on your local machine

1. Set up Terraform

To install Terraform, copy and paste the commands below:

sudo su
snap install terraform --classic
which terraform

2. Configure AWS

  1. Create an IAM user
  2. Go to your AWS console and search for IAM

Click on IAM.

3. Click on Users → Create user

4. Give a name to your user, tick "Provide user access to the AWS Management Console", and then choose the "I want to create an IAM user" option

5. Choose a password for your user → click Next

6. Attach the policies directly to your IAM user → click Next

Note → I am granting AdministratorAccess for now, but be careful when attaching policies in your own workspace.

Review and create the user.

7. Click on Create user

8. Download the password file if the password was autogenerated; otherwise it's your choice.

9. Now click on your IAM user → Security credentials

10. Scroll down to "Access keys" and create an access key

11. Choose "Command Line Interface (CLI)" from the options listed

12. Click Next and download the CSV file containing your access key and secret key

13. Go to your terminal and type → aws configure

14. It now asks for your access key and secret key; open the CSV file and paste in both, leaving everything else at the defaults

15. Now you are ready to use AWS from your terminal
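
A typical aws configure session looks like this (the key values below are placeholders, not real credentials; us-east-1 matches the region used later in this project):

aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: us-east-1
Default output format [None]: json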

Step 2 → Build a simple infrastructure from code using Terraform

Now run the Terraform commands →

  1. main.tf includes user data that links to the script.sh file, which on execution installs Jenkins, Docker, and Trivy, and starts the SonarQube container on port 9000
terraform init
terraform validate
terraform plan
terraform apply --auto-approve

Here we see that the instance named gpt has been created by Terraform with the given configuration.
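
For reference, the user data wiring in main.tf looks roughly like the sketch below (a minimal sketch only; the AMI ID and instance type are placeholders, so check the repo's main.tf for the real values):

resource "aws_instance" "gpt" {
  ami           = "ami-xxxxxxxxxxxxxxxxx"   # placeholder Ubuntu AMI
  instance_type = "t2.large"                # assumption; see the repo's main.tf
  user_data     = file("script.sh")         # installs Jenkins, Docker, and Trivy, and starts SonarQube on port 9000

  tags = {
    Name = "gpt"
  }
}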

Attach an IAM role to your EC2 instance

  1. Go to the EC2 section
  2. Click on Actions → Security → Modify IAM role

3. Choose the role from the dropdown and click on Update IAM role

Why do we need an IAM role? →

Imagine you have a robot (EC2 instance) that does tasks for you in a big factory (AWS environment). Now, this robot needs to access different rooms (AWS services like S3, DynamoDB, etc.) to perform its tasks.

Here’s where IAM (Identity and Access Management) comes in:

  1. Robot Needs a Key to Enter Rooms: The IAM Role is like giving your robot a special key. This key allows the robot to enter specific rooms (access certain AWS services). Without this key, the robot can’t get in.
  2. Different Keys for Different Robots: Each robot (EC2 instance) can have its own key (IAM Role) with specific permissions. So, one robot may have a key to enter the storage room (access S3), while another robot has a key to enter the database room (access DynamoDB).
  3. No Need for Hardcoding Passwords: Using IAM Roles means you don’t have to hardcode passwords (access credentials) into the robot. It’s like not writing down passwords on the robot itself. The robot just uses its key when needed.
  4. Easily Change Permissions: If you want to change what a robot can do, you just change the permissions on its key (IAM Role). No need to reprogram the robot or give it a new password; just update the permissions on its key.
  5. Secure and Controlled Access: IAM Roles help keep things secure. You can control exactly what each robot is allowed to do. This way, if one robot is compromised or needs a different role, you can easily adjust its permissions without affecting others.
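
In Terraform terms, handing the robot its "key" means creating a role that EC2 can assume and attaching it to the instance via an instance profile. A minimal sketch (the names here are illustrative, not the repo's actual code):

resource "aws_iam_role" "ec2_role" {
  name = "gpt-ec2-role"   # illustrative name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }   # lets EC2 instances assume this role
    }]
  })
}

resource "aws_iam_instance_profile" "ec2_profile" {
  name = "gpt-ec2-profile"
  role = aws_iam_role.ec2_role.name
}

# The instance resource would then reference it:
#   iam_instance_profile = aws_iam_instance_profile.ec2_profile.name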

Step 3 → Set up SonarQube and Jenkins

1. SonarQube →

Copy the public IP of your machine.

  1. Go to your browser and type → <public_ip>:9000

The SonarQube window opens.

2. Initially the username and password are both admin

3. Update your password

4. The SonarQube welcome window appears

2. Jenkins →

  1. In the browser, type → <public_ip>:8080

2. To get the initial admin password, go to your EC2 instance and connect to it

3. Run the commands below:

sudo su
cat /var/lib/jenkins/secrets/initialAdminPassword

The output is your password; paste it into Jenkins.

4. Install the suggested plugins

5. Set up your Jenkins user

Welcome to the Jenkins dashboard.

Step 4 → CI/CD pipeline

1. Install the plugins listed below

1. Eclipse Temurin Installer (install without restart)

2. SonarQube Scanner (install without restart)

3. NodeJS Plugin (install without restart)

4. OWASP → The OWASP Plugin in Jenkins is like a "security assistant" that helps you find and fix security issues in your software. It uses the knowledge and guidelines from the Open Web Application Security Project (OWASP) to scan your web applications and provide suggestions on how to make them more secure. It's a tool to ensure that your web applications are protected against common security threats and vulnerabilities.

5. Prometheus metrics → to monitor Jenkins on a Grafana dashboard

6. Download all the Docker-related plugins

7. Kubernetes

8. Kubernetes CLI

9. Kubernetes Client API

10. Kubernetes Pipeline DevOps steps

2. Add credentials for SonarQube and Docker

First we generate a token for SonarQube to use in the Jenkins credentials as secret text.

a. Set up SonarQube credentials

  1. Go to http://public_ip:9000
  2. Enter your username and password
  3. Click on Security → Users → Tokens → Generate token
  4. token_name == jenkins

5. Copy the token and go to Jenkins → Manage Jenkins → Credentials → Global → Add credentials

6. Select "Secret text" from the dropdown

7. Secret text == your token, id == Sonar-token (the pipeline's quality gate stage references this id) → click on Create

b. Set up a project in SonarQube for Jenkins

  1. Go to your SonarQube server
  2. Click on Projects
  3. In the name field, type gpt
  4. Click on Set up

Click through Set up → Generate → Continue.

The SonarQube project for Jenkins is now set up.

c. Set up Docker credentials

  1. Go to Jenkins → Manage Jenkins → Credentials → Global → Add credentials
  2. Provide the username and password of your Docker Hub account
  3. id == docker

Credentials for both are now set up.

3. Now we are going to set up tools for Jenkins

Go to Manage Jenkins → Tools

a. Add JDK

  1. Click on Add JDK and select the adoptium.net installer
  2. Choose version jdk-17.0.8.1+1 and in the name field enter jdk17 (the pipeline refers to the tool by this name)

b. Add NodeJS

  1. Click on Add NodeJS
  2. Enter node16 in the name field
  3. Choose version NodeJS 16.2.0

c. Add Docker →

  1. Click on Add Docker
  2. name == docker
  3. Add installer == Download from docker.com

d. Add SonarQube Scanner →

  1. Add SonarQube Scanner
  2. name == sonar-scanner

e. Add OWASP Dependency-Check →

Adding the Dependency-Check plugin in the "Tools" section of Jenkins allows you to perform automated security checks on the dependencies used by your application.

  1. Add Dependency-Check
  2. name == DP-Check
  3. From Add installer, select Install from github.com

4. Configure global settings for SonarQube and set up webhooks

a. Configure global settings

  1. Go to Manage Jenkins → System → add SonarQube servers
  2. name == sonar-server
  3. server_url == http://public_ip:9000
  4. Server authentication token == Sonar-token → the secret-text credential we created from the SonarQube token

b. Set up webhooks

  1. Go to Administration → Configuration → Webhooks
  2. Create webhook
  3. In the URL field enter →
  4. http://jenkins_public_ip:8080/sonarqube-webhook/
  5. Click Create

5. Let's run the pipeline →

  1. Go to New Item → select Pipeline → in the name field type gpt-pipeline
  2. Scroll down to the pipeline script and copy-paste the following code:
pipeline {
    agent any
    tools {
        jdk 'jdk17'
        nodejs 'node16'
    }
    environment {
        SCANNER_HOME = tool 'sonar-scanner'
    }
    stages {
        stage('Checkout from Git') {
            steps {
                git branch: 'legacy', url: 'https://github.com/Aakibgithuber/Chat-gpt-deployment.git'
            }
        }
        stage('Install Dependencies') {
            steps {
                sh "npm install"
            }
        }
        stage('Sonarqube Analysis') {
            steps {
                withSonarQubeEnv('sonar-server') {
                    sh ''' $SCANNER_HOME/bin/sonar-scanner -Dsonar.projectName=Chatbot \
                    -Dsonar.projectKey=Chatbot '''
                }
            }
        }
        stage('Quality Gate') {
            steps {
                script {
                    waitForQualityGate abortPipeline: false, credentialsId: 'Sonar-token'
                }
            }
        }
        stage('OWASP FS SCAN') {
            steps {
                dependencyCheck additionalArguments: '--scan ./ --disableYarnAudit --disableNodeAudit', odcInstallation: 'DP-Check'
                dependencyCheckPublisher pattern: '**/dependency-check-report.xml'
            }
        }
        stage('TRIVY FS SCAN') {
            steps {
                sh "trivy fs . > trivyfs.json"
            }
        }
        stage('Docker Build & Push') {
            steps {
                script {
                    withDockerRegistry(credentialsId: 'docker', toolName: 'docker') {
                        sh "docker build -t chatbot ."
                        sh "docker tag chatbot aakibkhan1212/chatbot:latest"
                        sh "docker push aakibkhan1212/chatbot:latest"
                    }
                }
            }
        }
        stage('TRIVY') {
            steps {
                sh "trivy image aakibkhan1212/chatbot:latest > trivy.json"
            }
        }
        stage('Remove container') {
            steps {
                sh "docker stop chatbot || true"   // "|| true" keeps the stage green when no old container exists
                sh "docker rm chatbot || true"
            }
        }
        stage('Deploy to container') {
            steps {
                sh 'docker run -d --name chatbot -p 3000:3000 aakibkhan1212/chatbot:latest'
            }
        }
    }
}

Your application is successfully deployed. Check Docker Hub for the pushed image, and to access the application go to your browser and type:

http://your_public_ip:3000

Now you need an API key to connect your application to OpenAI:

  1. Go to openai.com and sign in
  2. Click on Create new secret key
  3. Provide a name and click on Create new secret key
  4. Copy the secret key, come back to your application, click on the OpenAI API key field in the bottom-left corner, and paste your key. Your application is now ready to use.

The application is ready to use. Now let's deploy it on Kubernetes.

Step 5 → Kubernetes cluster creation using Terraform via a Jenkins pipeline

  1. Create a new pipeline named eks-terraform
  2. Scroll down and tick "This project is parameterised", then add a Choice parameter named action with the choices apply and destroy (the last stage of the pipeline runs terraform ${action})

3. Scroll down to the pipeline script and paste the script below:

pipeline {
    agent any
    stages {
        stage('Checkout from Git') {
            steps {
                git branch: 'legacy', url: 'https://github.com/Aakibgithuber/Chat-gpt-deployment.git'
            }
        }
        stage('Terraform version') {
            steps {
                sh 'terraform --version'
            }
        }
        stage('Terraform init') {
            steps {
                dir('Eks-terraform') {
                    sh 'terraform init'
                }
            }
        }
        stage('Terraform validate') {
            steps {
                dir('Eks-terraform') {
                    sh 'terraform validate'
                }
            }
        }
        stage('Terraform plan') {
            steps {
                dir('Eks-terraform') {
                    sh 'terraform plan'
                }
            }
        }
        stage('Terraform apply/destroy') {
            steps {
                dir('Eks-terraform') {
                    sh 'terraform ${action} --auto-approve'
                }
            }
        }
    }
}
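
For context, the core of the Eks-terraform configuration looks roughly like the sketch below (a sketch only: the subnet IDs are placeholders and the IAM roles are defined elsewhere in the repo; the cluster name EKS_CLOUD matches the one used with update-kubeconfig later):

resource "aws_eks_cluster" "cluster" {
  name     = "EKS_CLOUD"
  role_arn = aws_iam_role.eks_cluster_role.arn          # cluster IAM role, defined in the repo
  vpc_config {
    subnet_ids = ["subnet-xxxxxxxx", "subnet-yyyyyyyy"] # placeholders
  }
}

resource "aws_eks_node_group" "nodes" {
  cluster_name    = aws_eks_cluster.cluster.name
  node_group_name = "node-group"
  node_role_arn   = aws_iam_role.eks_node_role.arn      # node IAM role, defined in the repo
  subnet_ids      = ["subnet-xxxxxxxx", "subnet-yyyyyyyy"]
  scaling_config {
    desired_size = 1
    max_size     = 2
    min_size     = 1
  }
  instance_types = ["t2.medium"]                        # assumption
}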

4. Click on Apply and then Save

5. Now click on the Build with Parameters option, select apply, and build

6. Cluster creation takes about 10 to 15 minutes

7. Once the pipeline completes, go to your AWS console and search for EKS

8. Check the node group → go to your EC2 instances

The node instance is ready as well.

Step 6 → Deployment on Kubernetes

  1. Go back to your gpt instance and run the following command:
aws eks update-kubeconfig --name <clustername> --region <region>

The command aws eks update-kubeconfig --name EKS_CLOUD --region us-east-1 is like telling your computer, "Hey, I'm using Amazon EKS (Elastic Kubernetes Service) in the us-east-1 region, and I want to connect to it." Use your own cluster name and desired region.

2. Clone the repository on your EC2 instance:

git clone https://github.com/Aakibgithuber/Chat-gpt-deployment.git

3. Go to the k8s folder:

cd Chat-gpt-deployment/k8s

4. There you will find the Kubernetes deployment file for the chatbot. Run the following commands to deploy the application:

kubectl apply -f chatbot-ui.yaml
kubectl get all

  5. Copy the load balancer's external IP from the service output and paste it into your browser (see the sample below)

6. The application is now deployed on Kubernetes as well.
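
The external address appears in the EXTERNAL-IP column of the service listing; the output looks roughly like this (the names, ports, and ELB hostname are illustrative and depend on what chatbot-ui.yaml defines):

kubectl get svc
NAME      TYPE           CLUSTER-IP     EXTERNAL-IP                                          PORT(S)        AGE
chatbot   LoadBalancer   10.100.12.34   a1b2c3d4e5-1234567890.us-east-1.elb.amazonaws.com   80:31000/TCP   2m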

Load Balancer Ingress →

It is a mechanism that helps distribute incoming internet traffic among multiple servers or services, ensuring efficient and reliable delivery of requests.

It’s like having a receptionist at a busy office building entrance who guides visitors to different floors or departments, preventing overcrowding at any one location. In the digital world, a Load Balancer Ingress helps maintain a smooth user experience, improves application performance, and ensures that no single server becomes overwhelmed with too much traffic.

Service.yaml

service.yaml file is like a set of rules that helps computers find and talk to each other within a software application. It's like a directory that says, "Hey, this is how you can reach different parts of our application." It specifies how different parts of your application communicate and how other services or users can connect to them.
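
A LoadBalancer-type service for the chatbot might look like the sketch below (a minimal sketch, assuming the container listens on port 3000 as seen earlier; the repo's chatbot-ui.yaml is the authoritative version):

apiVersion: v1
kind: Service
metadata:
  name: chatbot-service        # illustrative name
spec:
  type: LoadBalancer           # asks AWS to provision a load balancer with an external address
  selector:
    app: chatbot               # must match the labels on the deployment's pods
  ports:
    - port: 80                 # port the load balancer exposes
      targetPort: 3000         # port the chatbot container listens on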

Step 7 → Monitoring via Prometheus and Grafana

  • Prometheus is like a detective that constantly watches your software and gathers data about how it’s performing. It’s good at collecting metrics, like how fast your software is running or how many users are visiting your website.
  • Grafana, on the other hand, is like a dashboard designer. It takes all the data collected by Prometheus and turns it into easy-to-read charts and graphs. This helps you see how well your software is doing at a glance and spot any problems quickly.

In other words, Prometheus collects the information, and Grafana makes it look pretty and understandable so you can make decisions about your software. They’re often used together to monitor and manage applications and infrastructure.

1. Set up another server (EC2) for monitoring

  1. Go to the EC2 console and launch an instance with an Ubuntu base image and t2.medium specs, because the minimum requirements to install Prometheus are:
  • 2 CPU cores.
  • 4 GB of memory.
  • 20 GB of free disk space.

2. Installing Prometheus:

  1. First, create a dedicated Linux user for Prometheus and download Prometheus:
sudo useradd --system --no-create-home --shell /bin/false prometheus
wget https://github.com/prometheus/prometheus/releases/download/v2.47.1/prometheus-2.47.1.linux-amd64.tar.gz

2. Extract Prometheus files, move them, and create directories:

tar -xvf prometheus-2.47.1.linux-amd64.tar.gz
cd prometheus-2.47.1.linux-amd64/
sudo mkdir -p /data /etc/prometheus
sudo mv prometheus promtool /usr/local/bin/
sudo mv consoles/ console_libraries/ /etc/prometheus/
sudo mv prometheus.yml /etc/prometheus/prometheus.yml

3. Set ownership for the directories (the prometheus user was already created in step 1):

sudo chown -R prometheus:prometheus /etc/prometheus/ /data/

4. Create a systemd unit configuration file for Prometheus:

sudo nano /etc/systemd/system/prometheus.service

Add the following code to the prometheus.service file:

[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=500
StartLimitBurst=5

[Service]
User=prometheus
Group=prometheus
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/data \
  --web.console.templates=/etc/prometheus/consoles \
  --web.console.libraries=/etc/prometheus/console_libraries \
  --web.listen-address=0.0.0.0:9090 \
  --web.enable-lifecycle

[Install]
WantedBy=multi-user.target

b. Press Ctrl+O to save and then Ctrl+X to exit the file.

Here's an explanation of the key parts of the file above:

  • User and Group specify the Linux user and group under which Prometheus will run.
  • ExecStart is where you specify the Prometheus binary path, the location of the configuration file (prometheus.yml), the storage directory, and other settings.
  • web.listen-address configures Prometheus to listen on all network interfaces on port 9090.
  • web.enable-lifecycle allows for management of Prometheus through API calls.

5. Enable and start Prometheus:

sudo systemctl enable prometheus
sudo systemctl start prometheus
sudo systemctl status prometheus

Now go to the security group of your EC2 instance and open port 9090, on which Prometheus runs.

Go to → http://public_ip:9090 to see the Prometheus web page.

3. Installing Node Exporter:

Node exporter is like a “reporter” tool for Prometheus, which helps collect and provide information about a computer (node) so Prometheus can monitor it. It gathers data about things like CPU usage, memory, disk space, and network activity on that computer.

Node Exporter serves these metrics over HTTP on port 9100 (at the /metrics endpoint), which is where Prometheus scrapes them from. This information is useful for monitoring machine-level activity and helps you ensure that your applications and services are running smoothly and securely.

Run the following commands for installation

  1. Create a system user for Node Exporter and download Node Exporter:
sudo useradd --system --no-create-home --shell /bin/false node_exporter
wget https://github.com/prometheus/node_exporter/releases/download/v1.6.1/node_exporter-1.6.1.linux-amd64.tar.gz

2. Extract Node Exporter files, move the binary, and clean up:

tar -xvf node_exporter-1.6.1.linux-amd64.tar.gz
sudo mv node_exporter-1.6.1.linux-amd64/node_exporter /usr/local/bin/
rm -rf node_exporter*

3. Create a systemd unit configuration file for Node Exporter:

sudo nano /etc/systemd/system/node_exporter.service

Add the following code to the node_exporter.service file:

[Unit]
Description=Node Exporter
After=network.target
[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter
[Install]
WantedBy=default.target

4. Enable and start Node Exporter:

sudo chown node_exporter:node_exporter /usr/local/bin/node_exporter
sudo systemctl daemon-reload
sudo systemctl start node_exporter
sudo systemctl enable node_exporter
sudo systemctl status node_exporter

node exporter service is now running

You can access the Node Exporter metrics at:

http://public_ip:9100/metrics
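
From the instance itself you can sanity-check the exporter with curl; it serves plain-text metrics such as (values are illustrative):

curl -s http://localhost:9100/metrics | head

# HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
# TYPE node_cpu_seconds_total counter
node_cpu_seconds_total{cpu="0",mode="idle"} 12345.67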

4. Configure Prometheus Plugin Integration:

  1. Go to your EC2 and run →
cd /etc/prometheus

2. To monitor anything, you have to edit the prometheus.yml file (e.g., sudo vim prometheus.yml) and add scrape targets:

scrape_configs:
  - job_name: 'node_exporter'
    static_configs:
      - targets: ['localhost:9100']

  - job_name: 'jenkins'
    metrics_path: '/prometheus'
    static_configs:
      - targets: ['<your-jenkins-ip>:<your-jenkins-port>']

Add the above job entries with proper indentation as shown.

Press Esc and then type :wq to save and exit.

a. Check the validity of the configuration file →

promtool check config /etc/prometheus/prometheus.yml

Output → SUCCESS

b. Reload the Prometheus configuration without restarting →

curl -X POST http://localhost:9090/-/reload

Go to your Prometheus tab again, click on Status and select Targets; you will see three targets (Prometheus itself plus the two jobs we added in the YAML file).

Prometheus targets dashboard

5. Set up Grafana

Install Dependencies:

sudo apt-get update
sudo apt-get install -y apt-transport-https software-properties-common

Add the GPG Key for Grafana:

wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -

Add the repository for Grafana stable releases:

echo "deb https://packages.grafana.com/oss/deb stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list

Update the package list, then install and start Grafana:

sudo apt-get update
sudo apt-get -y install grafana
sudo systemctl enable grafana-server
sudo systemctl start grafana-server
sudo systemctl status grafana-server

Now go to your EC2 security group and open port 3000, on which Grafana runs.

Browse to http://public_ip:3000 to access the Grafana web interface.

username = admin, password = admin

6. Add Prometheus Data Source:

To visualize metrics, you need to add a data source.

Follow these steps:

  • Click on the gear icon (⚙️) in the left sidebar to open the “Configuration” menu.
  • Select “Data Sources.”
  • Click on the “Add data source” button.
  • Choose “Prometheus” as the data source type.
  • In the “HTTP” section:
  • Set the “URL” to http://localhost:9090 (assuming Prometheus is running on the same server).
  • Click the “Save & Test” button to ensure the data source is working.
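
To confirm the data source is actually returning data, you can run a simple PromQL query (for example in Grafana's Explore view) against the Node Exporter metrics:

node_memory_MemAvailable_bytes                     # available memory on the monitored node
rate(node_cpu_seconds_total{mode!="idle"}[5m])     # per-core CPU usage over the last 5 minutes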

7. Import a Dashboard

Importing a dashboard in Grafana is like using a ready-made template to quickly create a set of charts and graphs for monitoring your data, without having to build them from scratch.

  • Click on the “+” (plus) icon in the left sidebar to open the “Create” menu.
  • Select “Dashboard.”
  • Click on the “Import” dashboard option.
  • Enter the dashboard code you want to import (e.g., code 1860).
  • Click the “Load” button.
  • Select the data source you added (Prometheus) from the dropdown.
  • Click on the “Import” button.

8. Configure global settings for Prometheus

Go to Manage Jenkins → System → search for Prometheus → Apply → Save

9. Import a dashboard for Jenkins

  • Click on the “+” (plus) icon in the left sidebar to open the “Create” menu.
  • Select “Dashboard.”
  • Click on the “Import” dashboard option.
  • Enter the dashboard code you want to import (e.g., code 9964).
  • Click the “Load” button.
  • Select the data source you added (Prometheus) from the dropdown.

Step 8 → Destroy all the infrastructure

1. The command below deletes your deployment (to remove the service as well, you can delete everything the manifest created with kubectl delete -f chatbot-ui.yaml):

kubectl delete deployment chatbot

2. Go to Jenkins, locate the eks-terraform pipeline, click on Build with Parameters, select the destroy option, and then click Build

This will destroy your EKS cluster and node group.

It takes about 10 minutes to destroy all of the cluster infrastructure.

3. After the EKS cluster is deleted, let's delete the base instances → the gpt instance and the monitoring instance

Go to your terminal, navigate to the folder containing the instance Terraform code, and run:

terraform destroy --auto-approve

This will delete all of your remaining infrastructure.

4. Don't forget to delete the IAM role and IAM user from the IAM section.

That's all for this blog. Thanks for reading; if you liked it, hit the clap button.

Don't forget to follow me on LinkedIn → LinkedIn profile
