Project 13 → Deployment of a YouTube Clone on a Kubernetes Cluster with Slack Integration

Aakib
13 min read · Sep 3, 2024


The animated diagram above was designed with Cloudairy Cloudchart, a powerful architectural design tool.

It provides innovative features to enhance your design process, such as:

  1. ✅ Generative AI Magic: Bring your ideas to life with our AI-powered flowchart generator.
  2. ✅ Shape Your World: Choose from an extensive library of BPMN, dataflow, enterprise, flowchart, and geometric shapes.
  3. ✅ Visualize in 3D: Bring your cloud infrastructure to life with our 3D icons for AWS, Azure, GCP, and Kubernetes.
  4. ✅ Enrich Your Designs: Search and import images, icons, and GIFs directly into your diagrams.
  5. ✅ 200+ Templates: Accelerate your design with 200+ pre-built cloud architecture templates.

Cloudairy Cloudchart link →https://cloudairy.com

Today we are going to deploy a YouTube clone. In this project we also integrate Slack notifications for Jenkins. This is a resume-ready project with every step covered, so do follow along.

Completion steps →

Step 1: Set up a base EC2 instance on AWS

Step 2: IAM Role for EC2

Step 3: Configure the EC2 instance

Step 4: Set up Jenkins

Step 5: CI/CD pipeline to build and push the image to Docker Hub

Step 6: Integrate Slack notifications for the pipeline

Step 7: Kubernetes cluster creation using Terraform

Step 8: Create the deployment on EKS

Step 9: Destroy all the infrastructure

Step 1: Set up a base EC2 instance on AWS

  1. Go to the AWS console and launch an instance.

2. Choose Ubuntu from the list, create a new key pair → my key, and allow HTTP and HTTPS traffic.

Then click on Launch instance.

Step 2: IAM Role for EC2

Why do we need an IAM role for EC2? → The EC2 instance uses it to create the EKS cluster and manage the S3 bucket. Attaching this IAM role authorizes your EC2 instance to make changes in your AWS account.

1. Creating the IAM role

  1. In the search bar, type IAM.

2. Click on Roles on the left side.

3. Click on Create role and choose EC2 as the use case.

4. Click on Next.

5. Choose AdministratorAccess in the permissions section.

6. Click on Next and give your role a name.

7. Click on Create role and your IAM role is created.

2. Attach the IAM role to your EC2 instance

  1. Go to the EC2 section.
  2. Click on Actions → Security → Modify IAM role.

3. Choose the role from the dropdown and click on Update IAM role.

Why we need an IAM role →

Imagine you have a robot (EC2 instance) that does tasks for you in a big factory (AWS environment). Now, this robot needs to access different rooms (AWS services like S3, DynamoDB, etc.) to perform its tasks.

Here’s where IAM (Identity and Access Management) comes in:

  1. Robot Needs a Key to Enter Rooms: The IAM Role is like giving your robot a special key. This key allows the robot to enter specific rooms (access certain AWS services). Without this key, the robot can’t get in.
  2. Different Keys for Different Robots: Each robot (EC2 instance) can have its own key (IAM Role) with specific permissions. So, one robot may have a key to enter the storage room (access S3), while another robot has a key to enter the database room (access DynamoDB).
  3. No Need for Hardcoding Passwords: Using IAM Roles means you don’t have to hardcode passwords (access credentials) into the robot. It’s like not writing down passwords on the robot itself. The robot just uses its key when needed.
  4. Easily Change Permissions: If you want to change what a robot can do, you just change the permissions on its key (IAM Role). No need to reprogram the robot or give it a new password; just update the permissions on its key.
  5. Secure and Controlled Access: IAM Roles help keep things secure. You can control exactly what each robot is allowed to do. This way, if one robot is compromised or needs a different role, you can easily adjust its permissions without affecting others.
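
To see point 3 in practice, you can ask the EC2 instance metadata service which role is attached and look at the short-lived credentials AWS rotates for it. This is just an optional sanity check using the standard metadata endpoints; the <role-name> at the end is a placeholder for whatever you named your role:

# run these on the EC2 instance (optional sanity check)
# IMDSv2: request a session token first, then query the metadata service
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

# prints the name of the IAM role attached to this instance
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/

# prints the temporary credentials AWS rotates for that role (no hardcoded passwords anywhere)
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>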

Now go back to EC2 and connect to your instance.

Step 3: Configure the EC2 instance

You have to install Git, Docker, Jenkins, Trivy, Terraform, kubectl, etc. for the Docker build and the Kubernetes deployment.

  1. Run the following commands:
sudo apt update
vim run.sh

2. Enter the following script into your .sh file:

#!/bin/bash
sudo apt update -y
sudo mkdir -p /etc/apt/keyrings
wget -O - https://packages.adoptium.net/artifactory/api/gpg/key/public | sudo tee /etc/apt/keyrings/adoptium.asc
echo "deb [signed-by=/etc/apt/keyrings/adoptium.asc] https://packages.adoptium.net/artifactory/deb $(awk -F= '/^VERSION_CODENAME/{print$2}' /etc/os-release) main" | sudo tee /etc/apt/sources.list.d/adoptium.list
sudo apt update -y
sudo apt install temurin-17-jdk -y
/usr/bin/java --version

#install jenkins
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | sudo tee /usr/share/keyrings/jenkins-keyring.asc > /dev/null
echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/ | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update -y
sudo apt-get install jenkins -y
sudo systemctl start jenkins
sudo systemctl status jenkins

#install docker
sudo apt-get update
sudo apt-get install docker.io -y
sudo usermod -aG docker ubuntu
sudo usermod -aG docker jenkins
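# note: 'newgrp docker' only switches the group in an interactive shell; when this script
# runs non-interactively, the group change for the ubuntu user takes effect on the next login.
# chmod 777 on the docker socket below is a quick demo shortcut and far too permissive
# for anything beyond a throwaway lab instance.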
newgrp docker
sudo chmod 777 /var/run/docker.sock
sudo systemctl restart jenkins

# install trivy
sudo apt-get install wget apt-transport-https gnupg lsb-release -y
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | gpg --dearmor | sudo tee /usr/share/keyrings/trivy.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/trivy.gpg] https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main" | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install trivy -y


# Install Terraform
sudo apt install wget -y
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform -y

# Install kubectl
sudo apt update
sudo apt install curl -y
curl -LO https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client

3. Run the script with: bash run.sh
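
One thing the script above does not install is the AWS CLI, which Step 7 relies on for aws eks update-kubeconfig. If your instance does not already have it, you can install the official CLI and then confirm all the tools are in place (a minimal sketch; the version numbers you see will vary):

# install the AWS CLI v2 (skip if 'aws --version' already works)
sudo apt-get install unzip -y
curl -o awscliv2.zip "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"
unzip -q awscliv2.zip && sudo ./aws/install

# quick sanity check of everything the later steps depend on
java --version
docker --version
trivy --version
terraform -version
kubectl version --client
aws --version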

Step 4: Set up Jenkins

Everything from the step above is installed; now we just need to set up Jenkins and its CI/CD pipeline.

  1. Copy the public IP of your EC2 instance and open it in your browser → <public_ip>:8080

2. To get the initial admin password, connect to your EC2 instance.

3. Run the commands below:

sudo su
cat /var/lib/jenkins/secrets/initialAdminPassword

The output is your initial admin password; paste it into Jenkins.

4. Install the suggested plugins

5. Set up your Jenkins user.

Welcome to the Jenkins dashboard.

Step 5: CI/CD pipeline to build and push the image to Docker Hub

1. Install the plugins listed below:

  1. Terraform
  2. Slack Notification
  3. Global Slack Notifier
  4. Eclipse Temurin Installer (install without restart)
  5. NodeJS Plugin (install without restart)
  6. Parameterized Trigger (to trigger another pipeline after one completes)
  7. All the Docker-related plugins

Note → If Jenkins is not reachable at that address, go to your EC2 security group and open port 8080.

Also open port 3000 for the Docker container.

A. Set up Docker credentials

  1. Go to Jenkins → Manage Jenkins → Credentials → Global → Add Credentials.
  2. Provide the username and password of your Docker Hub account.
  3. Set the ID to docker.

B. Set up tools for Jenkins

Go to Manage Jenkins → Tools.

a. Add JDK

  1. Click on Add JDK and select the adoptium.net installer.
  2. Choose version jdk 17.0.8.1+1 and enter jdk17 in the Name field (it must match the jdk 'jdk17' reference in the pipeline script).

b. Add NodeJS

  1. Click on Add NodeJS.
  2. Enter node16 in the Name field (it must match the nodejs 'node16' reference in the pipeline script).
  3. Choose version NodeJS 16.2.0.

c. Add Docker →

  1. Click on Add Docker.
  2. Set the name to docker.
  3. Add installer → Download from docker.com.

d. Add Terraform

  1. Go to Manage Jenkins → Tools and search for Terraform.
  2. Add Terraform.
  3. Provide a name in the Name field, untick the Install automatically option, and give the path /usr/bin/.
  4. Since Terraform was already installed on the instance in Step 3, Jenkins picks up the binary from there.

5. Click on Apply and Save.

C. The Pipeline Script

  1. Go to New Item → select Pipeline → in the name field type Youtube Pipeline 1.

2. Scroll down to the pipeline script section and paste the following code:

pipeline {
    agent any
    tools {
        jdk 'jdk17'
        nodejs 'node16'
    }
    stages {
        stage('Clean Workspace') {
            steps {
                cleanWs()
            }
        }
        stage('Checkout from Git') {
            steps {
                git branch: 'main', url: 'https://github.com/Aakibgithuber/deployment-of-youtube.git'
            }
        }
        stage('Install Dependencies') {
            steps {
                sh "npm install"
            }
        }
        stage('Docker Build & Push') {
            steps {
                script {
                    withDockerRegistry(credentialsId: 'docker', toolName: 'docker') {
                        sh "docker build -t youtube-clone ."
                        sh "docker tag youtube-clone aakibkhan1212/youtube-clone:latest"
                        sh "docker push aakibkhan1212/youtube-clone:latest"
                    }
                }
            }
        }
        stage('Deploy to Container') {
            steps {
                // remove any previous container so re-runs don't fail on a name conflict
                sh 'docker rm -f youtube-clone || true'
                sh 'docker run -d --name youtube-clone -p 3000:3000 aakibkhan1212/youtube-clone:latest'
            }
        }
    }
}

If you want to add OWASP Dependency-Check and SonarQube, you can make those changes by following the previous project blog.

Click on Build Now.

Your application is successfully deployed. Check Docker Hub for the pushed image, and to access the application go to your browser and type:

http://your_public_ip:3000

Here is your application.

Image on Docker Hub pushed by the Jenkins pipeline

Step 6: Integrate Slack notifications for the pipeline

  1. Go to slack.com and create an account.

2. Choose any of the options.

3. Create a workspace.

4. Click on Next.

Welcome window of Slack

5. Create a new channel.

6. Now browse the Slack App Directory and search for Jenkins.

7. Click on Add to Slack and choose a channel.

Click on Add Jenkins CI integration.

8. It will generate a token for Jenkins.

9. Now go to Jenkins → Manage Jenkins → System (the global configuration) and scroll down to the Slack section.

10. Under Credentials, add the generated token as a Secret text credential.

11. Click on Test Connection and you should see Success.

12. Now go back to your YouTube pipeline, click Configure, and replace the pipeline script with the following code:

pipeline {
    agent any
    tools {
        jdk 'jdk17'
        nodejs 'node16'
    }
    stages {
        stage('Clean Workspace') {
            steps {
                cleanWs()
            }
        }
        stage('Checkout from Git') {
            steps {
                git branch: 'main', url: 'https://github.com/Aakibgithuber/deployment-of-youtube.git'
            }
        }
        stage('Install Dependencies') {
            steps {
                sh "npm install"
            }
        }
        stage('Docker Build & Push') {
            steps {
                script {
                    withDockerRegistry(credentialsId: 'docker', toolName: 'docker') {
                        sh "docker build -t youtube-clone ."
                        sh "docker tag youtube-clone aakibkhan1212/youtube-clone:latest"
                        sh "docker push aakibkhan1212/youtube-clone:latest"
                    }
                }
            }
        }
        stage('Deploy to Container') {
            steps {
                // remove the container started by the previous build so this run doesn't fail on a name conflict
                sh 'docker rm -f youtube-clone || true'
                sh 'docker run -d --name youtube-clone -p 3000:3000 aakibkhan1212/youtube-clone:latest'
            }
        }
    }
    post {
        success {
            slackSend(channel: '#all-the-cloud-hub', color: 'good', message: "Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' (${env.BUILD_URL}) was successful.")
        }
        failure {
            slackSend(channel: '#all-the-cloud-hub', color: 'danger', message: "Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' (${env.BUILD_URL}) failed.")
        }
        always {
            echo 'Build finished, check Slack for notifications.'
        }
    }
}

13. Now build the pipeline again and check Slack for the notifications.

14. Slack shows a success message for the pipeline we created in Jenkins.

Step 7: Kubernetes cluster creation using Terraform

  1. Clone the GitHub repo →

a. mkdir super_mario

b. cd super_mario

c. git clone https://github.com/Aakibgithuber/deployment-of-youtube.git

d. cd deployment-of-youtube/

e. cd EKS-TF

f. Edit the backend.tf file → vim backend.tf

Note → Make sure to provide your own bucket name and region in this file, otherwise it will not work. The IAM role attached to your EC2 instance is what allows it to use other services such as the S3 bucket. A minimal sketch of such a backend block is shown below.
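
For reference, an S3 backend block generally looks like the sketch below. These are placeholders, not the repo's actual values, so replace the bucket, key, and region with your own:

terraform {
  backend "s3" {
    bucket = "your-unique-bucket-name"   # must already exist in your account
    key    = "eks/terraform.tfstate"     # path of the state file inside the bucket
    region = "us-east-1"                 # your region
  }
}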

NOW RUN →

  1. terraform init

When we run terraform init, it sets up your working area, downloads necessary plugins, and makes sure everything is in place so that you can start using Terraform to create, update, or manage your infrastructure. It’s like getting all the tools and materials ready before you start building something amazing with your computer.

2. terraform validate

terraform validate reviews the code to catch syntax errors or mistakes and reports Success if there are no errors in the files.

3. terraform plan

terraform plan shows what changes will be made to your infrastructure. With this command we can review and confirm that everything looks good before giving final approval to build or modify the infrastructure. It is like the blueprint of a construction project before actually creating or changing anything with Terraform.

4. terraform apply

terraform apply --auto-approve

Running terraform apply --auto-approve is like telling the computer, "Go ahead and build everything exactly as planned without asking me for confirmation each time." It automates the deployment of your infrastructure without needing constant input. When we execute this command, Terraform reads our code, figures out what needs to be created or changed, and then goes ahead with the build, skipping the usual approval prompt.

It takes around 5 to 10 minutes to complete.

5. The command below updates your kubeconfig so kubectl can talk to the EKS cluster:

aws eks update-kubeconfig --name EKS_CLOUD --region us-east-1

The command aws eks update-kubeconfig --name EKS_CLOUD --region us-east-1 is like telling your computer, "Hey, I'm using Amazon EKS (Elastic Kubernetes Service) in the us-east-1 region, and I want to connect to it." You can use your desired region.
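
A quick way to confirm the kubeconfig update worked (assuming the cluster name EKS_CLOUD from the command above) is to ask the cluster for its nodes:

kubectl config current-context   # should reference the EKS_CLOUD cluster
kubectl get nodes                # worker nodes should show a Ready status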

Step 8: Create the deployment on EKS

  1. Change to the directory where the deployment and service files are stored, using the command → cd ..
  2. Create the deployment:
kubectl apply -f deployment.yaml

A deployment.yaml file is like a set of instructions that tells the cluster, "Hey, here's how you should run and manage a particular application." It provides the information Kubernetes needs to deploy and manage a specific application: what the application is, how many copies of it should run, and other settings that keep it up and running smoothly.

3. The service definition is included in the same deployment file.

A service.yaml file is like a set of rules that helps components find and talk to each other within an application. It's like a directory that says, "Hey, this is how you can reach different parts of our application." It specifies how the parts of your application communicate and how other services or users can connect to them. A minimal sketch of such a combined file is shown after this list.

4. run → kubectl get all

Copy the load balancer ingress (the external hostname of the LoadBalancer service), paste it into your browser, and your application is running there.
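
For reference, a combined deployment-plus-service manifest generally looks like the sketch below. This is not the exact file from the repo — the names, labels, replica count, and service port are illustrative — but the image and container port match what the pipeline above builds and runs:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: youtube-clone
spec:
  replicas: 2                      # how many copies of the app to run
  selector:
    matchLabels:
      app: youtube-clone
  template:
    metadata:
      labels:
        app: youtube-clone
    spec:
      containers:
        - name: youtube-clone
          image: aakibkhan1212/youtube-clone:latest   # image pushed by the Jenkins pipeline
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: youtube-clone-service
spec:
  type: LoadBalancer               # AWS provisions an ELB and gives you an external hostname
  selector:
    app: youtube-clone
  ports:
    - port: 80                     # port exposed by the load balancer
      targetPort: 3000             # port the container listens on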

Don't forget to destroy everything afterwards; it saves your AWS bill and your AWS account too.

Load Balancer Ingress →

It is a mechanism that helps distribute incoming internet traffic among multiple servers or services, ensuring efficient and reliable delivery of requests.

It’s like having a receptionist at a busy office building entrance who guides visitors to different floors or departments, preventing overcrowding at any one location. In the digital world, a Load Balancer Ingress helps maintain a smooth user experience, improves application performance, and ensures that no single server becomes overwhelmed with too much traffic.
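
If you prefer the terminal over scanning the kubectl get all output, you can pull the load balancer hostname straight from the service (the service name below matches the sketch above; use whatever name the repo's manifest actually defines):

# the EXTERNAL-IP / hostname column shows the load balancer address
kubectl get svc

# or extract just the hostname of the ELB
kubectl get svc youtube-clone-service -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'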

Step 9: Destroy all the infrastructure

1. The commands below delete or destroy your cluster:

cd EKS-TF
terraform destroy --auto-approve

After 3 to 5 minutes everything is destroyed.

2. Now go to EC2 and terminate your instance.

Here we have completed another project. Don't forget to create a good animated diagram for it using Cloudairy Cloudchart.

Cloudairy cloudchart website link →https://cloudairy.com


Written by Aakib

Cloud computing and DevOps engineer. As a fresher, I am learning and gaining experience by doing hands-on DevOps projects on AWS and GCP.
