Project 14 → Deployment of a Three-Tier Web Application with Docker Compose

Aakib
19 min read · Sep 12, 2024


Topics covered in this blog

  1. Introduction to the project
  2. Three-Tier Web Application Architecture
  3. Deployment Flowchart
  4. DevOps Pipeline to Deploy Applications in Development, QA, and Production

Phase 1 : Testing Locally

Step 1 : Set up a base EC2

Step 2 : IAM role for EC2

Step 3 : Clone the GitHub repository

Step 4 : Set up Docker and Docker Compose

Step 5 : Deploy locally using Docker Compose

Phase 2 : Set up the CI/CD Pipeline

Step 1 → Set up SonarQube

Step 2 → Set up Jenkins

Step 3 → Install plugins

Step 4 → Set up credentials

Step 5 → Set up tools and configure global settings

Step 6 → Pipeline script

Phase 3 : Set up multiple pipelines for Quality Assurance and Production

  • Pipeline for Development
  • Pipeline for Quality Assurance
  • Pipeline for Production

1. Introduction

In today’s world, many websites and applications we use every day, like online stores, social media platforms, or banking apps, are built using something called a three-tier architecture. This design pattern helps developers organize their code and make sure the application works smoothly, securely, and efficiently.

The three tiers (or layers) in this architecture are:

  1. Presentation Layer (Frontend) — This is what users see and interact with, like the buttons, images, and forms on a website.
  2. Logic Layer (Backend) — This handles the “behind the scenes” work, like processing user data, running calculations, or managing tasks.
  3. Data Layer (Database) — This stores and retrieves all the important information, like user profiles, product details, or transaction history.

Each of these layers works together to make sure the website or application runs well. The three-tier architecture is very popular because it keeps things organized, makes applications easier to update, and helps handle large numbers of users without slowing down.

In this blog, we’ll take a detailed look at how these layers work, how the application is deployed step-by-step, and how developers use tools like DevOps pipelines to automate the process of building and updating the application in different environments such as Development (Dev), Quality Assurance (QA), and Production (Prod).

We are going to deploy this three-tier application with Docker Compose.

Docker Compose is a tool that makes it easier to manage multi-container Docker applications.

Think of Docker as a way to package your app into a “container,” which includes everything it needs to run, like code, libraries, and system tools. But many apps need more than one container to work. For example, a web app might need one container for the web server and another for the database.

Docker Compose lets you define and run these multiple containers with a simple text file called docker-compose.yml. In this file, you list all the containers your app needs and how they should interact. Then, with one command, Docker Compose will start everything up, making sure all the containers work together as intended.

In short, Docker Compose is like an organizer for your Docker containers, making it easy to manage and run complex apps with just a few commands.
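
To make this concrete, here is a minimal sketch of what a docker-compose.yml for a three-tier app like this could look like. The service names, ports, and environment variable below are illustrative placeholders, not the exact values from the project repository:

version: "3.8"
services:
  frontend:                      # presentation layer
    build: ./frontend
    ports:
      - "5173:5173"              # UI exposed on port 5173
    depends_on:
      - backend
  backend:                       # logic layer
    build: ./backend
    ports:
      - "5000:5000"              # hypothetical API port
    environment:
      - MONGODB_URI=mongodb://mongodb:27017/wanderlust   # placeholder variable name
    depends_on:
      - mongodb
  mongodb:                       # data layer
    image: mongo:latest
    volumes:
      - mongo-data:/data/db      # persist database files
volumes:
  mongo-data:

With a file like this in place, a single docker-compose up -d starts all three tiers and puts them on a shared network so they can reach each other by service name.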

2. Three-Tier Web Application Architecture

Web applications today, whether it’s an online shopping platform or a social media site, are often built using a three-tier architecture. This design is popular because it splits the application into three distinct parts, making it easier to manage and maintain. Each part has a specific job to do, and they all work together to give users a smooth experience.

(Architecture diagram designed with Cloudairy CloudChart.)

Let’s break down these three parts:

1) Presentation Layer (The Frontend)

Think of this as the face of your application. It’s what users see and interact with when they visit your website. This layer includes all the visual stuff — buttons, images, forms, menus, and text.

For example, if you’re on a shopping website, the homepage, the search bar, the product listings, and the “Add to Cart” button are all part of this presentation layer. When you click on a product to view more details, you’re interacting with this layer.

Technologies used here:

  • HTML (for the structure of web pages)
  • CSS (to make things look nice and styled)
  • JavaScript (to make the website interactive, like responding when you click buttons)

2) Logic Layer (The Backend)

This is where the magic happens. The logic layer handles all the heavy lifting. When you, the user, click a button or submit a form on the frontend, this middle layer processes the request, does the calculations, and makes decisions based on what you want to do.

Let’s say you’re shopping online and you add something to your cart. The backend checks whether the item is available, updates your cart, and calculates the total. It doesn’t just store or display the information — it works with it.

The logic layer is like the “brain” of the application. It processes what the user does on the frontend and interacts with the database to get or save data. Once it processes everything, it sends the final result back to the frontend for the user to see.

Technologies used here:

  • Programming languages like Python, Java, Node.js, or Ruby.
  • Frameworks such as Django (for Python) or Spring (for Java).
  • APIs (Application Programming Interfaces), which let the frontend and backend talk to each other.

3) Data Layer (The Database)

The data layer is where all the important information is stored. This could be anything from user profiles, order histories, or product details. The database holds onto this data and sends it back to the logic layer whenever it’s needed.

For example, when you log in to your account on a website, the backend checks the data layer for your username and password. If it finds a match, it lets you log in. If not, it shows an error message.

The data layer stores, updates, and retrieves data as the backend requests it. When you submit a form or need to see certain information (like product availability), the backend checks with the database to get the info and then processes it.

Technologies used here:

  • SQL databases like MySQL or PostgreSQL.
  • NoSQL databases like MongoDB (which is great for unstructured data).
  • Cloud databases like AWS RDS or Google Cloud SQL for managing large-scale applications.

How Do These Layers Interact With Each Other?

Let’s walk through a real-world example. Imagine you’re on a website, looking to buy a new pair of shoes:

  1. You search for “sneakers” in the search bar (this is happening on the frontend).
  2. The search request is sent to the backend, where the logic layer processes it. The backend knows it needs to look in the database for sneakers.
  3. The database (data layer) is queried, and it sends back a list of all the sneakers available in the store.
  4. The backend receives this data, processes it, and sends the result back to the frontend, which displays all the sneakers on your screen.

Each layer has its own job, but they rely on each other to keep the application running smoothly. This separation of responsibilities makes the system more organized and scalable. If you need to change the design, you only modify the frontend. If you need to update how the system handles data, you focus on the backend or database.

If you ever wonder how websites like Amazon or Netflix handle millions of users at the same time, it’s thanks to architectures like this one!

3. Deployment Flowchart for a Three-Tier Web Application

  1. Coding Part
  • Frontend code: Developers design and build the part of the app you see and interact with.
  • Backend code: Developers write the code that handles data processing and logic.

2. Code Management

  • The written code is managed with a source code management tool like Git. Think of it as a shared workspace where everyone’s changes are recorded.
  • Once your code is ready, you push it to a central repository like GitHub.

3. CI/CD Pipeline

  • You can use any CI/CD tool, such as Jenkins or CircleCI; here we use Jenkins.
  • Jenkins is the CI/CD tool that automates the whole software development lifecycle.
  • It fetches code from GitHub, tests it, and deploys it in Docker containers.

4. Containerization

  • The app is packed into containers (using Docker), making it easy to run consistently across different environments.

5. Set Up Different Environments

  • Development (Dev): Where the initial testing happens.
  • Quality Assurance (QA): Where more detailed testing and bug fixes are done.
  • Production (Prod): The live version of the app where real users interact with it.

6. Deploy to Dev Environment

  • The new code is first deployed to the Dev environment. Here, developers check if everything works as expected.

7. Testing in QA Environment

  • Next, the code moves to QA. Testers perform more rigorous testing to ensure there are no bugs or performance issues.

8. Get Approval for Production

  • Once QA testing is complete and the code is stable, it’s reviewed for final approval before going live.

9. Deploy to Production

  • The code is deployed to the Production environment. This is where users can see and use the updated app.

10. Repeat and Improve

  • The process starts again with new code updates, improving the app based on user feedback and performance data.

(Deployment flowchart designed with Cloudairy CloudChart.)

4. DevOps Pipeline to Deploy Applications in Development, QA, and Production:

Phase 1 : Testing Locally

Step 1 : Set up a base EC2

Step 2 : IAM role for EC2

Step 3 : Clone the GitHub repository

Step 4 : Set up Docker and Docker Compose

(Workflow diagram designed with Cloudairy CloudChart.)

Step 5 : Deploy locally using Docker Compose

Step 1 → Set up a base EC2

  1. Go to the AWS console and launch an instance.

2. Choose Ubuntu from the list, create a new key pair (my-key), and allow HTTP and HTTPS traffic.

Then click on Launch instance.

3. Go to EC2 → Security → Security groups and edit the inbound rules to open the ports this project needs. The exact list is shown in the screenshot; typically it includes 22 (SSH), 80/443 (HTTP/HTTPS), 5173 (frontend), 8080 (Jenkins), and 9000 (SonarQube). A CLI equivalent is sketched below.
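
If you prefer the AWS CLI over the console, a rough equivalent for opening those ports looks like the following; the security group ID is a placeholder, and the open-to-the-world CIDR is only acceptable for a throwaway demo setup:

# Hypothetical CLI version of the inbound rules (sg-xxxxxxxx is a placeholder)
for port in 22 80 443 5173 8080 9000; do
  aws ec2 authorize-security-group-ingress \
    --group-id sg-xxxxxxxx \
    --protocol tcp --port "$port" \
    --cidr 0.0.0.0/0
done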

Step 2 → IAM role for EC2

Why do we need an IAM role for EC2? → The role is used by your EC2 instance to call other AWS services (for example, creating an EKS cluster or managing an S3 bucket). Attaching it authorizes the instance to make changes in your AWS account.

1. Creating the IAM role

  1. In the search bar, type IAM.

2. Click on Roles on the left side.

3. Click on Create role and choose EC2 from the dropdown.

4. Click on Next.

5. Choose AdministratorAccess in the permissions section.

6. Click on Next and give your role a name.

7. Click on Create role, and your IAM role is created.

2. Attach the IAM role to your EC2

  1. Go to the EC2 section.
  2. Click on Actions → Security → Modify IAM role.

3. Choose the role from the dropdown and click on Update IAM role.

Now go back and connect to your EC2 instance.

Run the following commands:

sudo su          # switch to the root user
apt update -y    # refresh the package index

Step 3 → Clone the GitHub repository

On your EC2 instance, clone the GitHub repo to get the source code:

git clone https://github.com/Aakibgithuber/wanderlust-3-tier-project.git
ls

Step 4 → Set up Docker and Docker Compose

There is a file called script.sh; run it with the command below to set up Docker and Docker Compose.

bash script.sh   # installs all the necessary tools
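
If you want to see what is happening (or the script fails), the commands below are a rough manual equivalent of what such a setup script typically runs on Ubuntu; they are an assumption, not the literal contents of script.sh:

# Approximate manual setup on Ubuntu (not the actual script.sh contents)
sudo apt-get update -y

# Install Docker Engine and Docker Compose from Ubuntu's repositories
sudo apt-get install -y docker.io docker-compose

# Start Docker and keep it running after reboots
sudo systemctl enable --now docker

# Allow the current user to run docker without sudo (re-login required)
sudo usermod -aG docker "$USER"

docker --version
docker-compose --version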

Step 5 → Deploy with Docker Compose

  1. You just need to run a single command to bring up all the containers (frontend, backend, and database) and wire them together:
docker-compose up -d
Your three containers are now running.

Copy the public IP and paste it into your web browser:

public_ip:5173

As you can see, the Featured section is not loading yet. To fix that, we have to run the following commands.

docker ps   # lists the running containers and their names

Copy the Mongo container name and run:

docker exec -it <container_name> mongoimport --db wanderlust --collection posts --file ./data/sample_posts.json --jsonArray

The output should report that 10 documents were imported successfully.

Now, if you hit the URL again, the whole website loads.

Here is your complete website; let’s create a post.
Here is the new post we just created (“What is DevOps”).

Note: If you get errors loading the backend or the featured posts, update the current IP address of your machine in the /backend/.env.sample and /frontend/.env.sample files.
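
For illustration, the kind of change involved looks like this; the variable names below are placeholders, so check the actual .env.sample files in the repository for the real keys:

# frontend/.env.sample (placeholder key name)
VITE_API_PATH="http://<your_ec2_public_ip>:5000"

# backend/.env.sample (placeholder key names)
MONGODB_URI="mongodb://mongodb:27017/wanderlust"
CORS_ORIGIN="http://<your_ec2_public_ip>:5173"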

Phase 2 : The Pipeline

Step 1 → Set up SonarQube

Step 2 → Set up Jenkins

Step 3 → Install plugins

Step 4 → Set up credentials

Step 5 → Set up tools and configure global settings

Step 6 → Pipeline script

1. SonarQube →

Copy the public IP of your machine.

  1. Go to your browser and type → <public_ip>:9000

The SonarQube window opens.

2. Initially, both the username and password are admin.

3. Update your password.

4. You land on the SonarQube welcome window.

2. Jenkins →

  1. In the browser, go to → <public_ip>:8080. Jenkins asks for the initial admin password.

2. To get it, go to your EC2 instance and connect to it.

3. Run the commands below:

sudo su
cat /var/lib/jenkins/secrets/initialAdminPassword

The output is your password; paste it into Jenkins.

4. Install the suggested plugins.

5. Set up your Jenkins user.

Welcome to the Jenkins dashboard.

Step 3 → Install the plugins listed below

  1. Eclipse Temurin Installer (install without restart)
  2. SonarQube Scanner (install without restart)
  3. Docker Compose Build Step
  4. NodeJS Plugin (install without restart)
  5. OWASP → The OWASP plugin in Jenkins is like a “security assistant” that helps you find and fix security issues in your software. It uses the knowledge and guidelines from the Open Web Application Security Project (OWASP) to scan your web applications and provide suggestions on how to make them more secure. It’s a tool to ensure that your web applications are protected against common security threats and vulnerabilities.
  6. Prometheus metrics → to monitor Jenkins on a Grafana dashboard (a sample scrape config is sketched right after this list)
  7. Download all the Docker-related plugins.
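
The Prometheus metrics plugin exposes Jenkins metrics at the /prometheus endpoint by default. Assuming the default Jenkins port, a Prometheus scrape job for it could look roughly like this (the IP is a placeholder):

# prometheus.yml — scrape job for the Jenkins Prometheus plugin
scrape_configs:
  - job_name: "jenkins"
    metrics_path: "/prometheus"          # default path exposed by the plugin
    static_configs:
      - targets: ["<jenkins_public_ip>:8080"]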

Step 4 → Add credentials for SonarQube and Docker

First, we generate a token in SonarQube to use as a secret-text credential in Jenkins.

a. Set up SonarQube credentials

  1. Go to http://public_ip:9000
  2. Enter your username and password.
  3. Click on Security → Users → Tokens → Generate token.
  4. token_name == jenkins

5. Copy the token and go to Jenkins → Manage Jenkins → Credentials → Global → Add credentials.

6. Select Secret text from the dropdown.

7. Secret text == your token, ID == jenkins → click on Create.

b. Set up a project in SonarQube for Jenkins

  1. Go to your SonarQube server.
  2. Click on Projects.
  3. In the name field, type gpt.
  4. Click on Set Up.

Follow the prompts shown in the screenshots: click Set Up, select the option shown, click Generate, and then click Continue.

The SonarQube project for Jenkins is now set up.

c. Set up SonarQube

d. Set up Docker credentials

  1. Go to Jenkins → Manage Jenkins → Credentials → Global → Add credentials.
  2. Provide the username and password of your Docker Hub account.
  3. ID == docker

Credentials for both are now set up.

Step 5 → Now we are going to set up tools for Jenkins

Go to Manage Jenkins → Tools.

a. Add JDK

  1. Click on Add JDK and select the adoptium.net installer.
  2. Choose version jdk 17.0.8.1+1 and enter jdk17 in the name field (this must match the tool name used in the pipeline script).

b. Add Node.js

  1. Click on Add NodeJS.
  2. Enter node16 in the name field.
  3. Choose version NodeJS 16.2.0.

c. Add Docker →

  1. Click on Add Docker.
  2. Name == docker
  3. Add installer == Download from docker.com

d. Add SonarQube Scanner →

  1. Add SonarQube Scanner.
  2. Name == sonar-scanner

e. Add OWASP Dependency-Check →

Adding the Dependency-Check tool in the “Tools” section of Jenkins allows you to perform automated security checks on the dependencies used by your application.

  1. Add Dependency-Check.
  2. Name == DP-Check
  3. From Add installer, select Install from github.com.

Configure global settings for SonarQube and set up the webhook

  1. Go to Manage Jenkins → Configure System → add a SonarQube server.
  2. Name == sonar-server
  3. Server URL == http://public_ip:9000
  4. Server authentication token == jenkins → this is the token we created in SonarQube’s security settings.
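
The “quality gate” stage of the pipeline waits for SonarQube to call back into Jenkins, so SonarQube also needs a webhook pointing at Jenkins. In SonarQube, go to Administration → Configuration → Webhooks and add one; assuming the default Jenkins port, the URL looks like this:

http://<jenkins_public_ip>:8080/sonarqube-webhook/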

Step 6 → Pipeline script: let’s run the pipeline

  1. Go to New Item → select Pipeline → in the name field, type gpt-pipeline.
  2. Scroll down to the pipeline script section and copy-paste the following code:
pipeline {
    agent any
    tools {
        jdk 'jdk17'
        nodejs 'node16'
    }
    environment {
        SCANNER_HOME = tool 'sonar-scanner'
    }
    stages {
        stage('Checkout from Git') {
            steps {
                git branch: 'main', url: 'https://github.com/Aakibgithuber/wanderlust-3-tier-project.git'
            }
        }
        stage('Install Dependencies') {
            steps {
                sh "npm install"
            }
        }
        stage('Sonarqube Analysis') {
            steps {
                withSonarQubeEnv('sonar-server') {
                    sh ''' $SCANNER_HOME/bin/sonar-scanner -Dsonar.projectName=docker-compose \
                        -Dsonar.projectKey=docker-compose '''
                }
            }
        }
        stage('quality gate') {
            steps {
                script {
                    // credentialsId must match the SonarQube token credential created earlier
                    waitForQualityGate abortPipeline: false, credentialsId: 'Sonar-token'
                }
            }
        }
        stage('OWASP FS SCAN') {
            steps {
                dependencyCheck additionalArguments: '--scan ./ --disableYarnAudit --disableNodeAudit', odcInstallation: 'DP-Check'
                dependencyCheckPublisher pattern: '**/dependency-check-report.xml'
            }
        }
        stage('TRIVY FS SCAN') {
            steps {
                sh "trivy fs . > trivyfs.json"
            }
        }
        // Docker Compose build stage with timeout
        stage('Docker-compose Build') {
            steps {
                script {
                    timeout(time: 1, unit: 'MINUTES') { // Timeout set to 1 minute
                        sh 'docker-compose up -d'
                    }
                }
            }
        }
        stage('Docker-compose Push') {
            steps {
                script {
                    withDockerRegistry(credentialsId: 'docker', toolName: 'docker') {
                        // Tag and push backend and frontend images
                        sh "docker tag devpipeline-backend aakibkhan1212/devpipeline-backend:latest"
                        sh "docker tag devpipeline-frontend aakibkhan1212/devpipeline-frontend:latest"

                        sh "docker push aakibkhan1212/devpipeline-backend:latest"
                        sh "docker push aakibkhan1212/devpipeline-frontend:latest"
                    }
                }
            }
        }
        stage('TRIVY') {
            steps {
                // Write each image scan to its own file so the first report is not overwritten
                sh "trivy image aakibkhan1212/devpipeline-backend > trivy-backend.json"
                sh "trivy image aakibkhan1212/devpipeline-frontend > trivy-frontend.json"
            }
        }
    }
}
(Pipeline diagram designed with Cloudairy CloudChart.)
Here is the success message for the pipeline.

Your application is successfully deployed. Check Docker Hub for the pushed images, and to access the application, go to your browser and type

“public_ip:5173”

Here is your website, deployed by the Jenkins pipeline.

Phase 3 : Set up multiple pipelines for Quality Assurance and Production

Why do we need to deploy it multiple times?

We deploy the application in three different stages — Development, Quality Assurance (QA), and Production — to ensure the application works correctly at every step and to minimize the chances of errors in the final product. Here’s why each stage is important:

1. Development (Dev)

In the Development stage, the application is tested by developers. This is where new features are added, bugs are fixed, and code changes are made. It’s like a “testing ground” where developers can try out new things without worrying about breaking anything important.

  • Purpose: To develop and test new features in a safe environment.
  • Why it’s needed: It helps catch issues early before the code moves to the next stage.

2. Quality Assurance (QA)

In the QA stage, testers check if everything works as expected. They test the application in a controlled environment, trying to find bugs, issues, or performance problems. This stage simulates a real-world scenario but is still separate from the actual users.

  • Purpose: To test the stability, performance, and functionality of the application.
  • Why it’s needed: It helps ensure the application is ready for users by catching any issues that were missed in the development stage.

3. Production (Prod)

The Production stage is where the application goes live for actual users. This is the final environment where the real customers or users interact with the application. It’s crucial that only well-tested, stable code reaches this stage, so users don’t experience issues.

  • Purpose: To serve the final, stable version of the application to users.
  • Why it’s needed: This ensures that only high-quality, reliable code is deployed, providing a smooth experience for users.

1. Pipeline for Development

In the Development pipeline, the goal is to test new features or bug fixes in a safe environment where developers can make changes without impacting the live application. It’s all about fast feedback to help developers iterate quickly.

Steps:

  • Checkout Code: Pull the latest code from the repository so developers have the most up-to-date version.
  • Build Docker Images: Create Docker images for each service (frontend, backend, and database) to package the application in a standardized way.
  • Run Unit Tests: These tests focus on checking small, individual parts of the application (like functions or modules) to make sure they work as expected.
  • Push Docker Images (Optional): You can choose to push the images to a Docker registry, but it’s optional at this stage since the images are usually local.
  • Deploy to Development: The application is deployed in the Development environment where developers can interact with it to see if their changes worked.
  • Basic Integration Tests: Optionally, run tests to ensure different parts of the application (like frontend and backend) work together correctly.
  • Trigger the QA Pipeline: Once the development work is complete and basic tests pass, the pipeline automatically triggers the QA process for further testing.
pipeline {
    agent any

    stages {
        stage('Checkout Code') {
            steps {
                git 'https://github.com/Aakibgithuber/docker-compose-project.git'
            }
        }

        stage('Build Docker Images') {
            steps {
                sh 'docker-compose build'
            }
        }

        stage('Run Unit Tests') {
            steps {
                sh 'docker-compose run backend pytest'
            }
        }

        stage('Deploy to Development') {
            steps {
                sh 'docker-compose up -d'
            }
        }
    }

    post {
        success {
            echo "Development deployment successful, triggering QA pipeline in 10 seconds..."
            sleep 10
            build job: 'QA_Pipeline'
        }
        failure {
            echo "Development deployment failed."
        }
    }
}

2. Pipeline for Quality Assurance

In the QA pipeline, the focus shifts from development to testing the entire application thoroughly. QA teams simulate real-world conditions to ensure the application works as expected for users. The QA environment is closer to what the production environment will be, so this is where more advanced tests are run.

Steps:

  • Pull Docker Images: The pipeline pulls the Docker images that were built and tested in the Development pipeline, ensuring that the exact same version of the application is being tested in QA.
  • Deploy to QA: The application is deployed in the QA environment so that testers can interact with it and run their tests.
  • Run Functional Tests: These tests check that the application functions correctly from a user’s perspective. They make sure all features work as expected.
  • Run End-to-End Tests: These tests simulate real user scenarios, verifying that all parts of the application (frontend, backend, and database) work seamlessly together.
  • Run Performance Tests: Performance or load testing checks how well the application handles heavy traffic and stress. This ensures the application can scale and perform well under real-world conditions.
  • Manual QA Approval: Once the QA team is satisfied with the tests, there’s a manual approval step to ensure human oversight before the application moves to production.
  • Trigger the Production Pipeline: If all tests pass and QA approves, the pipeline automatically triggers the Production deployment process.
pipeline {
    agent any

    stages {
        stage('Pull Docker Images') {
            steps {
                sh 'docker-compose pull'
            }
        }

        stage('Deploy to QA') {
            steps {
                sh 'docker-compose up -d'
            }
        }

        stage('Run Functional Tests') {
            steps {
                sh 'docker-compose run frontend npm run test'
            }
        }

        stage('Run End-to-End Tests') {
            steps {
                sh 'docker-compose run backend behave tests/'
            }
        }

        stage('Run Performance Tests') {
            steps {
                sh 'docker-compose run backend stress --cpu 8 --timeout 10'
            }
        }
    }

    post {
        success {
            echo "QA testing successful, triggering Production pipeline in 10 seconds..."
            sleep 10
            build job: 'Production_Pipeline'
        }
        failure {
            echo "QA testing failed."
        }
    }
}

3. Pipeline for Production

The Production pipeline is where the final version of the application is deployed for real users. This environment must be stable and reliable, so the pipeline focuses on smooth deployment and post-deployment checks to ensure everything runs as expected.

Steps:

  • Pull Docker Images: The Production pipeline pulls the same Docker images that were built and tested in Development and QA, ensuring consistency.
  • Approval Step: A manual approval step is typically included here to ensure that nothing is deployed to production without explicit authorization. This step adds an extra layer of protection.
  • Deploy to Production: The final version of the application is deployed to the production environment, where real users will interact with it.
  • Post-Deployment Smoke Tests: After the deployment, simple smoke tests are run to verify that the application is up and running correctly. This ensures that the deployment didn’t break anything critical.
  • Monitoring & Alerts: In production, it’s important to have monitoring systems and alerts set up to keep track of the application’s performance and notify the team if anything goes wrong.
  • Notify Team: Once the production deployment is successful, notifications (via Slack, email, etc.) are sent to the team, confirming that the application is live and stable.
pipeline {
    agent any

    stages {
        stage('Pull Docker Images') {
            steps {
                sh 'docker-compose pull'
            }
        }

        stage('Manual Approval') {
            steps {
                input message: 'Approve deployment to production?'
            }
        }

        stage('Deploy to Production') {
            steps {
                sh 'docker-compose up -d'
            }
        }

        stage('Run Smoke Tests') {
            steps {
                sh 'curl http://your-production-url'
            }
        }

        stage('Post-Deployment Notifications') {
            steps {
                echo "Deployment to production was successful!"
                // Here you can add additional steps for notifications like Slack or email
            }
        }
    }

    post {
        success {
            echo "Production deployment successful."
        }
        failure {
            echo "Production deployment failed."
        }
    }
}

That completes the blog. If you liked it, do clap, follow, and share.

Follow me on GitHub

Follow me on LinkedIn

Aakib

Cloud Computing and DevOps engineer. As a fresher, I am learning and gaining experience by doing hands-on DevOps projects on AWS and GCP.