Building a Production-Grade CI/CD Pipeline with Kubernetes and Jenkins

As someone exploring real-world DevOps workflows, I recently followed and extended a corporate-level CI/CD pipeline tutorial that walks through deploying Jenkins, SonarQube, Nexus, and Kubernetes on AWS EC2. While the base tutorial provided solid guidance, I added my own twists and documentation to make the setup production-ready. In this blog, I’ll walk you through every step—from provisioning infrastructure to building a 13-stage Jenkins pipeline, and finally setting up monitoring with Prometheus and Grafana.

To keep it clear and structured, this blog is divided into 8 sections:

  1. Infrastructure Setup – Provisioning EC2 instances and preparing the environment

  2. Cluster Configuration – Setting up Kubernetes master and worker nodes

  3. Tool Installation – Installing Jenkins, Docker, SonarQube, Nexus, and other tools

  4. Jenkins Configuration – Plugin setup, credentials, tool integrations

  5. CI/CD Pipeline Breakdown – Full Jenkins pipeline for code checkout, build, scan, and deploy

  6. Monitoring Setup – Prometheus and Grafana for system and app observability

  7. Lessons Learned – Practical insights gained from this setup

  8. Final Thoughts – Wrapping up and future possibilities


1. Infrastructure Setup

To simulate a real-world production environment, I began by provisioning the entire infrastructure on AWS using EC2 instances. Here's a breakdown of how I structured the environment to support Kubernetes and all supporting tools.

EC2 Instances: The Foundation

I spun up a total of 6 EC2 instances:

  • Kubernetes Cluster:

    • Master: Controls the cluster

    • Slave-1: Worker node

    • Slave-2: Worker node

  • Tooling & Monitoring:

    • Jenkins

    • Nexus

    • SonarQube

    • (Later) Monitor-Node for Prometheus & Grafana

AMI and Instance Type

All instances were created using:

  • AMI: Ubuntu 22.04 LTS (latest LTS at time of setup)

  • Instance Type: t2.medium

  • ⚠️ Note: t2.micro won't work here, as it offers only 1 vCPU. kubeadm's preflight checks require a minimum of 2 vCPUs, especially for initializing the control plane on the master node.


Key Pair

If you don’t have a key pair already:

  • Generate a new key pair in the AWS EC2 console

  • Download the .pem file and store it securely

  • Use it to SSH into all the instances

bash
ssh -i "your-key.pem" ubuntu@<public-ip>

Security Groups

For security, I reused an existing security group with predefined inbound rules (a CLI sketch for adding such rules follows the list):

  • SSH (port 22) from my IP

  • HTTP/HTTPS as needed

  • Custom ports for:

    • Jenkins (8080)

    • SonarQube (9000)

    • Nexus (8081)

    • Prometheus (9090)

    • Grafana (3000)

    • Node/Blackbox Exporter (9100/9115)
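
If you prefer scripting these rules over clicking through the console, they can be added with the AWS CLI. A minimal sketch, assuming a hypothetical security-group ID and CIDR — adjust both to your environment:

bash
# Allow Jenkins (8080) inbound from a specific CIDR; repeat per tool/port
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 8080 \
  --cidr 203.0.113.0/24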

Storage Configuration

Each instance was configured with:

  • 25 GB of gp2 (or gp3) storage

  • Enough to handle tool installations, Docker images, and logs without needing to expand volumes during the setup

Instance Naming

To make management easier, I renamed each instance immediately after creation:

  • Master

  • Slave-1

  • Slave-2

  • Jenkins

  • Nexus

  • SonarQube

  • Later: Monitor-Node

You can do this directly in the EC2 console by updating the "Name" tag.

2. Cluster Configuration

The foundation of this CI/CD setup lies in a well-configured Kubernetes cluster. I used AWS EC2 to provision a three-node cluster consisting of one master and two worker nodes. All instances run the latest LTS version of Ubuntu.

Terminal Setup

Although the original tutorial recommended MobaXterm (for Windows), I used the native macOS Terminal. To stay organized:

  • I renamed each terminal session (Master, Slave-1, Slave-2).

  • I applied unique colors to each tab for quick visual identification.


Kubernetes Installation and Initialization

After logging in, I installed the necessary packages on all three nodes:

bash
sudo apt update && sudo apt install -y docker.io apt-transport-https curl

Then I added the Kubernetes package repo, installed kubeadm, kubelet, and kubectl, and disabled swap (required by Kubernetes).
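
In outline, those steps look like this — a sketch assuming the current pkgs.k8s.io apt repository (the older apt.kubernetes.io repo is deprecated); pin whichever minor version you're targeting:

bash
# Disable swap — kubelet refuses to start with swap enabled
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab   # keep it off across reboots

# Add the Kubernetes apt repository (v1.28 shown as an example)
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | \
  sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | \
  sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install and pin the Kubernetes components
sudo apt update
sudo apt install -y kubeadm kubelet kubectl
sudo apt-mark hold kubeadm kubelet kubectl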

On the master node, I initialized the cluster:

bash
sudo kubeadm init --pod-network-cidr=192.168.0.0/16

Once complete, I set up kubectl:

bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
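
One thing to note: nodes stay in NotReady until a pod network add-on is installed. The 192.168.0.0/16 CIDR above matches Calico's default, so I'm assuming Calico here (the manifest version/URL is illustrative — check the Calico docs for the current one):

bash
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml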

On the worker nodes, I used the kubeadm join command (provided by the master node after initialization) to join the cluster.
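
The join command is printed at the end of kubeadm init on the master and looks like this (the token and hash come from that output):

bash
sudo kubeadm join <master-private-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>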

✅ Verifying the Cluster

Back on the master node, I verified everything was up and running:

bash
kubectl get nodes

Expected output:

NAME      STATUS   ROLES           AGE   VERSION
master    Ready    control-plane   Xm    v1.xx.x
slave-1   Ready    <none>          Xm    v1.xx.x
slave-2   Ready    <none>          Xm    v1.xx.x

At this point, the Kubernetes cluster was ready for Jenkins and the CI/CD pipeline.

3. Tool Installation

Once our Kubernetes cluster was up and running with one master and two worker nodes, the next step was to set up the core tools for the CI/CD pipeline. I launched three additional EC2 instances, each dedicated to a specific tool:

  • SonarQube (Code quality analysis)

  • Nexus (Artifact repository)

  • Jenkins (CI/CD automation)

All three instances were created using the same Ubuntu LTS AMI and t2.medium instance type (2 vCPU, 4 GB RAM) with 25 GB of gp2 storage. This sizing ensured enough resources to run Docker containers and services smoothly.

Docker Installation

Since both SonarQube and Nexus would be running as containers, the first step was to install Docker on those instances:

bash
sudo apt update
sudo apt install -y docker.io

By default, the Docker daemon socket is owned by root, so only root (or members of the docker group) can run Docker commands. To quickly allow other users (like jenkins or ubuntu) to run Docker commands, I used this shortcut:

bash
sudo chmod 666 /var/run/docker.sock

⚠️ Note: This is fine for testing/dev environments but not recommended for production due to security implications. A better approach would be to use the docker group.
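
For reference, the group-based approach looks like this:

bash
# Add the current user to the docker group instead of opening the socket to everyone
sudo usermod -aG docker $USER
# Log out and back in (or run `newgrp docker`) for the group change to take effect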

Running SonarQube in a Docker Container

Once Docker was installed, I launched the SonarQube container using:

bash
docker run -d --name sonar -p 9000:9000 sonarqube:lts-community

  • -d: Runs the container in detached mode (background)

  • --name sonar: Assigns a container name

  • -p 9000:9000: Maps host port 9000 to container port 9000

After a few minutes, I was able to access the SonarQube interface by visiting http://<ec2-public-ip>:9000. The default login is:

  • Username: admin

  • Password: admin

You’ll be prompted to change the password on the first login.

Running Nexus in a Docker Container

Similarly, for Nexus, I ran the following:

bash
docker run -d --name nexus -p 8081:8081 sonatype/nexus3

Once the container was up, I accessed Nexus at http://<ec2-public-ip>:8081. The initial admin password can be retrieved by logging into the container:

bash
docker exec -it nexus /bin/bash
cat /nexus-data/admin.password

Copy that password and use it to log in with the admin user. Be sure to change it and store the new credentials securely.

Jenkins Installation (on Separate EC2)

For Jenkins, I installed it directly on the Ubuntu instance:

bash
sudo apt update
sudo apt install -y fontconfig openjdk-17-jre
# Jenkins rotated its repo signing key in 2023; the keyring method below replaces the deprecated apt-key approach
sudo wget -O /usr/share/keyrings/jenkins-keyring.asc https://pkg.jenkins.io/debian/jenkins.io-2023.key
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt update
sudo apt install -y jenkins

I then started the Jenkins service:

bash
sudo systemctl start jenkins
sudo systemctl enable jenkins

Jenkins runs on port 8080, so I accessed it at http://<ec2-public-ip>:8080.

4. Jenkins Configuration

With Jenkins installed and accessible via the browser, it was time to configure it to act as the brain of our CI/CD pipeline. This section covers the essential plugin setup, tool configuration, credential management, and Kubernetes integration to make Jenkins production-ready.

Initial Setup and Unlocking Jenkins

After accessing Jenkins at http://<ec2-ip>:8080, Jenkins prompts you for an unlock password. I retrieved it using:

bash
sudo cat /var/lib/jenkins/secrets/initialAdminPassword

I pasted the password into the browser and continued with the setup, selecting "Install suggested plugins" for a quick start. Once that was done, I created the first admin user.

Installing Essential Plugins

Jenkins depends heavily on plugins for integrating external tools. I went to:

Manage Jenkins → Plugins → Available

And installed the following plugins:

  • Eclipse Temurin Installer – Manages multiple JDK versions

  • Maven Integration – Supports Maven builds

  • Config File Provider – Handles settings.xml and other config files

  • Pipeline Maven Integration – Links Maven with pipelines

  • SonarQube Scanner – Connects Jenkins to SonarQube for code analysis

  • Docker & Docker Pipeline – Enables Jenkins to interact with Docker

  • Kubernetes CLI & Kubernetes Client API – For cluster management

  • Kubernetes Credentials – Secure K8s token integration

  • Prometheus Metrics – For Jenkins monitoring

After installation, I restarted Jenkins to apply all changes.

Global Tool Configuration

Next, I configured tools under:

Manage Jenkins → Global Tool Configuration

Here’s what I added:

  • JDK 17: via Eclipse Temurin installer

  • Maven 3.x: added and set to install automatically

  • SonarQube Scanner: latest version

  • Docker: system path /usr/bin/docker

These tools are now available for any job or pipeline to use without extra configuration.

Credential Management

Credentials are critical to keep secrets secure while enabling integrations.

Manage Jenkins → Credentials → Global (scope)

I added:

  1. Git Token – Username + Personal Access Token (for cloning repos)

  2. SonarQube Token – Secret text from SonarQube > Administration > Users > Tokens

  3. Docker Hub – Username + password

  4. Email (Gmail App Password) – Used for sending build notifications

  5. Kubernetes Token – Secret text from Kubernetes service account (next section explains)

💡 Use clear, recognizable IDs when creating credentials (e.g., docker-cred, sonar-token, git-token) so they’re easy to reference in pipelines.

SonarQube Server Configuration

To link Jenkins with the running SonarQube instance:

Manage Jenkins → Configure System

I scrolled down to the SonarQube Servers section and added:

  • Name: SonarQube

  • Server URL: http://<sonarqube-ec2-ip>:9000

  • Credentials: Used the token created earlier

Kubernetes Integration via Service Account (RBAC)

Jenkins needs secure access to the Kubernetes cluster to deploy apps.

1. Create a Namespace

bash
kubectl create namespace webapps

2. Create Service Account (svc.yaml)

yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: webapps

bash
kubectl apply -f svc.yaml

3. Create Role and RoleBinding

Define role.yaml to give access:

yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: webapps
  name: jenkins-role
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list", "create", "delete", "update"]
  # Deployments live in the "apps" API group, not the core ("") group
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "create", "delete", "update"]

Define bind.yaml:

yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins-bind
  namespace: webapps
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: webapps
roleRef:
  kind: Role
  name: jenkins-role
  apiGroup: rbac.authorization.k8s.io

Apply both:

bash
kubectl apply -f role.yaml
kubectl apply -f bind.yaml

4. Create Secret for Service Account (sec.yaml)

yaml
apiVersion: v1
kind: Secret
metadata:
  name: jenkins-secret
  namespace: webapps
  annotations:
    kubernetes.io/service-account.name: jenkins
type: kubernetes.io/service-account-token

Apply it:

bash
kubectl apply -f sec.yaml -n webapps

Get the token:

bash
kubectl describe secret jenkins-secret -n webapps

Copy the token and add it as a secret text credential in Jenkins with the ID: k8s-token.

Email Notification Setup

Jenkins can send email notifications using Gmail’s SMTP.

Manage Jenkins → Configure System

Under Extended Email Notification and Email Notification, I configured:

  • SMTP server: smtp.gmail.com

  • SMTP port: 465

  • SSL: Enabled

  • Credentials: Gmail App Password (created via “App Passwords” in Google Account > Security)

Tested the configuration to ensure email delivery worked.

Now Jenkins was fully integrated with SonarQube, Docker, Nexus, and Kubernetes — secured and ready to orchestrate the CI/CD pipeline. The next step: writing the pipeline itself.


5. CI/CD Pipeline Breakdown

This section dives into the heart of the project — the CI/CD pipeline implemented using Jenkins Declarative Pipeline syntax. The pipeline automates everything from code checkout to Docker image scanning and Kubernetes deployment.

Here’s a breakdown of each stage, directly based on the actual Jenkinsfile used:

Tools and Environment

The pipeline uses:

  • JDK 17 and Maven 3

  • SonarQube for static analysis

  • Trivy for filesystem and container vulnerability scanning

  • Docker for containerization

  • Kubernetes for deployment


tools {
    jdk 'jdk17'
    maven 'maven3'
}

environment {
    SCANNER_HOME = tool 'sonar-scanner'
}
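
For context, every snippet below sits inside a stage of a declarative pipeline with this overall shape — a trimmed sketch, not the full Jenkinsfile:

pipeline {
    agent any
    tools {
        jdk 'jdk17'
        maven 'maven3'
    }
    environment {
        SCANNER_HOME = tool 'sonar-scanner'
    }
    stages {
        stage('Git Checkout') {
            steps {
                git branch: 'main', credentialsId: 'git-cred', url: 'https://github.com/jaiswaladi246/Boardgame.git'
            }
        }
        // ...the remaining twelve stages follow the breakdown below
    }
    // a post { } block at the end handles the email notification (see the end of this section)
}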

Stage-wise Breakdown

1. Git Checkout

Pulls the latest code from GitHub using stored credentials.


git branch: 'main', credentialsId: 'git-cred', url: 'https://github.com/jaiswaladi246/Boardgame.git'

2. Compile

Compiles the Maven-based Java application.

sh "mvn compile"

3. Test

Runs unit tests.

sh "mvn test"

4. File System Vulnerability Scan

Scans the workspace using Trivy and generates an HTML report.

sh "trivy fs --format table -o trivy-fs-report.html ."

5. SonarQube Analysis

Static code analysis using SonarScanner CLI.

withSonarQubeEnv('sonar') {
    sh '''
        $SCANNER_HOME/bin/sonar-scanner \
            -Dsonar.projectName=BoardGame \
            -Dsonar.projectKey=BoardGame \
            -Dsonar.java.binaries=.
    '''
}

6. Quality Gate

Pauses the pipeline and waits for SonarQube to return the quality gate result.

waitForQualityGate abortPipeline: false, credentialsId: 'sonar-token'

7. Build

Packages the application into a .jar.

sh "mvn package"

8. Publish to Nexus

Deploys the JAR to a Nexus artifact repository.

withMaven(globalMavenSettingsConfig: 'global-settings', jdk: 'jdk17', maven: 'maven3') {
    sh "mvn deploy"
}
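
For mvn deploy to succeed, the project's pom.xml must point at the Nexus repositories, and the managed global settings.xml (served by the Config File Provider plugin) must carry matching server credentials. A sketch of the pom side — the repository IDs and host are placeholders:

xml
<distributionManagement>
  <repository>
    <id>maven-releases</id>
    <url>http://<nexus-ec2-ip>:8081/repository/maven-releases/</url>
  </repository>
  <snapshotRepository>
    <id>maven-snapshots</id>
    <url>http://<nexus-ec2-ip>:8081/repository/maven-snapshots/</url>
  </snapshotRepository>
</distributionManagement>

The repository IDs here must match the <server> entries in the managed settings.xml so Maven can authenticate against Nexus.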

9. Docker Build & Tag

Builds a Docker image and tags it.

docker build -t adijaiswal/boardshack:latest .
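
In the Jenkinsfile, this shell step runs inside a Docker Pipeline wrapper so the build is authenticated against Docker Hub — roughly like this, assuming the credential ID and tool name set up earlier:

script {
    withDockerRegistry(credentialsId: 'docker-cred', toolName: 'docker') {
        sh "docker build -t adijaiswal/boardshack:latest ."
    }
}

The push stage (stage 11) uses the same wrapper around docker push.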

10. Docker Image Vulnerability Scan

Scans the Docker image using Trivy.

sh "trivy image --format table -o trivy-image-report.html adijaiswal/boardshack:latest"

11. Push Docker Image

Pushes the image to Docker Hub using secured credentials.

docker push adijaiswal/boardshack:latest

12. Deploy to Kubernetes

Applies the Kubernetes deployment and service YAML to the cluster.

kubectl apply -f deployment-service.yaml
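
This step authenticates with the k8s-token credential created earlier via the Kubernetes CLI plugin — approximately like this, where the API server address is a placeholder:

withKubeConfig(credentialsId: 'k8s-token', namespace: 'webapps', serverUrl: 'https://<master-ip>:6443') {
    sh "kubectl apply -f deployment-service.yaml"
}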

13. Verify the Deployment

Checks the status of pods and services in the webapps namespace.

kubectl get pods -n webapps
kubectl get svc -n webapps

Post-build Email Notification

At the end of the pipeline, Jenkins sends a customized HTML email report showing the status with color-coded results and attaches the Trivy image scan report:

emailext(
    subject: "${jobName} - Build ${buildNumber} - ${pipelineStatus.toUpperCase()}",
    body: body,
    to: 'jaiswaladi246@gmail.com',
    attachmentsPattern: 'trivy-image-report.html'
)
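
The jobName, buildNumber, pipelineStatus, and body variables come from a post block that runs regardless of the build outcome — a simplified sketch (the HTML body here is trimmed down):

post {
    always {
        script {
            def jobName = env.JOB_NAME
            def buildNumber = env.BUILD_NUMBER
            def pipelineStatus = currentBuild.result ?: 'SUCCESS'
            def color = (pipelineStatus == 'SUCCESS') ? 'green' : 'red'
            def body = """<html><body>
                <h2 style="color: ${color};">${jobName} - Build ${buildNumber}: ${pipelineStatus}</h2>
                </body></html>"""
            // ...emailext call as shown above
        }
    }
}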

This pipeline ensures secure, tested, and production-ready deployments with minimal human intervention. It's a great example of how corporate-grade CI/CD can be implemented with open-source tools.



7. Lessons Learned

Setting up a corporate-grade CI/CD pipeline taught me more than just tool usage:

  • Infrastructure matters: Choosing the right instance type (e.g., avoiding t2.micro for Kubernetes) is foundational.

  • Security is not optional: Proper RBAC, credential handling, and token management are essential even in dev setups.

  • Automation beats repetition: Jenkins pipelines with well-defined stages drastically reduce manual errors.

  • Observability is key: Monitoring with Prometheus & Grafana helped catch issues early and proved the value of metrics.

  • Tool synergy: Tools like SonarQube, Nexus, Docker, and Trivy work best when connected via Jenkins—resulting in a smooth, traceable workflow.


8. Final Thoughts

This project, built by following a well-structured tutorial and adapting it to real-world scenarios, gave me hands-on exposure to modern DevOps practices. It was more than just setting up tools—it was about connecting them into a seamless pipeline that ensures code quality, artifact management, secure delivery, and application observability.

Whether you're a beginner in DevOps or preparing for real-world projects, replicating a setup like this can give you both technical depth and confidence.


