Building a Production-Grade CI/CD Pipeline with Kubernetes and Jenkins
As someone exploring real-world DevOps workflows, I recently followed and extended a corporate-level CI/CD pipeline tutorial that walks through deploying Jenkins, SonarQube, Nexus, and Kubernetes on AWS EC2. While the base tutorial provided solid guidance, I added my own twists and documentation to make the setup production-ready. In this blog, I’ll walk you through every step—from provisioning infrastructure to building a 13-stage Jenkins pipeline, and finally setting up monitoring with Prometheus and Grafana.
To keep it clear and structured, this blog is divided into 8 sections:
- Infrastructure Setup – Provisioning EC2 instances and preparing the environment
- Cluster Configuration – Setting up Kubernetes master and worker nodes
- Tool Installation – Installing Jenkins, Docker, SonarQube, Nexus, and other tools
- Jenkins Configuration – Plugin setup, credentials, tool integrations
- CI/CD Pipeline Breakdown – Full Jenkins pipeline for code checkout, build, scan, and deploy
- Monitoring Setup – Prometheus and Grafana for system and app observability
- Lessons Learned – Practical insights gained from this setup
- Final Thoughts – Wrapping up and future possibilities
1. Infrastructure Setup
To simulate a real-world production environment, I began by provisioning the entire infrastructure on AWS using EC2 instances. Here's a breakdown of how I structured the environment to support Kubernetes and all supporting tools.
EC2 Instances: The Foundation
I spun up a total of 6 EC2 instances:
- Kubernetes Cluster:
  - Master: Controls the cluster
  - Slave-1: Worker node
  - Slave-2: Worker node
- Tooling & Monitoring:
  - Jenkins
  - Nexus
  - SonarQube
  - (Later) Monitor-Node for Prometheus & Grafana
AMI and Instance Type
All instances were created using:
- AMI: Ubuntu 22.04 LTS (latest LTS at time of setup)
- Instance Type: t2.medium

⚠️ Note: t2.micro won't work here because it offers only 1 vCPU; kubeadm requires a minimum of 2 vCPUs, especially for initializing the control plane on the master node.
Key Pair
If you don’t have a key pair already:
- Generate a new key pair in the AWS EC2 console
- Download the .pem file and store it securely
- Use it to SSH into all the instances
Security Groups
For security, I reused an existing security group with predefined inbound rules:
- SSH (port 22) from my IP
- HTTP/HTTPS as needed
- Custom ports for:
  - Jenkins (8080)
  - SonarQube (9000)
  - Nexus (8081)
  - Prometheus (9090)
  - Grafana (3000)
  - Node/Blackbox Exporter (9100/9115)
Storage Configuration
Each instance was configured with:
- 25 GB of gp2 (or gp3) storage
- Enough to handle tool installations, Docker images, and logs without needing to expand volumes during the setup
Instance Naming
To make management easier, I renamed each instance immediately after creation:
- Master
- Slave-1
- Slave-2
- Jenkins
- Nexus
- SonarQube
- Later: Monitor-Node

You can do this directly in the EC2 console by updating the "Name" tag.
2. Cluster Configuration
The foundation of this CI/CD setup lies in a well-configured Kubernetes cluster. I used AWS EC2 to provision a three-node cluster consisting of one master and two worker nodes. All instances run the latest LTS version of Ubuntu.
Terminal Setup
Although the original tutorial recommended MobaXterm (for Windows), I used the native macOS Terminal. To stay organized:
- I renamed each terminal session (Master, Slave-1, Slave-2).
- I applied unique colors to each tab for quick visual identification.
Kubernetes Installation and Initialization
After logging in, I installed the necessary packages on all three nodes:
Then I added the Kubernetes package repo, installed kubeadm, kubelet, and kubectl, and disabled swap (required by Kubernetes).
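For reference, the commands looked roughly like this, a sketch following the standard kubeadm install docs; the v1.28 repo version is illustrative, so substitute the current stable release:

```shell
# Run on all three nodes (Master, Slave-1, Slave-2)
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg

# Add the Kubernetes apt repository (version path is illustrative)
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key \
  | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" \
  | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install and pin the Kubernetes components
sudo apt-get update
sudo apt-get install -y kubeadm kubelet kubectl
sudo apt-mark hold kubeadm kubelet kubectl

# Kubernetes requires swap to be disabled
sudo swapoff -a
```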
On the master node, I initialized the cluster:
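A minimal sketch of the init step; the pod network CIDR is an assumption and should match whichever CNI plugin you deploy:

```shell
# On Master only: bootstrap the control plane
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# kubeadm prints a join command at the end; save it for the workers, e.g.:
# kubeadm join <master-ip>:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>
```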
Once complete, I set up kubectl:
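These are the standard post-init steps that kubeadm itself prints:

```shell
# Make the cluster admin config usable by the current (non-root) user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```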
On each worker node, I then ran the kubeadm join command (provided by the master node after initialization) to join the cluster.

✅ Verifying the Cluster
Back on the master node, I verified everything was up and running:
Expected output:
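The check itself is one command; the output below is illustrative of a healthy three-node cluster, not captured from my run:

```shell
kubectl get nodes
# Illustrative output once every node reports Ready:
# NAME      STATUS   ROLES           AGE   VERSION
# master    Ready    control-plane   12m   v1.28.x
# slave-1   Ready    <none>          6m    v1.28.x
# slave-2   Ready    <none>          5m    v1.28.x
```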
At this point, the Kubernetes cluster was ready for Jenkins and the CI/CD pipeline.
3. Tool Installation
Once our Kubernetes cluster was up and running with one master and two worker nodes, the next step was to set up the core tools for the CI/CD pipeline. I launched three additional EC2 instances, each dedicated to a specific tool:
- SonarQube (Code quality analysis)
- Nexus (Artifact repository)
- Jenkins (CI/CD automation)
All three instances were created using the same Ubuntu LTS AMI and t2.medium instance type (2 vCPU, 4 GB RAM) with 25 GB of gp2 storage. This sizing ensured enough resources to run Docker containers and services smoothly.
Docker Installation
Since both SonarQube and Nexus would be running as containers, the first step was to install Docker on those instances:
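On Ubuntu the quickest route is the distro package; this is a sketch, and the official docker-ce repository works just as well:

```shell
# Install Docker from the Ubuntu repos and start it on boot
sudo apt-get update
sudo apt-get install -y docker.io
sudo systemctl enable --now docker
```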
By default, Docker can only be executed by the root user. To allow other users (like Jenkins or ubuntu) to run Docker commands, I used this shortcut:
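The shortcut opens up the Docker socket; the stricter group-membership alternative is shown for comparison:

```shell
# Quick lab shortcut: make the Docker socket world-writable (fine for a lab, not for production)
sudo chmod 666 /var/run/docker.sock

# Stricter alternative: add the user to the docker group, then log out and back in
sudo usermod -aG docker ubuntu
```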
Running SonarQube in a Docker Container
Once Docker was installed, I launched the SonarQube container using:
- -d: Runs the container in detached mode (background)
- --name sonar: Assigns a container name
- -p 9000:9000: Maps host port 9000 to container port 9000
After a few minutes, I was able to access the SonarQube interface by visiting http://<ec2-public-ip>:9000. The default login is:

- Username: admin
- Password: admin

You'll be prompted to change the password on the first login.
Running Nexus in a Docker Container
Similarly, for Nexus, I ran the following:
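The Nexus equivalent, using the official sonatype/nexus3 image:

```shell
docker run -d --name nexus -p 8081:8081 sonatype/nexus3
```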
Once the container was up, I accessed Nexus at http://<ec2-public-ip>:8081. The initial admin password can be retrieved by logging into the container:
Jenkins Installation (on Separate EC2)
For Jenkins, I installed it directly on the Ubuntu instance:
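The commands follow the official Debian/Ubuntu install instructions; Jenkins needs a JDK first, and the repository key filename reflects the current docs, so check pkg.jenkins.io if it has rotated:

```shell
# Jenkins requires Java
sudo apt-get install -y openjdk-17-jre-headless

# Add the Jenkins apt repository and install
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key \
  | sudo tee /usr/share/keyrings/jenkins-keyring.asc > /dev/null
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" \
  | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update
sudo apt-get install -y jenkins
```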
I then started the Jenkins service:
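Starting and enabling the service is standard systemd:

```shell
sudo systemctl enable jenkins
sudo systemctl start jenkins
sudo systemctl status jenkins   # should report "active (running)"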
Jenkins runs on port 8080, so I accessed it at http://<ec2-public-ip>:8080.
4. Jenkins Configuration
With Jenkins installed and accessible via the browser, it was time to configure it to act as the brain of our CI/CD pipeline. This section covers the essential plugin setup, tool configuration, credential management, and Kubernetes integration to make Jenkins production-ready.
Initial Setup and Unlocking Jenkins
After accessing Jenkins at http://<ec2-ip>:8080, Jenkins prompts you for an unlock password. I retrieved it using:
I pasted the password into the browser and continued with the setup, selecting "Install suggested plugins" for a quick start. Once that was done, I created the first admin user.
Installing Essential Plugins
Jenkins depends heavily on plugins for integrating external tools. I went to:
Manage Jenkins → Plugins → Available
And installed the following plugins:
- Eclipse Temurin Installer – Manages multiple JDK versions
- Maven Integration – Supports Maven builds
- Config File Provider – Handles settings.xml and other config files
- Pipeline Maven Integration – Links Maven with pipelines
- SonarQube Scanner – Connects Jenkins to SonarQube for code analysis
- Docker & Docker Pipeline – Enables Jenkins to interact with Docker
- Kubernetes CLI & Kubernetes Client API – For cluster management
- Kubernetes Credentials – Secure K8s token integration
- Prometheus Metrics – For Jenkins monitoring
After installation, I restarted Jenkins to apply all changes.
Global Tool Configuration
Next, I configured tools under:
Manage Jenkins → Global Tool Configuration
Here’s what I added:
- JDK 17: via Eclipse Temurin installer
- Maven 3.x: added and set to install automatically
- SonarQube Scanner: latest version
- Docker: system path /usr/bin/docker
These tools are now available for any job or pipeline to use without extra configuration.
Credential Management
Credentials are critical to keep secrets secure while enabling integrations.
Manage Jenkins → Credentials → Global (scope)
I added:
- Git Token – Username + Personal Access Token (for cloning repos)
- SonarQube Token – Secret text from SonarQube > Administration > Users > Tokens
- Docker Hub – Username + password
- Email (Gmail App Password) – Used for sending build notifications
- Kubernetes Token – Secret text from Kubernetes service account (next section explains)

💡 Use clear, recognizable IDs when creating credentials (e.g., docker-cred, sonar-token, git-token) so they're easy to reference in pipelines.
SonarQube Server Configuration
To link Jenkins with the running SonarQube instance:
Manage Jenkins → Configure System
I scrolled down to the SonarQube Servers section and added:
- Name: SonarQube
- Server URL: http://<sonarqube-ec2-ip>:9000
- Credentials: Used the token created earlier
Kubernetes Integration via Service Account (RBAC)
Jenkins needs secure access to the Kubernetes cluster to deploy apps.
1. Create a Namespace
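The namespace matches the one the pipeline later deploys into:

```shell
kubectl create namespace webapps
```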
2. Create Service Account (svc.yaml)
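A minimal svc.yaml sketch; the service-account name jenkins is my assumption, so match it to whatever your bind.yaml references:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: webapps
```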
3. Create Role and RoleBinding
Define role.yaml
to give access:
Define bind.yaml
:
Apply both:
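Applying both manifests:

```shell
kubectl apply -f role.yaml
kubectl apply -f bind.yaml
```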
4. Create Secret for Service Account (sec.yaml)
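On Kubernetes 1.24+, service-account tokens are no longer auto-created, so a token Secret is declared explicitly; the secret name is my assumption:

```yaml
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: jenkins-secret
  namespace: webapps
  annotations:
    kubernetes.io/service-account.name: jenkins
```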
Apply it:
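Same apply pattern as before:

```shell
kubectl apply -f sec.yaml
```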
Get the token:
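The token is readable from the secret's description (secret name follows the illustrative manifest above):

```shell
kubectl -n webapps describe secret jenkins-secret
# copy the long value of the "token:" field into Jenkins
```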
Copy the token and add it as a secret text credential in Jenkins with the ID k8s-token.
Email Notification Setup
Jenkins can send email notifications using Gmail’s SMTP.
Manage Jenkins → Configure System
Under Extended Email Notification and Email Notification, I configured:
- SMTP server: smtp.gmail.com
- SMTP port: 465
- SSL: Enabled
- Credentials: Gmail App Password (created via "App Passwords" in Google Account > Security)
Tested the configuration to ensure email delivery worked.
Now Jenkins was fully integrated with SonarQube, Docker, Nexus, and Kubernetes — secured and ready to orchestrate the CI/CD pipeline. The next step: writing the pipeline itself.
5. CI/CD Pipeline Breakdown
This section dives into the heart of the project — the CI/CD pipeline implemented using Jenkins Declarative Pipeline syntax. The pipeline automates everything from code checkout to Docker image scanning and Kubernetes deployment.
Here’s a breakdown of each stage, directly based on the actual Jenkinsfile used:
Tools and Environment
The pipeline uses:
- JDK 17 and Maven 3
- SonarQube for static analysis
- Trivy for filesystem and container vulnerability scanning
- Docker for containerization
- Kubernetes for deployment
Stage-wise Breakdown
1. Git Checkout
Pulls the latest code from GitHub using stored credentials.
2. Compile
Compiles the Maven-based Java application.
sh "mvn compile"
3. Test
Runs unit tests.
4. File System Vulnerability Scan
Scans the workspace using Trivy and generates an HTML report.
5. SonarQube Analysis
Static code analysis using SonarScanner CLI.
6. Quality Gate
Pauses the pipeline and waits for SonarQube to return the quality gate result.
7. Build
Packages the application into a .jar.
8. Publish to Nexus
Deploys the JAR to a Nexus artifact repository.
9. Docker Build & Tag
Builds a Docker image and tags it.
10. Docker Image Vulnerability Scan
Scans the Docker image using Trivy.
11. Push Docker Image
Pushes the image to Docker Hub using secured credentials.
12. Deploy to Kubernetes
Applies the Kubernetes deployment and service YAML to the cluster.
13. Verify the Deployment
Checks the status of pods and services in the webapps namespace.
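To make the stages concrete, here is a condensed sketch of a Jenkinsfile along these lines. The stage names follow the breakdown above, but the repository URL, credential IDs, SonarQube project key, settings ID, image names, and API server URL are all placeholders rather than values from the actual Jenkinsfile:

```groovy
pipeline {
    agent any
    tools {
        jdk 'jdk17'
        maven 'maven3'
    }
    environment {
        SCANNER_HOME = tool 'sonar-scanner'
    }
    stages {
        stage('Git Checkout') {
            steps { git branch: 'main', credentialsId: 'git-token', url: '<repo-url>' }
        }
        stage('Compile') { steps { sh 'mvn compile' } }
        stage('Test')    { steps { sh 'mvn test' } }
        stage('File System Scan') {
            steps { sh 'trivy fs --format table -o trivy-fs-report.html .' }
        }
        stage('SonarQube Analysis') {
            steps {
                withSonarQubeEnv('SonarQube') {
                    sh '$SCANNER_HOME/bin/sonar-scanner -Dsonar.projectKey=<key> -Dsonar.projectName=<name>'
                }
            }
        }
        stage('Quality Gate') {
            steps { waitForQualityGate abortPipeline: false, credentialsId: 'sonar-token' }
        }
        stage('Build') { steps { sh 'mvn package' } }
        stage('Publish to Nexus') {
            steps { withMaven(mavenSettingsConfig: '<settings-id>') { sh 'mvn deploy' } }
        }
        stage('Docker Build & Tag') {
            steps { sh 'docker build -t <dockerhub-user>/<app>:latest .' }
        }
        stage('Docker Image Scan') {
            steps { sh 'trivy image --format table -o trivy-image-report.html <dockerhub-user>/<app>:latest' }
        }
        stage('Push Docker Image') {
            steps {
                withDockerRegistry(credentialsId: 'docker-cred') {
                    sh 'docker push <dockerhub-user>/<app>:latest'
                }
            }
        }
        stage('Deploy to Kubernetes') {
            steps {
                withKubeConfig(credentialsId: 'k8s-token', namespace: 'webapps', serverUrl: '<k8s-api-url>') {
                    sh 'kubectl apply -f deployment-service.yaml'
                }
            }
        }
        stage('Verify Deployment') {
            steps {
                withKubeConfig(credentialsId: 'k8s-token', namespace: 'webapps', serverUrl: '<k8s-api-url>') {
                    sh 'kubectl get pods -n webapps && kubectl get svc -n webapps'
                }
            }
        }
    }
}
```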
Post-build Email Notification
At the end of the pipeline, Jenkins sends a customized HTML email report showing the status with color-coded results and attaches the Trivy image scan report.
This pipeline ensures secure, tested, and production-ready deployments with minimal human intervention. It's a great example of how corporate-grade CI/CD can be implemented with open-source tools.
7. Lessons Learned
Setting up a corporate-grade CI/CD pipeline taught me more than just tool usage:
- Infrastructure matters: Choosing the right instance type (e.g., avoiding t2.micro for Kubernetes) is foundational.
- Security is not optional: Proper RBAC, credential handling, and token management are essential even in dev setups.
- Automation beats repetition: Jenkins pipelines with well-defined stages drastically reduce manual errors.
- Observability is key: Monitoring with Prometheus & Grafana helped catch issues early and proved the value of metrics.
- Tool synergy: Tools like SonarQube, Nexus, Docker, and Trivy work best when connected via Jenkins—resulting in a smooth, traceable workflow.
8. Final Thoughts
This project, built by following a well-structured tutorial and adapting it to real-world scenarios, gave me hands-on exposure to modern DevOps practices. It was more than just setting up tools—it was about connecting them into a seamless pipeline that ensures code quality, artifact management, secure delivery, and application observability.
Whether you're a beginner in DevOps or preparing for real-world projects, replicating a setup like this can give you both technical depth and confidence.