Master Container Orchestration and Package Management
Lesson 1 of 6
Introduction to Kubernetes
What is Kubernetes?
Kubernetes (K8s) is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. Originally developed by Google, it's now maintained by the Cloud Native Computing Foundation (CNCF).
Why Kubernetes?
Kubernetes solves critical challenges in modern application deployment:
Automatic scaling: Scale applications up or down based on demand
Self-healing: Restart failed containers and reschedule Pods from unhealthy nodes
Service discovery and load balancing: Expose containers with stable DNS names and distribute traffic across them
Architecture Components
Control Plane: The API Server (front end of the cluster), etcd (a distributed key-value store that holds all cluster data), the Scheduler, and the Controller Manager.
Node Components:
Kubelet: Agent that runs on each node and ensures containers are running.
Kube-proxy: Maintains network rules for pod communication.
Container Runtime: Software responsible for running containers (Docker, containerd, CRI-O).
Key Concepts
Pod
The smallest deployable unit in Kubernetes. A pod represents one or more containers that share storage and network resources.
Node
A worker machine (physical or virtual) that runs your containerized applications. Each node contains the services necessary to run pods.
Cluster
A set of nodes grouped together. This provides fault tolerance - if one node fails, your application continues running on other nodes.
Namespace
Virtual clusters within a physical cluster. Namespaces provide a way to divide cluster resources between multiple users or projects.
Basic kubectl Commands
kubectl is the command-line tool for interacting with Kubernetes clusters. Here are essential commands:
# Check cluster information
kubectl cluster-info
# Get cluster nodes
kubectl get nodes
# View all resources in a namespace
kubectl get all -n default
# Get detailed information about a resource
kubectl describe node node-name
# Check kubectl version
kubectl version --client
# Get control plane component status (deprecated since Kubernetes v1.19)
kubectl get componentstatuses
# View namespaces
kubectl get namespaces
# Set default namespace
kubectl config set-context --current --namespace=my-namespace
Important Note
Most kubectl commands require a running Kubernetes cluster. If you're just learning, consider using Minikube (local cluster) or kind (Kubernetes in Docker) for practice.
Common kubectl Operations
# Apply a configuration file
kubectl apply -f config.yaml
# Delete resources
kubectl delete -f config.yaml
# View logs from a pod
kubectl logs pod-name
# Execute a command in a container
kubectl exec -it pod-name -- /bin/bash
# Port forward to access a pod locally
kubectl port-forward pod-name 8080:80
# Get resource usage
kubectl top nodes
kubectl top pods
Real-World Use Case
A typical web application might run 10 replicas of your application container across 5 nodes. Kubernetes ensures that if a node fails, the containers are automatically rescheduled on healthy nodes. If traffic increases, Kubernetes can automatically scale up to 20 replicas.
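Automatic scaling like this is usually handled by a HorizontalPodAutoscaler. A minimal sketch, assuming a Deployment named web-app and the metrics-server add-on installed in the cluster:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:            # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web-app            # assumed Deployment name
  minReplicas: 10
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```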
Kubernetes Resource Types
Resource
Purpose
Short Name
Pods
Running containers
po
Deployments
Manage pod replicas
deploy
Services
Expose pods to network
svc
ConfigMaps
Configuration data
cm
Secrets
Sensitive data
secret
# Using short names
kubectl get po
kubectl get deploy
kubectl get svc
kubectl get cm
Test Your Knowledge - Lesson 1
Answer the following questions to proceed. You need 70% (2/3 correct) to pass.
Question 1: What is the smallest deployable unit in Kubernetes?
Question 2: Which component stores all cluster data in a distributed key-value store?
Question 3: What command would you use to view all nodes in your cluster?
Lesson 2 of 6
Pods and Deployments
Understanding Pods
A Pod is the fundamental execution unit in Kubernetes. While a Pod can contain multiple containers, the most common pattern is one container per Pod. Containers in a Pod share the same network namespace, IP address, and storage volumes.
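The commands below apply a file named nginx-pod.yaml that is not shown in this lesson. A minimal manifest consistent with those commands might look like this (the image tag is an assumption):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.21      # assumed tag; any recent nginx image works
      ports:
        - containerPort: 80
```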
# Create the pod
kubectl apply -f nginx-pod.yaml
# Get pod status
kubectl get pods
# Get detailed pod information
kubectl describe pod nginx-pod
# View pod logs
kubectl logs nginx-pod
# Delete the pod
kubectl delete pod nginx-pod
Pods are Ephemeral
Pods are designed to be disposable and replaceable. If a Pod dies, Kubernetes doesn't resurrect it. Instead, you should use higher-level controllers like Deployments to manage Pods.
Multi-Container Pods
Sometimes you need multiple containers working together in a single Pod. Common patterns include:
Sidecar: Helper container that extends the main container (e.g., log shipping)
Ambassador: Proxy container that simplifies network connections
Adapter: Standardizes output from the main container
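As a sketch of the sidecar pattern, the Pod below runs an application container alongside a log-shipping helper that reads from a shared volume (the names and images are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    - name: app
      image: myapp:1.0               # hypothetical application image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-shipper              # sidecar: tails logs written by the app
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /var/log/app/app.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
  volumes:
    - name: logs
      emptyDir: {}                   # scratch volume shared by both containers
```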
A Deployment provides declarative updates for Pods. It manages a ReplicaSet, which in turn manages Pods. Deployments are the recommended way to run stateless applications in Kubernetes.
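The commands that follow apply an nginx-deployment.yaml file that is not shown. A plausible version, consistent with the label selector (app=nginx) and image update (nginx:1.21 to 1.22) used below:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.21
          ports:
            - containerPort: 80
```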
# Create deployment
kubectl apply -f nginx-deployment.yaml
# Get deployments
kubectl get deployments
# Get replica sets
kubectl get rs
# Get pods created by deployment
kubectl get pods -l app=nginx
# Scale deployment
kubectl scale deployment nginx-deployment --replicas=5
# Update deployment image
kubectl set image deployment/nginx-deployment nginx=nginx:1.22
# Check rollout status
kubectl rollout status deployment/nginx-deployment
# View rollout history
kubectl rollout history deployment/nginx-deployment
# Rollback to previous version
kubectl rollout undo deployment/nginx-deployment
# Rollback to specific revision
kubectl rollout undo deployment/nginx-deployment --to-revision=2
Labels and Selectors
Labels are key-value pairs attached to Kubernetes objects. Selectors allow you to filter and select resources based on labels.
Using Labels
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web
    tier: frontend
    environment: production
    version: v1.0.0
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
      tier: frontend
  template:
    metadata:
      labels:
        app: web
        tier: frontend
        environment: production
    spec:
      containers:
        - name: web
          image: myapp:1.0.0
# Get pods by label
kubectl get pods -l app=web
# Get pods with multiple labels
kubectl get pods -l app=web,tier=frontend
# Get pods with label selector
kubectl get pods -l 'environment in (production,staging)'
# Add label to existing pod
kubectl label pod nginx-pod version=v1
# Remove label from pod
kubectl label pod nginx-pod version-
# Show labels in output
kubectl get pods --show-labels
Real-World Use Case
You're running an e-commerce website with 3 replicas. During Black Friday, traffic increases 10x. You can instantly scale to 30 replicas with a single command. After the sale, scale back down. If you deploy a buggy version, rollback to the previous working version in seconds.
Deployment Strategies
Rolling Update (Default)
Gradually replaces old Pods with new ones. This is the default strategy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rolling-update-app
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2          # Max number of pods above desired count
      maxUnavailable: 1    # Max number of pods that can be unavailable
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:2.0.0
Recreate Strategy
Terminates all old Pods before creating new ones. This causes downtime but ensures no two versions run simultaneously.
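A minimal sketch of the Recreate strategy; only the strategy block differs from the rolling-update example, and the name and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-app
spec:
  replicas: 3
  strategy:
    type: Recreate        # all old Pods terminate before any new ones start
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:2.0.0
```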
Deployment Best Practices
Always use Deployments instead of creating Pods directly
Set resource requests and limits for all containers
Use meaningful labels for organization and selection
Implement readiness and liveness probes for health checks
Use rolling updates for zero-downtime deployments
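The probe and resource recommendations above can be sketched as a container-spec fragment; the endpoints, timings, and sizes are assumptions that depend on your application:

```yaml
# Fragment of a container spec: liveness restarts a stuck container,
# readiness gates traffic until the app is able to serve it.
livenessProbe:
  httpGet:
    path: /healthz        # assumed health endpoint
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /ready          # assumed readiness endpoint
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
resources:
  requests:               # scheduler reserves this much for the Pod
    cpu: 100m
    memory: 128Mi
  limits:                 # container is throttled/killed beyond this
    cpu: 500m
    memory: 256Mi
```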
Test Your Knowledge - Lesson 2
Answer the following questions to proceed. You need 70% (2/3 correct) to pass.
Question 1: What is the main advantage of using a Deployment over creating Pods directly?
Question 2: Which field in a Deployment spec determines how many Pod copies should run?
Question 3: What command would you use to scale a deployment named "web-app" to 10 replicas?
Lesson 3 of 6
Services and Networking
Why Services?
Pods are ephemeral and can be created, destroyed, and replaced at any time. Each Pod gets its own IP address, but these IPs change when Pods are recreated. Services provide a stable endpoint to access a set of Pods.
Service Benefits
Stable IP address and DNS name
Load balancing across multiple Pods
Service discovery within the cluster
External access to cluster applications
Service Types
1. ClusterIP (Default)
Exposes the Service on an internal IP within the cluster. This is only accessible from within the cluster.
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  type: ClusterIP
  selector:
    app: backend
    tier: api
  ports:
    - protocol: TCP
      port: 80           # Port exposed by the service
      targetPort: 8080   # Port on the container
# Create the service
kubectl apply -f backend-service.yaml
# Get services
kubectl get svc
# Describe service
kubectl describe svc backend-service
# Get service endpoints (Pod IPs)
kubectl get endpoints backend-service
# Access service from within cluster (from another pod)
curl http://backend-service.default.svc.cluster.local
2. NodePort
Exposes the Service on each Node's IP at a static port (30000-32767). You can access the service from outside the cluster using NodeIP:NodePort.
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - protocol: TCP
      port: 80           # Service port
      targetPort: 8080   # Container port
      nodePort: 30080    # External port (optional, auto-assigned if omitted)
# Access the service
# From outside the cluster: http://<node-ip>:30080
curl http://192.168.1.100:30080
# Get node IPs
kubectl get nodes -o wide
NodePort Limitations
NodePort is good for development but not recommended for production. It exposes services on all nodes, requires knowledge of node IPs, and uses non-standard ports. Use LoadBalancer or Ingress for production.
3. LoadBalancer
Creates an external load balancer (in cloud environments like AWS, GCP, Azure) and assigns a fixed external IP to the Service.
apiVersion: v1
kind: Service
metadata:
  name: web-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  loadBalancerIP: 203.0.113.10   # Optional: request specific IP (field deprecated since Kubernetes 1.24)
# Get external IP (may take a few minutes)
kubectl get svc web-loadbalancer
# Example output:
# NAME               TYPE           EXTERNAL-IP    PORT(S)        AGE
# web-loadbalancer   LoadBalancer   203.0.113.10   80:31234/TCP   2m
# Access the service
curl http://203.0.113.10
4. Headless Service
When you don't need load balancing and want to directly connect to Pods, set clusterIP: None.
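A headless Service sketch: with clusterIP: None, cluster DNS returns the individual Pod IPs instead of a single virtual IP (the name and port here are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db-headless
spec:
  clusterIP: None        # headless: no virtual IP, no load balancing
  selector:
    app: database
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
```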
Service Discovery and DNS
Kubernetes runs a DNS server (CoreDNS) that assigns DNS names to Services:
# Service DNS format:
# {service-name}.{namespace}.svc.cluster.local
# Examples:
backend-service.default.svc.cluster.local
api-service.production.svc.cluster.local
# Within the same namespace, you can use just the service name:
curl http://backend-service
Ingress
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. It provides features like SSL termination, name-based virtual hosting, and path-based routing.
Ingress vs LoadBalancer
LoadBalancer: Creates one external load balancer per service (expensive in cloud environments).
Ingress: Single entry point that can route to multiple services based on hostnames and paths (cost-effective).
Installing an Ingress Controller
Before using Ingress, you need an Ingress Controller (like NGINX, Traefik, or HAProxy):
# Install NGINX Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml
# Verify installation
kubectl get pods -n ingress-nginx
# Check ingress controller service
kubectl get svc -n ingress-nginx
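No Ingress manifest is shown above, so here is a sketch of path-based routing for the NGINX controller; the hostname and backend service names are assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  ingressClassName: nginx          # matches the installed controller
  rules:
    - host: www.myapp.com          # assumed hostname
      http:
        paths:
          - path: /api             # /api traffic goes to the backend
            pathType: Prefix
            backend:
              service:
                name: backend-service
                port:
                  number: 80
          - path: /                # everything else goes to the frontend
            pathType: Prefix
            backend:
              service:
                name: frontend-service
                port:
                  number: 80
```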
Real-World Use Case
You have a web application with a frontend, API backend, and database. You create:
ClusterIP service for the database (internal only)
ClusterIP service for the API backend
Ingress for the frontend, routing www.myapp.com to frontend and www.myapp.com/api to backend
Network Policies to ensure only the frontend can access the backend, and only the backend can access the database
Test Your Knowledge - Lesson 3
Answer the following questions to proceed. You need 70% (2/3 correct) to pass.
Question 1: Which Service type is only accessible from within the cluster?
Question 2: What is the main advantage of using Ingress over LoadBalancer services?
Question 3: What is the DNS name format for accessing a service in Kubernetes?
Lesson 4 of 6
ConfigMaps and Secrets
Configuration Management
Hardcoding configuration values in container images is a bad practice. It makes images environment-specific and requires rebuilding for configuration changes. Kubernetes provides ConfigMaps and Secrets to externalize configuration.
ConfigMaps vs Secrets
ConfigMaps: Store non-sensitive configuration data (database URLs, feature flags, etc.)
Secrets: Store sensitive data (passwords, API keys, certificates) - base64 encoded
ConfigMaps
ConfigMaps allow you to decouple configuration from container images, making your applications more portable.
Creating ConfigMaps
# Create ConfigMap from literal values
kubectl create configmap app-config \
--from-literal=database_url=postgres://db.example.com:5432 \
--from-literal=log_level=debug \
--from-literal=max_connections=100
# Create ConfigMap from file
kubectl create configmap nginx-config \
--from-file=nginx.conf
# Create ConfigMap from directory
kubectl create configmap app-configs \
--from-file=configs/
# View ConfigMaps
kubectl get configmaps
# Describe ConfigMap
kubectl describe configmap app-config
# Get ConfigMap in YAML format
kubectl get configmap app-config -o yaml
Using ConfigMaps in Pods
Method 1: Environment Variables
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app
      image: myapp:1.0
      env:
        # Single environment variable from ConfigMap
        - name: DATABASE_URL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: database_url
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: log_level
      # Load all ConfigMap data as environment variables
      envFrom:
        - configMapRef:
            name: app-config
Method 2: Volume Mounts
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
    - name: nginx
      image: nginx:1.21
      volumeMounts:
        # Mount entire ConfigMap as files
        - name: config-volume
          mountPath: /etc/config
        # Mount specific key as file
        - name: nginx-config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
  volumes:
    - name: config-volume
      configMap:
        name: app-config
    - name: nginx-config
      configMap:
        name: nginx-config
        items:
          - key: nginx.conf
            path: nginx.conf
Secrets
Secrets are similar to ConfigMaps but are intended for sensitive information. They are stored base64-encoded (not encrypted by default).
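Creating and consuming a Secret mirrors the ConfigMap workflow. A sketch with placeholder names and credentials:

```yaml
# Created imperatively with:
#   kubectl create secret generic db-credentials \
#     --from-literal=username=admin --from-literal=password=s3cr3t
# Or declaratively, with base64-encoded values:
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: YWRtaW4=     # base64 of "admin"
  password: czNjcjN0     # base64 of "s3cr3t"
```

Pods consume Secrets the same two ways as ConfigMaps: via secretKeyRef in env entries, or mounted as a secret volume.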
Security Note
Base64 encoding is NOT encryption. For production environments, enable encryption at rest in etcd and use RBAC to control access to Secrets. Consider using external secret management solutions like HashiCorp Vault or AWS Secrets Manager.
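A quick way to see why base64 offers no protection, assuming a POSIX shell with the coreutils base64 tool:

```shell
# Encoding is trivially reversible; no key is involved.
echo -n 'admin' | base64             # prints: YWRtaW4=
echo -n 'YWRtaW4=' | base64 --decode # prints: admin
```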
Automatic Updates
When using volume mounts, ConfigMaps and Secrets are automatically updated in running Pods (with some delay). However, environment variables are NOT updated - you must restart the Pod.
Best Practices
Configuration Best Practices
Use ConfigMaps for non-sensitive configuration
Use Secrets for passwords, tokens, and certificates
Enable encryption at rest for Secrets in production
Use RBAC to limit access to Secrets
Prefer volume mounts over environment variables for large configs
Version your ConfigMaps and Secrets (e.g., app-config-v1, app-config-v2)
Use external secret management for sensitive production data
Test Your Knowledge - Lesson 4
Answer the following questions to proceed. You need 70% (2/3 correct) to pass.
Question 1: What is the main difference between ConfigMaps and Secrets?
Question 2: When ConfigMaps or Secrets are mounted as volumes, what happens when they are updated?
Question 3: What command creates a ConfigMap from literal key-value pairs?
Lesson 5 of 6
Introduction to Helm
What is Helm?
Helm is the package manager for Kubernetes. It simplifies the deployment and management of applications on Kubernetes by packaging related Kubernetes resources into a single unit called a "chart."
Why Use Helm?
Package Management: Bundle multiple Kubernetes resources into reusable packages
Version Control: Track and manage application versions
Configuration Management: Customize deployments with values files
Core Concepts
Chart: A package of pre-configured Kubernetes resources
Release: An instance of a chart running in a Kubernetes cluster
Repository: A collection of charts that can be shared
Values: Configuration parameters for customizing charts
Installing Helm
# Install Helm on macOS
brew install helm
# Install Helm on Linux
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
# Install Helm on Windows (using Chocolatey)
choco install kubernetes-helm
# Verify installation
helm version
# Get help
helm help
Working with Helm Repositories
# Add the legacy Helm stable repository (archived since late 2020; kept for older charts)
helm repo add stable https://charts.helm.sh/stable
# Add Bitnami repository (popular charts)
helm repo add bitnami https://charts.bitnami.com/bitnami
# Add NGINX repository
helm repo add nginx-stable https://helm.nginx.com/stable
# List configured repositories
helm repo list
# Update repository information
helm repo update
# Search for charts in repositories
helm search repo nginx
# Search for charts with specific version
helm search repo nginx --version 1.0.0
# Remove a repository
helm repo remove stable
Installing Charts
# Install a chart (creates a release)
helm install my-nginx bitnami/nginx
# Install with custom release name
helm install my-release bitnami/wordpress
# Install in a specific namespace
helm install my-db bitnami/mysql --namespace database --create-namespace
# Install with custom values
helm install my-app bitnami/nginx --set service.type=LoadBalancer
# Install with values file
helm install my-app bitnami/nginx -f custom-values.yaml
# Install and wait for resources to be ready
helm install my-app bitnami/nginx --wait --timeout 5m
# Dry run (test without installing)
helm install my-app bitnami/nginx --dry-run --debug
# Generate manifest without installing
helm template my-app bitnami/nginx > manifest.yaml
Managing Releases
# List all releases
helm list
# List releases in all namespaces
helm list --all-namespaces
# List releases in specific namespace
helm list -n production
# Get release status
helm status my-nginx
# Get release history
helm history my-nginx
# Show values used in a release
helm get values my-nginx
# Show all information about a release
helm get all my-nginx
# Show manifest of deployed release
helm get manifest my-nginx
Upgrading Releases
# Upgrade a release
helm upgrade my-nginx bitnami/nginx
# Upgrade with new values
helm upgrade my-nginx bitnami/nginx --set replicaCount=3
# Upgrade with values file
helm upgrade my-nginx bitnami/nginx -f production-values.yaml
# Upgrade or install (install if doesn't exist)
helm upgrade --install my-nginx bitnami/nginx
# Force upgrade
helm upgrade my-nginx bitnami/nginx --force
# Atomic upgrade (rollback on failure)
helm upgrade my-nginx bitnami/nginx --atomic
# Upgrade with wait
helm upgrade my-nginx bitnami/nginx --wait --timeout 10m
Rolling Back Releases
# Rollback to previous version
helm rollback my-nginx
# Rollback to specific revision
helm rollback my-nginx 2
# View what will be rolled back (dry run)
helm rollback my-nginx --dry-run
Uninstalling Releases
# Uninstall a release
helm uninstall my-nginx
# Uninstall and keep history
helm uninstall my-nginx --keep-history
# Uninstall from specific namespace
helm uninstall my-nginx -n production
Working with Values
Values files allow you to customize chart deployments. You can override default values to match your environment.
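A custom-values.yaml like the one referenced earlier might override only a few keys. These particular keys are common in Bitnami charts, but verify them against helm show values before relying on them:

```yaml
# custom-values.yaml: overrides merged on top of the chart defaults
replicaCount: 3
service:
  type: LoadBalancer
resources:
  requests:
    cpu: 100m
    memory: 128Mi
```

Applied with: helm install my-app bitnami/nginx -f custom-values.yaml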
Viewing Default Values
# Show default values for a chart
helm show values bitnami/nginx
# Save default values to file
helm show values bitnami/nginx > default-values.yaml
# Show chart information
helm show chart bitnami/nginx
# Show chart README
helm show readme bitnami/nginx
# Show all chart information
helm show all bitnami/nginx
# Download chart without installing
helm pull bitnami/nginx
# Download and unpack chart
helm pull bitnami/nginx --untar
# Download specific version
helm pull bitnami/nginx --version 13.2.0
Real-World Use Case
Your company runs the same application across development, staging, and production environments. With Helm:
Create one chart with all Kubernetes resources
Use separate values files for each environment (dev-values.yaml, staging-values.yaml, prod-values.yaml)
Deploy to dev: helm install myapp ./myapp-chart -f dev-values.yaml
Deploy to prod: helm install myapp ./myapp-chart -f prod-values.yaml
Easy upgrades, rollbacks, and consistent deployments across environments
Test Your Knowledge - Lesson 5
Answer the following questions to proceed. You need 70% (2/3 correct) to pass.
Question 1: What is a Helm chart?
Question 2: What command would you use to install a Helm chart named "nginx" from the bitnami repository with the release name "my-nginx"?
Question 3: How do you rollback a Helm release named "my-app" to the previous version?
Lesson 6 of 6
Advanced Kubernetes with Helm
StatefulSets
While Deployments are great for stateless applications, StatefulSets are designed for stateful applications that require stable network identities, persistent storage, and ordered deployment and scaling.
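A minimal StatefulSet sketch showing the pieces Deployments lack: a governing headless Service (serviceName) and a per-Pod persistent volume. The image and storage size are assumptions:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres-headless   # must reference an existing headless Service
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:15
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:            # one PersistentVolumeClaim per Pod
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

Each Pod gets a stable, ordered name (postgres-0, postgres-1, ...) and keeps its own volume across rescheduling.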
Real-World Use Case
You're deploying a microservices e-commerce platform:
Frontend: Deployment with Ingress, 5 replicas, autoscaling
API Gateway: Deployment with LoadBalancer service
User Service: Deployment with ClusterIP
Order Service: Deployment with ClusterIP
PostgreSQL: StatefulSet with PVC (100GB)
Redis: StatefulSet for caching
Message Queue: StatefulSet (RabbitMQ)
Package everything in a single Helm chart with values files for dev/staging/prod. Use Helm hooks for database migrations. Implement network policies to secure communication. Add monitoring with Prometheus and logging with ELK stack.
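Charts of your own start from a scaffold. Running helm create myapp-chart (the chart name here is a placeholder) generates roughly this layout, where Chart.yaml holds the chart metadata and values.yaml the default configuration:

```
myapp-chart/
├── Chart.yaml          # chart name, version, description
├── values.yaml         # default configuration values
├── charts/             # dependency charts
└── templates/          # Kubernetes manifests with Go templating
    ├── deployment.yaml
    ├── service.yaml
    └── _helpers.tpl
```

Validate a chart before installing it with helm lint myapp-chart.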
Test Your Knowledge - Lesson 6
Answer the following questions to proceed. You need 70% (2/3 correct) to pass.
Question 1: What is the primary difference between a Deployment and a StatefulSet?
Question 2: What file in a Helm chart contains the default configuration values?
Question 3: What command creates a new Helm chart scaffold?