Kubernetes & Helm Crash Course

Master Container Orchestration and Package Management

Lesson 1 of 6

Introduction to Kubernetes

What is Kubernetes?

Kubernetes (K8s) is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. Originally developed by Google, it's now maintained by the Cloud Native Computing Foundation (CNCF).

Why Kubernetes?

Kubernetes solves critical challenges in modern application deployment:

  • Automatic scaling: Scale applications up or down based on demand
  • Self-healing: Automatically restarts failed containers
  • Load balancing: Distributes traffic across containers
  • Rolling updates: Deploy new versions without downtime
  • Storage orchestration: Automatically mount storage systems

Kubernetes Architecture

A Kubernetes cluster consists of two kinds of components: the control plane and the worker nodes.

Control Plane (Master Node)

  • API Server: The frontend for Kubernetes. All communication goes through this.
  • etcd: Distributed key-value store that stores all cluster data.
  • Scheduler: Assigns pods to nodes based on resource requirements.
  • Controller Manager: Runs controller processes (node controller, replication controller, etc.).

Worker Nodes

  • Kubelet: Agent that runs on each node and ensures containers are running.
  • Kube-proxy: Maintains network rules for pod communication.
  • Container Runtime: Software responsible for running containers (Docker, containerd, CRI-O).

Key Concepts

Pod

The smallest deployable unit in Kubernetes. A pod represents one or more containers that share storage and network resources.

Node

A worker machine (physical or virtual) that runs your containerized applications. Each node contains the services necessary to run pods.

Cluster

A set of nodes grouped together. This provides fault tolerance - if one node fails, your application continues running on other nodes.

Namespace

Virtual clusters within a physical cluster. Namespaces provide a way to divide cluster resources between multiple users or projects.
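
Like most resources, a Namespace can be created declaratively; a minimal sketch (the name team-a is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
```

The imperative equivalent is kubectl create namespace team-a.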

Basic kubectl Commands

kubectl is the command-line tool for interacting with Kubernetes clusters. Here are essential commands:

# Check cluster information
kubectl cluster-info

# Get cluster nodes
kubectl get nodes

# View all resources in a namespace
kubectl get all -n default

# Get detailed information about a resource
kubectl describe node node-name

# Check kubectl version
kubectl version --client

# Get cluster status (componentstatuses is deprecated in Kubernetes v1.19+)
kubectl get componentstatuses

# View namespaces
kubectl get namespaces

# Set default namespace
kubectl config set-context --current --namespace=my-namespace

Important Note

Most kubectl commands require a running Kubernetes cluster. If you're just learning, consider using Minikube (local cluster) or kind (Kubernetes in Docker) for practice.

Common kubectl Operations

# Apply a configuration file
kubectl apply -f config.yaml

# Delete resources
kubectl delete -f config.yaml

# View logs from a pod
kubectl logs pod-name

# Execute a command in a container
kubectl exec -it pod-name -- /bin/bash

# Port forward to access a pod locally
kubectl port-forward pod-name 8080:80

# Get resource usage
kubectl top nodes
kubectl top pods

Real-World Use Case

A typical web application might run 10 replicas of your application container across 5 nodes. Kubernetes ensures that if a node fails, the containers are automatically rescheduled on healthy nodes. If traffic increases, Kubernetes can automatically scale up to 20 replicas.
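
Scaling like this can be automated with a HorizontalPodAutoscaler; a minimal sketch, assuming a Deployment named web-app and a metrics server installed in the cluster:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 10
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out above 70% average CPU
```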

Kubernetes Resource Types

Resource      Purpose                  Short Name
Pods          Running containers       po
Deployments   Manage pod replicas      deploy
Services      Expose pods to network   svc
ConfigMaps    Configuration data       cm
Secrets       Sensitive data           secret
# Using short names
kubectl get po
kubectl get deploy
kubectl get svc
kubectl get cm

Test Your Knowledge - Lesson 1

Answer the following questions to proceed. You need at least 2 of 3 correct to pass.

Question 1: What is the smallest deployable unit in Kubernetes?

Question 2: Which component stores all cluster data in a distributed key-value store?

Question 3: What command would you use to view all nodes in your cluster?

Lesson 2 of 6

Pods and Deployments

Understanding Pods

A Pod is the fundamental execution unit in Kubernetes. While a Pod can contain multiple containers, the most common pattern is one container per Pod. Containers in a Pod share the same network namespace, IP address, and storage volumes.

Creating a Simple Pod

Here's a basic Pod definition in YAML:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
    environment: production
spec:
  containers:
  - name: nginx
    image: nginx:1.21
    ports:
    - containerPort: 80
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
# Create the pod
kubectl apply -f nginx-pod.yaml

# Get pod status
kubectl get pods

# Get detailed pod information
kubectl describe pod nginx-pod

# View pod logs
kubectl logs nginx-pod

# Delete the pod
kubectl delete pod nginx-pod

Pods are Ephemeral

Pods are designed to be disposable and replaceable. If a Pod dies, Kubernetes doesn't resurrect it. Instead, you should use higher-level controllers like Deployments to manage Pods.

Multi-Container Pods

Sometimes you need multiple containers working together in a single Pod. Common patterns include:

  • Sidecar: Helper container that extends the main container (e.g., log shipping)
  • Ambassador: Proxy container that simplifies network connections
  • Adapter: Standardizes output from the main container
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
spec:
  containers:
  - name: web-app
    image: nginx:1.21
    ports:
    - containerPort: 80
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  - name: log-shipper
    image: fluent/fluentd:v1.14
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log
  volumes:
  - name: shared-logs
    emptyDir: {}

Introduction to Deployments

A Deployment provides declarative updates for Pods. It manages a ReplicaSet, which in turn manages Pods. Deployments are the recommended way to run stateless applications in Kubernetes.

Key Features of Deployments:

  • Maintains desired number of Pod replicas
  • Supports rolling updates with zero downtime
  • Enables rollback to previous versions
  • Self-healing: replaces failed Pods automatically
  • Scaling: easily adjust number of replicas

Creating a Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"

Working with Deployments

# Create deployment
kubectl apply -f nginx-deployment.yaml

# Get deployments
kubectl get deployments

# Get replica sets
kubectl get rs

# Get pods created by deployment
kubectl get pods -l app=nginx

# Scale deployment
kubectl scale deployment nginx-deployment --replicas=5

# Update deployment image
kubectl set image deployment/nginx-deployment nginx=nginx:1.22

# Check rollout status
kubectl rollout status deployment/nginx-deployment

# View rollout history
kubectl rollout history deployment/nginx-deployment

# Rollback to previous version
kubectl rollout undo deployment/nginx-deployment

# Rollback to specific revision
kubectl rollout undo deployment/nginx-deployment --to-revision=2

Labels and Selectors

Labels are key-value pairs attached to Kubernetes objects. Selectors allow you to filter and select resources based on labels.

Using Labels

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web
    tier: frontend
    environment: production
    version: v1.0.0
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
      tier: frontend
  template:
    metadata:
      labels:
        app: web
        tier: frontend
        environment: production
    spec:
      containers:
      - name: web
        image: myapp:1.0.0
# Get pods by label
kubectl get pods -l app=web

# Get pods with multiple labels
kubectl get pods -l app=web,tier=frontend

# Get pods with label selector
kubectl get pods -l 'environment in (production,staging)'

# Add label to existing pod
kubectl label pod nginx-pod version=v1

# Remove label from pod
kubectl label pod nginx-pod version-

# Show labels in output
kubectl get pods --show-labels

Real-World Use Case

You're running an e-commerce website with 3 replicas. During Black Friday, traffic increases 10x. You can instantly scale to 30 replicas with a single command. After the sale, scale back down. If you deploy a buggy version, rollback to the previous working version in seconds.

Deployment Strategies

Rolling Update (Default)

Gradually replaces old Pods with new ones. This is the default strategy.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rolling-update-app
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2        # Max number of pods above desired count
      maxUnavailable: 1  # Max number of pods that can be unavailable
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:2.0.0

Recreate Strategy

Terminates all old Pods before creating new ones. This causes downtime but ensures no two versions run simultaneously.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-app
spec:
  replicas: 5
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:2.0.0

Best Practices

  • Always use Deployments instead of creating Pods directly
  • Set resource requests and limits for all containers
  • Use meaningful labels for organization and selection
  • Implement readiness and liveness probes for health checks
  • Use rolling updates for zero-downtime deployments
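
The probe recommendation above can be sketched as fields on a container spec; the /healthz path, port 8080, and the timing values are assumptions about the application:

```yaml
containers:
- name: myapp
  image: myapp:1.0.0
  readinessProbe:          # gate traffic until the app reports ready
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
  livenessProbe:           # restart the container if it stops responding
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 15
    periodSeconds: 20
```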

Test Your Knowledge - Lesson 2

Answer the following questions to proceed. You need at least 2 of 3 correct to pass.

Question 1: What is the main advantage of using a Deployment over creating Pods directly?

Question 2: Which field in a Deployment spec determines how many Pod copies should run?

Question 3: What command would you use to scale a deployment named "web-app" to 10 replicas?

Lesson 3 of 6

Services and Networking

Why Services?

Pods are ephemeral and can be created, destroyed, and replaced at any time. Each Pod gets its own IP address, but these IPs change when Pods are recreated. Services provide a stable endpoint to access a set of Pods.

Service Benefits

  • Stable IP address and DNS name
  • Load balancing across multiple Pods
  • Service discovery within the cluster
  • External access to cluster applications

Service Types

1. ClusterIP (Default)

Exposes the Service on an internal IP within the cluster. This is only accessible from within the cluster.

apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  type: ClusterIP
  selector:
    app: backend
    tier: api
  ports:
  - protocol: TCP
    port: 80         # Port exposed by the service
    targetPort: 8080 # Port on the container
# Create the service
kubectl apply -f backend-service.yaml

# Get services
kubectl get svc

# Describe service
kubectl describe svc backend-service

# Get service endpoints (Pod IPs)
kubectl get endpoints backend-service

# Access service from within cluster (from another pod)
curl http://backend-service.default.svc.cluster.local

2. NodePort

Exposes the Service on each Node's IP at a static port (30000-32767). You can access the service from outside the cluster using NodeIP:NodePort.

apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80         # Service port
    targetPort: 8080 # Container port
    nodePort: 30080  # External port (optional, auto-assigned if omitted)
# Access the service
# From outside the cluster: http://<node-ip>:30080
curl http://192.168.1.100:30080

# Get node IPs
kubectl get nodes -o wide

NodePort Limitations

NodePort is good for development but not recommended for production. It exposes services on all nodes, requires knowledge of node IPs, and uses non-standard ports. Use LoadBalancer or Ingress for production.

3. LoadBalancer

Creates an external load balancer (in cloud environments like AWS, GCP, Azure) and assigns a fixed external IP to the Service.

apiVersion: v1
kind: Service
metadata:
  name: web-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  loadBalancerIP: 203.0.113.10 # Optional: request specific IP
# Get external IP (may take a few minutes)
kubectl get svc web-loadbalancer

# Example output:
# NAME               TYPE           EXTERNAL-IP    PORT(S)        AGE
# web-loadbalancer   LoadBalancer   203.0.113.10   80:31234/TCP   2m

# Access the service
curl http://203.0.113.10

4. Headless Service

When you don't need load balancing and want to directly connect to Pods, set clusterIP: None.

apiVersion: v1
kind: Service
metadata:
  name: database-headless
spec:
  clusterIP: None
  selector:
    app: database
  ports:
  - protocol: TCP
    port: 5432
    targetPort: 5432

Service Discovery

Kubernetes provides two ways for service discovery:

Environment Variables

Automatically injected into Pods when they start:

# Format:
# {SERVICE_NAME}_SERVICE_HOST
# {SERVICE_NAME}_SERVICE_PORT

# Example:
BACKEND_SERVICE_SERVICE_HOST=10.96.10.50
BACKEND_SERVICE_SERVICE_PORT=80

DNS (Recommended)

Kubernetes runs a DNS server (CoreDNS) that assigns DNS names to Services:

# Service DNS format:
# {service-name}.{namespace}.svc.cluster.local

# Examples:
backend-service.default.svc.cluster.local
api-service.production.svc.cluster.local

# Within the same namespace, you can use just the service name:
curl http://backend-service

Ingress

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. It provides features like SSL termination, name-based virtual hosting, and path-based routing.

Ingress vs LoadBalancer

LoadBalancer: Creates one external load balancer per service (expensive in cloud environments).

Ingress: Single entry point that can route to multiple services based on hostnames and paths (cost-effective).

Installing an Ingress Controller

Before using Ingress, you need an Ingress Controller (like NGINX, Traefik, or HAProxy):

# Install NGINX Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml

# Verify installation
kubectl get pods -n ingress-nginx

# Check ingress controller service
kubectl get svc -n ingress-nginx

Creating an Ingress Resource

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 8080
  - host: admin.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: admin-service
            port:
              number: 80

Ingress with TLS/SSL

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-ingress
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - www.example.com
    secretName: tls-secret # Contains certificate
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
# Create TLS secret
kubectl create secret tls tls-secret \
  --cert=path/to/cert.crt \
  --key=path/to/key.key

# Get ingress
kubectl get ingress

# Describe ingress
kubectl describe ingress web-ingress

Network Policies

Network Policies control traffic flow between Pods. By default, all Pods can communicate with each other.
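
Because of this default-allow behaviour, a common first step is a deny-all policy that more specific policies then punch holes in; a minimal sketch for the production namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}    # empty selector matches every Pod in the namespace
  policyTypes:
  - Ingress
  - Egress
```

Note that Network Policies only take effect if the cluster's network plugin supports them.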

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - protocol: TCP
      port: 5432

Real-World Use Case

You have a web application with a frontend, API backend, and database. You create:

  • ClusterIP service for the database (internal only)
  • ClusterIP service for the API backend
  • Ingress for the frontend, routing www.myapp.com to frontend and www.myapp.com/api to backend
  • Network Policies to ensure only the frontend can access the backend, and only the backend can access the database

Test Your Knowledge - Lesson 3

Answer the following questions to proceed. You need at least 2 of 3 correct to pass.

Question 1: Which Service type is only accessible from within the cluster?

Question 2: What is the main advantage of using Ingress over LoadBalancer services?

Question 3: What is the DNS name format for accessing a service in Kubernetes?

Lesson 4 of 6

ConfigMaps and Secrets

Configuration Management

Hardcoding configuration values in container images is a bad practice. It makes images environment-specific and requires rebuilding for configuration changes. Kubernetes provides ConfigMaps and Secrets to externalize configuration.

ConfigMaps vs Secrets

ConfigMaps: Store non-sensitive configuration data (database URLs, feature flags, etc.)

Secrets: Store sensitive data (passwords, API keys, certificates) - base64 encoded

ConfigMaps

ConfigMaps allow you to decouple configuration from container images, making your applications more portable.

Creating ConfigMaps

# Create ConfigMap from literal values
kubectl create configmap app-config \
  --from-literal=database_url=postgres://db.example.com:5432 \
  --from-literal=log_level=debug \
  --from-literal=max_connections=100

# Create ConfigMap from file
kubectl create configmap nginx-config \
  --from-file=nginx.conf

# Create ConfigMap from directory
kubectl create configmap app-configs \
  --from-file=configs/

# View ConfigMaps
kubectl get configmaps

# Describe ConfigMap
kubectl describe configmap app-config

# Get ConfigMap in YAML format
kubectl get configmap app-config -o yaml

ConfigMap YAML Definition

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: production
data:
  # Simple key-value pairs
  database_url: "postgres://db.example.com:5432"
  log_level: "info"
  max_connections: "100"
  enable_cache: "true"
  # Multi-line configuration file
  app.properties: |
    server.port=8080
    server.host=0.0.0.0
    database.pool.size=20
    cache.ttl=3600
    feature.new_ui=enabled
  # JSON configuration
  config.json: |
    {
      "apiEndpoint": "https://api.example.com",
      "timeout": 30,
      "retries": 3
    }

Using ConfigMaps in Pods

Method 1: Environment Variables

apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: myapp:1.0
    env:
    # Single environment variable from ConfigMap
    - name: DATABASE_URL
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: database_url
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: log_level
    # Load all ConfigMap data as environment variables
    envFrom:
    - configMapRef:
        name: app-config

Method 2: Volume Mounts

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.21
    volumeMounts:
    # Mount entire ConfigMap as files
    - name: config-volume
      mountPath: /etc/config
    # Mount specific key as file
    - name: nginx-config
      mountPath: /etc/nginx/nginx.conf
      subPath: nginx.conf
  volumes:
  - name: config-volume
    configMap:
      name: app-config
  - name: nginx-config
    configMap:
      name: nginx-config
      items:
      - key: nginx.conf
        path: nginx.conf

Secrets

Secrets are similar to ConfigMaps but are intended for sensitive information. They are stored base64-encoded (not encrypted by default).

Security Note

Base64 encoding is NOT encryption. For production environments, enable encryption at rest in etcd and use RBAC to control access to Secrets. Consider using external secret management solutions like HashiCorp Vault or AWS Secrets Manager.
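
Encryption at rest is configured on the API server with an EncryptionConfiguration file; a sketch, assuming you administer the control plane (the key material shown is a placeholder, not a real key):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded 32-byte key>   # placeholder
  - identity: {}   # fallback for reading data written before encryption
```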

Creating Secrets

# Create secret from literal values
kubectl create secret generic db-secret \
  --from-literal=username=admin \
  --from-literal=password=super-secret-password

# Create secret from files
kubectl create secret generic tls-secret \
  --from-file=tls.crt=cert.crt \
  --from-file=tls.key=key.key

# Create TLS secret
kubectl create secret tls tls-secret \
  --cert=cert.crt \
  --key=key.key

# Create Docker registry secret
kubectl create secret docker-registry registry-secret \
  --docker-server=registry.example.com \
  --docker-username=user \
  --docker-password=password \
  --docker-email=user@example.com

# View secrets (data hidden)
kubectl get secrets

# Describe secret (shows keys but not values)
kubectl describe secret db-secret

Secret YAML Definition

apiVersion: v1
kind: Secret
metadata:
  name: db-secret
  namespace: production
type: Opaque
data:
  # Values must be base64 encoded
  username: YWRtaW4=         # admin
  password: c3VwZXItc2VjcmV0 # super-secret
# Alternative: use stringData for automatic encoding
stringData:
  api_key: "sk-1234567890abcdef"
  database_url: "postgres://user:pass@db:5432/mydb"
# Encode values for data field
echo -n 'admin' | base64
# Output: YWRtaW4=

# Decode values
echo 'YWRtaW4=' | base64 -d
# Output: admin

Using Secrets in Pods

Method 1: Environment Variables

apiVersion: v1
kind: Pod
metadata:
  name: app-with-secrets
spec:
  containers:
  - name: app
    image: myapp:1.0
    env:
    # Single secret value
    - name: DB_USERNAME
      valueFrom:
        secretKeyRef:
          name: db-secret
          key: username
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-secret
          key: password
    - name: API_KEY
      valueFrom:
        secretKeyRef:
          name: api-secret
          key: api_key
    # Load all secret data as environment variables
    envFrom:
    - secretRef:
        name: db-secret

Method 2: Volume Mounts

apiVersion: v1
kind: Pod
metadata:
  name: app-with-secret-files
spec:
  containers:
  - name: app
    image: myapp:1.0
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: db-secret
      defaultMode: 0400 # Read-only for owner
      items:
      - key: username
        path: db-username
      - key: password
        path: db-password

Using Secrets for Private Container Registries

apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod
spec:
  containers:
  - name: app
    image: registry.example.com/myapp:1.0
  imagePullSecrets:
  - name: registry-secret

Updating ConfigMaps and Secrets

# Update ConfigMap
kubectl create configmap app-config \
  --from-literal=log_level=info \
  --dry-run=client -o yaml | kubectl apply -f -

# Edit ConfigMap directly
kubectl edit configmap app-config

# Replace ConfigMap from file
kubectl create configmap app-config \
  --from-file=config.yaml \
  --dry-run=client -o yaml | kubectl replace -f -

# Delete and recreate
kubectl delete configmap app-config
kubectl create configmap app-config --from-file=config.yaml

Automatic Updates

When using volume mounts, ConfigMaps and Secrets are automatically updated in running Pods (with some delay), with one exception: keys mounted via subPath are not refreshed. Environment variables are NOT updated either - you must restart the Pods, for example with kubectl rollout restart deployment/<name>.

Best Practices

Configuration Best Practices

  • Use ConfigMaps for non-sensitive configuration
  • Use Secrets for passwords, tokens, and certificates
  • Enable encryption at rest for Secrets in production
  • Use RBAC to limit access to Secrets
  • Prefer volume mounts over environment variables for large configs
  • Version your ConfigMaps and Secrets (e.g., app-config-v1, app-config-v2)
  • Use external secret management for sensitive production data
  • Never commit Secrets to version control

Real-World Example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: mywebapp:2.0
        ports:
        - containerPort: 8080
        env:
        # Environment variables from ConfigMap
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: log_level
        # Environment variables from Secret
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: password
        # Load all ConfigMap keys as env vars
        envFrom:
        - configMapRef:
            name: app-config
        volumeMounts:
        # Mount application config file
        - name: config-volume
          mountPath: /app/config
        # Mount TLS certificates
        - name: tls-volume
          mountPath: /app/certs
          readOnly: true
      volumes:
      - name: config-volume
        configMap:
          name: app-config
      - name: tls-volume
        secret:
          secretName: tls-secret
      # Pull from private registry
      imagePullSecrets:
      - name: registry-secret

Test Your Knowledge - Lesson 4

Answer the following questions to proceed. You need at least 2 of 3 correct to pass.

Question 1: What is the main difference between ConfigMaps and Secrets?

Question 2: When ConfigMaps or Secrets are mounted as volumes, what happens when they are updated?

Question 3: What command creates a ConfigMap from literal key-value pairs?

Lesson 5 of 6

Introduction to Helm

What is Helm?

Helm is the package manager for Kubernetes. It simplifies the deployment and management of applications on Kubernetes by packaging related Kubernetes resources into a single unit called a "chart."

Why Use Helm?

  • Package Management: Bundle multiple Kubernetes resources into reusable packages
  • Version Control: Track and manage application versions
  • Configuration Management: Customize deployments with values files
  • Release Management: Easy rollbacks and upgrades
  • Dependency Management: Handle complex application dependencies
  • Templating: Create reusable, parameterized Kubernetes manifests
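
The templating bullet refers to Go templates embedded in chart manifests; a minimal sketch of a templated Deployment fragment, assuming replicaCount and image.* are defined in the chart's values.yaml:

```yaml
# templates/deployment.yaml (fragment)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
      - name: web
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Helm fills in these placeholders from values files and --set flags at install time.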

Helm Architecture

Key Concepts

  • Chart: A package of pre-configured Kubernetes resources
  • Release: An instance of a chart running in a Kubernetes cluster
  • Repository: A collection of charts that can be shared
  • Values: Configuration parameters for customizing charts

Installing Helm

# Install Helm on macOS
brew install helm

# Install Helm on Linux
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Install Helm on Windows (using Chocolatey)
choco install kubernetes-helm

# Verify installation
helm version

# Get help
helm help

Working with Helm Repositories

# Add official Helm stable repository
helm repo add stable https://charts.helm.sh/stable

# Add Bitnami repository (popular charts)
helm repo add bitnami https://charts.bitnami.com/bitnami

# Add NGINX repository
helm repo add nginx-stable https://helm.nginx.com/stable

# List configured repositories
helm repo list

# Update repository information
helm repo update

# Search for charts in repositories
helm search repo nginx

# Search for charts with specific version
helm search repo nginx --version 1.0.0

# Remove a repository
helm repo remove stable

Installing Charts

# Install a chart (creates a release)
helm install my-nginx bitnami/nginx

# Install with custom release name
helm install my-release bitnami/wordpress

# Install in a specific namespace
helm install my-db bitnami/mysql --namespace database --create-namespace

# Install with custom values
helm install my-app bitnami/nginx --set service.type=LoadBalancer

# Install with values file
helm install my-app bitnami/nginx -f custom-values.yaml

# Install and wait for resources to be ready
helm install my-app bitnami/nginx --wait --timeout 5m

# Dry run (test without installing)
helm install my-app bitnami/nginx --dry-run --debug

# Generate manifest without installing
helm template my-app bitnami/nginx > manifest.yaml

Managing Releases

# List all releases
helm list

# List releases in all namespaces
helm list --all-namespaces

# List releases in specific namespace
helm list -n production

# Get release status
helm status my-nginx

# Get release history
helm history my-nginx

# Show values used in a release
helm get values my-nginx

# Show all information about a release
helm get all my-nginx

# Show manifest of deployed release
helm get manifest my-nginx

Upgrading Releases

# Upgrade a release
helm upgrade my-nginx bitnami/nginx

# Upgrade with new values
helm upgrade my-nginx bitnami/nginx --set replicaCount=3

# Upgrade with values file
helm upgrade my-nginx bitnami/nginx -f production-values.yaml

# Upgrade or install (install if it doesn't exist)
helm upgrade --install my-nginx bitnami/nginx

# Force upgrade
helm upgrade my-nginx bitnami/nginx --force

# Atomic upgrade (rollback on failure)
helm upgrade my-nginx bitnami/nginx --atomic

# Upgrade with wait
helm upgrade my-nginx bitnami/nginx --wait --timeout 10m

Rolling Back Releases

# Rollback to previous version
helm rollback my-nginx

# Rollback to specific revision
helm rollback my-nginx 2

# View what will be rolled back (dry run)
helm rollback my-nginx --dry-run

Uninstalling Releases

# Uninstall a release
helm uninstall my-nginx

# Uninstall and keep history
helm uninstall my-nginx --keep-history

# Uninstall from specific namespace
helm uninstall my-nginx -n production

Working with Values

Values files allow you to customize chart deployments. You can override default values to match your environment.

Viewing Default Values

# Show default values for a chart
helm show values bitnami/nginx

# Save default values to file
helm show values bitnami/nginx > default-values.yaml

Custom Values File

# custom-values.yaml
replicaCount: 3

image:
  registry: docker.io
  repository: bitnami/nginx
  tag: 1.21.0

service:
  type: LoadBalancer
  port: 80
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb

resources:
  requests:
    memory: "128Mi"
    cpu: "250m"
  limits:
    memory: "256Mi"
    cpu: "500m"

ingress:
  enabled: true
  hostname: www.example.com
  tls: true

autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPU: 70
# Install with custom values
helm install my-nginx bitnami/nginx -f custom-values.yaml

# Override specific values
helm install my-nginx bitnami/nginx \
  --set replicaCount=5 \
  --set service.type=NodePort

# Use multiple values files (later files override earlier)
helm install my-nginx bitnami/nginx \
  -f base-values.yaml \
  -f environment-values.yaml \
  -f custom-values.yaml

Inspecting Charts

# Show chart information
helm show chart bitnami/nginx

# Show chart README
helm show readme bitnami/nginx

# Show all chart information
helm show all bitnami/nginx

# Download chart without installing
helm pull bitnami/nginx

# Download and unpack chart
helm pull bitnami/nginx --untar

# Download specific version
helm pull bitnami/nginx --version 13.2.0

Real-World Example: Deploying WordPress

# Create custom values file
cat > wordpress-values.yaml << EOF
wordpressUsername: admin
wordpressEmail: admin@example.com
wordpressFirstName: Admin
wordpressLastName: User
service:
  type: LoadBalancer
persistence:
  enabled: true
  size: 10Gi
mariadb:
  auth:
    database: wordpress_db
    username: wordpress_user
  primary:
    persistence:
      enabled: true
      size: 8Gi
resources:
  requests:
    memory: 512Mi
    cpu: 300m
  limits:
    memory: 1Gi
    cpu: 1000m
EOF

# Install WordPress with custom values
helm install my-wordpress bitnami/wordpress \
  -f wordpress-values.yaml \
  --namespace wordpress \
  --create-namespace

# Get WordPress URL and password
echo "WordPress URL: http://$(kubectl get svc my-wordpress --namespace wordpress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
echo "Password: $(kubectl get secret my-wordpress --namespace wordpress -o jsonpath="{.data.wordpress-password}" | base64 -d)"

# Monitor deployment
kubectl get pods -n wordpress --watch

# Upgrade WordPress
helm upgrade my-wordpress bitnami/wordpress \
  -f wordpress-values.yaml \
  --namespace wordpress

Important Notes

  • Always review chart values before installing in production
  • Use version pinning to ensure reproducible deployments
  • Test with --dry-run before applying to production
  • Keep values files in version control (except secrets)
  • Use separate values files for different environments

Helm Plugins

# Install helm-diff plugin (compare releases)
helm plugin install https://github.com/databus23/helm-diff

# Use helm diff
helm diff upgrade my-nginx bitnami/nginx -f new-values.yaml

# List installed plugins
helm plugin list

# Update plugin
helm plugin update diff

# Uninstall plugin
helm plugin uninstall diff

Real-World Use Case

Your company runs the same application across development, staging, and production environments. With Helm:

  • Create one chart with all Kubernetes resources
  • Use separate values files for each environment (dev-values.yaml, staging-values.yaml, prod-values.yaml)
  • Deploy to dev: helm install myapp ./myapp-chart -f dev-values.yaml
  • Deploy to prod: helm install myapp ./myapp-chart -f prod-values.yaml
  • Easy upgrades, rollbacks, and consistent deployments across environments
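As a sketch of what those per-environment files might contain, the two fragments below vary only the sizing while the chart templates stay identical (all values here are illustrative, not from a real chart):

```yaml
# dev-values.yaml -- small footprint for development (illustrative)
replicaCount: 1
resources:
  requests:
    cpu: 100m
    memory: 128Mi
---
# prod-values.yaml -- production sizing (illustrative)
replicaCount: 5
resources:
  requests:
    cpu: 500m
    memory: 512Mi
```

Because only the values differ, a change to the chart templates is automatically picked up by every environment on its next upgrade.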

Test Your Knowledge - Lesson 5

Answer the following questions to proceed. You need at least 2 of 3 correct to pass.

Question 1: What is a Helm chart?

Question 2: What command would you use to install a Helm chart named "nginx" from the bitnami repository with the release name "my-nginx"?

Question 3: How do you rollback a Helm release named "my-app" to the previous version?

Lesson 6 of 6

Advanced Kubernetes with Helm

StatefulSets

While Deployments are great for stateless applications, StatefulSets are designed for stateful applications that require stable network identities, persistent storage, and ordered deployment and scaling.

StatefulSet vs Deployment

Feature            | Deployment           | StatefulSet
Pod Identity       | Random names         | Stable, unique names
Startup Order      | Parallel             | Sequential
Storage            | Shared or ephemeral  | Unique per pod
Network Identity   | Random               | Stable DNS names
Use Case           | Stateless apps       | Databases, queues

Creating a StatefulSet

apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  labels:
    app: mongodb
spec:
  clusterIP: None  # Headless service
  selector:
    app: mongodb
  ports:
  - port: 27017
    name: mongodb
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  serviceName: "mongodb-service"
  replicas: 3
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo:5.0
        ports:
        - containerPort: 27017
          name: mongodb
        volumeMounts:
        - name: mongodb-data
          mountPath: /data/db
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value: admin
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: password
  volumeClaimTemplates:
  - metadata:
      name: mongodb-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
# StatefulSet pods have predictable names:
# mongodb-0, mongodb-1, mongodb-2

# Access a specific pod via its stable DNS name
mongodb-0.mongodb-service.default.svc.cluster.local

# Scale the StatefulSet
kubectl scale statefulset mongodb --replicas=5

# Watch the rollout status after an update
kubectl rollout status statefulset/mongodb
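The stable names follow the pattern pod-name.service-name.namespace.svc.cluster.local. This tiny shell sketch simply prints the DNS names the three-replica StatefulSet above would get, assuming the default namespace:

```shell
# Print the stable DNS names for a 3-replica StatefulSet named "mongodb"
# behind the headless service "mongodb-service" (default namespace assumed).
for i in 0 1 2; do
  echo "mongodb-${i}.mongodb-service.default.svc.cluster.local"
done
```

This predictability is what lets, say, a MongoDB replica set configuration list its members by hostname before any pod exists.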

Persistent Volumes and Persistent Volume Claims

PersistentVolumes (PVs) are pieces of storage provisioned in the cluster, either statically by an administrator or dynamically through a StorageClass. PersistentVolumeClaims (PVCs) are user requests for storage, which Kubernetes binds to a matching PV.

PersistentVolume (PV)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-data
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
# AWS EBS Example
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-aws-ebs
spec:
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp2
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef
    fsType: ext4

PersistentVolumeClaim (PVC)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

Using PVC in a Pod

apiVersion: v1
kind: Pod
metadata:
  name: app-with-storage
spec:
  containers:
  - name: app
    image: myapp:1.0
    volumeMounts:
    - name: data-volume
      mountPath: /data
  volumes:
  - name: data-volume
    persistentVolumeClaim:
      claimName: data-pvc

Storage Classes

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp3
  iops: "3000"
  throughput: "125"
  encrypted: "true"
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
# Create a PVC using the storage class
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-storage-pvc
spec:
  storageClassName: fast-ssd
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi

Creating Helm Charts

Creating your own Helm charts allows you to package and distribute your applications efficiently.

Chart Structure

# Create a new chart
helm create myapp

# Chart directory structure:
myapp/
├── Chart.yaml           # Chart metadata
├── values.yaml          # Default configuration values
├── charts/              # Dependency charts
├── templates/           # Kubernetes manifest templates
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── ingress.yaml
│   ├── _helpers.tpl     # Template helpers
│   └── NOTES.txt        # Post-install notes
└── .helmignore          # Files to ignore

Chart.yaml

apiVersion: v2
name: myapp
description: A Helm chart for my awesome application
type: application
version: 1.0.0
appVersion: "2.0.0"
keywords:
- web
- application
maintainers:
- name: Your Name
  email: you@example.com
dependencies:
- name: postgresql
  version: 12.1.0
  repository: https://charts.bitnami.com/bitnami
  condition: postgresql.enabled

values.yaml

replicaCount: 2

image:
  repository: myapp
  tag: "2.0.0"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80
  targetPort: 8080

ingress:
  enabled: false
  className: nginx
  hosts:
  - host: myapp.example.com
    paths:
    - path: /
      pathType: Prefix
  tls: []

resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 250m
    memory: 256Mi

autoscaling:
  enabled: false
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80

postgresql:
  enabled: true
  auth:
    database: myapp_db
    username: myapp_user

templates/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.fullname" . }}
  labels:
    {{- include "myapp.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "myapp.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "myapp.selectorLabels" . | nindent 8 }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        ports:
        - name: http
          containerPort: {{ .Values.service.targetPort }}
          protocol: TCP
        resources:
          {{- toYaml .Values.resources | nindent 10 }}

Working with Your Chart

# Lint the chart (check for errors)
helm lint myapp/

# Test template rendering
helm template myapp myapp/

# Package the chart
helm package myapp/
# Creates: myapp-1.0.0.tgz

# Install your chart
helm install my-release ./myapp

# Install with custom values
helm install my-release ./myapp -f custom-values.yaml

# Debug chart installation
helm install my-release ./myapp --dry-run --debug

Helm Chart Best Practices

Production Best Practices

  • Immutable Tags: Use specific image tags, never latest
  • Resource Limits: Always define resource requests and limits
  • Health Checks: Implement liveness and readiness probes
  • Security Contexts: Run containers as non-root users
  • PodDisruptionBudgets: Ensure availability during updates
  • Network Policies: Restrict pod-to-pod communication
  • Secrets Management: Never hardcode secrets, use external secret managers
  • Monitoring: Add Prometheus metrics and logging

Advanced Deployment Example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: production-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: production-app
  template:
    metadata:
      labels:
        app: production-app
        version: v2.0.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9090"
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        fsGroup: 1000
      containers:
      - name: app
        image: myapp:2.0.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          name: http
        - containerPort: 9090
          name: metrics
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: url
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5
          timeoutSeconds: 3
          successThreshold: 1
        volumeMounts:
        - name: config
          mountPath: /app/config
        - name: data
          mountPath: /app/data
      volumes:
      - name: config
        configMap:
          name: app-config
      - name: data
        persistentVolumeClaim:
          claimName: app-data
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - production-app
              topologyKey: kubernetes.io/hostname
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: production-app-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: production-app

Helm Hooks

Hooks allow you to intervene at certain points in a release lifecycle (pre-install, post-install, pre-upgrade, etc.).

apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-db-migration"
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      containers:
      - name: migration
        image: myapp-migrations:latest
        command: ["./migrate.sh"]
      restartPolicy: Never

Managing Dependencies

# Chart.yaml with dependencies
dependencies:
- name: postgresql
  version: 12.1.0
  repository: https://charts.bitnami.com/bitnami
  condition: postgresql.enabled
- name: redis
  version: 17.3.0
  repository: https://charts.bitnami.com/bitnami
  condition: redis.enabled

# Update dependencies
helm dependency update myapp/

# List dependencies
helm dependency list myapp/

# Build dependencies
helm dependency build myapp/

Real-World Production Scenario

You're deploying a microservices e-commerce platform:

  • Frontend: Deployment with Ingress, 5 replicas, autoscaling
  • API Gateway: Deployment with LoadBalancer service
  • User Service: Deployment with ClusterIP
  • Order Service: Deployment with ClusterIP
  • PostgreSQL: StatefulSet with PVC (100GB)
  • Redis: StatefulSet for caching
  • Message Queue: StatefulSet (RabbitMQ)

Package everything in a single Helm chart with values files for dev/staging/prod. Use Helm hooks for database migrations. Implement network policies to secure communication. Add monitoring with Prometheus and logging with ELK stack.
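One way to lay out such an umbrella chart is sketched below; all file and service names are illustrative, and the stateful components (PostgreSQL, Redis, RabbitMQ) would come in as chart dependencies rather than hand-written templates:

```text
ecommerce/
├── Chart.yaml            # declares postgresql, redis, rabbitmq as dependencies
├── values.yaml           # shared defaults
├── dev-values.yaml       # small replica counts, persistence off
├── staging-values.yaml   # production-like sizing
├── prod-values.yaml      # full sizing, persistence, autoscaling
├── charts/               # pulled dependency charts
└── templates/
    ├── frontend.yaml
    ├── api-gateway.yaml
    ├── user-service.yaml
    ├── order-service.yaml
    └── hooks/
        └── db-migration-job.yaml
```

Deploying to an environment is then a single command, e.g. helm upgrade --install ecommerce ./ecommerce -f prod-values.yaml.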

Test Your Knowledge - Lesson 6

Answer the following questions to proceed. You need at least 2 of 3 correct to pass.

Question 1: What is the primary difference between a Deployment and a StatefulSet?

Question 2: What file in a Helm chart contains the default configuration values?

Question 3: What command creates a new Helm chart scaffold?