Kubernetes Advanced Topics

ConfigMaps, Debugging & Application Lifecycle

Lesson 1 of 4

ConfigMaps: Managing Application Configuration

What is a ConfigMap?

A ConfigMap is a Kubernetes object used to store non-confidential configuration data in key-value pairs. It allows you to decouple configuration from your application code, making your containerized applications more portable.

Why Use ConfigMaps?

  • Separation of concerns: Keep configuration separate from application code
  • Reusability: Use the same container image across different environments
  • Easy updates: Change configuration without rebuilding images
  • Centralized management: Store all configuration in Kubernetes

ConfigMap Structure

A ConfigMap manifest includes standard Kubernetes fields:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: default
data:
  # Simple key-value pairs
  DATABASE_HOST: "mysql.example.com"
  DATABASE_PORT: "3306"
  LOG_LEVEL: "info"
  MAX_CONNECTIONS: "100"

Key Components

  • apiVersion: v1 - ConfigMap API version
  • kind: ConfigMap - Resource type
  • metadata - Name and namespace
  • data - The actual configuration key-value pairs

Multi-line Configuration with YAML Pipe

For embedding full configuration files (like nginx.conf, application.yml, etc.), use the YAML vertical pipe (|) to define multi-line values:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  # Single-line configuration
  server_name: "example.com"

  # Multi-line configuration using |
  nginx.conf: |
    server {
        listen 80;
        server_name example.com;

        location / {
            root /usr/share/nginx/html;
            index index.html;
        }

        location /api {
            proxy_pass http://backend:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

YAML Multi-line Operators

  • | (pipe) - Preserves newlines (literal block scalar)
  • > (greater than) - Folds newlines into spaces (folded block scalar)

Use | for configuration files where line breaks matter.
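The difference is easiest to see side by side. In this hypothetical fragment, both keys hold the same two lines of text; only the block scalar style differs:

```yaml
data:
  literal.txt: |   # | keeps the line break: value is "line one\nline two\n"
    line one
    line two
  folded.txt: >    # > folds it to a space: value is "line one line two\n"
    line one
    line two
```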

Creating ConfigMaps

Method 1: From YAML Manifest

# Create from file
kubectl apply -f configmap.yaml

# View ConfigMaps
kubectl get configmaps

# Describe a ConfigMap
kubectl describe configmap app-config

Method 2: From Literal Values

# Create from command line
kubectl create configmap app-config \
  --from-literal=DATABASE_HOST=mysql.example.com \
  --from-literal=DATABASE_PORT=3306 \
  --from-literal=LOG_LEVEL=info

Method 3: From Files

# Create from a file
kubectl create configmap nginx-config \
  --from-file=nginx.conf

# Create from a directory (all files in the directory)
kubectl create configmap app-configs \
  --from-file=./config-files/

Using ConfigMaps in Deployments

ConfigMaps can be consumed by Pods in several ways:

1. As Environment Variables

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: app
        image: my-app:1.0
        # Inject all ConfigMap data as environment variables
        envFrom:
        - configMapRef:
            name: app-config
        # Or inject specific keys
        env:
        - name: DB_HOST
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: DATABASE_HOST

2. As Volume Mounts (Files)

This is the primary method for injecting full configuration files:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
        # Mount ConfigMap as files
        volumeMounts:
        - name: nginx-config-volume
          mountPath: /etc/nginx/conf.d   # Where to mount
          readOnly: true
      # Define the volume from the ConfigMap
      volumes:
      - name: nginx-config-volume
        configMap:
          name: nginx-config   # ConfigMap name

How Volume Mounts Work

When you mount a ConfigMap as a volume:

  • Each key in the ConfigMap's data section becomes a file
  • The file name is the key name
  • The file contents are the value
  • Files are mounted at the specified mountPath
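To make the mapping concrete, here is a small Python sketch (plain Python, not Kubernetes) that simulates the projection: every key in `data` becomes a file whose contents are the value. The `project_configmap` helper is our own name, purely for illustration.

```python
# A plain-Python sketch (not Kubernetes itself) simulating ConfigMap volume
# projection: each key in `data` becomes a file, its value the file contents.
import os
import tempfile

def project_configmap(data: dict, mount_path: str) -> list:
    """Write each key/value pair as a file, like a ConfigMap volume mount."""
    for key, value in data.items():
        with open(os.path.join(mount_path, key), "w") as f:
            f.write(value)
    return sorted(os.listdir(mount_path))

mount = tempfile.mkdtemp()  # stands in for the Pod's mountPath
files = project_configmap({"nginx.conf": "server { listen 80; }"}, mount)
print(files)  # ['nginx.conf']
```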

3. Mounting Specific Keys

volumes:
- name: config-volume
  configMap:
    name: app-config
    items:                         # Mount only specific keys
    - key: nginx.conf
      path: nginx.conf             # File name in the mount
    - key: app.properties
      path: config/app.properties  # Can use subdirectories

Complete Example

# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-config
data:
  # Simple configuration
  API_URL: "https://api.example.com"
  TIMEOUT: "30"

  # Full configuration file
  application.yml: |
    server:
      port: 8080
    database:
      host: postgres
      port: 5432
    logging:
      level: INFO
---
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: app
        image: webapp:1.0
        # Environment variables from ConfigMap
        env:
        - name: API_URL
          valueFrom:
            configMapKeyRef:
              name: webapp-config
              key: API_URL
        # Mount config file
        volumeMounts:
        - name: config
          mountPath: /app/config
      volumes:
      - name: config
        configMap:
          name: webapp-config
          items:
          - key: application.yml
            path: application.yml

ConfigMap Best Practices

  • Use descriptive names for ConfigMaps
  • Group related configuration together
  • Use volume mounts for large or complex configuration files
  • Use environment variables for simple key-value pairs
  • Version your ConfigMaps (e.g., app-config-v1, app-config-v2)
  • For sensitive data, use Secrets instead of ConfigMaps

ConfigMap Limitations

  • Size limit: about 1 MiB per ConfigMap (an etcd object size limit)
  • Not encrypted - use Secrets for sensitive data
  • Changes don't automatically restart Pods (unless using features like Reloader)
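One common workaround for the no-automatic-restart limitation (a sketch, assuming a templating step such as Helm or a CI pipeline computes the hash) is to embed a hash of the ConfigMap in the Pod template, so any config change alters the template and triggers a rolling update:

```yaml
spec:
  template:
    metadata:
      annotations:
        # Recomputed whenever the ConfigMap changes; a changed template
        # annotation forces a rolling restart of the Pods
        checksum/config: "<sha256 of the ConfigMap contents>"
```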
Lesson 2 of 4

Debugging with Port Forwarding

What is kubectl port-forward?

The kubectl port-forward command is a practical debugging tool that allows you to access services and Pods directly from your local machine without exposing them through a Service or Ingress.

Use Cases for Port Forwarding

  • Debugging: Access a specific Pod for troubleshooting
  • Database access: Connect to databases without public exposure
  • Development: Test services locally before creating Services
  • Admin interfaces: Access admin dashboards securely
  • Quick testing: Verify application behavior without networking setup

Basic Port Forward Syntax

The command requires specifying the target resource and port mapping:

# Basic syntax
kubectl port-forward [RESOURCE_TYPE/NAME] [LOCAL_PORT]:[POD_PORT]

# Forward to a Pod
kubectl port-forward pod/my-pod 8080:80

# Forward to a Deployment (picks the first matching Pod)
kubectl port-forward deployment/my-app 8080:80

# Forward to a Service
kubectl port-forward service/my-service 8080:80

Port Mapping

  • LOCAL_PORT: The port on your local machine where kubectl will listen
  • POD_PORT: The port in the Pod where the application is accepting requests

Practical Examples

Example 1: Forward to a Web Application

# List running Pods
kubectl get pods
# Output:
# NAME                      READY   STATUS    RESTARTS   AGE
# webapp-7d8f9c5b6d-abc12   1/1     Running   0          5m

# Forward local port 8080 to the Pod's port 80
kubectl port-forward pod/webapp-7d8f9c5b6d-abc12 8080:80

# Now access in browser: http://localhost:8080
# Press Ctrl+C to stop forwarding

Example 2: Access a Database

# Forward to PostgreSQL database
kubectl port-forward pod/postgres-0 5432:5432

# Now connect using local tools:
psql -h localhost -p 5432 -U myuser -d mydb

# Or use a GUI client pointing to localhost:5432

Example 3: Multiple Port Forwards

# Forward multiple ports simultaneously
kubectl port-forward pod/my-pod 8080:80 8443:443

# Access HTTP on localhost:8080
# Access HTTPS on localhost:8443

Example 4: Using Different Local Port

# If local port 8080 is busy, use a different one
kubectl port-forward pod/my-pod 9090:80

# Access on localhost:9090

Forwarding to Services vs. Pods

# Forward to a specific Pod (recommended for debugging)
kubectl port-forward pod/webapp-abc123 8080:80

# Forward to a Deployment (uses the first Pod matching its selector)
kubectl port-forward deployment/webapp 8080:80

# Forward to a Service (forwards to a single Pod behind it - no load balancing)
kubectl port-forward service/webapp 8080:80

When to Use Each

  • Pod: When debugging a specific Pod instance
  • Deployment: Quick access, don't care which Pod
  • Service: Reach the app via its Service name without looking up Pod names (note: kubectl still forwards to one Pod; it does not load balance across replicas)

Advanced Options

1. Specify Namespace

# Forward to a Pod in a specific namespace
kubectl port-forward -n production pod/webapp-abc123 8080:80

2. Listen on Specific Address

# Listen on all interfaces (allow external connections)
kubectl port-forward --address 0.0.0.0 pod/webapp-abc123 8080:80

# Listen on a specific IP
kubectl port-forward --address 192.168.1.100 pod/webapp-abc123 8080:80

# Default is 127.0.0.1 (localhost only)

Security Warning

Using --address 0.0.0.0 allows anyone on your network to access the forwarded port. Only use this in trusted networks!

3. Run in Background

# Run in background (Linux/Mac)
kubectl port-forward pod/webapp-abc123 8080:80 &

# Check background jobs
jobs

# Bring to foreground
fg

# Or kill by job number
kill %1

Common Use Cases

Debugging Application Issues

# 1. Find the problematic Pod
kubectl get pods

# 2. Forward to that specific Pod
kubectl port-forward pod/webapp-abc123 8080:80

# 3. Test with curl or a browser
curl http://localhost:8080/health

# 4. Check logs while testing
kubectl logs -f pod/webapp-abc123

Accessing Admin Dashboards

# Access the Kubernetes Dashboard
kubectl port-forward -n kubernetes-dashboard \
  service/kubernetes-dashboard 8443:443

# Access Prometheus
kubectl port-forward -n monitoring \
  service/prometheus 9090:9090

# Access Grafana
kubectl port-forward -n monitoring \
  service/grafana 3000:3000

Database Operations

# Connect to MySQL
kubectl port-forward pod/mysql-0 3306:3306
mysql -h 127.0.0.1 -P 3306 -u root -p

# Connect to MongoDB
kubectl port-forward pod/mongodb-0 27017:27017
mongo localhost:27017

# Connect to Redis
kubectl port-forward pod/redis-0 6379:6379
redis-cli -h localhost -p 6379

Troubleshooting Port Forward

# Port already in use
# Error: Unable to listen on port 8080
# Solution: use a different local port
kubectl port-forward pod/my-pod 8081:80

# Pod not found
# Solution: verify the Pod name and namespace
kubectl get pods -A | grep my-pod

# Connection refused
# Solution: verify the Pod's port is correct
kubectl describe pod my-pod
kubectl logs pod/my-pod
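For the first error, a quick way to check whether a local port is free before forwarding is a few lines of Python (a helper of our own, not part of kubectl):

```python
# A helper of our own (not part of kubectl) to check whether a local port
# is already in use before choosing it for port-forward.
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        return s.connect_ex((host, port)) == 0

print(port_in_use(8080))  # True if local port 8080 is already taken
```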

Port Forward Best Practices

  • Use for debugging and development, not production access
  • Always specify the exact Pod when debugging specific instances
  • Remember to stop port forwarding (Ctrl+C) when done
  • Be cautious with --address 0.0.0.0 for security
  • For permanent access, create a proper Service or Ingress

Alternatives to Port Forwarding

  • kubectl proxy: Access Kubernetes API and dashboard
  • Service (NodePort): Expose on each node's IP
  • Service (LoadBalancer): Cloud load balancer
  • Ingress: HTTP/HTTPS routing with domain names
Lesson 3 of 4

Application Graceful Shutdown

Why Graceful Shutdown Matters

When Kubernetes needs to terminate a Pod (during deployments, scaling down, or node maintenance), it's crucial that your application shuts down gracefully to minimize service disruption and prevent data loss.

Graceful Shutdown Goals

  • Complete in-flight requests: Finish processing ongoing requests
  • Stop accepting new requests: Prevent new work from starting
  • Clean up resources: Close database connections, file handles, etc.
  • Save state: Persist any important data
  • Zero data loss: Ensure all committed work is completed

The Kubernetes Shutdown Process

Understanding the sequence of events when Kubernetes terminates a Pod:

1. Pod marked for deletion
User runs: kubectl delete pod or Deployment update triggered
2. Pod removed from Service endpoints
No new traffic is routed to this Pod
3. SIGTERM sent to container
Application receives termination signal
4. Grace period begins (default 30s)
Application has time to shut down gracefully
5. Application shuts down OR grace period expires
Whichever comes first
6. SIGKILL sent (if still running)
Forceful termination - process immediately stopped
7. Pod fully terminated
Container removed from node
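The SIGTERM-then-SIGKILL pattern above can be simulated locally (on a POSIX system, outside Kubernetes) with a child process that ignores SIGTERM, forcing the forceful path:

```python
# Simulate the kubelet's pattern: send SIGTERM, wait up to a grace period,
# then SIGKILL. The child deliberately ignores SIGTERM.
import signal
import subprocess
import sys
import time

child = subprocess.Popen([sys.executable, "-c",
    "import signal, time;"
    "signal.signal(signal.SIGTERM, signal.SIG_IGN);"
    "time.sleep(60)"])
time.sleep(0.5)                    # let the child install its handler

child.send_signal(signal.SIGTERM)  # step 3: SIGTERM
try:
    child.wait(timeout=2)          # step 4: grace period (2s here, 30s in k8s)
except subprocess.TimeoutExpired:
    child.kill()                   # step 6: SIGKILL - cannot be ignored
    child.wait()

print("exit code:", child.returncode)  # -9 on POSIX: killed by SIGKILL
```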

The SIGTERM Signal

When Kubernetes needs to shut down a Pod, the Kubelet first sends a SIGTERM signal to the main process in each container.

How Your Application Should Handle SIGTERM

  1. Catch the signal: Register a signal handler
  2. Stop accepting new requests: Close listening sockets
  3. Complete existing requests: Wait for in-flight operations to finish
  4. Clean up: Close connections, flush buffers, save state
  5. Exit gracefully: Return exit code 0

Example: Node.js/Express Application

const express = require('express');
const app = express();

// Track active connections
const connections = new Set();

app.get('/api/data', async (req, res) => {
  // Long-running request
  await processRequest();
  res.json({ data: 'result' });
});

const server = app.listen(3000, () => {
  console.log('Server started on port 3000');
});

// Track connections
server.on('connection', (conn) => {
  connections.add(conn);
  conn.on('close', () => connections.delete(conn));
});

// Graceful shutdown handler
function gracefulShutdown(signal) {
  console.log(`Received ${signal}, starting graceful shutdown...`);

  // 1. Stop accepting new connections; exit once in-flight requests finish
  server.close(() => {
    console.log('HTTP server closed');
    process.exit(0);
  });

  // 2. Force shutdown if requests do not finish in time
  setTimeout(() => {
    console.log('Forcing shutdown...');
    // 3. Destroy any remaining connections
    connections.forEach((conn) => conn.destroy());
    // 4. Exit with a non-zero code to signal the forced path
    process.exit(1);
  }, 25000); // Leave a 5s buffer before SIGKILL
}

// Register signal handlers
process.on('SIGTERM', () => gracefulShutdown('SIGTERM'));
process.on('SIGINT', () => gracefulShutdown('SIGINT'));

Example: Python/Flask Application

import signal
import sys
import time

from flask import Flask

app = Flask(__name__)
shutdown_flag = False

@app.route('/api/data')
def get_data():
    # Reject new work once shutdown has begun
    if shutdown_flag:
        return "Service shutting down", 503
    time.sleep(5)  # Simulate work
    return {"data": "result"}

def graceful_shutdown(signum, frame):
    global shutdown_flag
    print(f"Received signal {signum}, starting graceful shutdown...")
    # Set flag to stop accepting new requests
    shutdown_flag = True
    # Give time for in-flight requests to complete
    print("Waiting for in-flight requests to complete...")
    time.sleep(10)
    print("Shutdown complete")
    sys.exit(0)

# Register signal handlers
signal.signal(signal.SIGTERM, graceful_shutdown)
signal.signal(signal.SIGINT, graceful_shutdown)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

The terminationGracePeriodSeconds Setting

The grace period defines how long Kubernetes waits for the application to shut down gracefully before sending SIGKILL.

apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  terminationGracePeriodSeconds: 60   # Default is 30 seconds
  containers:
  - name: app
    image: my-app:1.0

Choosing the Right Grace Period

  • Short tasks (web APIs): 30 seconds (default) is usually sufficient
  • Long-running requests: Increase to 60-120 seconds
  • Batch jobs: May need several minutes or more
  • Message consumers: Time to complete message processing

Set the grace period to be longer than your longest expected request/task.

The SIGKILL Signal

If the application has not terminated itself by the end of the grace period, Kubernetes sends a SIGKILL signal.

SIGKILL Characteristics

  • Cannot be caught or ignored: No signal handler possible
  • Immediate termination: Process is killed instantly
  • No cleanup: Application has no chance to clean up
  • Potential data loss: In-flight operations are aborted

Goal: Your application should always exit gracefully before SIGKILL is sent.
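The "cannot be caught" property is enforced by the operating system, not by Kubernetes. A quick Python check (on a POSIX system) shows that a SIGKILL handler simply cannot be installed:

```python
# SIGKILL is enforced by the kernel: on POSIX systems, Python cannot
# even install a handler for it - signal.signal raises an error.
import signal

try:
    signal.signal(signal.SIGKILL, lambda signum, frame: None)
    sigkill_catchable = True
except (OSError, ValueError, RuntimeError) as exc:
    print("cannot install SIGKILL handler:", exc)
    sigkill_catchable = False
```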

Handling Long-Lived Connections

Special considerations for WebSockets, gRPC streams, and other long-lived connections:

WebSockets and Persistent Connections

// Server-side: Gracefully close WebSocket connections
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

let isShuttingDown = false;

wss.on('connection', (ws) => {
  if (isShuttingDown) {
    ws.close(1012, 'Service restarting');
    return;
  }
  ws.on('message', (message) => {
    // Handle message
  });
});

process.on('SIGTERM', () => {
  console.log('Closing WebSocket connections...');
  isShuttingDown = true;

  // Close all active connections
  wss.clients.forEach((client) => {
    client.close(1012, 'Service restarting');
  });

  // Close server
  wss.close(() => {
    process.exit(0);
  });
});

Client-Side: Automatic Reconnection

// Client should handle disconnections and reconnect
class ResilientWebSocket {
  constructor(url) {
    this.url = url;
    this.reconnectDelay = 1000;
    this.connect();
  }

  connect() {
    this.ws = new WebSocket(this.url);

    this.ws.onopen = () => {
      console.log('Connected');
      this.reconnectDelay = 1000; // Reset delay
    };

    this.ws.onclose = (event) => {
      console.log(`Disconnected: ${event.reason}`);
      // Automatically reconnect with exponential backoff
      setTimeout(() => this.connect(), this.reconnectDelay);
      this.reconnectDelay = Math.min(this.reconnectDelay * 2, 30000);
    };

    this.ws.onerror = (error) => {
      console.error('WebSocket error:', error);
    };
  }
}

// Usage
const ws = new ResilientWebSocket('ws://api.example.com');

Best Practices for Long-Lived Connections

  • Server: Send close frame with appropriate code and reason
  • Client: Implement automatic reconnection with exponential backoff
  • Client: Detect disconnections quickly (heartbeats/pings)
  • Client: Be robust - expect connections to drop
  • Application: Design for connection interruptions
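The exponential backoff schedule recommended above can be sketched as a pure function (the name `backoff_delays` is our own, for illustration):

```python
# Exponential backoff with a cap, mirroring the reconnecting client pattern:
# each retry doubles the delay until it hits the cap.
def backoff_delays(base: float = 1.0, cap: float = 30.0, attempts: int = 6) -> list:
    """Return the sequence of delays: 1, 2, 4, 8, 16, then capped at 30."""
    delays, delay = [], base
    for _ in range(attempts):
        delays.append(delay)
        delay = min(delay * 2, cap)
    return delays

print(backoff_delays())  # [1.0, 2.0, 4.0, 8.0, 16.0, 30.0]
```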

PreStop Hook

For additional control, use a preStop hook that runs before SIGTERM:

apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: app
    image: my-app:1.0
    lifecycle:
      preStop:
        exec:
          command:
          - /bin/sh
          - -c
          - |
            # Custom shutdown script
            echo "Starting graceful shutdown..."
            # Drain connections, notify load balancer, etc.
            sleep 10
  terminationGracePeriodSeconds: 60

Complete Shutdown Timeline

  1. Pod marked for deletion
  2. Removed from Service endpoints (can take 1-2s)
  3. preStop hook runs (if configured; its runtime counts against the grace period)
  4. SIGTERM sent to process
  5. Grace period countdown continues
  6. Application shuts down gracefully
  7. SIGKILL sent if still running when the grace period expires

Graceful Shutdown Checklist

  • ✓ Application handles SIGTERM signal
  • ✓ Stops accepting new connections/requests
  • ✓ Completes in-flight operations
  • ✓ Closes database connections and resources
  • ✓ Grace period is longer than longest operation
  • ✓ WebSocket clients auto-reconnect
  • ✓ Readiness probe fails when shutting down
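The last checklist item can be sketched in a few lines of Python (a toy `ReadinessGate`, our own name, standing in for a real readiness endpoint): once SIGTERM arrives, the probe starts returning 503 so Kubernetes stops routing traffic before the process exits.

```python
# A toy readiness gate standing in for a real /readyz endpoint: once
# SIGTERM arrives, readiness flips to 503 so Kubernetes stops sending
# traffic before the process exits.
import os
import signal

class ReadinessGate:
    def __init__(self):
        self.ready = True
        signal.signal(signal.SIGTERM, self._on_sigterm)

    def _on_sigterm(self, signum, frame):
        # Fail readiness first; the actual shutdown work would follow
        self.ready = False

    def readiness_probe(self):
        return (200, "ok") if self.ready else (503, "shutting down")

gate = ReadinessGate()
print(gate.readiness_probe())          # (200, 'ok')
os.kill(os.getpid(), signal.SIGTERM)   # simulate Kubernetes sending SIGTERM
print(gate.readiness_probe())          # (503, 'shutting down')
```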
Lesson 4 of 4

Introduction to Service Mesh

What is a Service Mesh?

A Service Mesh is an infrastructure layer that handles service-to-service communication within a distributed application. It runs alongside your application as a set of network proxies.

Core Concept

A Service Mesh is an abstraction that typically intercepts all incoming and outgoing network traffic for your application, allowing it to add capabilities without changing your application code.

How Service Mesh Works

Traditional Architecture
App A → Network → App B
With Service Mesh
App A → Proxy (Sidecar) → Network → Proxy (Sidecar) → App B

Sidecar Pattern

The Service Mesh typically uses a sidecar proxy pattern:

  • A proxy container is injected alongside your application container in each Pod
  • All traffic is routed through this proxy
  • The proxy handles networking concerns automatically
  • Your application code remains unchanged
# Without Service Mesh
Pod:
- App Container

# With Service Mesh
Pod:
- App Container
- Sidecar Proxy Container (e.g., Envoy)

Service Mesh Capabilities

1. Traffic Management

  • Load balancing: Intelligent distribution of requests
  • Retries: Automatically retry failed requests
  • Timeouts: Enforce request timeouts
  • Circuit breaking: Prevent cascading failures
  • Traffic splitting: Canary deployments, A/B testing

2. Automatic Retries

One of the key benefits: if a request fails, the Service Mesh can automatically send it to another instance:

// Without Service Mesh - app code handles retries
async function callService() {
  let retries = 3;
  while (retries > 0) {
    try {
      return await fetch('http://api-service/data');
    } catch (error) {
      retries--;
      if (retries === 0) throw error;
      await sleep(1000);
    }
  }
}

// With Service Mesh - handled automatically.
// Your app just makes the request once:
async function callService() {
  return await fetch('http://api-service/data');
  // Mesh handles retries, failover, load balancing
}

3. Security

  • Mutual TLS (mTLS): Automatic encryption between services
  • Authorization: Control which services can communicate
  • Authentication: Verify service identities
  • Certificate management: Automatic cert rotation

4. Observability

  • Distributed tracing: Track requests across services
  • Metrics: Request rates, latencies, error rates
  • Logging: Access logs for all traffic
  • Service graphs: Visualize service dependencies

Benefits of Service Mesh

What Service Mesh Provides

  • Offload complexity: Move networking logic out of application code
  • Consistency: Same capabilities for all services (any language)
  • Reliability: Automatic retries, circuit breaking, failover
  • Security: Zero-trust networking with mTLS
  • Visibility: Deep insights into service behavior
  • Developer productivity: Focus on business logic, not infrastructure

Popular Service Mesh Solutions

1. Istio

  • Most popular and feature-rich
  • Uses Envoy proxy as sidecar
  • Powerful traffic management capabilities
  • Steep learning curve

2. Linkerd

  • Lightweight and simple
  • Easy to get started
  • Lower resource overhead
  • Good for smaller deployments

3. Consul (by HashiCorp)

  • Service discovery + mesh
  • Multi-datacenter support
  • Works beyond Kubernetes

Service Mesh Architecture Example

# Example: Istio architecture
Control Plane (recent Istio releases consolidate these into a single istiod binary):
- Pilot: Traffic management and service discovery
- Citadel: Certificate management and mTLS
- Galley: Configuration management

Data Plane:
- Envoy proxies: Sidecar proxies in each Pod

Flow:
1. Developer defines traffic rules in YAML
2. Control plane pushes config to all Envoy proxies
3. Proxies enforce rules (retries, mTLS, routing, etc.)
4. Application code remains unchanged

Example: Traffic Splitting (Canary Deployment)

# With Service Mesh - no app code changes needed
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        user-type:
          exact: "beta-tester"
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90   # 90% of traffic to v1
    - destination:
        host: reviews
        subset: v2
      weight: 10   # 10% of traffic to v2 (canary)

When to Use a Service Mesh

Good Fit For:

  • Microservices architectures with many services
  • Need for advanced traffic management (retries, circuit breaking)
  • Security requirements (mTLS, zero-trust)
  • Multi-language/polyglot environments
  • Need for deep observability

Consider Alternatives If:

  • Small number of services (simple Services may suffice)
  • Limited resources (Service Mesh adds overhead)
  • Team lacks Kubernetes expertise
  • Simple networking requirements

Service Mesh Trade-offs

Benefits                        Costs
Powerful traffic management     Additional complexity
Security with mTLS              Resource overhead (CPU/memory)
Deep observability              Learning curve
Language-agnostic               Added latency (minimal)
Consistent policies             Debugging complexity

Getting Started

Most teams should explore Service Mesh gradually:

  1. Start simple: Master Kubernetes Services first
  2. Identify pain points: What networking problems do you have?
  3. Evaluate options: Try Linkerd (simple) or Istio (powerful)
  4. Pilot deployment: Test on non-critical services
  5. Gradual rollout: Expand to more services over time

Further Learning

To dive deeper into Service Mesh:

  • Explore Istio or Linkerd documentation
  • Take dedicated Service Mesh courses
  • Experiment in a test cluster
  • Study Envoy proxy architecture
  • Learn about mTLS and certificate management

Summary: Advanced Kubernetes Topics

What We've Learned

  1. ConfigMaps: Decouple configuration from code using key-value pairs and volume mounts
  2. Port Forwarding: Debug and access Pods directly with kubectl port-forward
  3. Graceful Shutdown: Handle SIGTERM properly, respect grace periods, support reconnection
  4. Service Mesh: Offload networking complexity to an infrastructure layer
Final Assessment

Test Your Knowledge

Advanced Kubernetes Quiz

Question 1: What is the primary purpose of a ConfigMap?

Question 2: What does the YAML vertical pipe (|) do in a ConfigMap?

Question 3: What is kubectl port-forward primarily used for?

Question 4: What signal does Kubernetes send first when terminating a Pod?

Question 5: What happens if an application doesn't terminate within the grace period?

Question 6: For WebSocket connections, what should the client-side application do?

Question 7: What does a Service Mesh typically do?

Question 8: What is the default terminationGracePeriodSeconds in Kubernetes?