Kubernetes Installation

Kubespray, kubeadm & Best Practices

Lesson 1 of 5

Kubernetes Installation Methods

The Installation Landscape

There are several ways to get a Kubernetes cluster up and running, each with different trade-offs:

Installation Options

1. Managed Cloud Solutions

Cloud providers offer fully managed Kubernetes services:

Google Kubernetes Engine (GKE)
  • Pros: Fully managed, auto-upgrades, integrated with Google Cloud
  • Cons: Vendor lock-in, limited Control Plane customization
  • Best for: Production on Google Cloud
Amazon Elastic Kubernetes Service (EKS)
  • Pros: AWS integration, managed Control Plane, high availability
  • Cons: Additional cost, AWS-specific features
  • Best for: Production on AWS
Azure Kubernetes Service (AKS)
  • Pros: Azure integration, free Control Plane, auto-scaling
  • Cons: Azure ecosystem dependency
  • Best for: Production on Azure

Managed Solution Benefits

  • No Control Plane management overhead
  • Automatic updates and security patches
  • Built-in monitoring and logging
  • High availability out of the box
  • Quick to get started

2. Self-Hosted Solutions

For organizations that require full control over their infrastructure, self-hosted clusters are often the only option:

When to Self-Host

  • On-premises requirements: Data must stay in your datacenter
  • Regulatory compliance: Specific security/compliance needs
  • Cost optimization: Existing hardware to utilize
  • Full control: Need to customize every aspect
  • Multi-cloud: Consistent deployment across providers
  • Air-gapped environments: No internet connectivity

Self-Hosted Installation Tools

Tool      | Approach                   | Complexity | Best For
kubeadm   | Official bootstrapping tool | Medium     | Learning, manual setups
Kubespray | Ansible-based automation    | Low-Medium | Production clusters, automation
kops      | AWS-focused provisioning    | Medium     | AWS production clusters
Rancher   | Complete platform           | Low        | Multi-cluster management
Manual    | From scratch                | Very High  | Deep learning only

Why Kubespray?

Kubespray is a crucial tool for self-hosted Kubernetes deployments:

Kubespray Advantages

  • Standardization: Consistent, repeatable deployments
  • Automation: Eliminates manual, error-prone tasks
  • Ansible-based: Uses familiar automation tooling
  • Production-ready: Battle-tested configurations
  • Community support: Active development and maintenance
  • Flexibility: Supports multiple platforms and configurations
  • High availability: Multi-master setup support

The Kubespray Philosophy

Automation Focus

Kubespray's primary benefit is its ability to standardize and automate the labor-intensive process of assembling a Kubernetes cluster.

Instead of manually executing dozens of commands across multiple servers, you:

  1. Configure a host inventory once
  2. Set cluster parameters in YAML files
  3. Execute a single Ansible playbook
  4. Get a fully working cluster

What Kubespray Automates

Manual Steps (That Kubespray Handles)

Without automation tools, setting up a Kubernetes cluster requires:

# Manual installation steps (what you'd do without Kubespray):

1. Prepare nodes:
   - Install OS and updates
   - Configure networking
   - Disable swap
   - Set up firewall rules

2. Install container runtime:
   - Install Docker/containerd
   - Configure runtime settings
   - Set up cgroup drivers

3. Install Kubernetes binaries:
   - Download kubelet, kubeadm, kubectl
   - Install on all nodes
   - Configure systemd services

4. Initialize Control Plane:
   - Generate certificates
   - Start API Server, etcd, Controller Manager, Scheduler
   - Configure kubeconfig

5. Set up networking:
   - Install CNI plugin (Calico, Flannel, etc.)
   - Configure pod network CIDR

6. Join worker nodes:
   - Generate join tokens
   - Execute join commands on each worker
   - Verify node status

7. Configure HA (if multi-master):
   - Set up load balancer for API Server
   - Configure etcd cluster
   - Sync certificates across masters

# Total time: Several hours to days (depending on experience)

With Kubespray

# Kubespray approach:

1. Clone Kubespray repository
2. Configure inventory (hosts.yml)
3. Set cluster variables (group_vars/all.yml)
4. Run one command:

   ansible-playbook -i inventory/mycluster/hosts.yml cluster.yml

# Total time: 15-30 minutes (automated)

Key Takeaways

  • Managed cloud solutions are convenient but offer less control
  • Self-hosted clusters are needed for specific requirements
  • Kubespray automates the complex installation process
  • Standardization ensures consistent, repeatable deployments
Lesson 2 of 5

Kubespray: Under the Hood

What Kubespray Does

Kubespray is an Ansible-based tool that automates the complete Kubernetes cluster installation process.

Kubespray Workflow

1. Inventory Configuration
Define your nodes (masters, workers, etcd) in inventory file
2. Variable Configuration
Set cluster parameters (network CIDR, CNI plugin, versions)
3. Run Ansible Playbook
Execute cluster.yml playbook
4. Node Preparation
Install packages, configure container runtime, set up networking
5. Certificate Generation
Create all necessary PKI certificates for secure communication
6. Control Plane Setup
Launch API Server, Scheduler, Controller Manager via static pods
7. etcd Cluster
Configure and start etcd across master nodes
8. Worker Node Join
Configure kubelet on workers, join to cluster
9. Network Plugin
Deploy CNI plugin (Calico, Flannel, etc.)
10. Verification
Verify all nodes are Ready, system pods are running

Kubespray Installation Example

Step 1: Prepare Infrastructure

# You need:
# - 3+ Linux servers (Ubuntu, CentOS, etc.)
# - SSH access to all servers
# - Python installed on all servers
# - Ansible installed on your local machine

# Example setup:
# master-1: 192.168.1.10
# master-2: 192.168.1.11
# master-3: 192.168.1.12
# worker-1: 192.168.1.20
# worker-2: 192.168.1.21

Step 2: Clone Kubespray

# Clone the repository
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray

# Install dependencies
pip install -r requirements.txt

# Copy sample inventory
cp -rfp inventory/sample inventory/mycluster

Step 3: Configure Inventory

# Edit inventory/mycluster/hosts.yml
all:
  hosts:
    master1:
      ansible_host: 192.168.1.10
      ip: 192.168.1.10
    master2:
      ansible_host: 192.168.1.11
      ip: 192.168.1.11
    master3:
      ansible_host: 192.168.1.12
      ip: 192.168.1.12
    worker1:
      ansible_host: 192.168.1.20
      ip: 192.168.1.20
    worker2:
      ansible_host: 192.168.1.21
      ip: 192.168.1.21
  children:
    kube_control_plane:
      hosts:
        master1:
        master2:
        master3:
    kube_node:
      hosts:
        worker1:
        worker2:
    etcd:
      hosts:
        master1:
        master2:
        master3:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:

Step 4: Configure Cluster Variables

# Edit inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml

# Kubernetes version
kube_version: v1.25.5

# Network plugin
kube_network_plugin: calico

# Pod network CIDR
kube_pods_subnet: 10.233.64.0/18

# Service network CIDR
kube_service_addresses: 10.233.0.0/18

# DNS domain
cluster_name: cluster.local

Step 5: Deploy Cluster

# Run the playbook
ansible-playbook -i inventory/mycluster/hosts.yml \
  --become --become-user=root \
  cluster.yml

# This single command:
# - Installs Docker/containerd
# - Installs Kubernetes components
# - Generates certificates
# - Configures Control Plane
# - Sets up etcd cluster
# - Joins worker nodes
# - Deploys network plugin
# - Configures DNS

# Wait 15-30 minutes for completion

Step 6: Verify Cluster

# SSH to any master node
ssh master1

# Check nodes
kubectl get nodes

NAME      STATUS   ROLES           AGE   VERSION
master1   Ready    control-plane   10m   v1.25.5
master2   Ready    control-plane   10m   v1.25.5
master3   Ready    control-plane   10m   v1.25.5
worker1   Ready    <none>          10m   v1.25.5
worker2   Ready    <none>          10m   v1.25.5

# Check system pods
kubectl get pods -n kube-system
# All pods should be Running

Critical Tasks Automated by Kubespray

1. Package Installation

Kubespray installs all required packages on all nodes:

  • Container runtime (Docker, containerd)
  • Kubernetes binaries (kubelet, kubeadm, kubectl)
  • Network tools and dependencies
  • System utilities

2. Certificate Generation

Security certificates are automatically generated for:

  • API Server (server and client certificates)
  • etcd cluster communication
  • Kubelet client certificates
  • Service account signing keys
  • Front proxy certificates

3. Control Plane via Static Pods

Control Plane components are deployed as static pods:

Static Pods

Static pods are managed directly by kubelet (not by API Server). Manifests are placed in /etc/kubernetes/manifests/

Kubespray creates static pod manifests for:

  • kube-apiserver.yaml
  • kube-controller-manager.yaml
  • kube-scheduler.yaml
  • etcd.yaml

Kubelet reads these files and ensures the pods are always running.
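As a self-contained sketch of what kubelet consumes, here is a minimal static pod manifest of the kind kubeadm and Kubespray drop into the manifests directory. The file is written to a local stand-in directory so the example runs anywhere; on a real node the kubelet watches /etc/kubernetes/manifests/ and starts, restarts, and stops pods as files appear and disappear there. The pod name and image are illustrative only.

```shell
# Stand-in for /etc/kubernetes/manifests (illustrative; not a real node path)
MANIFEST_DIR=./demo-manifests
mkdir -p "$MANIFEST_DIR"

# A minimal static pod: just metadata plus one container
cat > "$MANIFEST_DIR/demo-static-pod.yaml" <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-static-pod
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.9
EOF

# On a real node, kubelet would now run this pod; deleting the file stops it.
grep -c '^kind: Pod$' "$MANIFEST_DIR/demo-static-pod.yaml"
```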

4. Network Configuration

Kubespray sets up the complete network stack:

  • Installs and configures CNI plugin
  • Sets up pod network CIDR
  • Configures service network
  • Deploys kube-proxy
  • Sets up DNS (CoreDNS)

Kubespray Advantages

Why Use Kubespray

  • Idempotent: Can run multiple times safely
  • Upgradeable: Supports cluster upgrades
  • Scalable: Easy to add/remove nodes
  • Configurable: Extensive customization options
  • Production-tested: Used by many organizations
  • Multi-platform: Supports various Linux distributions
  • HA-ready: Multi-master setup out of the box

Kubespray Considerations

  • Requires Ansible knowledge for customization
  • Initial setup more complex than managed solutions
  • You're responsible for infrastructure
  • Playbooks can take 15-30 minutes to run
  • Need to manage Kubespray version compatibility
Lesson 3 of 5

kubeadm: The Community Standard

The Evolution of Kubespray

Kubespray has largely transitioned to using kubeadm under the hood. This shift aligns Kubespray with the community-accepted method for bootstrapping Kubernetes clusters.

What is kubeadm?

kubeadm is the official Kubernetes tool for bootstrapping clusters. It's maintained by the Kubernetes project itself and follows best practices for cluster initialization.

Why kubeadm Became Standard

Standardization Benefits

  • Official tool: Maintained by Kubernetes project
  • Best practices: Implements recommended configurations
  • Consistency: Same method across different tools
  • Community adoption: Other tools (Kubespray, kops) use it
  • Well-documented: Extensive official documentation
  • Regularly updated: Stays current with Kubernetes versions

How kubeadm Works

1. Prepare Node
Install container runtime (Docker/containerd), kubelet, kubeadm
2. kubeadm init (on first master)
Initialize Control Plane, generate certificates, start components
3. Install Network Plugin
Apply CNI plugin manifest (Calico, Flannel, etc.)
4. kubeadm join (on workers)
Join worker nodes to cluster using token

kubeadm Commands

Initialize First Master

# Initialize cluster
kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.1.10

# What this does:
# 1. Pre-flight checks (verify system requirements)
# 2. Generate certificates (CA, API Server, kubelet, etc.)
# 3. Generate kubeconfig files
# 4. Create static pod manifests in /etc/kubernetes/manifests/
# 5. Start Control Plane components
# 6. Wait for Control Plane to be healthy
# 7. Upload configuration to cluster (ConfigMap)
# 8. Mark master node (taint)
# 9. Bootstrap tokens and RBAC
# 10. Install DNS and kube-proxy addons

# Output includes join command for workers:
# kubeadm join 192.168.1.10:6443 --token abc123...

Join Worker Nodes

# On worker nodes
kubeadm join 192.168.1.10:6443 \
  --token abc123.xyz789 \
  --discovery-token-ca-cert-hash sha256:abc123...

# What this does:
# 1. Download cluster information
# 2. Verify CA certificate
# 3. Configure kubelet
# 4. Start kubelet service
# 5. Register node with API Server
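The --discovery-token-ca-cert-hash value is not arbitrary: it is the SHA-256 of the cluster CA's DER-encoded public key, and the pipeline below is the one shown in the kubeadm documentation for computing it. To keep the example self-contained we generate a throwaway CA; on a real master you would read /etc/kubernetes/pki/ca.crt instead of the temporary file used here.

```shell
# Generate a throwaway CA (stand-in for /etc/kubernetes/pki/ca.crt)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt \
  -subj "/CN=kubernetes" 2>/dev/null

# Extract the public key, DER-encode it, and hash it (kubeadm-documented pipeline)
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')

# 64 hex characters, in the form kubeadm join expects
echo "sha256:$hash"
```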

Kubespray Uses kubeadm

When you run Kubespray, you'll see kubeadm commands being executed:

# Kubespray Ansible tasks often call kubeadm:
# Example from Kubespray logs:

TASK [kubernetes/control-plane : kubeadm | Initialize first master]
changed: [master1] => {
  "cmd": "kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml"
}

# Kubespray generates the kubeadm config file
# Then executes kubeadm commands
# This ensures compliance with community standards

kubeadm's Responsibilities

1. Certificate Management

kubeadm has become the de facto standard for certificate generation:

  • Creates CA (Certificate Authority)
  • Generates all component certificates
  • Manages certificate renewal
  • Follows best practices for PKI
# Certificate locations (created by kubeadm):
/etc/kubernetes/pki/
├── ca.crt                         # Cluster CA
├── ca.key
├── apiserver.crt                  # API Server certificate
├── apiserver.key
├── apiserver-kubelet-client.crt
├── front-proxy-ca.crt
├── etcd/
│   ├── ca.crt                     # etcd CA
│   ├── server.crt
│   └── peer.crt
└── sa.key                         # Service account signing key

2. Configuration Writing

kubeadm creates standard configuration files:

# Static pod manifests:
/etc/kubernetes/manifests/
├── kube-apiserver.yaml
├── kube-controller-manager.yaml
├── kube-scheduler.yaml
└── etcd.yaml

# Kubeconfig files:
/etc/kubernetes/
├── admin.conf                # Cluster admin
├── controller-manager.conf
├── scheduler.conf
└── kubelet.conf

3. Component Deployment

Control Plane components are deployed following standards:

  • Static pods for Control Plane
  • Consistent flags and arguments
  • Standard RBAC configurations
  • Best practice security settings

State Tracking with ConfigMap

When scaling or modifying the cluster, kubeadm records changes in a special ConfigMap:

# kubeadm stores cluster configuration
kubectl get configmap -n kube-system kubeadm-config -o yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeadm-config
  namespace: kube-system
data:
  ClusterConfiguration: |
    apiServer:
      certSANs:
      - 192.168.1.10
      - master1
    controlPlaneEndpoint: 192.168.1.10:6443
    etcd:
      local:
        dataDir: /var/lib/etcd
    kubernetesVersion: v1.25.5
    networking:
      podSubnet: 10.244.0.0/16
      serviceSubnet: 10.96.0.0/12

# This ConfigMap is the official configuration source
# When adding a second master, kubeadm reads this
# Ensures consistency across Control Plane nodes

Industry Adoption

kubeadm as Standard

kubeadm's methods have become the de facto standard that other tools follow:

  • Kubespray: Uses kubeadm for cluster bootstrapping
  • Helm charts: Expect kubeadm-style certificate locations
  • Operators: Follow kubeadm patterns
  • Documentation: Tutorials assume kubeadm structure
  • Tooling: Third-party tools expect kubeadm conventions

Migration Challenge

Legacy Kubespray Migrations

For older Kubespray users who installed clusters before the kubeadm transition:

  • Original (pre-kubeadm) installation method is deprecated
  • Migrating in-place is complex and risky
  • Recommended path: Stand up a new cluster using kubeadm method
  • Migrate applications to new cluster
  • Decommission old cluster
# Migration approach:

1. Deploy new cluster with current Kubespray (uses kubeadm)
2. Set up application migration plan
3. Migrate workloads:
   - Export resources: kubectl get all -o yaml
   - Apply to new cluster: kubectl apply -f -
   - Migrate persistent data
4. Update DNS/load balancers to point to new cluster
5. Verify applications working
6. Decommission old cluster
Lesson 4 of 5

Cluster Configuration Best Practices

Node Role Planning

Separation of Concerns

Properly separating Control Plane and worker responsibilities is critical for production clusters.

Bad Practice: Co-located Masters and Workers

It is technically possible to configure a cluster where master and worker roles are co-located on the same nodes (e.g., in a small three-node cluster), but this is strongly discouraged for production environments.

Why Separate Control Plane and Workers?

# Small cluster example (NOT RECOMMENDED for production):

Node 1: master + worker   ← Bad Practice
Node 2: master + worker   ← Bad Practice
Node 3: master + worker   ← Bad Practice

Problems:

1. Resource contention:
   - User workloads compete with Control Plane
   - Can cause API Server slowdowns
   - etcd performance degradation

2. Security concerns:
   - User pods running on same nodes as Control Plane
   - Increased attack surface
   - Potential privilege escalation

3. Stability issues:
   - Resource-heavy workload can crash Control Plane
   - Difficult to troubleshoot problems
   - Unpredictable behavior

4. Upgrade complexity:
   - Can't upgrade Control Plane without affecting workloads
   - More difficult to test upgrades

Recommended Architecture

# Production cluster (RECOMMENDED):

Control Plane nodes (dedicated):
  master-1: Control Plane only
  master-2: Control Plane only
  master-3: Control Plane only

Worker nodes (dedicated):
  worker-1: Workloads only
  worker-2: Workloads only
  worker-3: Workloads only
  worker-N: Workloads only

Benefits:
✓ Control Plane has dedicated resources
✓ Predictable performance
✓ Better security isolation
✓ Easier upgrades and maintenance
✓ Clear troubleshooting

Node Role Best Practices

  • 3 dedicated Control Plane nodes for HA
  • Taint Control Plane nodes to prevent workload scheduling
  • 3+ worker nodes for workload distribution
  • Similar hardware within each role category
  • Monitor separately - different metrics for each role

Network Planning

Proper network configuration is critical for cluster scalability and performance.

Pod Network CIDR

Pod Network Sizing

Recommendation: Allocate at least a /24 CIDR block for pods on each node.

This means 256 IP addresses per node (minus network overhead).

# Calculate Pod network size:

Number of nodes: 10
IPs per node: /24 = 256 addresses
Total IPs needed: 10 × 256 = 2,560 addresses

# Find CIDR that provides 2,560 addresses:
/24 = 256 addresses    ← Too small
/23 = 512 addresses    ← Too small
/22 = 1,024 addresses  ← Too small
/21 = 2,048 addresses  ← Too small
/20 = 4,096 addresses  ← Good (2,560 < 4,096)

# Recommendation for 10 nodes:
kube_pods_subnet: 10.233.64.0/20

# This allows:
# - 10 nodes × 256 IPs = 2,560 pods
# - Room for growth to ~16 nodes
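The sizing logic above can be done as a quick shell calculation: starting from a /32 (one address), keep doubling the block until it covers nodes × 256 pod IPs. The numbers below match the 10-node example.

```shell
nodes=10
needed=$(( nodes * 256 ))   # 2560 addresses required

# Smallest prefix whose block holds at least $needed addresses
prefix=32
size=1
while [ "$size" -lt "$needed" ]; do
  size=$(( size * 2 ))      # each doubling widens the block by one bit
  prefix=$(( prefix - 1 ))
done

echo "need /$prefix ($size addresses for $needed required)"
# prints: need /20 (4096 addresses for 2560 required)
```

Changing `nodes` re-derives the prefix, which makes it easy to sanity-check growth plans before committing to a CIDR.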

Service Network CIDR

Service Network Sizing

Recommendation: Allocate a subnet large enough for approximately 5,000 addresses for Services.

# Service network calculation:

Target: ~5,000 Services

CIDR options:
/24 = 256 addresses    ← Too small
/23 = 512 addresses    ← Too small
/22 = 1,024 addresses  ← Too small
/21 = 2,048 addresses  ← Too small
/20 = 4,096 addresses  ← Close
/19 = 8,192 addresses  ← Good (provides headroom)

# Recommendation:
kube_service_addresses: 10.233.0.0/19

# This provides:
# - 8,192 IP addresses
# - Room for 8,000+ Services
# - Future-proof for large deployments

Complete Network Configuration Example

Network Type    | CIDR           | Size       | Purpose
Node Network    | 192.168.1.0/24 | 256 IPs    | Physical/VM network
Pod Network     | 10.233.64.0/18 | 16,384 IPs | Container IPs
Service Network | 10.233.0.0/18  | 16,384 IPs | Service ClusterIPs

Network Planning Tips

  • No overlap: Ensure networks don't overlap
  • Plan for growth: Better to over-provision than run out
  • Document everything: Keep network diagrams updated
  • Consider multi-cluster: Reserve CIDR space for future clusters
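The "no overlap" rule can be checked mechanically. The sketch below converts each example CIDR from the table above into an integer address range and compares the ranges pairwise; it uses only shell arithmetic, so it runs without ipcalc or similar tools installed.

```shell
# Convert a dotted-quad IP to a 32-bit integer
ip2int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

overlaps=0
set -- 192.168.1.0/24 10.233.64.0/18 10.233.0.0/18
for net1 in "$@"; do
  for net2 in "$@"; do
    [ "$net1" = "$net2" ] && continue
    # Start/end of each block: start + 2^(32-prefix) - 1
    s1=$(ip2int "${net1%/*}"); e1=$(( s1 + (1 << (32 - ${net1#*/})) - 1 ))
    s2=$(ip2int "${net2%/*}"); e2=$(( s2 + (1 << (32 - ${net2#*/})) - 1 ))
    # Two ranges overlap iff each starts before the other ends
    if [ "$s1" -le "$e2" ] && [ "$s2" -le "$e1" ]; then overlaps=1; fi
  done
done

echo "overlaps=$overlaps"
# prints: overlaps=0  (10.233.0.0/18 ends at 10.233.63.255, just below 10.233.64.0)
```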

Container Runtime Configuration

Docker Version Specification

Contemporary Requirement

Modern Kubespray requires explicitly specifying the container runtime version.

# In group_vars/all/docker.yml or similar:

# Docker CE version
docker_version: '18.09'

# Or for containerd:
containerd_version: '1.6.15'

# Why specify version?
# - Ensures consistency across nodes
# - Prevents unexpected upgrades
# - Enables testing before production
# - Required for compliance/audit

Docker Deprecation Note

Kubernetes deprecated Docker as a container runtime (dockershim removal in v1.24). Modern deployments should use:

  • containerd: Recommended, lightweight
  • CRI-O: Alternative, Kubernetes-focused

CNI Plugin Selection

Comparing Network Plugins

Plugin    | Performance | Features                     | Complexity
Calico    | High        | Network policies, BGP, IPIP  | Medium
Flannel   | Medium      | Simple overlay network       | Low
Weave Net | Medium      | Encryption, multicast        | Medium
Cilium    | Very High   | eBPF, advanced policies      | High

Recommendation: Calico

Why Calico?

Based on production experience, Calico is preferred over Weave Net:

  • Better performance: Faster packet processing
  • Network policies: Built-in security features
  • Scalability: Handles large clusters well
  • BGP support: Integration with datacenter networks
  • Active development: Regular updates and improvements
  • Weave Net offers no significant advantages in comparison
# Kubespray CNI configuration:
# In group_vars/k8s_cluster/k8s-cluster.yml

kube_network_plugin: calico

# Calico-specific settings:
calico_ipip_mode: 'Always'       # or 'CrossSubnet', 'Never'
calico_vxlan_mode: 'Never'
calico_network_backend: 'bird'

# Expose Felix Prometheus metrics:
calico_felix_prometheusmetricsenabled: true

Complete Production Configuration

# Recommended production Kubespray configuration:

# Node roles
Control Plane: 3 dedicated nodes
Workers: 3+ dedicated nodes (scale as needed)

# Network configuration
kube_pods_subnet: 10.233.64.0/18        # 16K IPs for pods
kube_service_addresses: 10.233.0.0/18   # 16K IPs for services
kube_network_plugin: calico

# Container runtime
container_manager: containerd
containerd_version: '1.6.15'

# Kubernetes version
kube_version: v1.25.5

# High availability
etcd nodes: 3 (on Control Plane nodes)
API Server load balancer: Required for HA

# Security
kube_encrypt_secret_data: true
kube_audit_enabled: true

# Monitoring
metrics_server_enabled: true
prometheus_enabled: true

Configuration Best Practices Summary

  • Dedicated Control Plane: Never co-locate with workloads in production
  • Pod network: At least /24 per node
  • Service network: ~5,000 addresses (/19 recommended)
  • Container runtime: Specify version explicitly (use containerd)
  • CNI plugin: Calico for production (proven performance)
  • Plan for growth: Over-provision network CIDRs
  • Document everything: Keep configuration in version control
Lesson 5 of 5

Summary & Next Steps

What We've Learned

1. Installation Methods
  • Managed cloud solutions (GKE, EKS, AKS) are convenient but offer less control
  • Self-hosted clusters needed for on-premises, compliance, or full control
  • Multiple tools available: kubeadm, Kubespray, kops, Rancher
2. Kubespray Automation
  • Ansible-based tool for automated cluster deployment
  • Standardizes and automates complex installation process
  • Handles package installation, certificates, Control Plane setup
  • Production-ready with HA support
3. kubeadm Standard
  • Official Kubernetes bootstrapping tool
  • Kubespray uses kubeadm under the hood
  • Industry standard for certificate management and configuration
  • Stores cluster state in ConfigMap
4. Best Practices
  • Dedicated Control Plane nodes (never co-locate in production)
  • Pod network: /24 per node minimum
  • Service network: ~5,000 addresses
  • Use containerd as container runtime
  • Calico for network plugin

Quick Reference Guide

Kubespray Installation Commands

# 1. Clone Kubespray
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
pip install -r requirements.txt

# 2. Create inventory
cp -rfp inventory/sample inventory/mycluster
# Edit inventory/mycluster/hosts.yml

# 3. Configure cluster
# Edit inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml

# 4. Deploy cluster
ansible-playbook -i inventory/mycluster/hosts.yml cluster.yml

# 5. Verify
ssh master1
kubectl get nodes

kubeadm Manual Installation

# On master node:
kubeadm init --pod-network-cidr=10.244.0.0/16

# Install CNI:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

# On worker nodes (values come from the kubeadm init output):
kubeadm join <control-plane-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

Recommended Configuration

# Minimum production cluster:
- 3 Control Plane nodes (dedicated)
- 3+ Worker nodes (dedicated)
- containerd runtime
- Calico CNI
- Pod CIDR: 10.233.64.0/18
- Service CIDR: 10.233.0.0/18
- Kubernetes 1.25+

Common Operations

Add Worker Node

# With Kubespray:
# 1. Add node to inventory
# 2. Run scale playbook
ansible-playbook -i inventory/mycluster/hosts.yml scale.yml

# With kubeadm:
# 1. Prepare node (install runtime, kubelet, kubeadm)
# 2. Generate token on master
kubeadm token create --print-join-command
# 3. Run join command on new node

Upgrade Cluster

# With Kubespray:
# 1. Update kube_version in group_vars
# 2. Run upgrade playbook
ansible-playbook -i inventory/mycluster/hosts.yml upgrade-cluster.yml

# With kubeadm:
# 1. Upgrade Control Plane
kubeadm upgrade plan
kubeadm upgrade apply v1.26.0

# 2. Upgrade worker nodes
kubectl drain node1
apt-get update && apt-get install -y kubelet=1.26.0-00
kubectl uncordon node1

Backup and Restore

# Backup etcd
ETCDCTL_API=3 etcdctl snapshot save snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Restore etcd
ETCDCTL_API=3 etcdctl snapshot restore snapshot.db

Troubleshooting Tips

Issue               | Check                    | Solution
Nodes NotReady      | CNI plugin status        | Verify network plugin pods running
Pods not scheduling | kubectl describe pod     | Check taints, resources, node status
API Server down     | Static pod manifest      | Check /etc/kubernetes/manifests/
Certificate errors  | Certificate expiration   | Renew with kubeadm certs renew all
etcd issues         | etcdctl endpoint health  | Check etcd logs, verify quorum
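For the certificate-expiration row above: on a kubeadm cluster you would run `kubeadm certs check-expiration`, but the underlying openssl check works on any certificate file. The sketch below demonstrates it on a throwaway self-signed certificate so it is runnable anywhere; on a real master you would point openssl at a file under /etc/kubernetes/pki/ instead.

```shell
# Throwaway 365-day certificate (stand-in for e.g. /etc/kubernetes/pki/apiserver.crt)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout /tmp/demo.key -out /tmp/demo.crt -subj "/CN=demo" 2>/dev/null

# -checkend N exits 0 if the certificate is still valid N seconds from now
if openssl x509 -in /tmp/demo.crt -noout -checkend 86400 >/dev/null; then
  echo "certificate valid for at least one more day"
fi
```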

Next Steps

Continue Your Kubernetes Journey

  1. Practice: Deploy a test cluster with Kubespray
  2. Experiment: Try different CNI plugins and configurations
  3. Learn Helm: Package manager for Kubernetes applications
  4. Implement monitoring: Prometheus, Grafana for observability
  5. Security hardening: RBAC, Pod Security, Network Policies
  6. CI/CD integration: Automate deployments
  7. Advanced topics: Service Mesh, Operators, Custom Resources

Resources

Official Documentation

  • Kubernetes: kubernetes.io/docs
  • Kubespray: github.com/kubernetes-sigs/kubespray
  • kubeadm: kubernetes.io/docs/setup/production-environment/tools/kubeadm/
  • Calico: docs.projectcalico.org

Final Recommendations

Production Deployment Checklist

  • ✓ Use Kubespray for automated, standardized deployment
  • ✓ Separate Control Plane and worker nodes
  • ✓ Plan network CIDRs with growth in mind
  • ✓ Use containerd (not Docker) as runtime
  • ✓ Choose Calico for network plugin
  • ✓ Configure HA with 3+ Control Plane nodes
  • ✓ Enable audit logging and monitoring
  • ✓ Regular etcd backups
  • ✓ Document your configuration
  • ✓ Test upgrade procedures
  • ✓ Implement proper RBAC
  • ✓ Plan disaster recovery
Final Assessment

Test Your Knowledge

Kubernetes Installation Quiz

Question 1: What is Kubespray's primary benefit?

Question 2: What tool does Kubespray use under the hood?

Question 3: Where does kubeadm store cluster configuration state?

Question 4: Is it recommended to co-locate master and worker roles in production?

Question 5: What is the recommended minimum CIDR block per node for pods?

Question 6: How many addresses should be allocated for the Service network?

Question 7: Which CNI plugin is recommended based on production experience?

Question 8: What is required for modern Kubespray deployments regarding container runtime?