Ansible Role for High-Availability Kubernetes Cluster

This post introduces my Ansible role for automating the setup of a highly available Kubernetes cluster. The role handles the complex configuration of multiple masters and workers, integrates kube-vip for virtual IP management, and sets up MetalLB for load balancing.

Project Overview

The TalhaJuikar.kubernetes Ansible role is designed with the following goals:

  • Create a production-ready Kubernetes cluster
  • Support high availability with multiple master nodes
  • Simplify the complex HA setup process
  • Provide flexible configuration options
  • Integrate with popular add-ons and tools
  • Ensure compatibility with various Linux distributions

Supported Operating Systems

The role has been thoroughly tested on the following operating systems:

  • Rocky Linux 9
  • CentOS Stream 9
  • Ubuntu 22.04 / 24.04
  • Debian 11 / 12

Features

  • High Availability Setup: Configure multiple master nodes with kube-vip for virtual IP management
  • Single Master Option: Can also set up a simpler single-master configuration
  • Container Runtime Choice: Support for both containerd and CRI-O runtimes
  • Network Plugin Options: Integrate with Calico or Flannel CNI
  • Load Balancer Integration: Optional MetalLB deployment for LoadBalancer services
  • Flexible Versioning: Configure specific Kubernetes and container runtime versions
  • Automated Configuration: Handles all the complexities of kubeadm initialization and joining

Installation

Install the role directly from Ansible Galaxy:

ansible-galaxy install TalhaJuikar.kubernetes
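
Alternatively, the role can be pinned in a requirements.yml file and installed from it; this is standard Ansible Galaxy usage rather than anything specific to this role:

# requirements.yml
roles:
  - name: TalhaJuikar.kubernetes

ansible-galaxy install -r requirements.yml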

Role Variables

The role comes with sensible defaults that can be customized to your specific needs:

Variable                    | Description                                      | Default
kubernetes_version          | Kubernetes version to install                    | 1.31
crio_version                | CRI-O version (if selected as runtime)           | 1.31
container_runtime           | Container runtime to use                         | crio
kubernetes_pod_network.cni  | CNI plugin to use                                | calico
kubernetes_pod_network.cidr | Pod CIDR range                                   | 192.168.0.0/16 (Calico), 10.244.0.0/16 (Flannel)
metallb_ip_range            | IP range for MetalLB load balancer               | empty (optional)
VIP                         | Virtual IP for the HA control plane              | empty (required for HA)
copy_kubeconfig             | Whether to copy kubeconfig to the local machine  | true
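
These variables can be set as playbook vars (as in the example below) or in a group_vars file. A minimal group_vars/all.yml overriding a few of the defaults might look like this; the values are purely illustrative:

---
kubernetes_version: "1.31"
container_runtime: "containerd"
kubernetes_pod_network:
  cni: 'calico'
  cidr: '192.168.0.0/16'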

Usage Example

Inventory File

[k8s_master]
control-1 ansible_host=192.168.1.101

[k8s_other_masters]
control-2 ansible_host=192.168.1.102
control-3 ansible_host=192.168.1.103

[k8s_worker]
worker1 ansible_host=192.168.1.111
worker2 ansible_host=192.168.1.112

Playbook Example

---
- name: Install kubernetes
  hosts: all
  become: true
  roles:
    - TalhaJuikar.kubernetes
  vars:
    kubernetes_version: "1.30"
    container_runtime: "containerd"
    VIP: "192.168.200.50"
    metallb_ip_range: "192.168.203.50-192.168.203.60"
    kubernetes_pod_network:
      cni: 'flannel'
      cidr: '10.244.0.0/16'

Implementation Details

The role follows a clean, modular approach to cluster setup (the manual kubeadm workflow that the initialization and joining phases automate is sketched after this list):

  1. Preparation: System requirements, dependencies, and repository configurations
  2. Installation: Kubernetes components and the selected container runtime
  3. Initialization: Control plane setup with kubeadm and kube-vip configuration
  4. Networking: CNI plugin deployment and configuration
  5. Joining: Adding additional control plane and worker nodes to the cluster
  6. Add-ons: Optional components like MetalLB
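
For context, here is a rough sketch of the manual kubeadm workflow that the initialization and joining phases automate. The exact flags the role passes may differ, and the token, hash, and certificate key are placeholders that kubeadm generates at init time:

# On the first control plane node: initialize against the kube-vip virtual IP
kubeadm init --control-plane-endpoint "192.168.200.50:6443" \
  --pod-network-cidr "10.244.0.0/16" --upload-certs

# On each additional control plane node (values come from the kubeadm init output)
kubeadm join 192.168.200.50:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <certificate-key>

# On each worker node
kubeadm join 192.168.200.50:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>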

Deployment Steps

To deploy your cluster:

  1. Install Ansible on your local machine
  2. Create an inventory file with your node information (key-based SSH authentication is recommended)
  3. Install the role using ansible-galaxy as shown above
  4. Create a playbook similar to the example
  5. Run the playbook:
    ansible-playbook -i inventory playbook.yml
    
  6. Upon successful completion, the kubeconfig will be available at /tmp/kubeconfig on your local machine (unless copy_kubeconfig is set to false); see the verification example after this list
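
With the kubeconfig copied locally, verifying the cluster takes only two commands; in an HA setup all three control plane nodes and the workers should report Ready:

export KUBECONFIG=/tmp/kubeconfig
kubectl get nodes -o wide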

Single-Master Deployment

For simpler setups, you can deploy a single-master cluster (a minimal example follows this list) by:

  • Adding only one master node IP to the k8s_master group
  • Leaving the k8s_other_masters group empty
  • Not setting the VIP variable
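
A minimal inventory for a single-master cluster, using the same group names as the HA example, might look like this (addresses are illustrative):

[k8s_master]
control-1 ansible_host=192.168.1.101

[k8s_other_masters]

[k8s_worker]
worker1 ansible_host=192.168.1.111

The playbook from the earlier example can then be reused as-is, simply omitting the VIP variable (and metallb_ip_range if MetalLB is not needed).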

Contributing

Contributions are welcome! Feel free to submit pull requests or open issues on the GitHub repository.

License

This project is licensed under the MIT License.

Author Information

For more information, visit talhajuikar.cloud.

This post is licensed under CC BY 4.0 by the author.