HOWTO: Kubernetes: K3s Cluster on Ubuntu With Ansible - Author Credit: Fabian Lee

By CloudNerve.com

K3s is a lightweight Kubernetes deployment by Rancher that is fully compliant, yet also compact enough to run on development boxes and edge devices.

In this article, I will show you how to deploy a three-node K3s cluster on Ubuntu nodes created using Terraform and the local KVM libvirt provider. Ansible is then used to install the cluster.

Creating node VMs

We will deploy this K3s cluster on three independent guests running Ubuntu.

These Ubuntu VMs could actually be created using any hypervisor or hyperscaler, but for this article we will use Terraform and the local KVM libvirt provider to create guests named: k3s-1, k3s-2, k3s-3.

Install Terraform, its libvirt provider, and KVM as described in a previous article.  Also create a ‘br0’ host bridge and a KVM ‘host-bridge’ network so that the two additional NICs on k3s-1 can be assigned, as explained in a previous article.
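
If you need a reminder of what that KVM network definition looks like, a bridged libvirt network can be declared roughly as follows (a minimal sketch; it assumes your host bridge is already up and named ‘br0’):

# host-bridge.xml: libvirt network backed by the existing br0 host bridge
cat > host-bridge.xml <<'EOF'
<network>
  <name>host-bridge</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
EOF

# define, start, and autostart the network
virsh net-define host-bridge.xml
virsh net-start host-bridge
virsh net-autostart host-bridge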

Then use my GitHub project to create the three Ubuntu guests.

# required packages
sudo apt install make git curl -y

# github project with terraform to create guest OS
git clone https://github.com/fabianlee/k3s-cluster-kvm.git
cd k3s-cluster-kvm

# set first 3 octets of your host br0 network
# this is where MetalLB endpoints .143, .144 will be created
sed -i 's/metal_lb_prefix: .*/metal_lb_prefix: 192.168.1/' group_vars/all
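
# sanity check: the prefix should now match your br0 network
# (you can check the host bridge address with: ip -4 addr show br0)
grep metal_lb_prefix group_vars/all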


# download dependencies and modules
ansible-playbook install_dependencies.yml
# invoke terraform apply from tf-libvirt directory
ansible-playbook playbook_terraform_kvm.yml

The KVM guests can be listed using virsh.  I have embedded the IP address in the libvirt domain name to make the address obvious.

# should show three running K3s VMs
$ export LIBVIRT_DEFAULT_URI=qemu:///system
$ virsh list
Id   Name                    State
--------------------------------------------
...
10   k3s-1-192.168.122.213   running
11   k3s-2-192.168.122.214   running
12   k3s-3-192.168.122.215   running
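
If you would rather confirm the addresses than trust the domain names, libvirt can report its DHCP leases directly (this assumes the guests sit on the ‘default’ NAT network, which matches the 192.168.122.x addresses above):

# show DHCP leases handed out on the default libvirt network
virsh net-dhcp-leases default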

cloud-init has been used to install an ssh keypair for the ‘ubuntu’ user, which allows us to validate login to each host using the commands below.

# accept key as known_hosts
for octet in $(seq 213 215); do ssh-keyscan -H 192.168.122.$octet >> ~/.ssh/known_hosts; done

# test ssh into remote host
for octet in $(seq 213 215); do ssh -i tf-libvirt/id_rsa ubuntu@192.168.122.$octet "hostname -f; uptime"; done

K3s cluster installation overview

As discussed in detail in my other article, where the K3s cluster is deployed manually, K3s is first installed on the primary guest VM, which generates a node token.

Then K3s is installed on the additional worker nodes, using the master IP address and node token as parameters so that they join the Kubernetes cluster.

The first guest ‘k3s-1’ will serve as the master, with k3s-2 and k3s-3 joining the Kubernetes cluster.
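
For context, the manual installation those roles automate looks roughly like this (a sketch based on the standard K3s install script; the master IP matches k3s-1 from earlier, and <node-token> is a placeholder for the generated token):

# on k3s-1 (master): install the K3s server, then read the generated node token
curl -sfL https://get.k3s.io | sh -
sudo cat /var/lib/rancher/k3s/server/node-token

# on k3s-2 and k3s-3 (workers): join the cluster using the master URL and token
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.122.213:6443 K3S_TOKEN=<node-token> sh -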

Ansible configuration

In order for Ansible to do its work, we need to inform it of the inventory of guest VMs available and their group variables.  Then we use an Ansible playbook to run the specific tasks and roles.

Ansible inventory

The available hosts as well as their group memberships are defined in ‘ansible_inventory’.

$ cat ansible_inventory
...
# K3s 'master'
[master]
k3s-1

# K3s hosts participating as worker nodes
[node]
k3s-2
k3s-3

# all nodes in K3s cluster
[k3s_cluster:children]
master
node

...
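
Before running the full playbook, you can verify that Ansible can actually reach every host in this inventory with an ad-hoc ping (using the same ssh key Terraform generated earlier):

# ad-hoc connectivity test against all members of k3s_cluster
ansible -i ansible_inventory k3s_cluster -m ping -u ubuntu --private-key tf-libvirt/id_rsa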

Ansible group variables

The group variables are found in the ‘group_vars’ directory.

# list of group variables
$ ls group_vars
all master

# show variables applying to every host
# notice that 'servicelb' and 'traefik' are disabled,
# which allows us to pick MetalLB and a custom ingress later
$ cat group_vars/all

# variables applying to just the master
$ cat group_vars/master
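
The exact file contents depend on the project version, but the upstream k3s-ansible role typically passes server flags via ‘extra_server_args’, so the file will contain something along these lines (an illustrative excerpt, not the verbatim file):

$ cat group_vars/all
...
extra_server_args: "--disable servicelb --disable traefik"
metal_lb_prefix: 192.168.1
...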

Ansible playbook

The playbook we use for K3s cluster installation executes a set of roles against the groups defined in ansible_inventory.  The primary role we leverage is k3s-ansible, which comes directly from the k3s-io team.

$ cat playbook_k3s.yml

...
- hosts: k3s_cluster
  gather_facts: yes
  become: yes
  roles:
    - role: prereq
    - role: download

- hosts: master
  become: yes
  roles:
    - role: k3s/master

- hosts: node
  become: yes
  roles:
    - role: k3s/node

...

We surround this with prerequisite and post-configuration steps that apply the additional NICs to the master, generate certificates, and finally install ‘k9s’, a terminal UI for Kubernetes management.
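
Once the run completes, trying k9s is a one-liner; pointing it at the kubeconfig the playbook copies locally (described below) is enough, assuming the k9s binary ends up on your PATH:

# launch the k9s terminal UI against the copied kubeconfig
k9s --kubeconfig /tmp/k3s-kubeconfig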

Ansible external dependencies

We still have dependencies to fulfill before installing the K3s cluster using Ansible.

  1. The ‘k3s-ansible’ role found on GitHub
  2. Dependent collections found on Ansible Galaxy

I have written a playbook that can fetch all dependencies automatically; simply run the command below.

# pulls external dependencies
ansible-playbook install_dependencies.yml
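
For the curious, the manual equivalent is just fetching the role and collections yourself (a sketch; the destination path and the collection named here are illustrative, not necessarily the project's actual layout):

# fetch the k3s-ansible role from GitHub (destination path is hypothetical)
git clone https://github.com/k3s-io/k3s-ansible.git roles/k3s-ansible

# install a dependent collection from Ansible Galaxy (illustrative example)
ansible-galaxy collection install community.general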

Install K3s cluster using Ansible

We can now have Ansible install the K3s cluster on our three guest VMs by invoking the playbook.

ansible-playbook playbook_k3s.yml
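
When the playbook finishes, a quick sanity check is to confirm all three nodes registered, using the same ssh pattern as before (expect k3s-1, k3s-2, and k3s-3 in Ready state):

# list cluster nodes from the master
ssh -i tf-libvirt/id_rsa ubuntu@192.168.122.213 "sudo kubectl get nodes"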

Validate Kubernetes deployment to cluster

As a quick test of the Kubernetes cluster, create a test deployment of nginx.  Then check the pod status from the master node.

# deploy from k3s-1, deploys to entire cluster
$ ssh -i tf-libvirt/id_rsa ubuntu@192.168.122.213 "sudo kubectl create deployment nginx --image=nginx"

deployment.apps/nginx created

# show pod deployment
$ ssh -i tf-libvirt/id_rsa ubuntu@192.168.122.213 "sudo kubectl get pods -l app=nginx"

NAME                     READY   STATUS    RESTARTS   AGE
nginx-6799fc88d8-txcnh   1/1     Running   0          2m58s

Validate remotely using kubectl

The playbook_k3s.yml contains a role named ‘k3s-get-kubeconfig-local’ that copies the remote kubeconfig to a local file named ‘/tmp/k3s-kubeconfig’.

So if you have kubectl installed on the KVM host, you can also query the Kubernetes cluster remotely.

$ kubectl --kubeconfig /tmp/k3s-kubeconfig get pods -l app=nginx

NAME                     READY   STATUS    RESTARTS   AGE
nginx-6799fc88d8-txcnh   1/1     Running   0          4m24s
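
Rather than passing --kubeconfig on every invocation, you can export it once per shell session; this is standard kubectl behavior:

# point kubectl at the copied kubeconfig for this shell session
export KUBECONFIG=/tmp/k3s-kubeconfig
kubectl get nodes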
