Kubernetes on Hetzner for personal use

Dani Hodovic Jan. 15, 2024 7 min read

I've been running a personal compute setup for a few years now. I use it for various things - hosting sites for friends, scraping sites for freelance work and collecting local news. Anything I'd like to automate but can't run on my laptop - because the laptop would have to be powered on at all times - is scheduled in the cluster.

I've used various cloud providers over the years: AWS, DigitalOcean, Google Cloud. Today I use Hetzner. The main advantages of Hetzner are the affordable pricing and the simple developer experience. AWS has been the polar opposite: it eats up far too much of my time to understand and configure, and the complexity practically requires training and certification. I want the cloud provider I use for personal projects to get out of my way: I want to provision a few VMs, set up DNS and deploy my software. I'm also based in Europe, so having datacenters on the continent to reduce latency is an advantage.

I used to provision a single powerful server with Ansible along with all of the software I wanted to run, but packaging and deployment were tedious. For each new project I would have to write an Ansible role if one didn't already exist. Additionally, any software intended to be deployed over a cluster of machines, such as Airflow or Celery, would have to be reconfigured for single-VM deployments. Too much customization, too time consuming.


Why K8s?

Over the past five years I've watched Kubernetes slowly rise to the throne as the king of all things infrastructure. Although I dislike the complexity of Kubernetes, it does most things right and as a consequence it's eaten up the devops market. The Kubernetes community is by far the largest in the infra world, and most projects you'd like to deploy - from relational databases to NextCloud or Jenkins - exist as packaged Helm charts. It's the equivalent of an app store for infrastructure. I don't have to worry about how to package or deploy software anymore; I usually spend some time comparing different Helm charts, reading the documentation for each one and writing variables for the chart templates. Once the chart is applied I have a production-ready installation that usually works out of the box. Bam! That's the main advantage of Kubernetes when it comes to personal use.
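
As a small illustration of the "app store" feel (assuming Helm is already installed locally), finding a packaged NextCloud chart is a one-liner that searches Artifact Hub:

# Search Artifact Hub for published NextCloud charts
helm search hub nextcloud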

Over the past two years I've used K3s to simplify the provisioning of a Kubernetes cluster. It's a lightweight Kubernetes distribution that's well suited for a personal cluster. I prefer K3s because: 1. "production-grade" deployments are a pain in the ass 2. I don't need all the bells and whistles of a robust cluster.

Alex Ellis wrote a clever tool for deploying K3s clusters called k3sup, which reduces the steps required to deploy a K3s cluster even further. I used it for a year or so, but each time I lost the credentials to my cluster or wanted to upgrade it I found myself scrambling through the README once again. I wanted a script to create reproducible builds of Kubernetes clusters, ideally using something I already knew. I looked at various Ansible projects that deploy K3s and found that k3s-io/k3s-ansible was the simplest to use, as it worked out of the box.

Advantages of this approach:

  • it's simple
  • it uses tools that are standardized in the industry: Terraform & Ansible
  • it works for any cloud platform that offers virtual machines (read: all of them)
  • it works

Emphasis on the last point: I've tried two or three other projects that set up Kubernetes, and each time something broke.

K8s on Hetzner

Below I'll outline the steps I used to deploy a Kubernetes cluster on Hetzner, using Terraform to provision the machines and Ansible to provision the cluster. Finally, I use Terraform once again to deploy the Hetzner Container Storage Interface, which allows me to provision Hetzner volumes as Kubernetes volumes for PostgreSQL storage.

Required software:

  • Terraform - to create virtual machines
  • Ansible - to deploy K3s
  • Git - to clone the project that deploys K3s

An example project can be found on GitHub.

Create the directory that will contain the code:

mkdir my-cluster
cd my-cluster
git init
terraform init
python -m venv .venv && source .venv/bin/activate

Create an SSH key for the cluster:

ssh-keygen -t ed25519 -f ./ssh_key -N ""

Next, create a Hetzner Cloud API token in the Hetzner Cloud console for Terraform to use.
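
The hcloud provider can read the token from the HCLOUD_TOKEN environment variable, so the simplest approach is to export it in the shell you run Terraform from (the value below is a placeholder):

# Picked up automatically by the hcloud Terraform provider
export HCLOUD_TOKEN="<your-hetzner-cloud-api-token>"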

Start by setting up the providers for Terraform in a file called providers.tf:

terraform {
  required_providers {
    hcloud = {
      source  = "hetznercloud/hcloud"
      version = "~> 1.31"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.14.0"
    }
  }
}

Re-initialize terraform to install the providers:

terraform init

Next, we'll set up the Hetzner virtual machines. There are a few things to note in the file (vms.tf) below:

  • we're creating three servers named: server0, server1, server2
  • we're placing the servers in a Hetzner private network
  • we're using cloud-init to self-bootstrap the servers
    • on server launch, cloud-init will create a user "ubuntu" with the public key we generated earlier
  • we're using Terraform's lifecycle block to ignore diffs on user_data and image changes because:
    • OS images in Hetzner change and I don't know why (security updates, I suppose)
    • we might update cloud-init to install additional software on new servers we add
    • we don't want terraform apply to destroy our servers in either case

locals {
  servers = ["server0", "server1", "server2"]
}

resource "hcloud_network" "vpc" {
  name     = "K8s Network"
  ip_range = "10.10.0.0/16"
}

resource "hcloud_network_subnet" "subnet" {
  network_id   = hcloud_network.vpc.id
  type         = "cloud"
  network_zone = "eu-central"
  ip_range     = "10.10.0.0/24"
}

resource "hcloud_server" "server" {
  for_each    = toset(local.servers)
  name        = each.key
  # Image for Ubuntu 22.04
  image       = "67794396"
  # Falkenstein data center
  location    = "fsn1"
  # 4 vCPU, 8GB Ram, 16 euro monthly (2023-11)
  server_type = "cx31"
  network {
    network_id = hcloud_network.vpc.id
    ip         = "10.10.0.${index(local.servers, each.value) + 5}"
  }
  labels = {
    instance = each.key
  }
  # Use cloud-config to set up a user with our SSH key.
  # We'll use said SSH key to log in to the server via SSH and provision k3s.
  user_data = <<EOT
#cloud-config
package_upgrade: true
users:
  - default
  - name: ubuntu
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ${file("./ssh_key.pub")}
EOT

  # The image forces changes, so ignore it
  # Also ignore any changes to user_data as we want to manually tear the cluster down.
  lifecycle {
    ignore_changes = [
      user_data, image
    ]
  }
}

output "server_ips" {
  value = [
    for server in hcloud_server.server : server.ipv4_address
  ]
}

Once we add the file above and apply the changes with Terraform, we'll have the server IPs printed in the terminal.
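
Applying is the standard Terraform workflow; Terraform prints a plan and asks for confirmation before creating anything:

terraform apply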

Let's make sure cloud-init has provisioned our user with the SSH key by logging in:

ssh -i ./ssh_key ubuntu@<server-ip-from-terraform-output>
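
If the machine is reachable but packages are still being upgraded, cloud-init may not have finished yet. You can wait for it to complete on the server (this assumes the standard cloud-init CLI that ships with Ubuntu):

# Run on the server; blocks until cloud-init is done
cloud-init status --wait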

Next, let's use k3s-ansible to bootstrap the cluster.

git clone [email protected]:k3s-io/k3s-ansible.git
cd k3s-ansible

Bootstrap the Ansible environment:

python -m venv .venv
source .venv/bin/activate
pip install ansible
ansible-galaxy install -r collections/requirements.yml

Retrieve the server IPs by opening another terminal in the my-cluster directory:

terraform output
server_ips = [
  "78.47.93.191",
  "188.34.185.195",
  "49.13.64.140",
]

Copy the sample inventory file in the k3s-ansible directory and modify it:

cp inventory-sample.yml inventory.yml

I've modified three things in the sample file:

  • the IPs of the servers
  • ansible_user, which is set to the same user we created using cloud-init
  • the K3s token under token

---
k3s_cluster:
  children:
    server:
      hosts:
        78.47.93.191:
    agent:
      hosts:
        188.34.185.195:
        49.13.64.140:

  vars:
    ansible_port: 22
    ansible_user: ubuntu
    k3s_version: v1.26.9+k3s1
    token: "Batt-Mayhem-5larsen-Cornball-stucco"  # Use ansible vault if you want to keep it secret
    api_endpoint: "{{ hostvars[groups['server'][0]]['ansible_host'] | default(groups['server'][0]) }}"
    extra_server_args: ""
    extra_agent_args: ""
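
Before running the whole playbook, it's worth a quick sanity check that Ansible can reach all three machines with an ad-hoc ping, using the same inventory and key:

ansible all -i inventory.yml -m ping --private-key ../ssh_key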

Now let's finally run Ansible to set up the cluster:

ansible-playbook playbook/site.yml -i inventory.yml --private-key ../ssh_key

Once the Ansible playbook has finished running, we can confirm that the cluster is properly set up:

$ kubectl get nodes --context k3s-ansible
NAME      STATUS   ROLES                  AGE     VERSION
server2   Ready    <none>                 5m22s   v1.26.9+k3s1
server1   Ready    <none>                 4m59s   v1.26.9+k3s1
server0   Ready    control-plane,master   34m     v1.26.9+k3s1

Woohoo 🙌 !
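
If you don't want to pass --context on every invocation, you can make the new cluster the default for kubectl (assuming the playbook merged a context named k3s-ansible into your kubeconfig, as the command above suggests):

kubectl config use-context k3s-ansible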

Deploying a hello-world application

Let's deploy a simple hello-world application to prove that the cluster works as expected. For brevity I'll use Helm, as it allows me to deploy an nginx application in two commands. Helm is akin to a package manager, like APT or Homebrew, but for Kubernetes clusters.

The chart I'm deploying is the official Helm example from their GitHub repository: https://github.com/helm/examples/tree/main. It deploys Nginx as a Deployment and registers a Kubernetes Service so that it's addressable over HTTP.

helm repo add examples https://helm.github.io/examples
helm install ahoy examples/hello-world

Let's see that it's deployed:

$ kubectl get deployment
NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
ahoy-hello-world                     1/1     1            1           11m
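
The chart also registered a Service for the deployment; we can confirm it exists before port-forwarding (the name comes from the release above):

kubectl get service ahoy-hello-world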

Let's try to curl the Nginx application by port-forwarding to the service:

kubectl port-forward service/ahoy-hello-world 8000:80
Forwarding from 127.0.0.1:8000 -> 80
Forwarding from [::1]:8000 -> 80
Handling connection for 8000
Handling connection for 8000

Now curl localhost:8000 on your local machine:

$ curl localhost:8000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

You can also open localhost:8000 in the browser to see the same nginx welcome page.
