Deploying Kubernetes Onto A Single Debian 12.1 Host
There are a few things that I've been wanting to play around with, some of which could do with being deployed into a Kubernetes cluster.
They're only small projects, so I didn't want to devote multiple bits of hardware to this experimentation, nor did I particularly want to mess around with setting up VMs.
Instead, I wanted to run Kubernetes on a single machine, allowing orchestration via kubectl
and other tools without any unnecessary faffing about.
By sheer luck, I started doing this a few days after the release of Debian 12.1 (Bookworm), otherwise this'd probably be a post about running on Debian 11 (the steps for that, though, are basically identical).
In this post, I'll detail the process that I followed to install Kubernetes on Debian 12.1 as well as the subsequent steps to allow it to function as a single node "cluster". If you're looking to install Kubernetes on multiple Debian 12.1 boxes these instructions will also work for you - just skip the section "Single-Node Specific Changes" and use kubeadm join
as you normally would.
Aside from a few steps, it's assumed that you're running through the doc as root
so run sudo -i
first if necessary.
System Resources
The resources that you need will depend on the workload that you're planning on dropping onto the cluster; as a guide, though, the Kubernetes install itself will run fine within the following:
- 2 CPU
- 2GB RAM
- 10GB Disk
Realistically, though, you'll almost certainly want to allocate more to ensure that your applications can get what they need.
System Setup
There are a few steps that we need to complete before moving onto installing Kubernetes itself.
First, disable Swap
swapoff -a
nano /etc/fstab # comment out the swap line
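If you'd rather not edit fstab by hand, a sed one-liner along these lines should do the same job (the pattern here is an assumption - check the file afterwards to be sure):
# Comment out any line containing a swap entry
sed -i '/\sswap\s/ s/^/#/' /etc/fstab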
Allow interface bridging
cat <<EOF | tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
cat <<EOF | tee /etc/sysctl.d/99-kubernetes-k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
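It's worth a quick check that the modules loaded and that the settings took effect:
# Both modules should be listed, and each sysctl should report 1
lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward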
Install containerd
apt update
apt install -y containerd
Generate the containerd
configuration
containerd config default > /etc/containerd/config.toml
Open the new config file in your text editor
nano /etc/containerd/config.toml
And look for the section [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
, under which there should be an attribute called SystemdCgroup
: change this to true
so that SystemD cgroups are used:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
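As an alternative to editing by hand, the same change can be made with sed (assuming the default generated config, which sets the value to false):
# Flip SystemdCgroup from false to true
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml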
Enable containerd
and then restart it
systemctl enable containerd
systemctl restart containerd
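Before moving on, check that it came back up cleanly:
# Should report active (running)
systemctl status containerd --no-pager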
Firewall Ports
If you're building a single node cluster, it's (hopefully) for a dev environment, so it might be that you haven't got any firewall rules in place.
If you have, though, you'll want to ensure that the following are allowed through
- TCP/6443: The port used by the Kubernetes API server
- TCP/2379-2380: Used by the etcd API
- TCP/10250: The Kubelet API
- TCP/10257: Kube Controller Manager (see note below)
- TCP/10259: Kube scheduler (see note below)
- TCP/30000-32767: Nodeport services
Note: In Kubernetes versions prior to v1.17 the ports 10251 and 10252 were in use; they were later replaced by the 10259 and 10257 listed above.
You'll also need to allow any ports that you're intending to expose services on.
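As an example, if you happen to be using ufw the rules might look something like the following (translate as needed for your firewall of choice):
# Allow the control-plane ports and the NodePort range through
ufw allow 6443/tcp
ufw allow 2379:2380/tcp
ufw allow 10250/tcp
ufw allow 10257/tcp
ufw allow 10259/tcp
ufw allow 30000:32767/tcp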
Kubernetes Install
With the system set up, it's time to install Kubernetes itself.
Add the key and repo
apt install gnupg gnupg2 curl software-properties-common -y
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | gpg --dearmour -o /etc/apt/trusted.gpg.d/cgoogle.gpg
apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
Update and then install
apt update
apt install kubelet kubeadm kubectl -y
Then, prevent accidental upgrade or uninstall
apt-mark hold kubelet kubeadm kubectl
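You can confirm that the holds are in place with:
# Lists packages pinned at their current version
apt-mark showhold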
Create the cluster
kubeadm init --control-plane-endpoint=$HOSTNAME
This should generate lots of output, eventually completing with a success message containing some extra steps
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join bumblebee:6443 --token 5j9a3v.st7yanrdclnhwozc \
--discovery-token-ca-cert-hash sha256:c567b323fb95fa7da4c06ef3cd8727f977ebd6a564935f7b7f18ff9c8096e8f7 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join bumblebee:6443 --token 5j9a3v.st7yanrdclnhwozc \
--discovery-token-ca-cert-hash sha256:c567b323fb95fa7da4c06ef3cd8727f977ebd6a564935f7b7f18ff9c8096e8f7
If it didn't complete successfully, then:
- Run journalctl -xeu kubelet to view the logs
- When you've found and resolved the issue, run kubeadm reset before re-running the init
As an unprivileged user, make a copy of the config
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
That user should now be able to interact with Kubernetes:
kubectl get nodes
For convenience's sake, take the opportunity to also set up auto-complete support
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" | tee -a ~/.bashrc
Now, when you can't remember the right syntax, you can mash Tab
until it appears.
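If typing kubectl in full gets old, you can also alias it and keep tab-completion working (the __start_kubectl function is provided by the completion script loaded above):
# Alias kubectl to k and attach completion to the alias
echo "alias k=kubectl" | tee -a ~/.bashrc
echo "complete -o default -F __start_kubectl k" | tee -a ~/.bashrc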
Calico and Nginx Ingress
Calico is a widely used network provider, so we're going to use that for the pod network.
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml
Nginx ingress is much more optional, but it's incredibly useful, so you may want to install it now to ensure that it's available for use later
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml
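Both will take a short while to pull images and start up; you can keep an eye on progress with:
# Calico's pods land in kube-system, the ingress controller's in ingress-nginx
kubectl get pods -n kube-system
kubectl get pods -n ingress-nginx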
Single-Node Specific Changes
So far, everything that we've done is exactly the same as if you were building a multi-node kubernetes cluster - you'd now just run kubeadm join
on the other systems to have them enrol.
Without those other nodes, though, it isn't currently possible to run anything: Kubernetes applies a taint to the master so that pods can't be scheduled on it. In order to have pods actually run on our 1 node cluster we need to remove that taint.
Get the node name
kubectl get nodes
List taints
kubectl get nodes -o json | jq '.items[].spec.taints'
This will output something like
[
{
"effect": "NoSchedule",
"key": "node-role.kubernetes.io/control-plane"
}
]
(If, for some reason, you're installing an older version, the taint may instead have been called node-role.kubernetes.io/master
)
We then combine this with the node name in a kubectl taint call
kubectl taint node <nodename> node-role.kubernetes.io/control-plane:NoSchedule-
(Note the - at the end: we want to remove the taint, not add it)
If you run
kubectl get nodes -o json | jq '.items[].spec.taints'
You should now get an empty response.
After a few seconds, pods should be running:
kubectl get pods -n kube-system
Each of the pods should display Running in the status column.
Adding Routes
If you're only going to be running tests directly from the host, there's no need to do this. However, if you want other devices on your network to be able to connect to services in your cluster, you'll need to ensure that they have the means to route to service addresses.
The first thing to do is to confirm the network range in use
kubectl cluster-info dump | egrep -e '(service-cluster-ip-range|cluster-cidr)'
This should give something like
"--service-cluster-ip-range=10.96.0.0/12",
You also need the IP of your host
ip route get 1 | grep -o -P "src [0-9\.]+"
On other boxes (or your router), it's then just a case of adding a route to the service network, using the host's IP as a gateway
ip route add 10.96.0.0/12 via 192.168.3.23
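A route added this way won't survive a reboot, so you'll probably want to persist it; how you do that depends on the device. On a Debian box using ifupdown, for example, you could add a post-up line to the relevant stanza in /etc/network/interfaces (eth0 and dhcp here are assumptions - match your existing config):
# Existing stanza for the interface, with the route re-added whenever it comes up
iface eth0 inet dhcp
    post-up ip route add 10.96.0.0/12 via 192.168.3.23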
With the route in place, other nodes on the network will be able to communicate with services that your cluster exposes, so let's move on to deploying one to test against.
Deploying an App
With Kubernetes up and running, we now need to deploy into it.
My SSH Tarpit is handy for quick tests: it exposes a network service but doesn't require any volumes or additional configuration.
Create ssh_tarpit.yml
with the following contents
# Create 2 pods running my tarpit
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sshtarpit
spec:
  replicas: 2
  selector:
    matchLabels:
      bb: sshtarpit
  template:
    metadata:
      labels:
        bb: sshtarpit
    spec:
      containers:
      - name: sshtarpit
        image: bentasker12/go_ssh_tarpit
        imagePullPolicy: IfNotPresent
---
# Create a load balancer
apiVersion: v1
kind: Service
metadata:
  name: tarpit-entrypoint
  namespace: default
spec:
  type: LoadBalancer
  selector:
    bb: sshtarpit
  ports:
    - port: 22
      targetPort: 2222
Create the resources
kubectl apply -f ssh_tarpit.yml
Verify that the sshtarpit
pods have been created and are running
ben@bumblebee:~/charts$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
influxdb1x-86977d758c-szwm4 1/1 Running 0 21h 172.16.237.14 bumblebee <none> <none>
sshtarpit-9f59b569f-bt7xk 1/1 Running 0 2m9s 172.16.237.15 bumblebee <none> <none>
sshtarpit-9f59b569f-lcpm2 1/1 Running 0 2m9s 172.16.237.16 bumblebee <none> <none>
A load balancer should also have been created; we can retrieve its details (including the load balanced IP) with a kubectl describe command
ben@bumblebee:~/charts$ kubectl describe services tarpit-entrypoint
Name: tarpit-entrypoint
Namespace: default
Labels: <none>
Annotations: <none>
Selector: bb=sshtarpit
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.100.238.154
IPs: 10.100.238.154
Port: <unset> 22/TCP
TargetPort: 2222/TCP
NodePort: <unset> 32417/TCP
Endpoints: 172.16.237.15:2222,172.16.237.16:2222
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Now, we should be able to SSH to the load-balanced IP and have our client get stuck in the tarpit
ben@optimus:~$ ssh -v 10.100.238.154
OpenSSH_8.9p1 Ubuntu-3ubuntu0.3, OpenSSL 3.0.2 15 Mar 2022
debug1: Reading configuration data /home/ben/.ssh/config
debug1: /home/ben/.ssh/config line 147: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug1: Connecting to 10.100.238.154 [10.100.238.154] port 22.
debug1: Connection established.
debug1: identity file /home/ben/.ssh/id_rsa2 type 0
debug1: identity file /home/ben/.ssh/id_rsa2-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.3
debug1: kex_exchange_identification: banner line 0: dsS?WnZN3.VWof5jlglBdrfr7CNW'KRBGAL99httME6>Sr87e'3C5fjbLZlA=r'=cTS1Wd!Te.0MEpQ8GAUm>aI5'<7jf?@dSaib>l41ejG#UHLPbp
debug1: kex_exchange_identification: banner line 1: hLM7gHcHUe?YU80cUcVseq2s<=s
Things are working, so we can tear the deployment back down with
kubectl delete -f ssh_tarpit.yml
The cluster's now ready for whatever you were actually intending to deploy.
Optional: NFS Storage Volumes
If you're building a cluster (single-node or otherwise) it's inevitable that you're going to want to deploy something which uses persistent storage.
With a single-node setup you might be tempted to use a local volume, but I prefer to use an NFS volume (it makes life easier if the cluster is ever scaled out).
Assuming that you've already got a NAS capable of exporting volumes via NFS, getting set up is easy.
Ensure the host has the NFS client installed
apt install nfs-common
If your NFS server requires clients to be permitted, it's the host IP which needs to be allowlisted.
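It's also worth confirming that the host can actually see the exports before relying on them in a deployment (the server IP here is just the one used in the example below):
# List the exports offered by the NFS server
showmount -e 192.168.3.233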
Specifying an NFS volume within a deployment is then trivial
apiVersion: apps/v1
kind: Deployment
metadata:
  name: influxdb1x
spec:
  replicas: 1
  selector:
    matchLabels:
      bb: influxdb
  template:
    metadata:
      labels:
        bb: influxdb
    spec:
      containers:
      - name: influxdb
        image: influxdb:1.8.10
        imagePullPolicy: IfNotPresent
        # Specify the volumes to mount in
        # the container
        volumeMounts:
        - mountPath: /root/.influxdb/
          name: influxdb-datadir
        - mountPath: /etc/influxdb
          name: influxdb-confdir
      # Define the volumes
      volumes:
      - name: influxdb-datadir
        nfs:
          server: 192.168.3.233
          path: /volume1/kubernetes_influxdb_data
          readOnly: false
      - name: influxdb-confdir
        nfs:
          server: 192.168.3.233
          path: /volume1/kubernetes_influxdb_conf
          readOnly: false
Moving to Multi-Node
If you later decide that you do want the cluster to run on multiple hosts, moving to doing so is quite simple:
- Install Kubernetes on the new boxes
- On each new box, run the join command you were given when running kubeadm init (if you didn't take a note of it, you can regenerate the join command with kubeadm token create --print-join-command)
- Re-add the taint to the control-plane box by running the kubectl taint command from earlier, but without the trailing hyphen (sketched below)
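As a rough sketch, those last two steps on the control-plane node would look something like this (run the printed join command on each new box as root):
# Regenerate the join command for the new nodes
kubeadm token create --print-join-command
# Re-apply the NoSchedule taint so that the control-plane node stops scheduling ordinary workloads
kubectl taint node <nodename> node-role.kubernetes.io/control-plane:NoSchedule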
Conclusion
Installing Kubernetes onto a single node isn't particularly useful for a production environment, but it can be helpful when building a dev or staging environment, because it allows prod configuration to be mirrored more easily than is possible when using something like microk8s or Minikube.
Installation onto Debian 12 is, in practice, no different to installation onto Debian 11 and, given the relative complexity of using Kubernetes, is misleadingly straightforward.
With the master un-tainted and allowed to schedule pods, deployment of apps can happen in whatever way suits, whether that's helm charts, kubectl apply
or (shudder) kubectl create deployment
invocations.