Setting Up a Raspberry Pi as a K3s Worker Node
This entry documents the complete process of transforming a hardened Raspberry Pi into a production-ready K3s worker node. The guide covers all firmware prerequisites, system configuration, K3s installation, and cluster joining procedures needed to deploy a reliable Kubernetes worker on ARM architecture.
K3s is a lightweight, certified Kubernetes distribution designed specifically for resource-constrained and edge environments. It is an ideal choice for Raspberry Pi infrastructure, providing full Kubernetes functionality with significantly reduced overhead compared to standard K8s distributions.
Phase 0: Prerequisites & Assumptions
This guide assumes you are starting with a properly hardened Raspberry Pi system. If you haven't already hardened your Pi, it is strongly recommended that you follow the comprehensive hardening guide first.
- Raspberry Pi 4 or 5 running Debian-based OS (64-bit)
- System hardened following security best practices (SSH, firewall, etc.)
- Static IP address configured
- At least 2GB RAM (4GB+ recommended for production workloads)
- SSH access with sudo privileges
- Existing K3s server (control plane) available with join token
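The hardware prerequisites above can be sanity-checked before proceeding. A minimal sketch (the thresholds and warning messages are illustrative, not part of K3s itself):

```shell
#!/bin/bash
# Pre-flight sketch: confirm a 64-bit ARM kernel and roughly 2 GB of RAM.
arch="$(uname -m)"
mem_mib=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 ))
echo "Architecture: $arch"
echo "Memory: ${mem_mib} MiB"
# A 64-bit OS on Pi 4/5 reports aarch64; anything else needs attention first
[ "$arch" = "aarch64" ] || echo "WARNING: expected aarch64, got $arch"
# ~1900 MiB accounts for memory reserved by firmware/GPU on a 2 GB board
[ "$mem_mib" -ge 1900 ] || echo "WARNING: less than ~2 GiB RAM"
```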
Phase 1: Firmware Configuration
Kubernetes requires specific kernel features to manage containers effectively. On Raspberry Pi, these features must be explicitly enabled at boot time through firmware configuration. Without these settings, Kubernetes components will fail to start or behave unpredictably.
The critical requirements are:
- cgroups v2 - Required for container resource limits and isolation.
- cgroup memory controller - Enables memory accounting and limits per container.
- 64-bit kernel mode - Ensures proper operation of container runtimes.
These settings are configured by editing the boot configuration file:
sudo nano /boot/firmware/config.txt

Add or verify the following lines at the end of the file:

# Kubernetes / K3s requirements
arm_64bit=1

Next, configure kernel command-line parameters by editing /boot/firmware/cmdline.txt:

sudo nano /boot/firmware/cmdline.txt

Note: cmdline.txt must remain a single line with no line breaks. Append the following parameters to the end of the existing line, separated by a space:

cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory

After making these changes, reboot the system to apply the new firmware configuration:
sudo reboot

Once the system comes back online, verify that cgroups are properly enabled:
# Check cgroup subsystems
cat /proc/cgroups
# Verify cgroup v2 is mounted
mount | grep cgroup
# Check kernel command line
cat /proc/cmdline

You should see memory and cpuset listed in the cgroups output, and the parameters you added should appear in /proc/cmdline.
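On systems using the cgroup v2 unified hierarchy, the same check can be made against a single file. A small sketch (the status strings are illustrative):

```shell
#!/bin/bash
# cgroup v2 lists enabled controllers in one file; "memory" must appear there
# for kubelet memory accounting to work.
ctrl_file=/sys/fs/cgroup/cgroup.controllers
if [ -r "$ctrl_file" ] && grep -qw memory "$ctrl_file"; then
  status="memory controller: enabled"
else
  status="memory controller: NOT visible (re-check the cmdline.txt settings)"
fi
echo "$status"
```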
Phase 2: System Prerequisites
Before installing K3s, ensure the system has all necessary dependencies and that conflicting packages are not present. K3s is designed to be self-contained, but certain system-level requirements must be met.
First, ensure the system is fully up to date:
sudo apt update
sudo apt full-upgrade -y

K3s includes its own container runtime (containerd), so Docker or other container runtimes should not be installed. If Docker is present, it should be removed to avoid conflicts:
# Check if Docker is installed
dpkg -l | grep docker
# If Docker is present, remove it
sudo apt purge -y docker.io docker-ce docker-ce-cli containerd.io
sudo apt autoremove -y

Install curl if it's not already present (required for K3s installation):

sudo apt install -y curl

Ensure that legacy iptables is not configured in a way that conflicts with modern netfilter. K3s will handle firewall rules through its embedded components, but the system must be in a clean state:
# Check iptables version and backend
sudo iptables --version
sudo update-alternatives --display iptables

Phase 3: Firewall Configuration
K3s worker nodes require several network ports to be accessible for cluster communication. If you followed the hardening guide, UFW is already configured with a deny-by-default policy, so these ports must be explicitly allowed.
The following ports are required for K3s worker nodes:
- 10250/TCP – Kubelet API (required for metrics, logs, and exec)
- 8472/UDP – Flannel VXLAN (if using Flannel as the CNI)
- 51820/UDP – Flannel WireGuard (if using Flannel with the WireGuard backend)
- 51821/UDP – Flannel WireGuard IPv6 (if using IPv6)
For security reasons, these ports should only be accessible from your trusted network or specific control plane IPs. Replace 192.168.1.0/24 below with your actual trusted network range:
# Allow Kubelet API from trusted network
sudo ufw allow from 192.168.1.0/24 to any port 10250 proto tcp comment 'K3s Kubelet'
# Allow Flannel VXLAN from trusted network
sudo ufw allow from 192.168.1.0/24 to any port 8472 proto udp comment 'K3s Flannel VXLAN'
# (Optional) Allow Flannel WireGuard if you plan to use it
# sudo ufw allow from 192.168.1.0/24 to any port 51820 proto udp comment 'K3s Flannel WireGuard'
# Reload firewall
sudo ufw reload
# Verify rules
sudo ufw status verbose

If you are running K3s behind a more sophisticated network setup with VLANs or multiple network interfaces, ensure that the K3s traffic can flow between the control plane and worker nodes on these ports.
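The presence of the rules can also be checked in a script. A sketch (the port list mirrors the commands above; `ufw` needs root, so a failed call simply reports the rules as missing):

```shell
#!/bin/bash
# Check that UFW output mentions each required K3s worker port.
rules="$(sudo -n ufw status 2>/dev/null || true)"
out=""
for port in 10250/tcp 8472/udp; do
  if printf '%s\n' "$rules" | grep -qF "$port"; then
    out="${out}${port}: rule present"$'\n'
  else
    out="${out}${port}: rule MISSING"$'\n'
  fi
done
printf '%s' "$out"
```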
Phase 4: K3s Agent Installation
K3s uses a simple installation script that handles all binary downloads, service configuration, and systemd unit setup. The installation distinguishes between server (control plane) and agent (worker) modes through environment variables.
Before running the installer, you need two pieces of information from your K3s server:
- K3S_URL – the API endpoint of your K3s server (e.g., https://k3s-server:6443)
- K3S_TOKEN – the node join token from your K3s server
To retrieve the join token from your K3s server node, run:
# On your K3s server node
sudo cat /var/lib/rancher/k3s/server/node-token

With the token in hand, install K3s in agent mode on your Raspberry Pi:
# Set environment variables (replace with your actual values)
export K3S_URL="https://192.168.1.10:6443"
export K3S_TOKEN="K10abc123def456::server:abc123def456"
# Run K3s installer in agent mode
curl -sfL https://get.k3s.io | sh -

The installer will:
- Download the appropriate K3s binary for the ARM64 architecture
- Install the binary to /usr/local/bin/k3s
- Create a systemd service unit (k3s-agent.service)
- Configure the agent to connect to your K3s server
- Start the K3s agent service
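As an alternative to environment variables, K3s can also read its settings from a configuration file at /etc/rancher/k3s/config.yaml. A sketch using the same placeholder server address and token as above:

```yaml
# /etc/rancher/k3s/config.yaml on the worker (values are placeholders)
server: https://192.168.1.10:6443
token: K10abc123def456::server:abc123def456
node-label:
  - hardware=raspberry-pi
```

With this file in place, the installer can be run without exporting K3S_URL and K3S_TOKEN in the shell, which avoids leaving the token in shell history.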
The installation typically completes in under a minute. Once finished, verify that the K3s agent service is running:
# Check service status
sudo systemctl status k3s-agent
# View recent logs
sudo journalctl -u k3s-agent -f

The agent should show as active (running). The logs will show the node registering with the control plane and starting up the kubelet.
Phase 5: Cluster Verification
After the agent installation completes, the node should automatically register with the cluster. Verification should be performed from the K3s server (control plane) using kubectl.
On your K3s server node, check that the new worker node appears in the cluster:
# On your K3s server node
sudo k3s kubectl get nodes
# Get detailed node information
sudo k3s kubectl get nodes -o wide
# Describe the node for full details
sudo k3s kubectl describe node <node-name>

The node should appear with a status of Ready. If the status is NotReady, check the logs on the worker node:
# On the worker node
sudo journalctl -u k3s-agent -n 100 --no-pager

Common issues that prevent a node from becoming ready include:
- Firewall blocking required ports
- Incorrect K3S_URL or K3S_TOKEN
- Missing cgroup configuration (from Phase 1)
- Network connectivity issues between worker and control plane
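The last of these can be checked quickly with a TCP probe from the worker toward the control plane. A sketch (SERVER is a placeholder for your K3s server address; 6443 is the default K3s API port):

```shell
#!/bin/bash
# Probe the K3s API port using bash's /dev/tcp; abort the attempt after 3s.
SERVER="${SERVER:-192.168.1.10}"
if timeout 3 bash -c "exec 3<>/dev/tcp/$SERVER/6443" 2>/dev/null; then
  result="6443/tcp on $SERVER: reachable"
else
  result="6443/tcp on $SERVER: NOT reachable"
fi
echo "$result"
```

If the port is not reachable, check the firewall on the server side and the routing between the two hosts before digging into K3s itself.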
Once the node shows as Ready, verify that pods can be scheduled onto it:
# On the K3s server, create a test deployment
sudo k3s kubectl create deployment nginx-test --image=nginx:alpine --replicas=1
# Check pod placement
sudo k3s kubectl get pods -o wide
# Clean up
sudo k3s kubectl delete deployment nginx-test

Phase 6: Node Labeling & Configuration
Node labels provide metadata that can be used for scheduling decisions, monitoring, and organization. Properly labeling your nodes makes cluster management more intuitive and enables sophisticated workload placement strategies.
Add custom labels to identify the node's role, hardware, or location:
# On the K3s server
# Label by hardware type
sudo k3s kubectl label node <node-name> hardware=raspberry-pi
# Label by location or purpose
sudo k3s kubectl label node <node-name> location=homelab
sudo k3s kubectl label node <node-name> workload-type=edge
# Label by model for ARM-specific workloads
sudo k3s kubectl label node <node-name> arm.architecture=aarch64
# Verify labels
sudo k3s kubectl get node <node-name> --show-labels

These labels can be used in pod specifications to control scheduling:
apiVersion: v1
kind: Pod
metadata:
  name: arm-optimized-app
spec:
  nodeSelector:
    hardware: raspberry-pi
    arm.architecture: aarch64
  containers:
    - name: app
      image: myapp:arm64
      resources:
        limits:
          memory: "512Mi"
          cpu: "500m"

If you want to dedicate this node to specific workloads, you can apply taints to prevent general-purpose pods from being scheduled:
# Apply a taint (only pods with matching tolerations will schedule)
sudo k3s kubectl taint nodes <node-name> workload=edge:NoSchedule
# Remove a taint
sudo k3s kubectl taint nodes <node-name> workload=edge:NoSchedule-

Phase 7: Ongoing Maintenance
K3s worker nodes require minimal ongoing maintenance, but certain operational tasks should be performed regularly to ensure stability and security.
K3s Updates:
K3s provides a simple upgrade path. When a new version is available, you can upgrade by re-running the installer with the desired version:
# Upgrade to latest stable
export K3S_URL="https://192.168.1.10:6443"
export K3S_TOKEN="your-token-here"
curl -sfL https://get.k3s.io | sh -
# Or upgrade to a specific version
export INSTALL_K3S_VERSION="v1.28.5+k3s1"
curl -sfL https://get.k3s.io | sh -
# Verify the version
k3s --version

System Updates:
Continue applying system updates regularly. K3s is resilient to OS-level updates and will automatically recover if system packages are updated:
# Regular system updates
sudo apt update
sudo apt full-upgrade -y
# If kernel was updated, reboot
sudo reboot

Monitoring:
Monitor node health and resource usage from the control plane:
# Check node resource usage
sudo k3s kubectl top node <node-name>
# Check node conditions
sudo k3s kubectl describe node <node-name> | grep -A 5 Conditions
# View node events
sudo k3s kubectl get events --field-selector involvedObject.name=<node-name>

Decommissioning:
If you need to remove a node from the cluster:
# On the K3s server, drain the node
sudo k3s kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
# Remove from cluster
sudo k3s kubectl delete node <node-name>
# On the worker node, uninstall K3s
sudo /usr/local/bin/k3s-agent-uninstall.sh

Final State & Best Practices
At the completion of this guide, your Raspberry Pi is operating as a fully functional Kubernetes worker node, ready to run production workloads in a cluster environment.
Key characteristics of the resulting system:
- Firmware properly configured with cgroup support for container isolation
- K3s agent running with automatic service recovery and cluster integration
- Firewall configured to allow cluster communication while maintaining security
- Node registered in cluster with appropriate labels for intelligent scheduling
- Ready to accept workload pods and participate in service load balancing
Architecture Considerations:
When deploying workloads to Raspberry Pi nodes, keep these considerations in mind:
- Always use ARM64-compatible container images (aarch64). AMD64/x86_64 images will not run on Raspberry Pi.
- Set appropriate resource limits on pods. Raspberry Pi has limited RAM compared to traditional servers.
- Consider using multi-architecture images (manifest lists) if you have mixed x86/ARM clusters.
- Be mindful of I/O limitations when running stateful workloads on SD cards. Use USB3 SSDs for better performance.
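Kubernetes also labels every node automatically with kubernetes.io/arch, so ARM64-only workloads can be pinned without any of the custom labels from Phase 6. A minimal fragment:

```yaml
# Pod spec fragment: schedule only onto ARM64 nodes via the built-in label
spec:
  nodeSelector:
    kubernetes.io/arch: arm64
```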
This setup methodology can be replicated across multiple Raspberry Pi units to build a multi-node cluster, enabling high availability and horizontal scaling of containerized applications.
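The replication across multiple units can itself be scripted. A hypothetical fan-out over SSH (the hostnames, URL, and token are placeholders; DRY_RUN=1, the default here, only prints the commands instead of executing them):

```shell
#!/bin/bash
# Join several prepared Pis to the cluster in one pass.
DRY_RUN="${DRY_RUN:-1}"
K3S_URL="https://192.168.1.10:6443"
K3S_TOKEN="REPLACE_WITH_NODE_TOKEN"   # from /var/lib/rancher/k3s/server/node-token

join_all() {
  local host cmd
  for host in pi-worker-1 pi-worker-2 pi-worker-3; do
    cmd="curl -sfL https://get.k3s.io | K3S_URL=$K3S_URL K3S_TOKEN=$K3S_TOKEN sh -"
    if [ "$DRY_RUN" = "1" ]; then
      echo "[dry-run] ssh $host '$cmd'"
    else
      ssh "$host" "$cmd"
    fi
  done
}

out="$(join_all)"
printf '%s\n' "$out"
```

Each target host must already have completed Phases 0 through 3 of this guide; the script only automates the final join step.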
Changelog
Initial release documenting complete K3s worker node setup for Raspberry Pi 4/5, including firmware configuration, installation, and cluster integration.
Filed under: Kubernetes, K3s, Raspberry Pi, ARM64, Linux, Container Orchestration, Infrastructure, Home Lab, Edge Computing