Knowledge Base
Infrastructure
2026-01-16
15 min

Setting Up a Raspberry Pi as a K3s Worker Node

Complete guide to transforming a hardened Raspberry Pi into a production-ready K3s worker node. Covers firmware configuration, cgroup setup, K3s agent installation, firewall rules, and cluster joining procedures for ARM64 Kubernetes deployments.

Kubernetes
K3s
Raspberry Pi
ARM64
Linux
Container Orchestration
Infrastructure
Home Lab
Edge Computing
Debian

This entry documents the complete process of transforming a hardened Raspberry Pi into a production-ready K3s worker node. The guide covers all firmware prerequisites, system configuration, K3s installation, and cluster joining procedures needed to deploy a reliable Kubernetes worker on ARM architecture.

K3s is a lightweight, certified Kubernetes distribution designed specifically for resource-constrained and edge environments. It is an ideal choice for Raspberry Pi infrastructure, providing full Kubernetes functionality with significantly reduced overhead compared to standard K8s distributions.

🧭Phase 0: Prerequisites & Assumptions

This guide assumes you are starting with a properly hardened Raspberry Pi system. If you haven't already hardened your Pi, it is strongly recommended that you follow the comprehensive hardening guide first.

  • Raspberry Pi 4 or 5 running Debian-based OS (64-bit)
  • System hardened following security best practices (SSH, firewall, etc.)
  • Static IP address configured
  • At least 2GB RAM (4GB+ recommended for production workloads)
  • SSH access with sudo privileges
  • Existing K3s server (control plane) available with join token
â„šī¸
This guide focuses on adding a worker node to an existing K3s cluster. For setting up a K3s server (control plane), refer to the official K3s documentation.

âš™ī¸Phase 1: Firmware Configuration

Kubernetes requires specific kernel features to manage containers effectively. On Raspberry Pi, these features must be explicitly enabled at boot time through firmware configuration. Without these settings, Kubernetes components will fail to start or behave unpredictably.

The critical requirements are:

  • cgroups v2 - Required for container resource limits and isolation.
  • cgroup memory controller - Enables memory accounting and limits per container.
  • 64-bit kernel mode - Ensures proper operation of container runtimes.

These settings are configured by editing the boot configuration file:

edit-boot-config.sh
sudo nano /boot/firmware/config.txt

Add or verify the following lines at the end of the file:

/boot/firmware/config.txt
# Kubernetes / K3s requirements
arm_64bit=1

Next, configure kernel command-line parameters by editing /boot/firmware/cmdline.txt:

edit-cmdline.sh
sudo nano /boot/firmware/cmdline.txt
âš ī¸
Critical: The cmdline.txt file must be a single line with no line breaks. Add the parameters to the existing line with a space before them.

Append the following parameters to the end of the existing line:

/boot/firmware/cmdline.txt
cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
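
If you prefer a non-interactive edit, the snippet below appends the parameters in one step. This is a sketch: it backs up the file first and skips the edit if the parameters are already present, but verify the resulting line before rebooting.

append-cmdline.sh
# Back up, then append the cgroup parameters to the single existing line
sudo cp /boot/firmware/cmdline.txt /boot/firmware/cmdline.txt.bak
grep -q 'cgroup_memory=1' /boot/firmware/cmdline.txt || \
  sudo sed -i '1 s/$/ cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory/' /boot/firmware/cmdline.txt

# Confirm the file is still a single line
cat /boot/firmware/cmdline.txt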

After making these changes, reboot the system to apply the new firmware configuration:

reboot.sh
sudo reboot

Once the system comes back online, verify that cgroups are properly enabled:

verify-cgroups.sh
# Check cgroup subsystems
cat /proc/cgroups

# Verify cgroup v2 is mounted
mount | grep cgroup

# Check kernel command line
cat /proc/cmdline

You should see memory and cpuset listed in the cgroups output, and the parameters you added should appear in /proc/cmdline.
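
For a quick scripted check instead of eyeballing the output, something like the following should work (a sketch; stat reporting cgroup2fs confirms a cgroup v2 mount):

check-cgroups.sh
# Warn if the memory controller or kernel parameters are missing
grep -E '^(memory|cpuset)' /proc/cgroups || echo "MISSING: cgroup controllers"
grep -q 'cgroup_memory=1' /proc/cmdline || echo "MISSING: cmdline parameters"
[ "$(stat -fc %T /sys/fs/cgroup)" = "cgroup2fs" ] && echo "cgroup v2 mounted"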

â„šī¸
These firmware changes are permanent and will persist across system updates. They are safe to leave enabled even if you later remove K3s from the system.

đŸ“ĻPhase 2: System Prerequisites

Before installing K3s, ensure the system has all necessary dependencies and that conflicting packages are not present. K3s is designed to be self-contained, but certain system-level requirements must be met.

First, ensure the system is fully up to date:

system-update.sh
sudo apt update
sudo apt full-upgrade -y

K3s includes its own container runtime (containerd), so Docker or other container runtimes should not be installed. If Docker is present, it should be removed to avoid conflicts:

remove-docker.sh
# Check if Docker is installed
dpkg -l | grep docker

# If Docker is present, remove it
sudo apt purge -y docker.io docker-ce docker-ce-cli containerd.io
sudo apt autoremove -y

Install curl if it's not already present (required for K3s installation):

install-curl.sh
sudo apt install -y curl

Ensure that the host's iptables is not using the legacy backend, which can conflict with the modern nftables (netfilter) backend used by current Debian releases. K3s will handle firewall rules through its embedded components, but the system must be in a clean state:

verify-iptables.sh
# Check iptables version and backend
sudo iptables --version
sudo update-alternatives --display iptables
â„šī¸
K3s will automatically configure the necessary iptables rules for container networking and service load balancing. Your existing UFW firewall will remain active and effective for host-level protection.
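
If update-alternatives reports the legacy backend, you can switch to nftables before installing K3s. This relies on Debian's standard alternatives mechanism; adjust the paths if your distribution lays things out differently:

switch-iptables-backend.sh
# Point iptables/ip6tables at the nft-based implementations
sudo update-alternatives --set iptables /usr/sbin/iptables-nft
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-nft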

đŸ”ĨPhase 3: Firewall Configuration

K3s worker nodes require several network ports to be accessible for cluster communication. If you followed the hardening guide, UFW is already configured with a deny-by-default policy, so these ports must be explicitly allowed.

The following ports are required for K3s worker nodes:

  • 10250/TCP — Kubelet API (required for metrics, logs, and exec)
  • 8472/UDP — Flannel VXLAN (if using Flannel as CNI)
  • 51820/UDP — Flannel WireGuard (if using Flannel with WireGuard backend)
  • 51821/UDP — Flannel WireGuard IPv6 (if using IPv6)

For security reasons, these ports should only be accessible from your trusted network or specific control plane IPs. Replace 192.168.1.0/24 below with your actual trusted network range:

configure-firewall.sh
# Allow Kubelet API from trusted network
sudo ufw allow from 192.168.1.0/24 to any port 10250 proto tcp comment 'K3s Kubelet'

# Allow Flannel VXLAN from trusted network
sudo ufw allow from 192.168.1.0/24 to any port 8472 proto udp comment 'K3s Flannel VXLAN'

# (Optional) Allow Flannel WireGuard if you plan to use it
# sudo ufw allow from 192.168.1.0/24 to any port 51820 proto udp comment 'K3s Flannel WireGuard'

# Reload firewall
sudo ufw reload

# Verify rules
sudo ufw status verbose
âš ī¸
Never expose the Kubelet port (10250) to the public internet. This port provides privileged access to the node and must be restricted to cluster members only.

If you are running K3s behind a more sophisticated network setup with VLANs or multiple network interfaces, ensure that the K3s traffic can flow between the control plane and worker nodes on these ports.
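
Before moving on, it is worth confirming from the worker that the control plane's API port is reachable through whatever sits in between. A minimal check, assuming your server is at 192.168.1.10:

test-connectivity.sh
# From the worker: can we reach the K3s API endpoint?
# (install netcat-openbsd if nc is not present)
nc -vz 192.168.1.10 6443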

🚀Phase 4: K3s Agent Installation

K3s uses a simple installation script that handles all binary downloads, service configuration, and systemd unit setup. The installation distinguishes between server (control plane) and agent (worker) modes through environment variables.

Before running the installer, you need two pieces of information from your K3s server:

  • K3S_URL — The API endpoint of your K3s server (e.g., https://k3s-server:6443)
  • K3S_TOKEN — The node join token from your K3s server

To retrieve the join token from your K3s server node, run:

get-token.sh
# On your K3s server node
sudo cat /var/lib/rancher/k3s/server/node-token
âš ī¸
The node token is sensitive. Treat it like a password — anyone with this token can join nodes to your cluster. Store it securely and rotate it periodically.

With the token in hand, install K3s in agent mode on your Raspberry Pi:

install-k3s-agent.sh
# Set environment variables (replace with your actual values)
export K3S_URL="https://192.168.1.10:6443"
export K3S_TOKEN="K10abc123def456::server:abc123def456"

# Run K3s installer in agent mode
curl -sfL https://get.k3s.io | sh -
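
Alternatively, the variables can be passed inline to the installer, which avoids leaving the token exported in your shell session (it will still appear in shell history):

install-k3s-agent-inline.sh
# Same effect as the exports above, scoped to this single command
curl -sfL https://get.k3s.io | K3S_URL="https://192.168.1.10:6443" K3S_TOKEN="<your-token>" sh -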

The installer will:

  • Download the appropriate K3s binary for ARM64 architecture
  • Install the binary to /usr/local/bin/k3s
  • Create a systemd service unit (k3s-agent.service)
  • Configure the agent to connect to your K3s server
  • Start the K3s agent service

The installation typically completes in under a minute. Once finished, verify that the K3s agent service is running:

verify-service.sh
# Check service status
sudo systemctl status k3s-agent

# View recent logs
sudo journalctl -u k3s-agent -f

The agent should show as active (running). The logs will show the node registering with the control plane and starting up the kubelet.

â„šī¸
K3s automatically configures the containerd runtime, CNI plugins, and all necessary Kubernetes components. No manual configuration of these components is required.

✅Phase 5: Cluster Verification

After the agent installation completes, the node should automatically register with the cluster. Verification should be performed from the K3s server (control plane) using kubectl.

On your K3s server node, check that the new worker node appears in the cluster:

verify-node.sh
# On your K3s server node
sudo k3s kubectl get nodes

# Get detailed node information
sudo k3s kubectl get nodes -o wide

# Describe the node for full details
sudo k3s kubectl describe node <node-name>

The node should appear with a status of Ready. If the status is NotReady, check the logs on the worker node:

troubleshoot.sh
# On the worker node
sudo journalctl -u k3s-agent -n 100 --no-pager

Common issues that prevent a node from becoming ready include:

  • Firewall blocking required ports
  • Incorrect K3S_URL or K3S_TOKEN
  • Missing cgroup configuration (from Phase 1)
  • Network connectivity issues between worker and control plane
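
To rule out an incorrect K3S_URL or K3S_TOKEN, inspect the systemd environment file that the standard installer writes on the agent; it shows exactly what the service is using:

check-agent-env.sh
# Contains the join token; treat the output as sensitive
sudo cat /etc/systemd/system/k3s-agent.service.env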

Once the node shows as Ready, verify that pods can be scheduled onto it:

test-workload.sh
# On the K3s server, create a test deployment
sudo k3s kubectl create deployment nginx-test --image=nginx:alpine --replicas=1

# Check pod placement
sudo k3s kubectl get pods -o wide

# Clean up
sudo k3s kubectl delete deployment nginx-test
â„šī¸
If you have multiple worker nodes, Kubernetes will distribute pods across them based on available resources and scheduling policies. You can influence placement using node selectors, taints, tolerations, and affinity rules.

đŸˇī¸Phase 6: Node Labeling & Configuration

Node labels provide metadata that can be used for scheduling decisions, monitoring, and organization. Properly labeling your nodes makes cluster management more intuitive and enables sophisticated workload placement strategies.

Add custom labels to identify the node's role, hardware, or location:

label-node.sh
# On the K3s server
# Label by hardware type
sudo k3s kubectl label node <node-name> hardware=raspberry-pi

# Label by location or purpose
sudo k3s kubectl label node <node-name> location=homelab
sudo k3s kubectl label node <node-name> workload-type=edge

# Label by model for ARM-specific workloads
sudo k3s kubectl label node <node-name> arm.architecture=aarch64

# Verify labels
sudo k3s kubectl get node <node-name> --show-labels

These labels can be used in pod specifications to control scheduling:

example-pod-with-selector.yaml
apiVersion: v1
kind: Pod
metadata:
  name: arm-optimized-app
spec:
  nodeSelector:
    hardware: raspberry-pi
    arm.architecture: aarch64
  containers:
  - name: app
    image: myapp:arm64
    resources:
      limits:
        memory: "512Mi"
        cpu: "500m"

If you want to dedicate this node to specific workloads, you can apply taints to prevent general-purpose pods from being scheduled:

taint-node.sh
# Apply a taint (only pods with matching tolerations will schedule)
sudo k3s kubectl taint nodes <node-name> workload=edge:NoSchedule

# Remove a taint
sudo k3s kubectl taint nodes <node-name> workload=edge:NoSchedule-
â„šī¸
Taints are particularly useful in heterogeneous clusters where you want to reserve certain nodes for specific applications or prevent resource-intensive workloads from running on Raspberry Pi nodes.
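
For completeness, a pod that is allowed onto the tainted node carries a matching toleration. A minimal sketch, reusing the example taint above and the hypothetical myapp:arm64 image:

example-pod-with-toleration.yaml
apiVersion: v1
kind: Pod
metadata:
  name: edge-app
spec:
  tolerations:
  - key: "workload"
    operator: "Equal"
    value: "edge"
    effect: "NoSchedule"
  containers:
  - name: app
    image: myapp:arm64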

🔧Phase 7: Ongoing Maintenance

K3s worker nodes require minimal ongoing maintenance, but certain operational tasks should be performed regularly to ensure stability and security.

K3s Updates:

K3s provides a simple upgrade path. When a new version is available, you can upgrade by re-running the installer with the desired version:

upgrade-k3s.sh
# Upgrade to latest stable
export K3S_URL="https://192.168.1.10:6443"
export K3S_TOKEN="your-token-here"
curl -sfL https://get.k3s.io | sh -

# Or upgrade to a specific version
export INSTALL_K3S_VERSION="v1.28.5+k3s1"
curl -sfL https://get.k3s.io | sh -

# Verify the version
k3s --version
âš ī¸
Always upgrade the control plane before worker nodes, and never run an agent on a newer version than the server. Check the K3s release notes for the exact supported version skew between server and agent.
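
The installer also accepts a release channel instead of a pinned version, via its INSTALL_K3S_CHANNEL variable:

upgrade-k3s-channel.sh
# Track a channel (stable, latest, or a version channel such as v1.28)
# K3S_URL and K3S_TOKEN must still be set in the environment, as above
curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=stable sh -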

System Updates:

Continue applying system updates regularly. K3s is resilient to OS-level updates and will automatically recover if system packages are updated:

system-maintenance.sh
# Regular system updates
sudo apt update
sudo apt full-upgrade -y

# If kernel was updated, reboot
sudo reboot
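
For planned reboots, you can drain the node first so running pods reschedule gracefully, then uncordon it once the worker is back online. A sketch, run from the control plane, using the same drain flags as the decommissioning section below:

drain-for-maintenance.sh
# Before rebooting the worker
sudo k3s kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data

# After the worker is back online
sudo k3s kubectl uncordon <node-name>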

Monitoring:

Monitor node health and resource usage from the control plane:

monitor-node.sh
# Check node resource usage
sudo k3s kubectl top node <node-name>

# Check node conditions
sudo k3s kubectl describe node <node-name> | grep -A 5 Conditions

# View node events
sudo k3s kubectl get events --field-selector involvedObject.name=<node-name>

Decommissioning:

If you need to remove a node from the cluster:

remove-node.sh
# On the K3s server, drain the node
sudo k3s kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data

# Remove from cluster
sudo k3s kubectl delete node <node-name>

# On the worker node, uninstall K3s
sudo /usr/local/bin/k3s-agent-uninstall.sh

đŸŽ¯Final State & Best Practices

At the completion of this guide, your Raspberry Pi is operating as a fully functional Kubernetes worker node, ready to run production workloads in a cluster environment.

Key characteristics of the resulting system:

  • Firmware properly configured with cgroup support for container isolation
  • K3s agent running with automatic service recovery and cluster integration
  • Firewall configured to allow cluster communication while maintaining security
  • Node registered in cluster with appropriate labels for intelligent scheduling
  • Ready to accept workload pods and participate in service load balancing

Architecture Considerations:

When deploying workloads to Raspberry Pi nodes, keep these considerations in mind:

  • Always use ARM64-compatible container images (aarch64). AMD64/x86_64 images will not run on Raspberry Pi; the built-in architecture label shown below helps enforce this.
  • Set appropriate resource limits on pods. Raspberry Pi has limited RAM compared to traditional servers.
  • Consider using multi-architecture images (manifest lists) if you have mixed x86/ARM clusters.
  • Be mindful of I/O limitations when running stateful workloads on SD cards. Use USB 3 SSDs for better performance.
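
Kubernetes also labels every node with its architecture automatically, so you can pin ARM64-only images without defining custom labels. A minimal sketch using the built-in kubernetes.io/arch label and the hypothetical image from earlier:

example-builtin-arch-selector.yaml
apiVersion: v1
kind: Pod
metadata:
  name: arch-pinned-app
spec:
  nodeSelector:
    kubernetes.io/arch: arm64
  containers:
  - name: app
    image: myapp:arm64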
â„šī¸
Raspberry Pi K3s clusters are excellent for edge computing, home labs, development environments, and learning Kubernetes. They provide a cost-effective way to run real Kubernetes workloads with minimal power consumption.

This setup methodology can be replicated across multiple Raspberry Pi units to build a multi-node cluster, enabling high availability and horizontal scaling of containerized applications.

Changelog

2026-01-16 · v1.0

Initial release documenting complete K3s worker node setup for Raspberry Pi 4/5, including firmware configuration, installation, and cluster integration.

Filed under: Kubernetes, K3s, Raspberry Pi, ARM64, Linux, Container Orchestration, Infrastructure, Home Lab, Edge Computing

Last updated: 2026-01-16