Setting Up Ubuntu Server 24.04 as a K3s Worker Node
Complete guide to configuring a hardened Ubuntu Server 24.04 LTS (x86_64) system as a production-ready K3s worker node. Covers kernel configuration via GRUB, system prerequisites, K3s agent installation, firewall rules, cluster joining, and maintenance procedures.
K3s is a lightweight, certified Kubernetes distribution ideal for edge computing, IoT, and resource-efficient deployments. This guide focuses on deploying K3s worker nodes on standard x86_64 hardware running Ubuntu Server 24.04 LTS, such as repurposed laptops, desktops, or dedicated servers.
🧭Phase 0: Prerequisites & Assumptions
This guide assumes you are starting with a properly hardened Ubuntu Server 24.04 LTS system. If you haven't already hardened your server, it is strongly recommended that you follow the comprehensive Ubuntu Server hardening guide first.
- Ubuntu Server 24.04 LTS (64-bit x86_64) installed and hardened
- Static IP address configured
- At least 2GB RAM (4GB+ recommended for production workloads)
- At least 20GB available disk space
- SSH access with sudo privileges
- Existing K3s server (control plane) available with join token
- Network connectivity to the K3s server
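The checklist above can be partially automated. A rough preflight sketch follows; the `check` helper and the exact thresholds are illustrative assumptions, not part of the official K3s documentation:

```shell
# Preflight sketch for the prerequisites above. The helper and thresholds
# are illustrative assumptions -- adjust them to your environment.
check() {
  desc=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "PASS: $desc"
  else
    echo "FAIL: $desc"
  fi
}

check "x86_64 architecture" sh -c '[ "$(uname -m)" = "x86_64" ]'
check "RAM >= 2GB"          sh -c '[ "$(free -m | awk "/^Mem:/ {print \$2}")" -ge 1900 ]'
check "20GB free on /"      sh -c '[ "$(df -BG --output=avail / | tail -n1 | tr -dc 0-9)" -ge 20 ]'
check "curl available"      command -v curl
```

Network reachability to the K3s server (port 6443) is deliberately left out here, since it depends on your server address; it is verified implicitly when the agent joins in Phase 3.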
⚙️Phase 1: Kernel Configuration (cgroups)
Kubernetes requires specific kernel features to manage containers effectively. Modern Ubuntu Server installations typically have cgroups v2 enabled by default, but certain memory cgroup features may need explicit activation.
First, verify the current cgroup configuration:
# Check cgroup version
stat -fc %T /sys/fs/cgroup/
# Should return "cgroup2fs" for cgroups v2
# Check available cgroup controllers
cat /sys/fs/cgroup/cgroup.controllers
# Check kernel command line
cat /proc/cmdline | grep -o 'cgroup_[^[:space:]]*'
If cgroups v2 is active but the memory controller is not enabled, or if you need to ensure memory accounting is active, update the GRUB configuration:
# Edit GRUB default configuration
sudo nano /etc/default/grub
Find the GRUB_CMDLINE_LINUX line and add cgroup parameters:
# Before:
GRUB_CMDLINE_LINUX=""
# After (add these parameters):
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1 systemd.unified_cgroup_hierarchy=1"
systemd.unified_cgroup_hierarchy=1 ensures cgroups v2 is used. Ubuntu Server 24.04 uses this by default, but adding it explicitly ensures consistency.
After editing the GRUB configuration, update GRUB and reboot:
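If you would rather script the edit than use nano, the same change can be applied with sed. The sketch below is demonstrated on a sample file so the result can be inspected safely; on the real system the target is /etc/default/grub, edited with sudo after taking a backup. It assumes the stock Ubuntu default of an empty GRUB_CMDLINE_LINUX="" line:

```shell
# Scripted GRUB edit (sketch, shown on a sample file). On the real system:
#   sudo cp /etc/default/grub /etc/default/grub.bak
#   then run the sed against /etc/default/grub with sudo.
PARAMS='cgroup_enable=memory swapaccount=1 systemd.unified_cgroup_hierarchy=1'
printf 'GRUB_CMDLINE_LINUX=""\n' > grub.sample   # stands in for /etc/default/grub
sed -i "s|^GRUB_CMDLINE_LINUX=\"\"|GRUB_CMDLINE_LINUX=\"$PARAMS\"|" grub.sample
cat grub.sample   # the line now carries the three parameters
```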
# Update GRUB configuration
sudo update-grub
# Reboot to apply kernel parameters
sudo reboot
After reboot, verify the changes took effect:
# Verify kernel command line includes new parameters
cat /proc/cmdline
# Check cgroup controllers
cat /sys/fs/cgroup/cgroup.controllers
# Should include: cpuset cpu io memory hugetlb pids rdma misc
📦Phase 2: System Prerequisites
Before installing K3s, ensure the system has all necessary dependencies and that conflicting packages are not present. K3s is designed to be self-contained, but certain system-level requirements must be met.
First, ensure the system is fully up to date:
sudo apt update
sudo apt full-upgrade -y
K3s includes its own container runtime (containerd), so Docker or other container runtimes should not be installed. If Docker is present, it should be removed to avoid conflicts:
# Check if Docker is installed
dpkg -l | grep -E 'docker|containerd'
# If Docker is present, remove it
sudo systemctl stop docker
sudo apt purge -y docker.io docker-ce docker-ce-cli containerd.io docker-compose-plugin
sudo apt autoremove --purge -y
# Remove Docker data (optional, only if you don't need it)
# sudo rm -rf /var/lib/docker
# sudo rm -rf /var/lib/containerd
Install required packages if not already present:
# Install curl (required for K3s installation script)
sudo apt install -y curl
# Install additional useful tools
sudo apt install -y iptables ipset
Disable swap if enabled (Kubernetes requires swap to be disabled):
# Check if swap is enabled
swapon --show
# If swap is active, disable it
sudo swapoff -a
# Disable swap permanently by commenting out swap entries in fstab
sudo sed -i '/ swap / s/^/#/' /etc/fstab
# Verify swap is disabled
free -h
📥Phase 3: Install K3s Agent
With prerequisites in place, you can now install K3s in agent (worker) mode. You will need the join token from your K3s server to authenticate the worker node.
Obtaining the join token from the K3s server:
On your existing K3s server, retrieve the node token:
# On the K3s server (control plane)
sudo cat /var/lib/rancher/k3s/server/node-token
The token will look something like:
K10abc123def456ghi789jkl012mno345pqr678::server:abc123def456ghi789
Installing K3s in agent mode:
On your Ubuntu Server worker node, run the K3s installation script with agent configuration:
# Set environment variables for K3s installation
export K3S_URL="https://<k3s-server-ip>:6443"
export K3S_TOKEN="<your-node-token-here>"
# Install K3s agent
curl -sfL https://get.k3s.io | sh -
# Example with actual values:
# export K3S_URL="https://192.168.1.100:6443"
# export K3S_TOKEN="K10abc123def456ghi789jkl012mno345pqr678::server:abc123def456ghi789"
# curl -sfL https://get.k3s.io | sh -
The installation script will:
- Download the K3s binary for x86_64 architecture
- Install containerd as the container runtime
- Create and enable the k3s-agent systemd service
- Configure the agent to connect to your K3s server
- Start the agent and join the cluster
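Because a typo in K3S_URL or K3S_TOKEN only surfaces later as a failed join, a quick format check before running the installer can save a debugging round. A minimal sketch, using the dummy values from this guide:

```shell
# Rough format checks for the two installer variables. The values below
# are the dummy examples from this guide -- substitute your own.
K3S_URL="https://192.168.1.100:6443"
K3S_TOKEN="K10abc123def456ghi789jkl012mno345pqr678::server:abc123def456ghi789"

case "$K3S_URL" in
  https://*:6443) echo "K3S_URL format looks OK" ;;
  *)              echo "K3S_URL should look like https://<server-ip>:6443" ;;
esac

case "$K3S_TOKEN" in
  *::server:*) echo "K3S_TOKEN format looks OK" ;;
  *)           echo "K3S_TOKEN does not look like a node-token" ;;
esac
```

This only catches shape errors, not a wrong-but-plausible IP or token; those still show up in the agent logs as connection or authentication failures.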
Verify the K3s agent is running:
# Check K3s agent service status
sudo systemctl status k3s-agent
# Check agent logs
sudo journalctl -u k3s-agent -f
The service should show active (running). Initial startup may take 30-60 seconds while the agent downloads images and registers with the server.
🔥Phase 4: Firewall Configuration
K3s requires several ports to be open for cluster communication. Since we're working with a hardened system using UFW, we need to add explicit firewall rules.
Required ports for K3s worker nodes:
- 10250/tcp - Kubelet metrics (required)
- 8472/udp - Flannel VXLAN (if using Flannel CNI)
- 51820/udp - Flannel WireGuard (if using WireGuard backend)
- 51821/udp - Flannel WireGuard IPv6 (if using WireGuard)
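The port list above can also be expressed as data and turned into UFW commands mechanically. A sketch that only echoes the commands for review (pipe the output to sh to apply; the pairs mirror the list above):

```shell
# Generate UFW allow commands from the worker port list (echoed for
# review rather than executed; port/comment pairs mirror the list above).
for rule in \
  '10250/tcp:K3s kubelet' \
  '8472/udp:K3s Flannel VXLAN' \
  '51820/udp:K3s Flannel WireGuard' \
  '51821/udp:K3s Flannel WireGuard IPv6'
do
  port=${rule%%:*}
  comment=${rule#*:}
  echo "sudo ufw allow $port comment '$comment'"
done
```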
Add UFW rules to allow K3s communication:
# Allow kubelet metrics
sudo ufw allow 10250/tcp comment 'K3s kubelet'
# Allow Flannel VXLAN (default K3s CNI)
sudo ufw allow 8472/udp comment 'K3s Flannel VXLAN'
# If using WireGuard backend (optional)
sudo ufw allow 51820/udp comment 'K3s Flannel WireGuard'
sudo ufw allow 51821/udp comment 'K3s Flannel WireGuard IPv6'
# Reload UFW
sudo ufw reload
# Verify rules
sudo ufw status numbered
For tighter security, you can limit these rules to your cluster subnet using UFW's from directive.
Optional: Restrict to cluster subnet only
# Remove previous unrestricted rules
sudo ufw delete allow 10250/tcp
sudo ufw delete allow 8472/udp
# Add restricted rules (adjust subnet to match your cluster)
sudo ufw allow from 192.168.1.0/24 to any port 10250 proto tcp comment 'K3s kubelet'
sudo ufw allow from 192.168.1.0/24 to any port 8472 proto udp comment 'K3s Flannel VXLAN'
# Reload
sudo ufw reload
✅Phase 5: Verify Cluster Membership
After installation and firewall configuration, verify that your worker node has successfully joined the K3s cluster.
From the K3s server (control plane):
# List all nodes in the cluster
sudo k3s kubectl get nodes
# Should show your new worker node with STATUS: Ready
# Get detailed node information
sudo k3s kubectl get nodes -o wide
# Describe the node for detailed status
sudo k3s kubectl describe node <node-name>
From the worker node itself:
# Check K3s agent logs
sudo journalctl -u k3s-agent -n 50
# Look for successful registration messages like:
# "Node registered successfully"
# Check agent status
sudo systemctl status k3s-agent
# Check running containers
sudo k3s crictl ps
The worker node should appear in the node list with a Ready status within 1-2 minutes of installation. The node name will default to the system hostname.
If the node shows NotReady, check the agent logs for errors. Common issues include firewall blocking, an incorrect server URL, or an invalid token.
Testing workload scheduling:
Deploy a test pod to verify the worker node can run workloads:
# On the K3s server, create a test deployment
sudo k3s kubectl run test-nginx --image=nginx:alpine --port=80
# Check pod status
sudo k3s kubectl get pods -o wide
# The pod should be scheduled on your worker node
# Clean up
sudo k3s kubectl delete pod test-nginx
🏷️Phase 6: Node Labels & Taints (Optional)
For production environments, you may want to label your worker nodes to control workload placement, or apply taints to prevent certain workloads from scheduling.
Adding node labels:
# Add custom labels to identify node characteristics
sudo k3s kubectl label node <node-name> node-type=worker
sudo k3s kubectl label node <node-name> hardware=x86_64
sudo k3s kubectl label node <node-name> environment=production
# Verify labels
sudo k3s kubectl get node <node-name> --show-labels
You can then use node selectors in your pod specifications:
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  nodeSelector:
    hardware: x86_64
    environment: production
  containers:
  - name: app
    image: myapp:latest
Applying node taints:
# Taint a node to prevent general workloads (example)
sudo k3s kubectl taint nodes <node-name> dedicated=database:NoSchedule
# Only pods with matching tolerations will be scheduled
# Remove a taint
sudo k3s kubectl taint nodes <node-name> dedicated:NoSchedule-
📊Phase 7: Monitoring & Maintenance
Once your worker node is operational, establish monitoring and maintenance procedures to ensure reliable cluster operation.
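A recurring health probe complements the manual checks below, for example from cron or a monitoring agent. A minimal sketch; it assumes systemd, and the unit-probing helper and its messages are illustrative:

```shell
# Minimal k3s-agent health probe (sketch; assumes systemd). Returns
# nonzero when the unit is not active, so cron/monitoring can alert on it.
probe_unit() {
  status=$(systemctl is-active "$1" 2>/dev/null || true)
  if [ "$status" = "active" ]; then
    echo "$1: healthy"
  else
    echo "$1: not active (status: ${status:-unknown})"
    return 1
  fi
}

probe_unit k3s-agent || true   # '|| true' only so an ad-hoc run keeps going
```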
Monitoring K3s agent:
# Check agent service health
sudo systemctl status k3s-agent
# View recent logs
sudo journalctl -u k3s-agent -n 100
# Follow logs in real-time
sudo journalctl -u k3s-agent -f
# Check resource usage
sudo k3s crictl stats
# View running pods on this node
sudo k3s crictl pods
Node resource monitoring:
# Check node resource allocations (from server)
sudo k3s kubectl top node <node-name>
# Check pod resource usage on this node
sudo k3s kubectl top pods --all-namespaces --field-selector spec.nodeName=<node-name>
# Describe node to see capacity and allocatable resources
sudo k3s kubectl describe node <node-name> | grep -A 5 "Allocated resources"
System updates and reboots:
# Before updating the system, drain the node (from server)
sudo k3s kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
# This will move pods to other nodes
# Perform system updates
sudo apt update && sudo apt full-upgrade -y
# Reboot if kernel was updated
sudo reboot
# After reboot, uncordon the node (from server)
sudo k3s kubectl uncordon <node-name>
# Verify node is ready and pods are rescheduling
sudo k3s kubectl get nodes
sudo k3s kubectl get pods --all-namespaces -o wide
Log rotation:
K3s and container logs can grow large over time. Ensure log rotation is configured, for example with a logrotate rule (e.g. dropped into /etc/logrotate.d/):
/var/log/pods/*/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    maxsize 100M
    create 0644 root root
}
🎯Final State & Best Practices
At the conclusion of this guide, your Ubuntu Server 24.04 LTS system is operating as a fully functional K3s worker node, ready to accept and run containerized workloads.
The resulting system state:
- Hardened Ubuntu Server 24.04 LTS with K3s agent installed and running.
- Kernel configured with proper cgroup support for container orchestration.
- Firewall configured to allow K3s cluster communication while maintaining security.
- Node registered with the K3s cluster and showing Ready status.
- Swap disabled per Kubernetes requirements.
- Monitoring and maintenance procedures established.
Best practices for production deployments:
- Resource limits: Always set resource requests and limits in pod specifications to prevent resource starvation.
- Pod Disruption Budgets: Configure PDBs for critical workloads to ensure availability during maintenance.
- Multiple workers: Deploy at least 2-3 worker nodes for high availability.
- Regular updates: Keep both the OS and K3s version up to date, but test updates in non-production first.
- Backup strategy: While workers are generally stateless, ensure your K3s server is backed up regularly.
- Monitoring integration: Deploy Prometheus and Grafana for comprehensive cluster monitoring.
Troubleshooting common issues:
- Node shows NotReady: Check sudo journalctl -u k3s-agent for errors. Verify network connectivity to the server and firewall rules.
- Pods stuck in Pending: Check node resources with kubectl describe node. May indicate insufficient CPU/memory.
- Network issues between pods: Verify the Flannel VXLAN port (8472/udp) is open and not blocked by the firewall.
- Agent fails to start: Verify the token is correct and K3S_URL points to the right server; the installer records both in /etc/systemd/system/k3s-agent.service.env.
Changelog
Initial release documenting K3s worker node setup on Ubuntu Server 24.04 LTS (x86_64).
Filed under: K3s, Kubernetes, Ubuntu Server, Container Orchestration, x86_64, Worker Node, Infrastructure, DevOps, Production