Knowledge Base
Infrastructure
2026-01-19
14 min

Setting Up Ubuntu Server 24.04 as a K3s Worker Node

Complete guide to configuring a hardened Ubuntu Server 24.04 LTS (x86_64) system as a production-ready K3s worker node. Covers kernel configuration via GRUB, system prerequisites, K3s agent installation, firewall rules, cluster joining, and maintenance procedures.

K3s
Kubernetes
Ubuntu Server
Container Orchestration
x86_64
Worker Node
Infrastructure
DevOps
Production

K3s is a lightweight, certified Kubernetes distribution ideal for edge computing, IoT, and resource-efficient deployments. This guide focuses on deploying K3s worker nodes on standard x86_64 hardware running Ubuntu Server 24.04 LTS, such as repurposed laptops, desktops, or dedicated servers.

🧭Phase 0: Prerequisites & Assumptions

This guide assumes you are starting with a properly hardened Ubuntu Server 24.04 LTS system. If you haven't already hardened your server, it is strongly recommended that you follow the comprehensive Ubuntu Server hardening guide first.

  • Ubuntu Server 24.04 LTS (64-bit x86_64) installed and hardened
  • Static IP address configured
  • At least 2GB RAM (4GB+ recommended for production workloads)
  • At least 20GB available disk space
  • SSH access with sudo privileges
  • Existing K3s server (control plane) available with join token
  • Network connectivity to the K3s server
ℹ️
This guide focuses on adding a worker node to an existing K3s cluster. For setting up a K3s server (control plane), refer to the official K3s documentation.
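
Before moving on, the requirements above can be sanity-checked with a short preflight script. A minimal sketch, assuming a server address of 192.168.1.100 (adjust for your network):

preflight-check.sh
# Hedged preflight sketch; the server IP is an assumption, adjust it
K3S_SERVER_IP="192.168.1.100"

# At least 2GB of RAM?
free -m | awk '/^Mem:/ { if ($2 >= 2000) print "RAM: OK (" $2 " MB)"; else print "RAM: LOW (" $2 " MB)" }'

# At least 20GB free on the root filesystem?
df -h --output=avail /

# Is the K3s API port reachable?
timeout 3 bash -c "cat < /dev/null > /dev/tcp/${K3S_SERVER_IP}/6443" \
  && echo "K3s server reachable on 6443" \
  || echo "Cannot reach K3s server on 6443"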

⚙️Phase 1: Kernel Configuration (cgroups)

Kubernetes requires specific kernel features to manage containers effectively. Modern Ubuntu Server installations typically have cgroups v2 enabled by default, but certain memory cgroup features may need explicit activation.

First, verify the current cgroup configuration:

verify-cgroups.sh
# Check cgroup version
stat -fc %T /sys/fs/cgroup/

# Should return "cgroup2fs" for cgroups v2

# Check available cgroup controllers
cat /sys/fs/cgroup/cgroup.controllers

# Check the kernel command line for cgroup parameters
grep -o 'cgroup_[^[:space:]]*' /proc/cmdline

If cgroups v2 is active but the memory controller is not enabled, or if you want to guarantee that memory and swap accounting are active, update the GRUB configuration:

configure-grub.sh
# Edit GRUB default configuration
sudo nano /etc/default/grub

Find the GRUB_CMDLINE_LINUX line and add cgroup parameters:

/etc/default/grub
# Before:
GRUB_CMDLINE_LINUX=""

# After (add these parameters):
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1 systemd.unified_cgroup_hierarchy=1"
ℹ️
systemd.unified_cgroup_hierarchy=1 ensures cgroups v2 is used. Ubuntu Server 24.04 uses this by default, but adding it explicitly ensures consistency.

After editing the GRUB configuration, update GRUB and reboot:

update-grub.sh
# Update GRUB configuration
sudo update-grub

# Reboot to apply kernel parameters
sudo reboot

After reboot, verify the changes took effect:

verify-after-reboot.sh
# Verify kernel command line includes new parameters
cat /proc/cmdline

# Check cgroup controllers
cat /sys/fs/cgroup/cgroup.controllers

# Should include: cpuset cpu io memory hugetlb pids rdma misc
ℹ️
These kernel parameters are permanent and will persist across system updates. They are safe to leave enabled even if you later remove K3s.

📦Phase 2: System Prerequisites

Before installing K3s, ensure the system has all necessary dependencies and that conflicting packages are not present. K3s is designed to be self-contained, but certain system-level requirements must be met.

First, ensure the system is fully up to date:

system-update.sh
sudo apt update
sudo apt full-upgrade -y

K3s includes its own container runtime (containerd), so Docker or other container runtimes should not be installed. If Docker is present, it should be removed to avoid conflicts:

check-docker.sh
# Check if Docker is installed
dpkg -l | grep -E 'docker|containerd'

# If Docker is present, remove it
sudo systemctl stop docker
sudo apt purge -y docker.io docker-ce docker-ce-cli containerd.io docker-compose-plugin
sudo apt autoremove --purge -y

# Remove Docker data (optional, only if you don't need it)
# sudo rm -rf /var/lib/docker
# sudo rm -rf /var/lib/containerd

Install required packages if not already present:

install-prerequisites.sh
# Install curl (required for K3s installation script)
sudo apt install -y curl

# Install additional useful tools
sudo apt install -y iptables ipset
⚠️
If you have AppArmor enabled (default on Ubuntu Server), ensure it's properly configured. K3s will work with AppArmor, but certain profiles may need adjustment for specific workloads.
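
A quick way to confirm AppArmor's state before proceeding:

check-apparmor.sh
# Show loaded AppArmor profiles and their enforcement modes
sudo aa-status

# Confirm the AppArmor service is active
systemctl is-active apparmor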

Disable swap if enabled (Kubernetes requires swap to be disabled):

disable-swap.sh
# Check if swap is enabled
swapon --show

# If swap is active, disable it
sudo swapoff -a

# Disable swap permanently by commenting out swap entries in fstab
# (\s matches tabs as well as spaces between fields)
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab

# Verify swap is disabled
free -h
ℹ️
Disabling swap is a Kubernetes requirement. K3s kubelet will refuse to start if swap is enabled unless explicitly configured to allow it.
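
If disabling swap is truly not an option (for example on very memory-constrained hardware), K3s can pass the relevant flag through to the kubelet. A minimal sketch, assuming this file is created before installing the agent; not recommended for production:

/etc/rancher/k3s/config.yaml
# Let the kubelet start despite enabled swap (use with caution)
kubelet-arg:
  - "fail-swap-on=false"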

📥Phase 3: Install K3s Agent

With prerequisites in place, you can now install K3s in agent (worker) mode. You will need the join token from your K3s server to authenticate the worker node.

Obtaining the join token from the K3s server:

On your existing K3s server, retrieve the node token:

get-token.sh
# On the K3s server (control plane)
sudo cat /var/lib/rancher/k3s/server/node-token

The token will look something like:

example-token.txt
K10abc123def456ghi789jkl012mno345pqr678::server:abc123def456ghi789
⚠️
Treat this token as a secret. Anyone with this token can join your K3s cluster. Store it securely and rotate it periodically.

Installing K3s in agent mode:

On your Ubuntu Server worker node, run the K3s installation script with agent configuration:

install-k3s-agent.sh
# Set environment variables for K3s installation
export K3S_URL="https://<k3s-server-ip>:6443"
export K3S_TOKEN="<your-node-token-here>"

# Install K3s agent
curl -sfL https://get.k3s.io | sh -

# Example with actual values:
# export K3S_URL="https://192.168.1.100:6443"
# export K3S_TOKEN="K10abc123def456ghi789jkl012mno345pqr678::server:abc123def456ghi789"
# curl -sfL https://get.k3s.io | sh -

The installation script will:

  • Download the K3s binary for x86_64 architecture
  • Install containerd as the container runtime
  • Create and enable the k3s-agent systemd service
  • Configure the agent to connect to your K3s server
  • Start the agent and join the cluster
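
If the worker should run the exact same K3s release as the server, the installer also honors the INSTALL_K3S_VERSION variable (check the server's release with k3s --version). A sketch with a hypothetical version string:

install-pinned-version.sh
# Pin the agent to a specific K3s release; the version shown is an example
export INSTALL_K3S_VERSION="v1.29.4+k3s1"
export K3S_URL="https://<k3s-server-ip>:6443"
export K3S_TOKEN="<your-node-token-here>"
curl -sfL https://get.k3s.io | sh -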

Verify the K3s agent is running:

verify-k3s-agent.sh
# Check K3s agent service status
sudo systemctl status k3s-agent

# Check agent logs
sudo journalctl -u k3s-agent -f
ℹ️
The agent service should show active (running). Initial startup may take 30-60 seconds while the agent downloads images and registers with the server.

🔥Phase 4: Firewall Configuration

K3s requires several ports to be open for cluster communication. Since we're working with a hardened system using UFW, we need to add explicit firewall rules.

Required ports for K3s worker nodes:

  • 10250/tcp - Kubelet metrics (required)
  • 8472/udp - Flannel VXLAN (if using Flannel CNI)
  • 51820/udp - Flannel WireGuard (if using WireGuard backend)
  • 51821/udp - Flannel WireGuard IPv6 (if using WireGuard)

Add UFW rules to allow K3s communication:

configure-ufw.sh
# Allow kubelet metrics
sudo ufw allow 10250/tcp comment 'K3s kubelet'

# Allow Flannel VXLAN (default K3s CNI)
sudo ufw allow 8472/udp comment 'K3s Flannel VXLAN'

# If using WireGuard backend (optional)
sudo ufw allow 51820/udp comment 'K3s Flannel WireGuard'
sudo ufw allow 51821/udp comment 'K3s Flannel WireGuard IPv6'

# Reload UFW
sudo ufw reload

# Verify rules
sudo ufw status numbered
ℹ️
If your K3s server and worker nodes are on a trusted private network, you may want to restrict these rules to only allow traffic from your cluster subnet using UFW's from directive.

Optional: Restrict to cluster subnet only

restrict-to-subnet.sh
# Remove previous unrestricted rules
sudo ufw delete allow 10250/tcp
sudo ufw delete allow 8472/udp

# Add restricted rules (adjust subnet to match your cluster)
sudo ufw allow from 192.168.1.0/24 to any port 10250 proto tcp comment 'K3s kubelet'
sudo ufw allow from 192.168.1.0/24 to any port 8472 proto udp comment 'K3s Flannel VXLAN'

# Reload
sudo ufw reload

✅Phase 5: Verify Cluster Membership

After installation and firewall configuration, verify that your worker node has successfully joined the K3s cluster.

From the K3s server (control plane):

check-nodes-from-server.sh
# List all nodes in the cluster
sudo k3s kubectl get nodes

# Should show your new worker node with STATUS: Ready

# Get detailed node information
sudo k3s kubectl get nodes -o wide

# Describe the node for detailed status
sudo k3s kubectl describe node <node-name>

From the worker node itself:

check-from-worker.sh
# Check K3s agent logs
sudo journalctl -u k3s-agent -n 50

# Look for successful registration messages like:
# "Node registered successfully"

# Check agent status
sudo systemctl status k3s-agent

# Check running containers
sudo k3s crictl ps

The worker node should appear in the node list with a Ready status within 1-2 minutes of installation. The node name will default to the system hostname.
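
If your hostnames are not unique or descriptive, the node name can be set explicitly at install time via the installer's K3S_NODE_NAME variable. A sketch with a hypothetical name:

set-node-name.sh
# Register under an explicit node name instead of the hostname
export K3S_NODE_NAME="worker-01"
export K3S_URL="https://<k3s-server-ip>:6443"
export K3S_TOKEN="<your-node-token-here>"
curl -sfL https://get.k3s.io | sh -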

⚠️
If the node shows NotReady, check the agent logs for errors. Common issues include firewall blocking, incorrect server URL, or invalid token.

Testing workload scheduling:

Deploy a test pod to verify the worker node can run workloads:

test-pod.sh
# On the K3s server, create a test deployment
sudo k3s kubectl run test-nginx --image=nginx:alpine --port=80

# Check pod status
sudo k3s kubectl get pods -o wide

# Check the NODE column: with more than one schedulable node, the pod
# may land on any of them, not necessarily this worker

# Clean up
sudo k3s kubectl delete pod test-nginx

🏷️Phase 6: Node Labels & Taints (Optional)

For production environments, you may want to label your worker nodes to control workload placement, or apply taints to prevent certain workloads from scheduling.

Adding node labels:

add-labels.sh
# Add custom labels to identify node characteristics
sudo k3s kubectl label node <node-name> node-type=worker
sudo k3s kubectl label node <node-name> hardware=x86_64
sudo k3s kubectl label node <node-name> environment=production

# Verify labels
sudo k3s kubectl get node <node-name> --show-labels

You can then use node selectors in your pod specifications:

pod-with-selector.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  nodeSelector:
    hardware: x86_64
    environment: production
  containers:
  - name: app
    image: myapp:latest

Applying node taints:

add-taints.sh
# Taint a node to prevent general workloads (example)
sudo k3s kubectl taint nodes <node-name> dedicated=database:NoSchedule

# Only pods with matching tolerations will be scheduled

# Remove a taint
sudo k3s kubectl taint nodes <node-name> dedicated:NoSchedule-
ℹ️
Taints and tolerations are powerful tools for workload isolation. Use them to dedicate nodes to specific applications or prevent resource-intensive workloads from running on less powerful nodes.
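
For reference, a pod that tolerates the example taint above might look like this (names are illustrative):

pod-with-toleration.yaml
apiVersion: v1
kind: Pod
metadata:
  name: db-app
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "database"
    effect: "NoSchedule"
  containers:
  - name: db
    image: postgres:16

Note that a toleration only permits scheduling onto the tainted node; combine it with a nodeSelector or node affinity if the pod must land there.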

📊Phase 7: Monitoring & Maintenance

Once your worker node is operational, establish monitoring and maintenance procedures to ensure reliable cluster operation.

Monitoring K3s agent:

monitoring.sh
# Check agent service health
sudo systemctl status k3s-agent

# View recent logs
sudo journalctl -u k3s-agent -n 100

# Follow logs in real-time
sudo journalctl -u k3s-agent -f

# Check resource usage
sudo k3s crictl stats

# View running pods on this node
sudo k3s crictl pods

Node resource monitoring:

resource-monitoring.sh
# Check node resource allocations (from server)
sudo k3s kubectl top node <node-name>

# Check pod resource usage on this node
sudo k3s kubectl top pods --all-namespaces --field-selector spec.nodeName=<node-name>

# Describe node to see capacity and allocatable resources
sudo k3s kubectl describe node <node-name> | grep -A 5 "Allocated resources"

System updates and reboots:

maintenance.sh
# Before updating the system, drain the node (from server)
sudo k3s kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data

# This will move pods to other nodes

# Perform system updates
sudo apt update && sudo apt full-upgrade -y

# Reboot if kernel was updated
sudo reboot

# After reboot, uncordon the node (from server)
sudo k3s kubectl uncordon <node-name>

# Verify node is ready and pods are rescheduling
sudo k3s kubectl get nodes
sudo k3s kubectl get pods --all-namespaces -o wide
⚠️
Always drain nodes before maintenance to ensure graceful workload migration. Abruptly rebooting a node can cause pod disruptions and potential data loss for stateful workloads.

Log rotation:

K3s and container logs can grow large over time. Ensure log rotation is configured:

/etc/logrotate.d/k3s
# Safety net for pod logs. Note the kubelet already rotates container
# logs itself; copytruncate avoids invalidating the runtime's open
# file handles if logrotate touches them too.
/var/log/pods/*/*/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    maxsize 100M
    copytruncate
}
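
Alternatively, container log rotation can be handled by the kubelet itself, which is the more idiomatic Kubernetes approach. A sketch passing the kubelet's rotation flags through the K3s agent configuration:

/etc/rancher/k3s/config.yaml
# Rotate container logs at 100Mi, keeping up to 5 files per container
# Apply with: sudo systemctl restart k3s-agent
kubelet-arg:
  - "container-log-max-size=100Mi"
  - "container-log-max-files=5"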

🎯Final State & Best Practices

At the conclusion of this guide, your Ubuntu Server 24.04 LTS system is operating as a fully functional K3s worker node, ready to accept and run containerized workloads.

The resulting system state:

  • Hardened Ubuntu Server 24.04 LTS with K3s agent installed and running.
  • Kernel configured with proper cgroup support for container orchestration.
  • Firewall configured to allow K3s cluster communication while maintaining security.
  • Node registered with the K3s cluster and showing Ready status.
  • Swap disabled per Kubernetes requirements.
  • Monitoring and maintenance procedures established.

Best practices for production deployments:

  • Resource limits: Always set resource requests and limits in pod specifications to prevent resource starvation.
  • Pod Disruption Budgets: Configure PDBs for critical workloads to ensure availability during maintenance (see the sketch after this list).
  • Multiple workers: Deploy at least 2-3 worker nodes for high availability.
  • Regular updates: Keep both the OS and K3s version up to date, but test updates in non-production first.
  • Backup strategy: While workers are generally stateless, ensure your K3s server is backed up regularly.
  • Monitoring integration: Deploy Prometheus and Grafana for comprehensive cluster monitoring.
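
As an example of the PDB point above, a minimal PodDisruptionBudget that keeps at least one replica of a hypothetical my-app running during node drains:

pdb-example.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: my-app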
ℹ️
K3s worker nodes are designed to be cattle, not pets. They should be easily replaceable. Document your configuration process (like this guide) so new workers can be added quickly.
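
In that spirit, decommissioning a worker and replacing it is a short procedure. A sketch; the uninstall script is created by the K3s installer alongside the agent:

replace-worker.sh
# On the server: drain the node, then remove it from the cluster
sudo k3s kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
sudo k3s kubectl delete node <node-name>

# On the old worker: remove K3s and its data entirely
sudo /usr/local/bin/k3s-agent-uninstall.sh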

Troubleshooting common issues:

  • Node shows NotReady: Check sudo journalctl -u k3s-agent for errors. Verify network connectivity to server and firewall rules.
  • Pods stuck in Pending: Check node resources with kubectl describe node. May indicate insufficient CPU/memory.
  • Network issues between pods: Verify Flannel VXLAN port (8472/udp) is open and not blocked by firewall.
  • Agent fails to start: Verify the token is correct and K3S_URL points to the right server; both values are recorded in /etc/systemd/system/k3s-agent.service.env on the worker (see the triage sketch below).
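
A combined triage sketch covering the checks above (the server IP is an assumption; adjust it):

triage.sh
# Recent errors from the agent
sudo journalctl -u k3s-agent -n 100 --no-pager | grep -iE 'error|fail'

# Is the server API reachable from this worker?
timeout 3 bash -c "cat < /dev/null > /dev/tcp/192.168.1.100/6443" \
  && echo "6443 reachable" || echo "6443 NOT reachable"

# Are the K3s firewall rules in place?
sudo ufw status | grep -E '10250|8472'

# What did the installer record for the server URL and token?
sudo cat /etc/systemd/system/k3s-agent.service.env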

Changelog

2026-01-19 · v1.0

Initial release documenting K3s worker node setup on Ubuntu Server 24.04 LTS (x86_64).

Filed under: K3s, Kubernetes, Ubuntu Server, Container Orchestration, x86_64, Worker Node, Infrastructure, DevOps, Production

Last updated: 2026-01-19