
Orrery Stack
A Celestial ARM64 GitOps Infrastructure Laboratory.
Internal Designation: Cassini Cluster
Originally conceived as a lightweight homelab cluster, the Orrery Stack evolved into a fully GitOps-driven, multi-node ARM64 Kubernetes platform. It integrates private PKI via step-ca, encrypted secret management with KSOPS, advanced observability through Prometheus and Grafana, and multi-architecture CI/CD pipelines. The system is designed to be fully reproducible from Git and serves as both a production environment and an R&D laboratory for infrastructure innovation.
Project Milestones
12/15/2025
Project Inception
Initial Raspberry Pi cluster assembled and baseline K3s installed.
12/27/2025
GitOps Migration
Argo CD app-of-apps pattern implemented for declarative cluster control.
12/28/2025
Private PKI Integration
Integrated step-ca and cert-manager for internal certificate authority automation.
1/7/2026
Full Observability Stack
Prometheus and Grafana integrated with structured telemetry dashboards.
Project Strategy
Key Decisions
- Selected K3s over kubeadm for ARM64 optimization and lightweight control plane.
- Adopted Argo CD app-of-apps pattern for declarative cluster management.
- Integrated KSOPS for encrypted secret management in Git.
- Designed private PKI using step-ca to avoid external certificate dependency.
- Implemented multi-architecture CI pipelines for ARM64 and x86 compatibility.
Challenges Solved
- Bootstrapping GitOps without circular dependency on Argo self-management.
- Handling multi-arch container builds across ARM64 Raspberry Pi nodes.
- Designing internal certificate authority with automated issuance.
- Ensuring reproducible cluster rebuild from Git alone.
Future Plans
- Introduce automated disaster recovery cluster restore procedures.
- Expand to hybrid ARM64/x86 edge topology.
- Integrate advanced telemetry agent for node-level intelligence.
- Experiment with self-hosted AI inference workloads.
Control Plane Setup
The following setup guide is now part of the Orrery project lifecycle documentation.
Standing up a fresh K3s control plane with Argo CD and essential cluster services, from bare metal to a fully GitOps-managed environment.
Prerequisites
This guide assumes you are starting from a freshly installed and hardened Linux server. The control plane node is where K3s will run in server mode and where Argo CD will be deployed to manage your cluster's desired state via GitOps.
- Host OS: Linux, Ubuntu Server 20.04 LTS or later recommended (x86_64 or aarch64)
- Hardware: Minimum 2 CPU cores and 2 GB RAM for the control plane node (4 GB+ recommended when co-hosting workloads)
- Networking: Static IP address (or reserved DHCP lease) and outbound internet access for pulling container images and manifests
- DNS: A wildcard DNS entry (e.g. *.k8s.example.com) pointing to the control plane node's IP simplifies Ingress setup later on
- SSH access: Key-based SSH access with a user that has sudo privileges
- Git repository: A remote Git repository (GitHub, GitLab, or Azure DevOps) that Argo CD will use as its source of truth for cluster state
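The checks above can be scripted before installing anything. A minimal preflight sketch; the thresholds mirror the prerequisites list and can be adjusted:

```shell
#!/bin/sh
# Preflight check for a prospective control plane node (Linux only).
set -u

# Architecture must be one of the supported pair
arch=$(uname -m)
case "$arch" in
  x86_64|aarch64) echo "OK: supported architecture ($arch)" ;;
  *)              echo "FAIL: unsupported architecture ($arch)" ;;
esac

# At least 2 CPU cores
cores=$(nproc)
if [ "$cores" -ge 2 ]; then echo "OK: $cores CPU cores"; else echo "WARN: fewer than 2 CPU cores"; fi

# MemTotal is reported in kB; 2 GB is roughly 2,000,000 kB
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
if [ "$mem_kb" -ge 2000000 ]; then echo "OK: ${mem_kb} kB RAM"; else echo "WARN: less than ~2 GB RAM"; fi

# Outbound access to the K3s installer endpoint
if curl -sfI https://get.k3s.io >/dev/null 2>&1; then
  echo "OK: outbound HTTPS reachable"
else
  echo "WARN: cannot reach get.k3s.io"
fi
```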
Installing K3s
K3s is installed using the official installation script provided by Rancher. The script downloads the K3s binary, sets up the systemd service, and starts the K3s server process. On a control plane node this runs K3s in server mode, which includes both the Kubernetes API server and an embedded containerd runtime.
Run the following on the control plane node to install the latest stable release:
curl -sfL https://get.k3s.io | sh -

To pin a specific version (recommended for production), set the INSTALL_K3S_VERSION environment variable before running the installer:

curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.31.4+k3s1" sh -

Once the installation completes, verify that the K3s service is active:
# Check the K3s service status
systemctl status k3s
# Confirm the node is in a Ready state
sudo k3s kubectl get nodes

The K3s binary is installed at /usr/local/bin/k3s, and a kubeconfig file is automatically created at /etc/rancher/k3s/k3s.yaml. This kubeconfig is restricted to root by default, so you will need to set the KUBECONFIG environment variable or copy the file to make kubectl work from your shell user.
# Quick method โ export for the current session
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
# Verify access
kubectl get nodes

For a persistent setup, copy the kubeconfig to your user's home directory so that kubectl works without elevated privileges:
# Create the .kube directory if it doesn't exist
mkdir -p $HOME/.kube
# Copy the K3s kubeconfig
sudo cp /etc/rancher/k3s/k3s.yaml $HOME/.kube/config
# Fix ownership
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Persist the KUBECONFIG variable in your shell profile
echo 'export KUBECONFIG=$HOME/.kube/config' >> ~/.bashrc
source ~/.bashrc

Keep the permissions on the copied kubeconfig restricted to 600 and never commit it to source control.

If you want a dedicated service account for day-to-day administration rather than using the default admin credentials, create a separate user on the host and copy the kubeconfig across:
# Create a dedicated K3s admin user
sudo adduser k3s-admin
# Grant sudo privileges
sudo usermod -aG sudo k3s-admin
# Switch to the new user
su - k3s-admin
# Set up kubeconfig for this user
mkdir -p $HOME/.kube
sudo cp /etc/rancher/k3s/k3s.yaml $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Add to shell profile
echo 'export KUBECONFIG=$HOME/.kube/config' >> ~/.bashrc
source ~/.bashrc
# Verify
kubectl get nodes

At this point you should have a single-node K3s cluster in a Ready state with full kubectl access from your shell user.
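Although this guide focuses on a single control plane node, additional worker nodes (as in the multi-node Orrery cluster) can be joined later with the agent installer. A sketch, assuming the control plane is reachable at 192.168.1.10; substitute your own IP and token:

```shell
# On the control plane node, read the cluster join token
sudo cat /var/lib/rancher/k3s/server/node-token

# On each additional node, install K3s in agent mode,
# pointing it at the control plane's API server
curl -sfL https://get.k3s.io | \
  K3S_URL="https://192.168.1.10:6443" \
  K3S_TOKEN="<token-from-above>" \
  sh -
```

After a minute or two the new node should appear in `kubectl get nodes` on the control plane.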
Bootstrapping Argo CD
Argo CD is a declarative GitOps continuous delivery tool for Kubernetes. It monitors one or more Git repositories and automatically reconciles the live cluster state to match the desired state defined in those repositories. Deploying Argo CD onto the fresh K3s control plane is the foundation for managing every subsequent cluster service through Git.
Start by creating a dedicated namespace for Argo CD:
kubectl create namespace argocd

Next, apply the official Argo CD installation manifests. The --server-side and --force-conflicts flags are required because several Argo CD CRDs exceed the annotation size limit for client-side apply:
kubectl apply -n argocd \
--server-side \
--force-conflicts \
-f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

This URL tracks the stable branch. For production deployments, pin to a specific release tag (e.g. v2.14.3) to avoid unexpected changes. The installation manifests include ClusterRoleBinding resources that reference the argocd namespace; if you install into a different namespace you must update those references.

Wait for all pods in the argocd namespace to reach a Running state before continuing:
kubectl -n argocd get pods -w

By default the argocd-server service is configured as ClusterIP, which is only reachable from inside the cluster. To access the Argo CD UI over your LAN during initial setup, temporarily patch the service to NodePort:
# Patch the service type
kubectl -n argocd patch svc argocd-server \
-p '{"spec": {"type": "NodePort"}}'
# View the assigned port
kubectl -n argocd get svc argocd-server

The output will show a high-numbered port (e.g. 30080) mapped to ports 80 and 443. Navigate to https://<node-ip>:<node-port> in your browser to reach the Argo CD UI. You will see a certificate warning because the default installation uses a self-signed TLS certificate; this is expected and can be bypassed for initial setup.
Retrieve the auto-generated admin password to log in:
kubectl -n argocd get secret argocd-initial-admin-secret \
-o jsonpath="{.data.password}" | base64 -d && echo

Log in with the username admin and the password returned above. It is strongly recommended to change this password or configure SSO before exposing Argo CD outside your local network.
Once Ingress is configured, switch the argocd-server service back to ClusterIP and route traffic through Ingress instead. The NodePort exposure is a temporary bootstrapping convenience only.

Creating a Project
Projects in Argo CD provide logical grouping and access control for applications. Create a dedicated project (e.g. infra) to manage your infrastructure applications separately from workload deployments. This can be done from the Argo CD UI under Settings → Projects → New Project.
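In keeping with the GitOps approach, the same project can also be created declaratively instead of through the UI. A sketch of an equivalent AppProject manifest; the broad wildcards are an assumption and should be tightened to match the permissions configured below:

```shell
# Declarative alternative to the UI steps above.
# The wildcards are permissive placeholders; restrict them for production.
kubectl apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: infra
  namespace: argocd
spec:
  description: Infrastructure applications
  sourceRepos:
    - '*'                # tighten to your repository URL once connected
  destinations:
    - server: https://kubernetes.default.svc
      namespace: '*'     # any namespace on the local cluster
  clusterResourceWhitelist:
    - group: '*'
      kind: '*'          # narrow to the allow list shown later
EOF
```

Keeping the AppProject in Git alongside the other manifests makes the project definition itself reproducible.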
Connecting a Git Repository
Argo CD needs access to the Git repository that holds your cluster manifests. Navigate to Settings → Repositories → Connect Repo in the UI and add your repository using either SSH or HTTPS authentication.
- SSH: Provide the SSH URL (e.g. git@ssh.dev.azure.com:v3/org/project/repo) and paste the contents of your private key. The corresponding public key must be added to the Git provider's deploy keys.
- HTTPS with PAT: Provide the HTTPS URL (e.g. https://dev.azure.com/org/project/_git/repo) and enter your Personal Access Token in the password field. This method is often simpler and avoids SSH key format compatibility issues.
Note that Azure DevOps does not support ed25519 SSH keys. If connecting via SSH to Azure DevOps, generate an RSA key pair instead (ssh-keygen -t rsa -b 4096). For most setups, HTTPS with a PAT is the simpler and more reliable approach.

Configuring Project Permissions
Once the repository is connected, go back to the project and configure the following scopes so Argo CD is permitted to deploy resources into your cluster:
- Source Repositories: Add the newly connected repository URL.
- Destinations: Add a destination with Server URL https://kubernetes.default.svc, Name *, and Namespace *. This permits Argo CD to deploy into any namespace on the local cluster.
- Namespace Resource Allow List: Set to * so Argo CD can manage resources across all namespaces.
You should also configure the Cluster Resource Allow List so Argo CD can manage cluster-scoped resources that infrastructure services commonly require:
# Group                         Kind
""                              Namespace
apps                            Deployment
""                              Service
networking.k8s.io               Ingress
traefik.io                      IngressRoute
apiextensions.k8s.io            CustomResourceDefinition
rbac.authorization.k8s.io       ClusterRole
rbac.authorization.k8s.io       ClusterRoleBinding
admissionregistration.k8s.io    MutatingWebhookConfiguration
admissionregistration.k8s.io    ValidatingWebhookConfiguration
apiregistration.k8s.io          APIService

Extend this allow list with any CRD-backed kinds your services introduce (e.g. Certificate from cert-manager or HelmRelease from Flux).

GitOps Repository Structure
A well-structured GitOps repository separates application manifests from infrastructure manifests and uses per-cluster overlays to manage environment-specific differences. A common layout using Kustomize looks like this:
├── apps/
│   ├── base/
│   │   ├── kustomization.yaml
│   │   ├── namespace.yaml
│   │   ├── deployment.yaml
│   │   └── service.yaml
│   └── overlays/
│       ├── staging/
│       │   ├── kustomization.yaml
│       │   └── patch.yaml
│       └── production/
│           ├── kustomization.yaml
│           └── patch.yaml
├── infrastructure/
│   ├── base/
│   │   ├── kustomization.yaml
│   │   ├── namespace.yaml
│   │   ├── ingress.yaml
│   │   └── certmanager.yaml
│   └── overlays/
│       ├── staging/
│       │   ├── kustomization.yaml
│       │   └── patch.yaml
│       └── production/
│           ├── kustomization.yaml
│           └── patch.yaml
└── clusters/
    ├── dev-cluster/
    │   ├── root-app.yaml
    │   └── apps/
    └── staging-cluster/
        ├── root-app.yaml
        └── apps/

The clusters/ directory contains per-cluster entry points. Each cluster has a root-app.yaml that acts as an app-of-apps: a single Argo CD Application resource that points to a directory of child application manifests, allowing Argo CD to recursively discover and deploy everything the cluster needs.
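To make the overlay mechanics concrete, a hypothetical apps/overlays/staging/kustomization.yaml in this layout might look like the following; file names are illustrative and the heredoc is only for demonstration (in practice the file lives in Git):

```shell
# Sketch of a staging overlay that composes the base above
mkdir -p apps/overlays/staging
cat > apps/overlays/staging/kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base            # inherits namespace, deployment, service
patches:
  - path: patch.yaml      # staging-specific overrides, e.g. replica count
EOF
```

The production overlay follows the same shape with its own patch.yaml.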
Deploying the Root Application
The root application is the single manifest you apply manually to bootstrap the entire GitOps pipeline. Argo CD will then take ownership of deploying everything else automatically by reading from the Git repository.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: infra-root
  namespace: argocd
spec:
  project: infra
  source:
    repoURL: git@ssh.dev.azure.com:v3/your-org/your-project/your-repo
    targetRevision: main
    path: clusters/dev-cluster/apps
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    syncOptions:
      - ApplyOutOfSyncOnly=true

Make sure this file has been pushed to the connected Git repository, then apply it from the control plane node:
kubectl apply -n argocd -f clusters/dev-cluster/root-app.yaml

You do not need the whole repository checked out on the node; you can copy just the root-app.yaml file and apply it directly, since Argo CD will pull the full manifest tree from the remote repository when it performs its first sync.

After applying, the root application should appear in the Argo CD UI. Once it syncs, all child applications defined under clusters/dev-cluster/apps/ will be created and reconciled automatically. From this point forward, any change pushed to the Git repository will be detected and applied by Argo CD without manual intervention.
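For completeness, each manifest under clusters/dev-cluster/apps/ is itself a small Application resource. A hypothetical child app for an infrastructure service; the name, path, and repo URL are illustrative:

```shell
# Illustrative child Application that the root app would discover.
# In practice this file is committed to clusters/dev-cluster/apps/ in Git.
cat > cert-manager-app.yaml <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cert-manager
  namespace: argocd
spec:
  project: infra
  source:
    repoURL: git@ssh.dev.azure.com:v3/your-org/your-project/your-repo
    targetRevision: main
    path: infrastructure/overlays/production   # illustrative path
  destination:
    server: https://kubernetes.default.svc
    namespace: cert-manager
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert out-of-band changes to match Git
EOF
```

Automated sync with prune and selfHeal is a common choice for infrastructure apps, but it is optional; omit the automated block to keep syncs manual.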
Configuring KSOPS for Encrypted Secrets
KSOPS is a Kustomize plugin that integrates SOPS (Secrets Operations) with Kustomize and Argo CD, enabling you to store encrypted secrets directly in your Git repository. This is essential for managing sensitive data (TLS certificates, database credentials, API keys, and other secrets) in a GitOps workflow without exposing plaintext values.
KSOPS is configured as an infrastructure plugin rather than a standalone service. It works by intercepting Kustomize build operations in Argo CD, decrypting secrets on the fly, and injecting them into your manifests at sync time. This requires modifications to the Argo CD configuration and deployment.
Step 1: Enable Argo CD Kustomize Plugins
First, patch the argocd-cm ConfigMap to enable Argo CD alpha plugins, which are required for KSOPS to function:
kubectl -n argocd patch configmap argocd-cm --type merge \
  -p '{
    "data": {
      "kustomize.buildOptions.v5.8.1": "--enable-alpha-plugins --enable-exec --load-restrictor=LoadRestrictionsNone",
      "kustomize.path.v5.8.1": "/custom-tools/kustomize"
    }
  }'

The kustomize.buildOptions.v5.8.1 key specifies flags for Kustomize v5.8.1: --enable-alpha-plugins enables custom Kustomize plugins, --enable-exec allows the exec function used by KSOPS to run external binaries, and --load-restrictor=LoadRestrictionsNone permits loading resources from outside the kustomization root directory. The kustomize.path.v5.8.1 key explicitly points Argo CD to the custom Kustomize binary installed in /custom-tools by the init container.

Step 2: Generate and Store Age Keys
KSOPS uses the Age encryption tool to encrypt and decrypt secrets. The Age private key must be stored in a Kubernetes secret before you patch the argocd-repo-server Deployment, since the deployment patch references this secret in its volume definition. Generate an Age key pair on your local machine (or a secure system):
# Install Age if not already present
sudo apt update && sudo apt install -y age
# Generate a new Age key pair
age-keygen -o age.agekey
# Display the public key for reference
cat age.agekey | grep "public key"
# Example output:
# public key: age1xxx...

The age.agekey file is sensitive. Store it securely and never commit it to Git. The public key can be shared freely; it is used only to encrypt secrets for that recipient.

Create a Kubernetes secret in the argocd namespace to store the private key:
kubectl -n argocd create secret generic sops-age \
--from-file=keys.txt=age.agekey

Step 3: Inject KSOPS into the Repo Server
Now that the sops-age secret exists, you can safely patch the argocd-repo-server Deployment. KSOPS must be available as a binary inside the pod. This is done by adding an init container that downloads the KSOPS binary and places it in a shared volume. Create a patch file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-repo-server
  namespace: argocd
spec:
  template:
    spec:
      initContainers:
        - name: install-ksops
          image: alpine:3.19
          command:
            - /bin/sh
            - -c
          args:
            - |
              echo "Installing KSOPS and Kustomize..."
              apk add --no-cache curl tar
              wget -O- https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2Fv5.8.1/kustomize_v5.8.1_linux_amd64.tar.gz | tar xz -C /custom-tools
              wget -O- https://github.com/viaduct-ai/kustomize-sops/releases/download/v4.4.0/ksops_4.4.0_Linux_x86_64.tar.gz | tar xz -C /custom-tools
              chmod +x /custom-tools/ksops
              mkdir -p /custom-tools/plugin/viaduct.ai/v1/ksops
              cp /custom-tools/ksops /custom-tools/plugin/viaduct.ai/v1/ksops/ksops
              chmod +x /custom-tools/plugin/viaduct.ai/v1/ksops/ksops
              echo "Done."
          volumeMounts:
            - mountPath: /custom-tools
              name: custom-tools
      containers:
        - name: argocd-repo-server
          env:
            - name: PATH
              value: /custom-tools:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
            - name: XDG_CONFIG_HOME
              value: /home/argocd/.config
            - name: KUSTOMIZE_PLUGIN_HOME
              value: /custom-tools/plugin
            - name: SOPS_AGE_KEY_FILE
              value: /home/argocd/.config/sops/age/keys.txt
          volumeMounts:
            - mountPath: /custom-tools
              name: custom-tools
            - mountPath: /home/argocd/.config/sops/age
              name: sops-age
              readOnly: true
      volumes:
        - name: custom-tools
          emptyDir: {}
        - name: sops-age
          secret:
            secretName: sops-age

Apply the patch to the argocd-repo-server deployment:
kubectl -n argocd apply -f argocd-repo-server-patch.yaml

Applying the patch automatically triggers a rollout of the argocd-repo-server Deployment. Kubernetes detects the changes to the pod template spec (init containers, volumes, and volume mounts) and creates a new ReplicaSet, gradually replacing old pods with new ones that include the KSOPS binary and Age secret volume. Wait for the rollout to complete before proceeding:
kubectl -n argocd rollout status deployment/argocd-repo-server

Once the rollout completes successfully, all argocd-repo-server pods will have the KSOPS binary available and the Age secret mounted at /home/argocd/.config/sops/age/keys.txt.
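To double-check the injection from outside the cluster, you can exec into the repo server; the paths below match the deployment patch above:

```shell
# Confirm the KSOPS and Kustomize binaries landed in the shared volume
kubectl -n argocd exec deploy/argocd-repo-server -- ls /custom-tools/

# Confirm the Age key is mounted where SOPS_AGE_KEY_FILE points
kubectl -n argocd exec deploy/argocd-repo-server -- \
  ls /home/argocd/.config/sops/age/keys.txt
```

Both commands should succeed; a "No such file or directory" error here means the init container or volume mounts did not apply as expected.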
As an alternative to downloading from GitHub releases, the viaductoss/ksops image contains both ksops and kustomize binaries. Pinning to a specific version tag (like v4.4.0) ensures reproducible deployments. Check the KSOPS Docker Hub page for available versions.

Step 4: Installing SOPS and Age Locally
Before encrypting secrets on your local workstation, you need to install both sops (Secrets Operations) and age (the encryption tool). These tools allow you to encrypt and decrypt secrets outside the cluster before committing them to Git.
On Ubuntu/Debian:
# Install Age
sudo apt update && sudo apt install -y age
# Install SOPS
# Download the latest release from GitHub
SOPS_VERSION="3.8.1" # Check https://github.com/getsops/sops/releases for the latest version
wget https://github.com/getsops/sops/releases/download/v${SOPS_VERSION}/sops-v${SOPS_VERSION}.linux.amd64 -O sops
chmod +x sops
sudo mv sops /usr/local/bin/
# Verify installation
sops --version
age --version

On macOS (using Homebrew):
# Install both Age and SOPS
brew install sops age
# Verify installation
sops --version
age --versionVerification:
Once installed, verify that both tools are available in your PATH:
which sops
which age
# Both commands should output the paths to the binaries

If either command returns nothing, make sure the install location (e.g. /usr/local/bin) is on your PATH. Similarly, Age binaries are available on the Age GitHub releases page.

Step 5: Encrypting Secrets Locally
To encrypt secrets using SOPS and Age, create a .sops.yaml configuration file in the directory where you will create and encrypt secrets. This file defines encryption rules, specifies which fields to encrypt, and identifies the Age public key to use. While SOPS_AGE_KEY_FILE can be set as an environment variable, it is primarily used for decryption purposes. For encryption, the .sops.yaml file is the recommended approach.
# Extract the Age public key from your key file
grep "public key:" age.agekey
# Example output:
# public key: age1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Create a .sops.yaml configuration file
cat > .sops.yaml << EOF
creation_rules:
- path_regex: .*\.yaml$
encrypted_regex: '^(data|stringData)$'
age: age1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
EOF
# Create a plaintext secret YAML file
cat > my-secret.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
name: my-database-secret
namespace: default
type: Opaque
stringData:
username: admin
password: super-secret-password
connection-string: postgresql://admin:super-secret-password@db.example.com:5432/mydb
EOF
# Encrypt the file (SOPS reads .sops.yaml automatically)
sops -e my-secret.yaml > my-secret.enc.yaml
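# Optional sanity check (assumes age.agekey is in the current directory):
# decrypting with the private key should reproduce the plaintext above
export SOPS_AGE_KEY_FILE="$PWD/age.agekey"
sops -d my-secret.enc.yaml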
# Display the encrypted result (it should be unreadable)
cat my-secret.enc.yaml

The .sops.yaml configuration defines encryption behavior: path_regex specifies which files to encrypt (e.g., .*\.yaml$ matches all YAML files), encrypted_regex defines which fields to encrypt (e.g., ^(data|stringData)$ encrypts only the data and stringData sections of Kubernetes secrets), and age specifies the Age public key used for encryption.

Always encrypt with sops -e and never commit plaintext secret files to Git. Commit only the .enc.yaml versions. The plaintext files should be kept locally and excluded via .gitignore. The .sops.yaml file contains only the public key and can be safely committed to Git.

Step 6: Integrating KSOPS into Kustomize
KSOPS works as a Kustomize generator. Create a generator configuration file that references your encrypted secrets:
apiVersion: viaduct.ai/v1
kind: ksops
metadata:
  name: my-secret-generator
  annotations:
    config.kubernetes.io/function: |
      exec:
        path: ksops
files:
  - my-secret.enc.yaml

Then reference this generator in your kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# Include any base resources
resources:
  - some-deployment.yaml

# Reference the KSOPS generator to decrypt and inject secrets
generators:
  - ksopsconfig.yaml

# Apply any patches or overlays as needed
patchesStrategicMerge:
  - deployment-patch.yaml

Step 7: Deploying via Argo CD
Once your encrypted secrets and Kustomize configuration are in your Git repository, create an Argo CD Application resource that points to the directory containing your kustomization.yaml:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app-with-secrets
  namespace: argocd
spec:
  project: infra
  source:
    repoURL: git@ssh.dev.azure.com:v3/your-org/your-project/your-repo
    targetRevision: main
    path: apps/my-app
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    syncOptions:
      - ApplyOutOfSyncOnly=true

When Argo CD syncs this application, it will call Kustomize to build the manifests, Kustomize will invoke KSOPS to decrypt my-secret.enc.yaml, KSOPS will use the Age private key from the sops-age secret, and the final plaintext secret will be applied to the cluster. The encrypted file remains safely stored in Git.
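If a sync fails at this stage, it helps to reproduce the exact build locally. This requires kustomize and ksops on your workstation plus the Age private key; the key path and plugin location below are assumptions mirroring the repo-server setup:

```shell
# Use the same private key the repo server mounts
export SOPS_AGE_KEY_FILE="$PWD/age.agekey"

# Kustomize discovers KSOPS under:
#   $KUSTOMIZE_PLUGIN_HOME/viaduct.ai/v1/ksops/ksops
export KUSTOMIZE_PLUGIN_HOME="$HOME/.config/kustomize/plugin"

# Build with the same flags Argo CD was configured with.
# WARNING: the output contains decrypted secrets in plaintext;
# never redirect it into a file tracked by Git.
kustomize build \
  --enable-alpha-plugins \
  --enable-exec \
  --load-restrictor=LoadRestrictionsNone \
  apps/my-app
```

If this build succeeds locally but fails in Argo CD, the difference is usually the Age key mount or the argocd-cm build options.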
Adding More Age Recipients
If multiple team members or CI/CD systems need to encrypt secrets, generate separate Age key pairs for each and create a .sops.yaml file in your repository to specify all recipients:
creation_rules:
  - path_regex: \.enc\.yaml$
    age: >-
      age1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,
      age1yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy,
      age1zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz

Note that SOPS expects multiple Age recipients as a comma-separated list, hence the folded block with trailing commas. Then encrypt with sops -e; SOPS will automatically encrypt for all recipients listed in .sops.yaml, allowing any of them to decrypt the file.
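When recipients are added or removed later, files that were already encrypted must be re-encrypted for the new set. SOPS provides an updatekeys subcommand for exactly this:

```shell
# Re-encrypt a single file for the recipients currently listed in .sops.yaml
sops updatekeys my-secret.enc.yaml

# Batch variant for every encrypted file in the repo (-y skips confirmation)
find . -name '*.enc.yaml' -exec sops updatekeys -y {} \;
```

Commit the re-encrypted files afterwards so every recipient can decrypt the current state.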
Verification
Steps to verify the control plane, Argo CD, and services are running correctly.
Check cluster health:
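Pending the project's own verification checklist, a minimal health sweep covering the components installed above might look like this (application and certificate names will vary per cluster):

```shell
# Node and workload health; anything not Running deserves a look
kubectl get nodes
kubectl get pods -A | grep -v Running

# Argo CD health and application sync status
kubectl -n argocd get pods
kubectl -n argocd get applications.argoproj.io

# Certificate issuance, if cert-manager / step-ca are deployed
kubectl get certificates -A
```

All applications should report Synced and Healthy once the root app has reconciled.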
# TODO: Add verification commands

Troubleshooting
Common issues encountered while setting up and configuring KSOPS with Argo CD. Each issue below includes identification steps, root cause analysis, and remediation.
Issue 1: Deployment Patch Applied Before the sops-age Secret Exists
If you patch the argocd-repo-server Deployment before creating the sops-age secret, the new pod will fail to start. The deployment will get stuck trying to mount a non-existent secret volume, and the old pods will continue running to avoid a complete outage.
Identification
Check the rollout status:
kubectl -n argocd rollout status deployment/argocd-repo-server

The status will show waiting for rollout to finish... indefinitely. List pods to see a mix of old and new:
kubectl -n argocd get pods -l app.kubernetes.io/name=argocd-repo-server
# Old pod (still running):
# argocd-repo-server-5d8f4c9b6f-abcde 1/1 Running 0 30m
# New pod (stuck in init):
# argocd-repo-server-7c9d2e3f1a-wxyz   0/2   Init:0/1   0   5m

Root Cause
Find the stuck pod and describe it to see the mount error:
kubectl -n argocd describe pod <stuck-pod-name>
# Look for a FailedMount event:
# Warning  FailedMount  50s  kubelet  MountVolume.SetUp failed for volume "sops-age" : secret "sops-age" not found

Remedy
Create the sops-age secret by following Step 2:
# Generate the Age key locally (if not already done)
age-keygen -o age.agekey
# Create the Kubernetes secret
kubectl -n argocd create secret generic sops-age \
  --from-file=keys.txt=age.agekey

Then restart the rollout:
kubectl -n argocd rollout restart deployment/argocd-repo-server
kubectl -n argocd rollout status deployment/argocd-repo-server

Always create the sops-age secret before patching the Deployment. The patch references this secret in its volume definition.

Issue 2: exec: "/bin/sh": stat /bin/sh: no such file or directory
If you use the viaductoss/ksops image as the init container with a shell command like ["/bin/sh", "-c"], you may see:
exec: "/bin/sh": stat /bin/sh: no such file or directory

Root Cause
The viaductoss/ksops image is distroless โ it contains only KSOPS binaries with no shell, package managers, or utilities. You cannot execute shell commands in a distroless image. The official KSOPS documentation incorrectly recommends this approach, leading to this error.
Remedy
Use alpine:3.19 as the init container image instead. Alpine has /bin/sh and package management, allowing you to run the init script. The deployment patch in Step 3 uses Alpine and downloads KSOPS and Kustomize binaries from GitHub releases.
If you already applied the viaductoss patch, update it:
# Edit the deployment to change the init container image
kubectl -n argocd set image deployment/argocd-repo-server \
  install-ksops=alpine:3.19
# Then apply the updated init script (Step 3 patch)

Issue 3: Init Container Failed Due to Incorrect wget URLs
The install-ksops init container downloads KSOPS and Kustomize from GitHub release URLs. If a URL is typed incorrectly or a release version doesn't exist, the init container will fail silently and the pod will never complete initialization.
Identification
Check the pod status. If it shows Init:0/1 and doesn't progress, the init container may have exited with an error. List the pods:
kubectl -n argocd get pods -l app.kubernetes.io/name=argocd-repo-server
# Look for a stuck pod:
# argocd-repo-server-abc123-xyz   0/2   Init:0/1   0   10m

Root Cause
Use kubectl logs to inspect the init container output:
kubectl -n argocd logs deploy/argocd-repo-server -c install-ksops
# Example error:
# wget: bad status for URL (404 not found)
# This indicates the URL is incorrect or the release doesn't exist

Common causes:
- Typo in the GitHub release URL (e.g. ksops vs kustomize-sops)
- Version tag doesn't exist (e.g. v4.5.0 when only v4.4.0 is released)
- Incorrect asset filename (e.g. _linux_x86_64 vs _Linux_x86_64)
- Network connectivity issue preventing GitHub access
Remedy
Verify the URLs are correct by visiting them in a browser or using curl:
# Check Kustomize release URL
curl -I https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2Fv5.8.1/kustomize_v5.8.1_linux_amd64.tar.gz
# Check KSOPS release URL
curl -I https://github.com/viaduct-ai/kustomize-sops/releases/download/v4.4.0/ksops_4.4.0_Linux_x86_64.tar.gz
# Should return 200 OK if URLs are valid

Update the deployment patch with the correct URLs, then restart:
# Re-apply the corrected patch
kubectl -n argocd apply -f argocd-repo-server-patch.yaml
# Monitor the rollout
kubectl -n argocd rollout status deployment/argocd-repo-server
# If still stuck, manually restart
kubectl -n argocd rollout restart deployment/argocd-repo-server

For reference, the correct URLs in Step 3 are:
# Kustomize v5.8.1
https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2Fv5.8.1/kustomize_v5.8.1_linux_amd64.tar.gz
# KSOPS v4.4.0
https://github.com/viaduct-ai/kustomize-sops/releases/download/v4.4.0/ksops_4.4.0_Linux_x86_64.tar.gz

When an init container fails, always check its logs with kubectl logs deploy/argocd-repo-server -c install-ksops. This is the most reliable way to diagnose download errors, URL issues, and other init failures.