Patching Helm-Deployed Workloads with kubectl
Helm is the de-facto package manager for Kubernetes, but helm upgrade isn't always the fastest way to make a change. For urgent scenarios - like updating a container image, tweaking environment variables, or debugging workloads - you can apply temporary patches directly using kubectl.
This entry explores how to patch Helm-deployed workloads pragmatically, while keeping Helm's release state intact.
Why Patch Instead of Upgrade?
While helm upgrade is the canonical way to update a release, there are scenarios where a direct patch via kubectl is preferable:
- Rapid debugging: When a rollout is failing and you want to experiment quickly.
- Hotfixes: Apply a critical environment variable, secret, or image update without waiting for chart updates.
- Cluster survival: Keep agents or workloads alive during migration or repository/image pull issues.
- Ephemeral changes: Changes that don't necessarily need to be persisted in the chart's values.yaml.
Core Concepts
- Deployments: The Kubernetes object Helm most often manages. Patching usually targets them.
- kubectl patch: Applies a JSON patch, JSON merge patch, or strategic merge patch to an existing object (see the sketch after this list).
- Rollouts: After patching, Kubernetes performs a rolling update to bring new replicas online.
- Drift awareness: The next helm upgrade will overwrite manual patches unless values.yaml is updated to match.
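To make the patch types concrete, here's a minimal sketch of a strategic merge patch (the default --type) that sets an environment variable on the same deployment. The container name agent and the LOG_LEVEL variable are illustrative assumptions - match them to your chart.
# Strategic merge patch: set an env var; list items are matched on the container "name"
# NOTE: the container name "agent" and the LOG_LEVEL variable are assumptions for illustration
kubectl -n default patch deployment az-agent-blue-agent \
  --type='strategic' \
  -p='{"spec":{"template":{"spec":{"containers":[{"name":"agent","env":[{"name":"LOG_LEVEL","value":"debug"}]}]}}}}'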
Basic Usage
kubectl -n default patch deployment az-agent-blue-agent \
  --type='json' \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"myregistry.azurecr.io/azure-pipelines-agent:ubuntu-16.04"}]'
What happens here:
- --type=json: Specifies a JSON patch (more precise than a merge patch).
- op: replace: Overwrites the existing container image.
- The deployment triggers a new rollout with updated pods.
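To confirm the patch landed, a quick rollout check plus a look at the live image field is usually enough (a minimal sketch; the jsonpath assumes the image lives on the first container):
# Wait for the rolling update to finish
kubectl -n default rollout status deploy/az-agent-blue-agent
# Confirm the Deployment now references the patched image (assumes container index 0)
kubectl -n default get deploy az-agent-blue-agent \
  -o jsonpath='{.spec.template.spec.containers[0].image}'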
Scaling for Safe Rollouts
Sometimes a pod replica resists being replaced cleanly (e.g., CrashLoopBackOff). One trick is to temporarily scale the deployment up, let new replicas roll out, and then scale back:
# Scale up to 4
kubectl -n default scale deploy az-agent-blue-agent --replicas=4
# Watch rollout
kubectl -n default rollout status deploy/az-agent-blue-agent
# Scale back down once healthy
kubectl -n default scale deploy az-agent-blue-agent --replicas=3
This gives Kubernetes extra headroom to replace stuck replicas before you trim back down.
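While the extra replicas come up, watching the pods live makes it obvious which ones are stuck. A small sketch - the label selector is an assumption about the chart's labels (here app.kubernetes.io/instance set to the release name), so adjust it to whatever your chart applies:
# Watch pod status during the scale-out (Ctrl+C to stop)
# NOTE: the label selector is an assumed chart label - verify with: kubectl get pods --show-labels
kubectl -n default get pods -l app.kubernetes.io/instance=az-agent-blue -w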
Inspecting Pods Post-Patch
To debug pods after patching:
# Describe pod events (scheduling, image pulls, restarts)
kubectl -n default describe pod <pod-name>
# View logs from the primary container
kubectl -n default logs <pod-name> -c <container-name> --tail=200
# Get container names dynamically
kubectl -n default get pod <pod-name> -o jsonpath='{.spec.containers[*].name}'
Real-World Example: Fixing an Agent Image
We had an Azure DevOps self-hosted agent deployed via a Helm chart.
Problem: The image ubuntu-22.04 wasn't available.
Solution: Patch the deployment to use the working ubuntu-16.04 tag:
kubectl -n default patch deploy az-agent-blue-agent \
  --type='json' \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"acr.azurecr.io/azure-pipelines-agent:ubuntu-16.04"}]'
Result: The deployment rolled out successfully, pulling the valid tag from our private ACR.
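To make the hotfix permanent, we'd back-port the same change into the release, for example with helm upgrade and --reuse-values. The release name az-agent-blue and the image.tag values key are assumptions about this chart - use whatever key your chart actually exposes:
# Reconcile the patched image into the Helm release so the next upgrade doesn't revert it
# NOTE: release name and the "image.tag" key are assumptions - check your chart's values.yaml
helm upgrade az-agent-blue <chart> -n default \
  --reuse-values \
  --set image.tag=ubuntu-16.04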
Best Practices
- Patch for survival, upgrade for consistency: Reconcile changes into Helm values.yaml afterwards.
- Keep replicas healthy: Use rollout status and scaling to stabilize deployments.
- Leverage ACR or private registries: Prevent public pull issues by mirroring images into your own registry.
- Audit drift: Periodically compare helm get values <release> with the output of kubectl get deploy -o yaml to spot mismatches (see the sketch below).
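One way to automate that audit is to diff Helm's rendered manifests against the live objects - a minimal sketch using helm get manifest piped into kubectl diff:
# Render what Helm believes is deployed and diff it against the cluster
# A non-zero exit code (1) means drift was found - e.g., a manual kubectl patch
helm get manifest <release> -n default | kubectl diff -f -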
Conclusion
Patching with kubectl provides agility when Helm upgrades are too heavy or slow. Used wisely, it's a powerful tool for firefighting and hotfixing - but remember: Helm is still the source of truth. Always back-port changes into your chart to ensure long-term consistency.
Filed under: Kubernetes, Helm, kubectl, Patching, DevOps