Debugging Distroless Containers: kubectl debug, Ephemeral Containers, and When to Use Each


The container works fine in CI. It deploys successfully to staging. Then something goes wrong in production and you type the command you always type: kubectl exec -it my-pod -- /bin/bash. The response is immediate: OCI runtime exec failed: exec failed: unable to start container process: exec: "/bin/bash": stat /bin/bash: no such file or directory.

You try /bin/sh. Same error. You try ls. Same error. The container image is distroless — it ships only your application binary and its runtime dependencies, with no shell, no package manager, no debugging tools of any kind. This is intentional and correct from a security standpoint. It is also a significant operational challenge the first time you face it in production.

This article covers every practical technique for debugging distroless containers in Kubernetes: kubectl debug with ephemeral containers (the standard approach), pod copy strategy (for Kubernetes versions without ephemeral container support, or when you need to modify the running pod spec), debug image variants (the pragmatic developer shortcut), cdebug (a purpose-built tool that simplifies the process), and node-level debugging (the last resort with the most power). For each technique I will explain what it can and cannot do, what Kubernetes version or RBAC permissions it requires, and in which scenario — developer in local, platform engineer in staging, ops in production — it is the appropriate choice.

Why Distroless Breaks the Normal Debugging Workflow

Traditional container debugging assumes you can exec into the container and use shell tools: ps, netstat, strace, curl, a text editor. Distroless images remove all of this by design. The Google distroless project, Chainguard’s Wolfi-based images, and the broader minimal image ecosystem deliberately exclude everything that is not required to run the application. The result is a dramatically smaller attack surface: no shell means no RCE via shell injection, no package manager means no easy escalation path, fewer binaries means fewer CVEs in the image scan.

The tradeoff is operational: when something goes wrong, you cannot use the tools that the process itself is not allowed to run. A Java application in gcr.io/distroless/java17-debian12 has the JRE and nothing else. A Go binary compiled with CGO disabled and shipped in gcr.io/distroless/static-debian12 has literally only the binary and the necessary CA certificates and timezone data. There is no wget to download a debug binary, no apt to install one, no bash to run a script.

Kubernetes solves this at the platform level with ephemeral containers, added as stable in Kubernetes 1.25. The principle is that a debug container — which can have a full shell and any tools you want — can be injected into a running pod and share its process namespace, network namespace, and filesystem mounts without modifying the original container or restarting the pod.

Option 1: kubectl debug with Ephemeral Containers

Ephemeral containers are the canonical solution. Since Kubernetes 1.25 (stable), kubectl debug can inject a temporary container into a running pod. The container shares the target pod’s network namespace by default, and with --target it can also share the process namespace of a specific container, allowing you to inspect its running processes and open file descriptors.

The basic invocation is:

kubectl debug -it my-pod \
  --image=busybox:latest \
  --target=my-container

The --target flag is the critical piece. Without it, the ephemeral container gets its own process namespace. With it, it shares the process namespace of the specified container — meaning you can run ps aux and see the application’s processes, use ls -la /proc/<pid>/fd to inspect open file descriptors, and read the application’s environment via cat /proc/<pid>/environ.

For a more capable debug environment, replace busybox with a richer image:

kubectl debug -it my-pod \
  --image=nicolaka/netshoot \
  --target=my-container

nicolaka/netshoot includes tcpdump, curl, dig, nmap, ss, iperf3, and dozens of other network diagnostic tools, making it the standard choice for network debugging scenarios.

What You Can and Cannot Do

Ephemeral containers share the pod’s network namespace and, when --target is used, the process namespace. This gives you:

  • Full visibility into the application’s network traffic from inside the pod (tcpdump, ss, netstat)
  • Process inspection via /proc/<pid> — open files, memory maps, environment variables, CPU/memory usage
  • Access to the pod’s DNS resolution context — exactly the same /etc/resolv.conf the application sees
  • Ability to make outbound network calls from the same network namespace (testing service endpoints, DNS resolution)

What you do not get with ephemeral containers:

  • Access to the application container’s filesystem. The ephemeral container has its own root filesystem. You cannot cat /app/config.yaml from the application container’s filesystem unless you access it via /proc/<pid>/root/.
  • Ability to remove the container once added. Ephemeral containers are permanent until the pod is deleted. This is by design — the Kubernetes API does not allow removing them after creation.
  • Volume mount modifications via CLI. You cannot add volume mounts to an ephemeral container via kubectl debug (the API spec supports it, but the CLI does not expose this; a raw-API workaround is sketched after this list).
  • Resource limits. The API does not allow resource requests and limits on ephemeral containers; they consume spare resources already allocated to the pod.
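
If you do need a pod volume inside a debug container, one workaround is to add the ephemeral container through the raw API instead of kubectl debug. A minimal sketch, assuming the pod runs in the default namespace and already defines a volume named config-volume (both placeholders); this talks to the same pods/ephemeralcontainers subresource that kubectl debug uses, so the RBAC discussed below applies:

# Append an ephemeral container with a volume mount to the live pod object,
# then PUT it back via the ephemeralcontainers subresource
kubectl get pod my-pod -o json \
  | jq '.spec.ephemeralContainers += [{
      "name": "volume-debugger",
      "image": "busybox:latest",
      "command": ["sleep", "3600"],
      "stdin": true,
      "tty": true,
      "volumeMounts": [{"name": "config-volume", "mountPath": "/mnt/config", "readOnly": true}]
    }]' \
  | kubectl replace --raw /api/v1/namespaces/default/pods/my-pod/ephemeralcontainers -f -

# Attach to the new container
kubectl attach -it my-pod -c volume-debugger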

Accessing the Application Filesystem

The most common surprise for developers new to ephemeral containers is that they cannot directly browse the application container’s filesystem. The workaround is the /proc filesystem:

# Find the application's PID
ps aux

# Browse its filesystem via /proc
ls /proc/1/root/app/
cat /proc/1/root/etc/config.yaml

# Or set the root to the application's root
chroot /proc/1/root /bin/sh  # only if /bin/sh exists in the app image

The /proc/<pid>/root path is a symlink to the container’s root filesystem as seen from the process namespace. Because the ephemeral container shares the process namespace with --target, the application’s PID is typically 1, and /proc/1/root gives you full read access to its filesystem.

RBAC Requirements

Ephemeral containers require the pods/ephemeralcontainers subresource permission. This is separate from pods/exec, which controls kubectl exec. A common mistake is to grant pods/exec for debugging purposes without realizing that ephemeral containers require an additional grant:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ephemeral-debugger
rules:
- apiGroups: [""]
  resources: ["pods/ephemeralcontainers"]
  verbs: ["update", "patch"]
- apiGroups: [""]
  resources: ["pods/attach"]
  verbs: ["create", "get"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]

In production environments, this permission should be tightly scoped: time-limited via RoleBinding rather than permanent ClusterRoleBinding, restricted to specific namespaces, and ideally gated behind an approval workflow. The debug container runs as root by default, which can create privilege escalation paths if the application container runs as a non-root user with shared process namespace — the debug container can attach to the application’s processes with higher privileges.

Option 2: kubectl debug --copy-to (Pod Copy Strategy)

When you need to modify the pod’s container spec — replace the image, change environment variables, add a sidecar with a shared filesystem — the --copy-to flag creates a full copy of the pod with your modifications applied:

kubectl debug my-pod \
  -it \
  --copy-to=my-pod-debug \
  --set-image=*=my-app:debug \
  --share-processes

This creates a new pod named my-pod-debug that is a copy of my-pod but with the container image replaced by my-app:debug (note that --set-image modifies the existing containers, whereas --image adds a separate debug container to the copy). If my-app:debug is your application image built with debug tooling included (or a debug variant from your registry), this lets you interact with the exact same binary in the exact same configuration as the original pod.

A more common use of --copy-to is to attach a debug container alongside the existing application container while keeping the original image unchanged:

kubectl debug my-pod \
  -it \
  --copy-to=my-pod-debug \
  --image=busybox \
  --share-processes \
  --container=debugger

This creates the copy-pod with both the original containers and a new debugger container sharing the process namespace. Unlike ephemeral containers, this approach supports volume mounts and resource limits, and the debug pod can be deleted cleanly when you are done.

Limitations of the Copy Strategy

The pod copy approach has a critical limitation: it is not debugging the original pod. It creates a new pod that may behave differently because:

  • It does not share the original pod’s in-memory state — if the issue is a goroutine leak or heap corruption that has been accumulating for hours, the fresh copy will not exhibit it immediately
  • It creates a new Pod UID, which means any admission webhooks, network policies, or pod-level security contexts that depend on pod identity may apply differently
  • If the original pod is crashing (CrashLoopBackOff), the copy will also crash — this technique does not help for crash debugging unless you also change the entrypoint

For crash debugging specifically, combine --copy-to with a modified entrypoint that keeps the container alive. Name the crashing container with --container so that the command after -- replaces its entrypoint, and swap its image with --set-image for one that actually contains that command (a distroless image has no sleep binary):

kubectl debug my-crashing-pod \
  -it \
  --copy-to=my-pod-debug \
  --container=my-container \
  --set-image=my-container=busybox \
  -- sleep 3600

Option 3: Debug Image Variants

The most pragmatic approach — and the one most appropriate for developer workflows — is to maintain a debug variant of your application image that includes shell tooling. Both the Google distroless project and Chainguard provide this pattern officially.

Google distroless images have a :debug tag that adds BusyBox to the image:

# Production image
FROM gcr.io/distroless/java17-debian12

# Debug variant — identical but with BusyBox shell
FROM gcr.io/distroless/java17-debian12:debug

Chainguard images follow a similar convention with :latest-dev variants that include apk, a shell, and common utilities:

# Production (zero shell, minimal footprint)
FROM cgr.dev/chainguard/go:latest

# Development/debug variant
FROM cgr.dev/chainguard/go:latest-dev

If you build your own base images, the recommended approach is to use multi-stage builds and maintain separate build targets:

FROM golang:1.22 AS builder
WORKDIR /app
COPY . .
RUN go build -o myapp .

# Production: static distroless image
FROM gcr.io/distroless/static-debian12 AS production
COPY --from=builder /app/myapp /myapp
ENTRYPOINT ["/myapp"]

# Debug variant: same binary, with shell tools
FROM gcr.io/distroless/static-debian12:debug AS debug
COPY --from=builder /app/myapp /myapp
ENTRYPOINT ["/myapp"]

In your CI/CD pipeline, build both targets and push my-app:${VERSION} (production) and my-app:${VERSION}-debug (debug variant) to your registry. The debug image is never deployed to production by default, but it exists and is ready to be used with kubectl debug --copy-to when needed.
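
A sketch of the corresponding pipeline step (the registry name and VERSION variable are placeholders for your own values):

# Build and push both targets of the multi-stage Dockerfile
docker build --target production -t registry.example.com/my-app:${VERSION} .
docker build --target debug      -t registry.example.com/my-app:${VERSION}-debug .
docker push registry.example.com/my-app:${VERSION}
docker push registry.example.com/my-app:${VERSION}-debug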

Security Considerations for Debug Variants

Debug image variants defeat much of the security benefit of distroless if they are used in production, even temporarily. Track usage carefully: log when debug images are deployed, require explicit approval, and ensure they are removed after the debugging session. In regulated environments, consider whether deploying a debug variant to production namespaces is permitted by your security policy — in many cases it is not, and you must use ephemeral containers (which add a debug process to the pod without modifying the application image) instead.

Option 4: cdebug

cdebug is an open-source CLI tool that simplifies distroless debugging by wrapping kubectl debug with more ergonomic defaults and additional capabilities. Its primary value is in making ephemeral container debugging feel like a native shell experience:

# Install
brew install cdebug
# or: go install github.com/iximiuz/cdebug@latest

# Debug a running pod
cdebug exec -it my-pod

# Specify a namespace and container
cdebug exec -it -n production my-pod -c my-container

# Use a specific debug image
cdebug exec -it my-pod --image=nicolaka/netshoot

What cdebug adds over raw kubectl debug:

  • Automatic filesystem chroot. cdebug exec automatically sets the filesystem root of the debug container to the target container’s filesystem, so you browse / and see the application’s files — not the debug image’s files. This addresses the most common friction point with kubectl debug.
  • Docker integration. cdebug exec works identically for Docker containers (cdebug exec -it <container-id>), making it the same muscle memory for local and cluster debugging.
  • No RBAC complications for Docker-based local development — useful for developer workflows before the code reaches Kubernetes.

The tradeoff: cdebug is a third-party dependency and requires installation. In environments with strict tooling policies (regulated industries, air-gapped clusters), it may not be an option. In those cases, the raw kubectl debug workflow with /proc/1/root filesystem navigation is the baseline.

Option 5: Node-Level Debugging

When everything else fails — the pod is in CrashLoopBackOff too fast to attach to, the issue is a kernel-level problem, or you need tools like strace that require elevated privileges — node-level debugging gives you direct access to the container’s processes from the host node.

kubectl debug node/ creates a pod on the target node that runs in the node’s host namespaces and mounts the node’s root filesystem under /host (on recent kubectl versions, add --profile=sysadmin if you need the pod to be fully privileged, which tools like nsenter and strace generally require):

kubectl debug node/my-node-name \
  -it \
  --image=nicolaka/netshoot

From this privileged pod, you can use nsenter to enter the namespaces of any container running on the node:

# Find the container's PID on the node
# (from within the node debug pod; if the debug image lacks crictl,
# run chroot /host first to use the node's own tooling)
crictl ps | grep my-container
crictl inspect <container-id> | grep pid

# Enter the container's namespaces. Entering the mount namespace (-m) only
# works if a shell exists in the target image, which a distroless image lacks
nsenter -t <pid> -m -u -i -n -p -- /bin/sh

# For distroless targets, keep your own mount namespace (and your own tools)
# and enter only what you need, e.g. the network namespace
nsenter -t <pid> -n -- ip a

The nsenter approach lets you run tools from the node’s or debug container’s toolset while operating in the namespaces of the target container. This is how you run strace against a distroless process: strace is not in the application container, but you can run it from the node level while targeting the application’s PID.

# Trace network-related syscalls of the application process. With hostPID,
# the host-level PID from crictl is visible directly, so strace can attach
# to it without nsenter
strace -p <pid> -f -e trace=network

RBAC and Security for Node Debugging

Node-level debugging requires the ability to create privileged pods on specific nodes (and, depending on the tooling, nodes/proxy access), which in most production clusters is restricted to cluster administrators. The debug pod runs with hostPID: true and hostNetwork: true, giving it visibility into all processes and network traffic on the node, not just the target container. This is significant: every process running on the node, including those in other tenants’ namespaces, is visible.

This technique should be treated as a break-glass procedure: log the access, require dual approval in production environments, and clean up immediately after the debugging session by deleting the node-debugger-* pods that kubectl debug node/ creates (they are not removed automatically).

Choosing the Right Approach: Access Profile and Environment Matrix

The technique you should use depends on two axes: who you are (developer, platform engineer, ops/SRE) and where the issue is (local development, staging, production). The requirements and constraints differ significantly across these combinations.

Developer — Local or Development Cluster

Goal: Reproduce and understand a bug, inspect configuration, verify network connectivity to services.
Constraints: None material — full cluster admin on local or personal dev namespace.
Recommended approach: Debug image variants or cdebug.

In local development (Minikube, Kind, Docker Desktop), the fastest path is to build the debug variant of your image and deploy it directly. If you are working with another team’s service, cdebug exec gives you a shell in the container with automatic filesystem root without any special RBAC. The goal is speed and iteration — reserve the more structured approaches for higher environments.
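
A typical local loop might look like this (assuming a Kind cluster and a Deployment and container both named my-app; the tag is a placeholder):

# Build the debug target, load it into the local cluster, and swap it in
docker build --target debug -t my-app:dev-debug .
kind load docker-image my-app:dev-debug
kubectl set image deployment/my-app my-app=my-app:dev-debug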

Developer — Staging Cluster

Goal: Debug integration issues, inspect live configuration, verify environment-specific behavior.
Constraints: Shared cluster — cannot deploy arbitrary workloads to other teams’ namespaces, but has pods/ephemeralcontainers in own namespace.
Recommended approach: kubectl debug with ephemeral containers (--target), scoped to own namespace.

Staging is where ephemeral containers earn their keep. You can attach to a running pod without restarting it, without modifying the deployment spec, and without affecting other users of the same cluster. Grant developers pods/ephemeralcontainers in their team’s namespaces and they can self-service debug without needing ops involvement.
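
A sketch of that grant, binding the Tier 1 Role defined later in this article to a team’s developer group (the namespace and group names are placeholders):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-distroless-debuggers
  namespace: team-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: distroless-debugger
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: team-a-developers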

Platform Engineer / SRE — Production

Goal: Diagnose a live production incident. The pod is behaving unexpectedly — high latency, memory growth, unexpected connections, incorrect responses.
Constraints: Changes to running pods are high-risk. Any debug image deployment must be gated. The issue is live and affecting users.
Recommended approach: kubectl debug with ephemeral containers (ephemeral containers do not restart the pod, do not modify the deployment, and are auditable via API audit logs).

The key production requirements are auditability and minimal blast radius. Ephemeral containers satisfy both: they are recorded in the Kubernetes API audit log (who attached, when, to which pod), they do not modify the running application container, and they are limited to the pod’s own network and process namespaces. Document the debug session in your incident ticket: pod name, time, what was observed, who ran the debug container.
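
If you control the API server audit policy (not possible on most managed control planes), a rule like this sketch records every ephemeral container injection with full request and response bodies:

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
  verbs: ["patch", "update"]
  resources:
  - group: ""
    resources: ["pods/ephemeralcontainers"]
# Append catch-all rules for everything else as your existing policy requires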

The --copy-to strategy is generally inappropriate for production incident response: it creates a new pod that may or may not exhibit the issue, it adds load to the cluster during an incident, and if it is attached to the same services (databases, downstream APIs), it produces additional traffic that complicates forensics.

Platform Engineer — Production, Node-Level Issue

Goal: Diagnose a kernel-level issue, a container runtime problem, a networking issue that spans multiple pods, or a situation where the pod is crashing too fast to attach to.
Constraints: Maximum privilege required. High operational risk.
Recommended approach: Node-level debug pod with nsenter. Treat as break-glass.

For this scenario, create a dedicated RBAC role that grants nodes/proxy access and the ability to create pods with hostPID: true in a dedicated debug namespace. Bind it only to specific users, require a separate authentication step (e.g., kubectl auth can-i check against a time-limited binding), and log all access. This level of access should generate a PagerDuty-style alert so that the security team knows a privileged debug session is active in production.
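
Before a break-glass session, a quick pre-flight check confirms the time-limited binding is actually active (the debug namespace name is a placeholder):

kubectl auth can-i create pods --namespace kube-debug
kubectl auth can-i get nodes --subresource=proxy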

Common Errors and Solutions

Error: “ephemeral containers are disabled for this cluster”

Ephemeral containers require Kubernetes 1.16+ (alpha, behind feature gate) and are stable from 1.25. If you are on 1.16–1.22, you need to enable the EphemeralContainers feature gate on the API server and kubelet. From 1.23 it was beta and enabled by default. From 1.25 it is stable and always on. On managed Kubernetes services (EKS, GKE, AKS), check the cluster version — versions older than 1.25 may still have it disabled depending on your configuration.

Error: “cannot update ephemeralcontainers” (RBAC)

You have pods/exec but not pods/ephemeralcontainers. Add the grant shown in the RBAC section above. Note that pods/exec and pods/ephemeralcontainers are separate subresources — having one does not imply the other.

Error: “container not found” with --target

The container name in --target must match exactly the container name as defined in the Pod spec — not the image name. Check with kubectl get pod my-pod -o jsonpath='{.spec.containers[*].name}' to get the exact container names.

Error: Can see processes but cannot read /proc/1/root

The application container runs as a non-root user (e.g., UID 1000) and the ephemeral container runs as root. The application’s filesystem may have files owned by UID 1000 that are not readable by other UIDs depending on permissions. The /proc/<pid>/root path itself requires CAP_SYS_PTRACE capability. If your cluster’s PodSecurityStandards (PSS) are set to restricted, the debug container may not have this capability. Use the Baseline PSS profile for debug namespaces or explicitly add SYS_PTRACE to the ephemeral container’s securityContext.
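
The kubectl debug CLI does not expose securityContext for ephemeral containers, but the raw-API technique sketched earlier works here too. A minimal sketch, assuming the pod is named my-pod in the default namespace and the target container is my-container (whether admission accepts the added capability depends on the namespace’s Pod Security level):

# Inject an ephemeral container that explicitly adds SYS_PTRACE
kubectl get pod my-pod -o json \
  | jq '.spec.ephemeralContainers += [{
      "name": "ptrace-debugger",
      "image": "busybox:latest",
      "command": ["sleep", "3600"],
      "targetContainerName": "my-container",
      "securityContext": {"capabilities": {"add": ["SYS_PTRACE"]}}
    }]' \
  | kubectl replace --raw /api/v1/namespaces/default/pods/my-pod/ephemeralcontainers -f -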

Error: tcpdump shows no traffic

When using nicolaka/netshoot for network debugging, remember that the network namespace is shared at the pod level regardless of whether you pass --target; --target only controls process namespace sharing, so packet capture works either way. The usual cause of an empty capture is listening on the wrong interface. Run tcpdump -i any to capture on all interfaces including loopback, which is where inter-container traffic within a pod travels.
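
A one-shot capture, assuming the application listens on port 8080 (adjust the filter to your service):

kubectl debug -it my-pod \
  --image=nicolaka/netshoot \
  -- tcpdump -i any -n port 8080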

Decision Framework

Use this as a starting point to select the right technique for your situation:

| Scenario | Technique | Requirement |
| --- | --- | --- |
| Active production incident, pod running | kubectl debug + ephemeral container | pods/ephemeralcontainers RBAC, k8s 1.25+ |
| Pod crashing too fast to attach | kubectl debug --copy-to + modified entrypoint | Ability to create pods in namespace |
| Developer debugging in dev/staging | cdebug exec or kubectl debug | pods/ephemeralcontainers or pod create |
| Need full filesystem access | kubectl debug --copy-to + debug image variant | Debug image in registry, pod create |
| Need strace or kernel tracing | Node-level debug with nsenter | nodes/proxy, cluster admin equivalent |
| Network packet capture | kubectl debug + nicolaka/netshoot | pods/ephemeralcontainers |
| Local Docker debugging | cdebug exec <container-id> | Docker socket access |
| CI-reproducible debug environment | Debug image variant in separate build target | Separate image tag in registry |

Production RBAC Design

A clean RBAC design for production distroless debugging separates three roles with different privilege levels:

# Tier 1: Developer self-service in team namespaces
# Allows attaching ephemeral containers, no node access
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: distroless-debugger
  namespace: team-namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/ephemeralcontainers"]
  verbs: ["update", "patch"]
- apiGroups: [""]
  resources: ["pods/attach"]
  verbs: ["create", "get"]
---
# Tier 2: SRE production incident access
# Ephemeral containers across all namespaces
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: sre-distroless-debugger
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/ephemeralcontainers"]
  verbs: ["update", "patch"]
- apiGroups: [""]
  resources: ["pods/attach"]
  verbs: ["create", "get"]
---
# Tier 3: Break-glass node access
# Only for platform team, time-limited binding recommended
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-debugger
rules:
- apiGroups: [""]
  resources: ["nodes/proxy"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "get", "list", "delete"]
  # Restrict to debug namespace via RoleBinding, not ClusterRoleBinding

Bind Tier 1 permanently to your developers. Bind Tier 2 to SREs permanently but with audit alerts on use. Bind Tier 3 only on-demand (via a Kubernetes operator that creates time-limited RoleBindings) and never as a permanent ClusterRoleBinding.

Summary

Distroless containers are the correct choice for production workloads. They reduce attack surface, eliminate unnecessary CVEs, and force a cleaner separation between application and tooling. The operational cost is that your traditional debugging workflow — exec into the container, run some commands — no longer works by default.

Kubernetes provides a clean answer with ephemeral containers and kubectl debug: inject a debug container with whatever tools you need into the running pod, sharing its network and process namespaces, without restarting or modifying the application. For scenarios where ephemeral containers are insufficient — filesystem access, crash debugging, kernel-level investigation — the copy strategy and node-level debug fill the remaining gaps.

The key to making this work at scale is not the technique itself but the access model: developers get self-service ephemeral container access in their own namespaces, SREs get cluster-wide ephemeral container access for production incidents, and node-level access is a break-glass procedure with audit trail and time limits. With that model in place, distroless becomes an operational non-issue rather than an obstacle.

Building a Kubernetes Migration Framework: Lessons from Ingress-NGINX

The recent announcement regarding the deprecation of the Ingress-NGINX controller sent a ripple through the Kubernetes community. For many organizations, it’s the first major deprecation of a foundational, widely-adopted ecosystem component. While the immediate reaction is often tactical—”What do we replace it with?”—the more valuable long-term question is strategic: “How do we systematically manage this and future migrations?”

This event isn’t an anomaly; it’s a precedent. As Kubernetes matures, core add-ons, APIs, and patterns will evolve or sunset. Platform engineering teams need a repeatable, low-risk framework for navigating these changes. Drawing from the Ingress-NGINX transition and established deployment management principles, we can abstract a robust Kubernetes Migration Framework applicable to any major component, from service meshes to CSI drivers.

Why Ad-Hoc Migrations Fail in Production

Attempting a “big bang” replacement or a series of manual, one-off changes is a recipe for extended downtime, configuration drift, and undetected regression. Production Kubernetes environments are complex systems with deep dependencies:

  • Interdependent Workloads: Multiple applications often share the same ingress controller, relying on specific annotations, custom snippets, or behavioral quirks.
  • Automation and GitOps Dependencies: Helm charts, Kustomize overlays, and ArgoCD/Flux manifests are tightly coupled to the existing component’s API and schema.
  • Observability and Security Integration: Monitoring dashboards, logging parsers, and security policies are tuned for the current implementation.
  • Knowledge Silos: Tribal knowledge about workarounds and specific configurations isn’t documented.

A structured framework mitigates these risks by enforcing discipline, creating clear validation gates, and ensuring the capability to roll back at any point.

The Four-Phase Kubernetes Migration Framework

This framework decomposes the migration into four distinct phases: Assessment, Parallel Run, Cutover, and Decommission. Each phase has defined inputs, activities, and exit criteria.

Phase 1: Deep Assessment & Dependency Mapping

Before writing a single line of new configuration, understand the full scope. The goal is to move from “we use Ingress-NGINX” to a precise inventory of how it’s used.

  • Inventory All Ingress Resources: Use kubectl get ingress --all-namespaces as a starting point, but go deeper.
  • Analyze Annotation Usage: Script an analysis to catalog every annotation in use (e.g., nginx.ingress.kubernetes.io/rewrite-target, nginx.ingress.kubernetes.io/configuration-snippet); a sketch follows this list. This reveals functional dependencies.
  • Map to Backend Services: For each Ingress, identify the backend Services and Namespaces. This highlights critical applications and potential blast radius.
  • Review Customizations: Document any custom ConfigMaps for main NGINX configuration, custom template patches, or modifications to the controller deployment itself.
  • Evaluate Alternatives: Based on the inventory, evaluate candidate replacements (e.g., Gateway API with a compatible implementation, another Ingress controller like Emissary-ingress or Traefik). The Google Cloud migration framework provides a useful decision tree for ingress-specific migrations.
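
A minimal annotation-catalog sketch using kubectl and jq, counting each ingress-nginx annotation key across all namespaces:

# List every ingress-nginx annotation in use, most common first
kubectl get ingress --all-namespaces -o json \
  | jq -r '.items[].metadata.annotations // {} | keys[]' \
  | grep 'nginx.ingress.kubernetes.io' \
  | sort | uniq -c | sort -rn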

The output of this phase is a migration inventory: a concrete list of what needs to be converted, grouped by complexity and criticality.

Phase 2: Phased Rollout & Parallel Run

This is the core of a low-risk migration. Instead of replacing, you run the new and old systems in parallel, shifting traffic gradually. For ingress, this often means installing the new controller alongside the old one.

  • Dual Installation: Deploy the new ingress controller in the same cluster, configured with a distinct ingress class (e.g., ingressClassName: gateway vs. nginx).
  • Create Canary Ingress Resources: For a low-risk application, create a parallel Ingress or Gateway resource pointing to the new controller. Use techniques like managed deployments with canary patterns to control exposure.
    # Example: A new Gateway API HTTPRoute for a canary service
    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: app-canary
    spec:
      parentRefs:
      - name: company-gateway
      rules:
      - backendRefs:
        - name: app-service
          port: 8080
          weight: 10 # Start with 10% of traffic

  • Validate Equivalency: Use traffic mirroring (if supported) or direct synthetic testing against both ingress paths. Compare logs, response headers, latency, and error rates.
  • Iterate and Expand: Gradually increase traffic weight or add more applications to the new stack, group by group, based on the assessment from Phase 1.

This phase relies heavily on your observability stack. Dashboards comparing error rates, latency (p50, p99), and throughput between the old and new paths are essential.

Phase 3: Validation & Automated Cutover

The cutover is not a manual event. It’s the final step in a validation process.

  • Define Validation Tests: Create a suite of tests that must pass before full cutover. This includes:
    • Smoke tests for all critical user journeys.
    • Load tests to verify performance under expected traffic patterns.
    • Security scan validation (e.g., no unintended ports open).
    • Compliance checks (e.g., specific headers are present).
  • Automate the Switch: For each application, the cutover is ultimately a change in its Ingress or Gateway resource. This should be done via your GitOps pipeline. Update the source manifests (e.g., change the ingressClassName, as in the sketch after this list), merge, and let automation apply it. This ensures the state is declarative and recorded.
  • Maintain Rollback Capacity: The old system must remain operational and routable (with reduced capacity) during this phase. The GitOps rollback is simply reverting the manifest change.
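
A sketch of what that Git change looks like for a single application (the host, service, and class names are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
spec:
  ingressClassName: gateway   # was: nginx; this one-line change is the cutover
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service
            port:
              number: 8080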

Phase 4: Observability & Decommission

Once all traffic is successfully migrated and validated over a sustained period (e.g., 72 hours), you can decommission the old component.

  • Monitor Aggressively: Keep a close watch on all key metrics for at least one full business cycle (a week).
  • Remove Old Resources: Delete the old controller’s Deployment, Service, ConfigMaps, and CRDs (if no longer needed).
  • Clean Up Auxiliary Artifacts: Remove old RBAC bindings, service accounts, and any custom monitoring alerts or dashboards specific to the old component.
  • Document Lessons Learned: Update runbooks and architecture diagrams. Note any surprises, gaps in the process, or validation tests that were particularly valuable.

Key Principles for a Resilient Framework

Beyond the phases, these principles should guide your framework’s design:

  • Always Maintain Rollback Capability: Every step should be reversible with minimal disruption. This is a core tenet of managing Kubernetes deployments.
  • Leverage GitOps for State Management: All desired state changes (Ingress resources, controller deployments) must flow through version-controlled manifests. This provides an audit trail, consistency, and the simplest rollback mechanism (git revert).
  • Validate with Production Traffic Patterns: Synthetic tests are insufficient. Use canary weights and traffic mirroring to validate with real user traffic in a controlled manner.
  • Communicate Transparently: Platform teams should maintain a clear migration status page for internal stakeholders, showing which applications have been migrated, which are in progress, and the overall timeline.

Conclusion: Building a Migration-Capable Platform

The deprecation of Ingress-NGINX is a wake-up call. The next major change is a matter of “when,” not “if.” By investing in a structured migration framework now, platform teams transform a potential crisis into a manageable, repeatable operational procedure.

This framework—Assess, Run in Parallel, Validate, and Decommission—abstracts the specific lessons from the ingress migration into a generic pattern. It can be applied to migrating from PodSecurityPolicies to Pod Security Standards, from a deprecated CSI driver, or from one service mesh to another. The tools (GitOps, canary deployments, observability) are already in your stack. The value is in stitching them together into a disciplined process that ensures platform evolution doesn’t compromise platform stability.

Start by documenting this framework as a runbook template. Then, apply it to your next significant component update, even a minor one, to refine the process. When the next major deprecation announcement lands in your inbox, you’ll be ready.

Kubernetes Dashboard Alternatives in 2026: Best Web UI Options After Official Retirement

The Kubernetes Dashboard, once a staple tool for cluster visualization and management, has been officially archived and is no longer maintained. For many teams who relied on its straightforward web interface to monitor pods, deployments, and services, this retirement marks the end of an era. But it also signals something important: the Kubernetes ecosystem has evolved far beyond what the original dashboard was designed to handle.

Today’s Kubernetes environments are multi-cluster by default, driven by GitOps principles, guarded by strict RBAC policies, and operated by platform teams serving dozens or hundreds of developers. The operating model has simply outgrown the traditional dashboard’s capabilities.

So what comes next? If you’ve been using Kubernetes Dashboard and need to migrate to something more capable, or if you’re simply curious about modern alternatives, this guide will walk you through the best options available in 2026.

Why Kubernetes Dashboard Was Retired

The Kubernetes Dashboard served its purpose well in the early days of Kubernetes adoption. It provided a simple, browser-based interface for viewing cluster resources without needing to master kubectl commands. But as Kubernetes matured, several limitations became apparent:

  • Single-cluster focus: Most organizations now manage multiple clusters across different environments, but the dashboard was designed for viewing one cluster at a time
  • Limited RBAC capabilities: Modern platform teams need fine-grained access controls at the cluster, namespace, and workload levels
  • No GitOps integration: Contemporary workflows rely on declarative configuration and continuous deployment pipelines
  • Minimal observability: Beyond basic resource listing, the dashboard lacked advanced monitoring, alerting, and troubleshooting features
  • Security concerns: The dashboard’s architecture required careful configuration to avoid exposing cluster access

The community recognized these constraints, and the official recommendation now points toward Headlamp as the successor. But Headlamp isn’t the only option worth considering.

Top Kubernetes Dashboard Alternatives for 2026

1. Headlamp: The Official Successor

Headlamp is now the official recommendation from the Kubernetes SIG UI group. It’s a CNCF Sandbox project developed by Kinvolk (now part of Microsoft) that brings a modern approach to cluster visualization.

Key Features:

  • Clean, intuitive interface built with modern web technologies
  • Extensive plugin system for customization
  • Works both as an in-cluster deployment and desktop application
  • Uses your existing kubeconfig file for authentication
  • OpenID Connect (OIDC) support for enterprise SSO
  • Read and write operations based on RBAC permissions

Installation Options:

# Using Helm
helm repo add headlamp https://kubernetes-sigs.github.io/headlamp/
helm install my-headlamp headlamp/headlamp --namespace kube-system

# As Minikube addon
minikube addons enable headlamp
minikube service headlamp -n headlamp

Headlamp excels at providing a familiar dashboard experience while being extensible enough to grow with your needs. The plugin architecture means you can customize it for your specific workflows without waiting for upstream changes.

Best for: Teams transitioning from Kubernetes Dashboard who want a similar experience with modern features and official backing.

2. Portainer: Enterprise Multi-Cluster Management

Portainer has evolved from a Docker management tool into a comprehensive Kubernetes platform. It’s particularly strong when you need to manage multiple clusters from a single interface. We have already covered Portainer in detail in a separate article, so take a look at that as well.

Key Features:

  • Multi-cluster management dashboard
  • Enterprise-grade RBAC with fine-grained access controls
  • Visual workload deployment and scaling
  • GitOps integration support
  • Comprehensive audit logging
  • Support for both Kubernetes and Docker environments

Best for: Organizations managing multiple clusters across different environments who need enterprise RBAC and centralized control.

3. Skooner (formerly K8Dash): Lightweight and Fast

Skooner keeps things simple. If you appreciated the straightforward nature of the original Kubernetes Dashboard, Skooner delivers a similar philosophy with a cleaner, faster interface.

Key Features:

  • Fast, real-time updates
  • Clean and minimal interface
  • Easy installation with minimal configuration
  • Real-time metrics visualization
  • Built-in OIDC authentication

Best for: Teams that want a simple, no-frills dashboard without complex features or steep learning curves.

4. Devtron: Complete DevOps Platform

Devtron goes beyond simple cluster visualization to provide an entire application delivery platform built on Kubernetes.

Key Features:

  • Multi-cluster application deployment
  • Built-in CI/CD pipelines
  • Advanced security scanning and compliance
  • Application-centric view rather than resource-centric
  • Support for seven different SSO providers
  • Chart store for Helm deployments

Best for: Platform teams building internal developer platforms who need comprehensive deployment pipelines alongside cluster management.

5. KubeSphere: Full-Stack Container Platform

KubeSphere positions itself as a distributed operating system for cloud-native applications, using Kubernetes as its kernel.

Key Features:

  • Multi-tenant architecture
  • Integrated DevOps workflows
  • Service mesh integration (Istio)
  • Multi-cluster federation
  • Observability and monitoring built-in
  • Plug-and-play architecture for third-party integrations

Best for: Organizations building comprehensive container platforms who want an opinionated, batteries-included experience.

6. Rancher: Battle-Tested Enterprise Platform

Rancher from SUSE has been in the Kubernetes management space for years and offers one of the most mature platforms available.

Key Features:

  • Manage any Kubernetes cluster (EKS, GKE, AKS, on-premises)
  • Centralized authentication and RBAC
  • Built-in monitoring with Prometheus and Grafana
  • Application catalog with Helm charts
  • Policy management and security scanning

Best for: Enterprise organizations managing heterogeneous Kubernetes environments across multiple cloud providers.

7. Octant: Developer-Focused Cluster Exploration

Octant (originally developed by VMware) takes a developer-centric approach to cluster visualization with a focus on understanding application architecture.

Key Features:

  • Plugin-based extensibility
  • Resource relationship visualization
  • Port forwarding directly from the UI
  • Log streaming
  • Context-aware resource inspection

Best for: Application developers who need to understand how their applications run on Kubernetes without being cluster administrators.

Desktop and CLI Alternatives Worth Considering

While this article focuses on web-based dashboards, it’s worth noting that not everyone needs a browser interface. Some of the most powerful Kubernetes management tools work as desktop applications or terminal UIs.

If you’re considering client-side tools, the desktop and terminal options (FreeLens, OpenLens, K9s) are covered in separate articles on this blog.

These client tools offer advantages that web dashboards can’t match: offline access, better performance, and tighter integration with your local development workflow. FreeLens, in particular, has emerged as the lowest-risk choice for most organizations looking for a desktop Kubernetes IDE.

Choosing the Right Alternative for Your Team

With so many options available, how do you choose? Here’s a decision framework:

Choose Headlamp if:

  • You want the officially recommended path forward
  • You need a lightweight dashboard similar to what you had before
  • Plugin extensibility is important for future customization
  • You prefer CNCF-backed open source projects

Choose Portainer if:

  • You manage multiple Kubernetes clusters
  • Enterprise RBAC is a critical requirement
  • You also work with Docker environments
  • Visual deployment tools would benefit your team

Choose Skooner if:

  • You want the simplest possible alternative
  • Your needs are straightforward: view and manage resources
  • You don’t need advanced features or multi-cluster support

Choose Devtron or KubeSphere if:

  • You’re building an internal developer platform
  • You need integrated CI/CD pipelines
  • Application-centric workflows matter more than resource-centric views

Choose Rancher if:

  • You’re managing enterprise-scale, multi-cloud Kubernetes
  • You need battle-tested stability and vendor support
  • Policy management and compliance are critical

Consider desktop tools like FreeLens if:

  • You work primarily from a local development environment
  • You need offline access to cluster information
  • You prefer richer desktop application experiences

Migration Considerations

If you’re actively using Kubernetes Dashboard today, here’s what to think about when migrating:

  1. Authentication method: Most modern alternatives support OIDC/SSO, but verify your specific identity provider is supported
  2. RBAC policies: Review your existing ClusterRole and RoleBinding configurations to ensure they translate properly
  3. Custom workflows: If you’ve built automation around Dashboard URLs or specific features, you’ll need to adapt these
  4. User training: Even similar-looking alternatives have different UIs and workflows; budget time for team training
  5. Ingress configuration: If you expose your dashboard externally, you’ll need to reconfigure ingress rules

The Future of Kubernetes UI Management

The retirement of Kubernetes Dashboard isn’t a step backward—it’s recognition that the ecosystem has matured. Modern platforms need to handle multi-cluster management, GitOps workflows, comprehensive observability, and sophisticated RBAC out of the box.

The alternatives listed here represent different philosophies about what a Kubernetes interface should be:

  • Minimalist dashboards (Headlamp, Skooner) that stay close to the original vision
  • Enterprise platforms (Portainer, Rancher) that centralize multi-cluster management
  • Developer platforms (Devtron, KubeSphere) that integrate the entire application lifecycle
  • Desktop experiences (FreeLens, OpenLens) that bring IDE-like capabilities

The right choice depends on your team’s size, your infrastructure complexity, and whether you’re managing platforms or building applications. For most teams migrating from Kubernetes Dashboard, starting with Headlamp makes sense—it’s officially recommended, actively maintained, and provides a familiar experience. From there, you can evaluate whether you need to scale up to more comprehensive platforms.

Whatever you choose, the good news is that the Kubernetes ecosystem in 2026 offers more sophisticated, capable, and secure dashboard alternatives than ever before.

Frequently Asked Questions (FAQ)

Is Kubernetes Dashboard officially deprecated or just unmaintained?

The Kubernetes Dashboard has been officially archived by the Kubernetes project and is no longer actively maintained. While it may still run in existing clusters, it no longer receives security updates, bug fixes, or new features, making it unsuitable for production use in modern environments.

What is the official replacement for Kubernetes Dashboard?

Headlamp is the officially recommended successor by the Kubernetes SIG UI group. It provides a modern web interface, supports plugins, integrates with existing kubeconfig files, and aligns with current Kubernetes security and RBAC best practices.

Is Headlamp production-ready for enterprise environments?

Yes. Headlamp supports OIDC authentication, fine-grained RBAC, and can run either in-cluster or as a desktop application. While still evolving, it is actively maintained and suitable for many production use cases, especially when combined with proper access controls.

Are there lightweight alternatives similar to the old Kubernetes Dashboard?

Yes. Skooner is a lightweight, fast alternative that closely mirrors the simplicity of the original Kubernetes Dashboard while offering a cleaner UI and modern authentication options like OIDC.

Do I still need a web-based dashboard to manage Kubernetes?

Not necessarily. Many teams prefer desktop or CLI-based tools such as FreeLens, OpenLens, or K9s. These tools often provide better performance, offline access, and deeper integration with developer workflows compared to browser-based dashboards.

Is it safe to expose Kubernetes dashboards over the internet?

Exposing any Kubernetes dashboard publicly requires extreme caution. If external access is necessary, always use:

  • Strong authentication (OIDC / SSO)
  • Strict RBAC policies
  • Network restrictions (VPN, IP allowlists)
  • TLS termination and hardened ingress rules

In many cases, dashboards should only be accessible from internal networks.

Can these dashboards replace kubectl?

No. Dashboards are complementary tools, not replacements for kubectl. While they simplify visualization and some management tasks, advanced operations, automation, and troubleshooting still rely heavily on CLI tools and GitOps workflows.

What should I consider before migrating away from Kubernetes Dashboard?

Before migrating, review:

  • Authentication and identity provider compatibility
  • Existing RBAC roles and permissions
  • Multi-cluster requirements
  • GitOps and CI/CD integrations
  • Training needs for platform teams and developers

Starting with Headlamp is often the lowest-risk migration path.

Which Kubernetes dashboard is best for developers rather than platform teams?

Tools like Octant and Devtron are more developer-focused. They emphasize application-centric views, resource relationships, and deployment workflows, making them ideal for developers who want insight without managing cluster infrastructure directly.

Which Kubernetes dashboard is best for multi-cluster management?

For multi-cluster environments, Portainer, Rancher, and KubeSphere are strong options. These platforms are designed to manage multiple clusters from a single control plane and offer enterprise-grade RBAC, auditing, and centralized authentication.

Helm Chart Testing in Production: Layers, Tools, and a Minimum CI Pipeline

When a Helm chart fails in production, the impact is immediate and visible. A misconfigured ServiceAccount, a typo in a ConfigMap key, or an untested conditional in templates can trigger incidents that cascade through your entire deployment pipeline. The irony is that most teams invest heavily in testing application code while treating Helm charts as “just configuration.”

Chart testing is fundamental for production-quality Helm deployments. For comprehensive coverage of testing along with all other Helm best practices, visit our complete Helm guide.

Helm charts are infrastructure code. They define how your applications run, scale, and integrate with the cluster. Treating them with less rigor than your application logic is a risk most production environments cannot afford.

The Real Cost of Untested Charts

In late 2024, a medium-sized SaaS company experienced a 4-hour outage because a chart update introduced a breaking change in RBAC permissions. The chart had been tested locally with helm install --dry-run, but the dry-run validation doesn’t interact with the API server’s RBAC layer. The deployment succeeded syntactically but failed operationally.

The incident revealed three gaps in their workflow:

  1. No schema validation against the target Kubernetes version
  2. No integration tests in a live cluster
  3. No policy enforcement for security baselines

These gaps are common. According to a 2024 CNCF survey on GitOps practices, fewer than 40% of organizations systematically test Helm charts before production deployment.

The problem is not a lack of tools—it’s understanding which layer each tool addresses.

Testing Layers: What Each Level Validates

Helm chart testing is not a single operation. It requires validation at multiple layers, each catching different classes of errors.

Layer 1: Syntax and Structure Validation

What it catches: Malformed YAML, invalid chart structure, missing required fields

Tools:

  • helm lint: Built-in, minimal validation following Helm best practices (see the invocation sketch after this list)
  • yamllint: Strict YAML formatting rules
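
A typical invocation of both tools, assuming the chart lives at ./charts/my-chart; helm lint runs against the chart source, yamllint against the rendered output:

helm lint ./charts/my-chart
helm template my-release ./charts/my-chart | yamllint -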

Example failure caught:

# Invalid indentation breaks the chart
resources:
  limits:
      cpu: "500m"
    memory: "512Mi"  # Incorrect indentation

Limitation: Does not validate whether the rendered manifests are valid Kubernetes objects.

Layer 2: Schema Validation

What it catches: Manifests that would be rejected by the Kubernetes API

Primary tool: kubeconform

Kubeconform is the actively maintained successor to the deprecated kubeval. It validates against OpenAPI schemas for specific Kubernetes versions and can include custom CRDs.

Project Profile:

  • Maintenance: Active, community-driven
  • Strengths: CRD support, multi-version validation, fast execution
  • Why it matters: helm lint validates chart structure, but not if rendered manifests match Kubernetes schemas

Example failure caught:

apiVersion: apps/v1
kind: Deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: app
        image: nginx:latest
# Missing required field: spec.selector

Configuration example:

helm template my-chart . | kubeconform \
  -kubernetes-version 1.30.0 \
  -schema-location default \
  -schema-location 'https://raw.githubusercontent.com/datreeio/CRDs-catalog/main/{{.Group}}/{{.ResourceKind}}_{{.ResourceAPIVersion}}.json' \
  -summary

Example CI integration:

#!/bin/bash
set -e

KUBE_VERSION="1.30.0"

echo "Rendering chart..."
helm template my-release ./charts/my-chart > manifests.yaml

echo "Validating against Kubernetes $KUBE_VERSION..."
kubeconform \
  -kubernetes-version "$KUBE_VERSION" \
  -schema-location default \
  -summary \
  -output json \
  manifests.yaml | jq -e '.summary.invalid == 0'

Alternative: kubectl --dry-run=server (requires cluster access, validates against actual API server)

Layer 3: Unit Testing

What it catches: Logic errors in templates, incorrect conditionals, wrong value interpolation

Unit tests validate that given a set of input values, the chart produces the expected manifests. This is where template logic is verified before reaching a cluster.

Primary tool: helm-unittest

helm-unittest is the most widely adopted unit testing framework for Helm charts.

Project Profile:

  • GitHub: 3.3k+ stars, ~100 contributors
  • Maintenance: Active (releases every 2-3 months)
  • Primary maintainer: Quentin Machu (originally @QubitProducts, now independent)
  • Commercial backing: None
  • Bus Factor: Medium-High (no institutional backing, but consistent community engagement)

Strengths:

  • Fast execution (no cluster required)
  • Familiar test syntax (similar to Jest/Mocha)
  • Snapshot testing support
  • Good documentation

Limitations:

  • Doesn’t validate runtime behavior
  • Cannot test interactions with admission controllers
  • No validation against actual Kubernetes API

Example test scenario:

# tests/deployment_test.yaml
suite: test deployment
templates:
  - deployment.yaml
tests:
  - it: should set resource limits when provided
    set:
      resources.limits.cpu: "1000m"
      resources.limits.memory: "1Gi"
    asserts:
      - equal:
          path: spec.template.spec.containers[0].resources.limits.cpu
          value: "1000m"
      - equal:
          path: spec.template.spec.containers[0].resources.limits.memory
          value: "1Gi"

  - it: should not create HPA when autoscaling disabled
    set:
      autoscaling.enabled: false
    template: hpa.yaml
    asserts:
      - hasDocuments:
          count: 0
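
Running the suite requires the helm-unittest plugin; a minimal sketch, assuming the tests live under the chart’s tests/ directory:

helm plugin install https://github.com/helm-unittest/helm-unittest
helm unittest ./charts/my-chart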

Alternative: Terratest (Helm module)

Terratest is a Go-based testing framework from Gruntwork that includes first-class Helm support. Unlike helm-unittest, Terratest deploys charts to real clusters and allows programmatic assertions in Go.

Example Terratest test:

package test

import (
    "testing"
    "time"

    "github.com/gruntwork-io/terratest/modules/helm"
    "github.com/gruntwork-io/terratest/modules/k8s"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func TestHelmChartDeployment(t *testing.T) {
    kubectlOptions := k8s.NewKubectlOptions("", "", "default")
    options := &helm.Options{
        KubectlOptions: kubectlOptions,
        SetValues: map[string]string{
            "replicaCount": "3",
        },
    }

    // Clean up the release even if the assertions below fail
    defer helm.Delete(t, options, "my-release", true)
    helm.Install(t, options, "../charts/my-chart", "my-release")

    // Wait until the expected number of pods is created
    k8s.WaitUntilNumPodsCreated(t, kubectlOptions, metav1.ListOptions{
        LabelSelector: "app=my-app",
    }, 3, 30, 10*time.Second)
}

When to use Terratest vs helm-unittest:

  • Use helm-unittest for fast, template-focused validation in CI
  • Use Terratest when you need full integration testing with Go flexibility

Layer 4: Integration Testing

What it catches: Runtime failures, resource conflicts, actual Kubernetes behavior

Integration tests deploy the chart to a real (or ephemeral) cluster and verify it works end-to-end.

Primary tool: chart-testing (ct)

chart-testing is the official Helm project for testing charts in live clusters.

Project Profile:

  • Ownership: Official Helm project (CNCF)
  • Maintainers: Helm team (contributors from Microsoft, IBM, Google)
  • Governance: CNCF-backed with public roadmap
  • LTS: Aligned with Helm release cycle
  • Bus Factor: Low (institutional backing from CNCF provides strong long-term guarantees)

Strengths:

  • De facto standard for public Helm charts
  • Built-in upgrade testing (validates migrations)
  • Detects which charts changed in a PR (efficient for monorepos)
  • Integration with GitHub Actions via official action

Limitations:

  • Requires a live Kubernetes cluster
  • Initial setup more complex than unit testing
  • Does not include security scanning

What ct validates:

  • Chart installs successfully
  • Upgrades work without breaking state
  • Linting passes
  • Version constraints are respected

Example ct configuration:

# ct.yaml
target-branch: main
chart-dirs:
  - charts
chart-repos:
  - bitnami=https://charts.bitnami.com/bitnami
helm-extra-args: --timeout 600s
check-version-increment: true

Typical GitHub Actions workflow:

name: Lint and Test Charts

on: pull_request

jobs:
  lint-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
        with:
          fetch-depth: 0

      - name: Set up Helm
        uses: azure/setup-helm@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'

      - name: Set up chart-testing
        uses: helm/chart-testing-action@v2

      - name: Run chart-testing (lint)
        run: ct lint --config ct.yaml

      - name: Create kind cluster
        uses: helm/kind-action@v1

      - name: Run chart-testing (install)
        run: ct install --config ct.yaml

When ct is essential:

  • Public chart repositories (expected by community)
  • Charts with complex upgrade paths
  • Multi-chart repositories with CI optimization needs

Layer 5: Security and Policy Validation

What it catches: Security misconfigurations, policy violations, compliance issues

This layer prevents deploying charts that pass functional tests but violate organizational security baselines or contain vulnerabilities.

Policy Enforcement: Conftest (Open Policy Agent)

Conftest is the CLI interface to Open Policy Agent for policy-as-code validation.

Project Profile:

  • Parent: Open Policy Agent (CNCF Graduated Project)
  • Governance: Strong CNCF backing, multi-vendor support
  • Production adoption: Netflix, Pinterest, Goldman Sachs
  • Bus Factor: Low (graduated CNCF project with multi-vendor backing)

Strengths:

  • Policies written in Rego (reusable, composable)
  • Works with any YAML/JSON input (not Helm-specific)
  • Can enforce organizational standards programmatically
  • Integration with admission controllers (Gatekeeper)

Limitations:

  • Rego has a learning curve
  • Does not replace functional testing

Example Conftest policy:

# policy/security.rego
package main

import future.keywords.contains
import future.keywords.if

deny contains msg if {
  input.kind == "Deployment"
  container := input.spec.template.spec.containers[_]
  not container.resources.limits.memory
  msg := sprintf("Container '%s' must define memory limits", [container.name])
}

deny contains msg if {
  input.kind == "Deployment"
  container := input.spec.template.spec.containers[_]
  not container.resources.limits.cpu
  msg := sprintf("Container '%s' must define CPU limits", [container.name])
}

Running the validation:

helm template my-chart . | conftest test -p policy/ -

Alternative: Kyverno

Kyverno offers policy enforcement using native Kubernetes manifests instead of Rego. Policies are written in YAML and can validate, mutate, or generate resources.

Example Kyverno policy:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-resource-limits
spec:
  validationFailureAction: Enforce
  rules:
  - name: check-container-limits
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "All containers must have CPU and memory limits"
      pattern:
        spec:
          containers:
          - resources:
              limits:
                memory: "?*"
                cpu: "?*"

Conftest vs Kyverno:

  • Conftest: Policies run in CI, flexible for any YAML
  • Kyverno: Runtime enforcement in-cluster, Kubernetes-native

Both can coexist: Conftest in CI for early feedback, Kyverno in cluster for runtime enforcement.

Vulnerability Scanning: Trivy

Trivy by Aqua Security provides comprehensive security scanning for Helm charts.

Project Profile:

  • Maintainer: Aqua Security (commercial backing with open-source core)
  • Scope: Vulnerability scanning + misconfiguration detection
  • Helm integration: Official trivy helm command
  • Bus Factor: Low (commercial backing + strong open-source adoption)

What Trivy scans in Helm charts:

  1. Vulnerabilities in referenced container images
  2. Misconfigurations (similar to Conftest but pre-built rules)
  3. Secrets accidentally committed in templates

Example scan:

trivy helm ./charts/my-chart --severity HIGH,CRITICAL --exit-code 1

Sample output:

myapp/templates/deployment.yaml (helm)
====================================

Tests: 12 (SUCCESSES: 10, FAILURES: 2)
Failures: 2 (HIGH: 1, CRITICAL: 1)

HIGH: Container 'app' of Deployment 'myapp' should set 'securityContext.runAsNonRoot' to true
════════════════════════════════════════════════════════════════════════════════════════════════
Ensure containers run as non-root users

See https://kubernetes.io/docs/concepts/security/pod-security-standards/
────────────────────────────────────────────────────────────────────────────────────────────────
 myapp/templates/deployment.yaml:42

Commercial support:
Aqua Security offers Trivy Enterprise with advanced features (centralized scanning, compliance reporting). For most teams, the open-source version is sufficient.

Other Security Tools

Polaris (Fairwinds)

Polaris scores charts based on security and reliability best practices. Unlike enforcement tools, it provides a health score and actionable recommendations.

Use case: Dashboard for chart quality across a platform

Checkov (Bridgecrew/Palo Alto)

Similar to Trivy but with a broader IaC focus (Terraform, CloudFormation, Kubernetes, Helm). Pre-built policies for compliance frameworks (CIS, PCI-DSS).

When to use Checkov:

  • Multi-IaC environment (not just Helm)
  • Compliance-driven validation requirements
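
As a rough sketch of what a Checkov run against a chart directory might look like (the path is illustrative; Checkov's Helm framework renders the chart with a local helm binary before scanning):

checkov -d charts/my-chart --framework helm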

Enterprise Selection Criteria

Bus Factor and Long-Term Viability

For production infrastructure, tool sustainability matters as much as features. Community support channels like Helm CNCF Slack (#helm-users, #helm-dev) and CNCF TAG Security provide valuable insights into which projects have active maintainer communities.

Questions to ask:

  • Is the project backed by a foundation (CNCF, Linux Foundation)?
  • Are multiple companies contributing?
  • Is the project used in production by recognizable organizations?
  • Is there a public roadmap?

Risk Classification:

Tool            Governance        Bus Factor     Notes
chart-testing   CNCF              Low            Helm official project
Conftest/OPA    CNCF Graduated    Low            Multi-vendor backing
Trivy           Aqua Security     Low            Commercial backing + OSS
kubeconform     Community         Medium         Active, but single maintainer
helm-unittest   Community         Medium-High    No institutional backing
Polaris         Fairwinds         Medium         Company-sponsored OSS

Kubernetes Version Compatibility

Tools must explicitly support the Kubernetes versions you run in production.

Red flags:

  • No documented compatibility matrix
  • Hard-coded dependencies on old K8s versions
  • No testing against multiple K8s versions in CI

Example compatibility check:

# Does the tool support your K8s version?
kubeconform --help | grep -A5 "kubernetes-version"
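
A more direct check is to render the chart and validate it against each Kubernetes version you actually run, for example (versions and chart path are illustrative):

for v in 1.29.0 1.30.0 1.31.0; do
  echo "== Kubernetes ${v} =="
  helm template charts/my-chart | kubeconform -kubernetes-version "${v}" -summary
done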

For tools like ct, always verify they test against a matrix of Kubernetes versions in their own CI.

Commercial Support Options

When commercial support matters:

  • Regulatory compliance requirements (SOC2, HIPAA, etc.)
  • Limited internal expertise
  • SLA-driven operations

Available options:

  • Trivy: Aqua Security offers Trivy Enterprise
  • OPA/Conftest: Styra provides OPA Enterprise
  • Terratest: Gruntwork offers consulting and premium modules

Most teams don’t need commercial support for chart testing specifically, but it’s valuable in regulated industries where audits require vendor SLAs.

Security Scanner Integration

For enterprise pipelines, chart testing tools should integrate cleanly with:

  • SIEM/SOAR platforms
  • CI/CD notification systems
  • Security dashboards (e.g., Grafana, Datadog)

Required features:

  • Structured output formats (JSON, SARIF)
  • Exit codes for CI failure
  • Support for custom policies
  • Webhook or API for event streaming

Example: Integrating Trivy with SIEM

# .github/workflows/security.yaml
- name: Run Trivy scan
  run: trivy helm ./charts --format json --output trivy-results.json

- name: Send to SIEM
  run: |
    curl -X POST https://siem.company.com/api/events \
      -H "Content-Type: application/json" \
      -d @trivy-results.json

Testing Pipeline Architecture

A production-grade Helm chart pipeline combines the layers described above (linting, schema validation, unit tests, security scanning, and integration tests) in order of increasing cost; a condensed sketch follows the principles below.

Pipeline efficiency principles:

  1. Fail fast: syntax and schema errors should never reach integration tests
  2. Parallel execution where possible (unit tests + security scans)
  3. Cache ephemeral cluster images to reduce setup time
  4. Skip unchanged charts (ct built-in change detection)
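
A condensed, single-script sketch of such a pipeline, reusing the commands shown earlier in this article. Paths, the Kubernetes version, and the choice of kind are illustrative, and the relevant CLIs and Helm plugins are assumed to be installed:

#!/usr/bin/env bash
set -euo pipefail

# Layers 1-2: fast structural and schema checks fail the build first
helm lint charts/my-chart
helm template charts/my-chart | kubeconform -kubernetes-version 1.30.0 -summary

# Layers 3 and 5 are independent and can run in parallel in CI
helm unittest charts/my-chart                                     # helm-unittest plugin
trivy helm charts/my-chart --exit-code 1 --severity CRITICAL,HIGH

# Layer 4: integration test against an ephemeral cluster (e.g. kind)
ct install --config ct.yaml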

Decision Matrix: When to Use What

Scenario 1: Small Team / Early-Stage Startup

Requirements: Minimal overhead, fast iteration, reasonable safety

Recommended Stack:

Linting:      helm lint + yamllint
Validation:   kubeconform
Security:     trivy helm

Optional: helm-unittest (if template logic becomes complex)

Rationale: A lightweight baseline of standalone CLI tools that catches roughly 80% of issues without operational complexity.

Scenario 2: Enterprise with Compliance Requirements

Requirements: Auditable, comprehensive validation, commercial support available

Recommended Stack:

Linting:      helm lint + yamllint
Validation:   kubeconform
Unit Tests:   helm-unittest
Security:     Trivy Enterprise + Conftest (custom policies)
Integration:  chart-testing (ct)
Runtime:      Kyverno (admission control)

Optional: Terratest for complex upgrade scenarios

Rationale: Multi-layer defense with both pre-deployment and runtime enforcement. Commercial support available for security components.

Scenario 3: Multi-Tenant Internal Platform

Requirements: Prevent bad charts from affecting other tenants, enforce standards at scale

Recommended Stack:

CI Pipeline:
  • helm lint → kubeconform → helm-unittest → ct
  • Conftest (enforce resource quotas, namespaces, network policies)
  • Trivy (block critical vulnerabilities)

Runtime:
  • Kyverno or Gatekeeper (enforce policies at admission)
  • ResourceQuotas per namespace
  • NetworkPolicies by default

Additional tooling:

  • Polaris dashboard for chart quality scoring
  • Custom admission webhooks for platform-specific rules

Rationale: Multi-tenant environments cannot tolerate “soft” validation. Runtime enforcement is mandatory.
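
As one concrete runtime guardrail, a default-deny NetworkPolicy applied to every tenant namespace (the namespace name is illustrative) blocks all traffic until a chart explicitly declares what it needs:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: tenant-a
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress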

Scenario 4: Open Source Public Charts

Requirements: Community trust, transparent testing, broad compatibility

Recommended Stack:

Must-have:
  • chart-testing (expected standard)
  • Public CI (GitHub Actions with full logs)
  • Test against multiple K8s versions

Nice-to-have:
  • helm-unittest with high coverage
  • Automated changelog generation
  • Example values for common scenarios

Rationale: Public charts are judged by testing transparency. Missing ct is a red flag for potential users.

The Minimum Viable Testing Stack

For any environment deploying Helm charts to production, this is the baseline:

Layer 1: Pre-Commit (Developer Laptop)

helm lint charts/my-chart
yamllint charts/my-chart

Layer 2: CI Pipeline (Automated on PR)

# Fast validation
helm template my-chart ./charts/my-chart | kubeconform \
  -kubernetes-version 1.30.0 \
  -summary

# Security baseline
trivy helm ./charts/my-chart --exit-code 1 --severity CRITICAL,HIGH

Layer 3: Pre-Production (Staging Environment)

# Integration test with real cluster
ct install --config ct.yaml --charts charts/my-chart

Time investment:

  • Initial setup: 4-8 hours
  • Per-PR overhead: 3-5 minutes
  • Maintenance: ~1 hour/month

ROI calculation:

Average production incident caused by untested chart:

  • Detection: 15 minutes
  • Triage: 30 minutes
  • Rollback: 20 minutes
  • Post-mortem: 1 hour
  • Total: roughly 2 hours of direct engineering time, and usually more once several responders and user-facing impact are involved

If chart testing prevents even one such incident per quarter, it offsets its setup and ongoing maintenance cost within the first year of use.

Common Anti-Patterns to Avoid

Anti-Pattern 1: Only using --dry-run

helm install --dry-run validates syntax but skips:

  • Admission controller logic
  • RBAC validation
  • Actual resource creation

Better: Combine dry-run with kubeconform and at least one integration test.
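
For a stronger pre-deployment check without a full integration test, you can also let the API server validate the rendered manifests; server-side dry-run exercises admission webhooks in dry-run mode (the chart path is illustrative):

helm template charts/my-chart | kubectl apply --dry-run=server -f -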

Anti-Pattern 2: Testing only in production-like clusters

“We test in staging, which is identical to production.”

Problem: Staging clusters rarely match production exactly (node counts, storage classes, network policies). Integration tests should run in isolated, ephemeral environments.

Anti-Pattern 3: Security scanning without enforcement

Running trivy helm without failing the build on critical findings is theater.

Better: Set --exit-code 1 and enforce in CI.

Anti-Pattern 4: Ignoring upgrade paths

Most chart failures happen during upgrades, not initial installs. Chart-testing addresses this with ct install --upgrade.
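
A minimal way to exercise the upgrade path in CI, reusing the ct configuration shown earlier:

ct install --upgrade --config ct.yaml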

Conclusion: Testing is Infrastructure Maturity

The gap between teams that test Helm charts and those that don’t is not about tooling availability—it’s about treating infrastructure code with the same discipline as application code.

The cost of testing is measured in minutes per PR. The cost of not testing is measured in hours of production incidents, eroded trust in automation, and teams reverting to manual deployments because “Helm is too risky.”

The testing stack you choose matters less than the fact that you have one. Start with the minimal viable stack (lint + schema + security), run it consistently, and expand as your charts become more complex.

By implementing a structured testing pipeline, you catch the vast majority of chart issues before they reach production. The remainder are edge cases that require production observability, not more testing layers.

Helm chart testing is not about achieving perfection—it’s about eliminating the preventable failures that undermine confidence in your deployment pipeline.

Frequently Asked Questions (FAQ)

What is Helm chart testing and why is it important in production?

Helm chart testing ensures that Kubernetes manifests generated from Helm templates are syntactically correct, schema-compliant, secure, and function correctly when deployed. In production, untested charts can cause outages, security incidents, or failed upgrades, even if application code itself is stable.

Is helm lint enough to validate a Helm chart?

No. helm lint only validates chart structure and basic best practices. It does not validate rendered manifests against Kubernetes API schemas, test template logic, or verify runtime behavior. Production-grade testing requires additional layers such as schema validation, unit tests, and integration tests.

What is the difference between Helm unit tests and integration tests?

Unit tests (e.g., using helm-unittest) validate template logic by asserting expected output for given input values without deploying anything. Integration tests (e.g., using chart-testing or Terratest) deploy charts to a real Kubernetes cluster and validate runtime behavior, upgrades, and interactions with the API server.

Which tools are recommended for validating Helm charts against Kubernetes schemas?

The most commonly recommended tool is kubeconform, which validates rendered manifests against Kubernetes OpenAPI schemas for specific Kubernetes versions and supports CRDs. An alternative is kubectl --dry-run=server, which validates against a live API server.

How can Helm chart testing prevent production outages?

Testing catches common failure modes before deployment, such as missing selectors in Deployments, invalid RBAC permissions, incorrect conditionals, or incompatible API versions. Many production outages originate from configuration and chart logic errors rather than application bugs.

What is the role of security scanning in Helm chart testing?

Security scanning detects misconfigurations, policy violations, and vulnerabilities that functional tests may miss. Tools like Trivy and Conftest (OPA) help enforce security baselines, prevent unsafe defaults, and block deployments that violate organizational or compliance requirements.

Is chart-testing (ct) required for private Helm charts?

While not strictly required, chart-testing is highly recommended for any chart deployed to production. It is considered the de facto standard for integration testing, especially for charts with upgrades, multiple dependencies, or shared cluster environments.

What is the minimum viable Helm testing pipeline for CI?

At a minimum, a production-ready pipeline should include:
  • helm lint for structural validation
  • kubeconform for schema validation
  • trivy helm for security scanning

Integration tests can be added as charts grow in complexity or criticality.

MinIO Maintenance Mode Explained: Impact on Users & S3 Alternatives

Background: MinIO and the Maintenance Mode announcement

MinIO has long been one of the most popular self-hosted, S3-compatible object storage solutions for Kubernetes, in both on-premise and cloud-native environments. Its simplicity, performance, and API compatibility made it a common default choice for backups, artifacts, logs, and internal object storage.

In late 2025, MinIO marked its upstream repository as Maintenance Mode and clarified that the Community Edition would be distributed source-only, without official pre-built binaries or container images. This move triggered renewed discussion across the industry about sustainability, governance, and the risks of relying on a single-vendor-controlled “open core” storage layer.

A detailed industry analysis of this shift, including its broader ecosystem impact, can be found in this InfoQ article.

What exactly changed?

1. Maintenance Mode

Maintenance Mode means:

  • No new features
  • No roadmap-driven improvements
  • Limited fixes, typically only for critical issues
  • No active review of community pull requests

As highlighted by InfoQ, this effectively freezes MinIO Community as a stable but stagnant codebase, pushing innovation and evolution exclusively toward the commercial offerings.

2. Source-only distribution

Official binaries and container images are no longer published for the Community Edition. Users must:

  • Build MinIO from source
  • Maintain their own container images
  • Handle signing, scanning, and provenance themselves

This aligns with a broader industry pattern noted by InfoQ: infrastructure projects increasingly shifting operational burden back to users unless they adopt paid tiers.
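
In practice, "source-only" means owning a small build-and-package pipeline. A minimal sketch, in which the release tag, base image, and registry are placeholders you would replace with your own:

git clone https://github.com/minio/minio.git && cd minio
git checkout <pinned-release-tag>
CGO_ENABLED=0 go build -trimpath -o minio .

cat > Dockerfile.local <<'EOF'
FROM gcr.io/distroless/static-debian12
COPY minio /usr/local/bin/minio
ENTRYPOINT ["/usr/local/bin/minio"]
EOF
docker build -f Dockerfile.local -t registry.example.com/minio:pinned .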

Direct implications for Community users

Security and patching

With no active upstream development:

  • Vulnerability response times may increase
  • Users must monitor security advisories independently
  • Regulated environments may find Community harder to justify

InfoQ emphasizes that this does not make MinIO insecure by default, but it changes the shared-responsibility model significantly.

Operational overhead

Teams now need to:

  • Pin commits or tags explicitly
  • Build and test their own releases
  • Maintain CI pipelines for a core storage dependency

This is a non-trivial cost for what was previously perceived as a “drop‑in” component.

Support and roadmap

The strategic message is clear: active development, roadmap influence, and predictable maintenance live behind the commercial subscription.

Impact on OEM and embedded use cases

The InfoQ analysis draws an important distinction between API consumers and technology embedders.

Using MinIO as an external S3 service

If your application simply consumes an S3 endpoint:

  • The impact is moderate
  • Migration is largely operational
  • Application code usually remains unchanged

Embedding or redistributing MinIO

If your product:

  • Ships MinIO internally
  • Builds gateways or features on MinIO internals
  • Depends on MinIO-specific operational tooling

Then the impact is high:

  • You inherit maintenance and security responsibility
  • Long-term internal forking becomes likely
  • Licensing (AGPL) implications must be reassessed carefully

For OEM vendors, this often forces a strategic re-evaluation rather than a tactical upgrade.

Forks and community reactions

At the time of writing:

  • Several community forks focus on preserving the MinIO Console / UI experience
  • No widely adopted, full replacement fork of the MinIO server exists
  • Community discussion, as summarized by InfoQ, reflects caution rather than rapid consolidation

The absence of a strong server-side fork suggests that most organizations are choosing migration over replacement-by-fork.

Best S3-Compatible Alternatives to MinIO: Ceph RGW, SeaweedFS, Garage

InfoQ highlights that the industry response is not about finding a single “new MinIO”, but about selecting storage systems whose governance and maintenance models better match long-term needs.

Ceph RGW

Best for: Enterprise-grade, highly available environments
Strengths: Mature ecosystem, large community, strong governance
Trade-offs: Operational complexity

SeaweedFS

Best for: Teams seeking simplicity and permissive licensing
Strengths: Apache-2.0 license, active development, integrated S3 API
Trade-offs: Partial S3 compatibility for advanced edge cases

Garage

Best for: Self-hosted and geo-distributed systems
Strengths: Resilience-first design, active open-source development
Trade-offs: AGPL license considerations

Zenko / CloudServer

Best for: Multi-cloud and Scality-aligned architectures
Strengths: Open-source S3 API implementation
Trade-offs: Different architectural assumptions than MinIO

Recommended strategies by scenario

If you need to reduce risk immediately

  • Freeze your current MinIO version
  • Build, scan, and sign your own images
  • Define and rehearse a migration path

If you operate Kubernetes on-prem with HA requirements

  • Ceph RGW is often the most future-proof option

If licensing flexibility is critical

  • Start evaluation with SeaweedFS

If operational UX matters

  • Shift toward automation-first workflows
  • Treat UI forks as secondary tooling, not core infrastructure

Conclusion

MinIO’s shift of the Community Edition into Maintenance Mode is less about short-term breakage and more about long-term sustainability and control.

As the InfoQ analysis makes clear, the real risk is not technical incompatibility but governance misalignment. Organizations that treat object storage as critical infrastructure should favor solutions with transparent roadmaps, active communities, and predictable maintenance models.

For many teams, this moment serves as a natural inflection point: either commit to self-maintaining MinIO, move to a commercially supported path, or migrate to a fully open-source alternative designed for the long run.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Frequently Asked Questions

What does Maintenance Mode mean for MinIO Community Edition?

Maintenance Mode for MinIO Community Edition means the upstream codebase is effectively frozen. There will be no new features, only critical bug fixes, and community pull requests will not be actively reviewed. Furthermore, official pre-built binaries and container images are no longer provided; users must build from source.

Is MinIO Community Edition still safe to use?

The code itself isn’t inherently insecure, but the shared-responsibility model changes. Security patching for non-critical issues will be slower or non-existent. For production, especially in regulated environments, you must now actively monitor advisories and maintain your own built and scanned images, which increases operational risk.

What is the best open-source alternative to MinIO for Kubernetes?

The best alternative depends on your needs. For enterprise-grade, high-availability setups, Ceph RGW is the most robust choice. For simplicity and Apache 2.0 licensing, SeaweedFS is excellent. For geo-distributed, resilience-first designs, evaluate Garage. There is no direct drop-in replacement; each requires evaluation.

Should I fork MinIO or migrate to another solution?

For most organizations, migration is preferable to forking. Maintaining a full fork of a complex storage server involves significant long-term commitment to security, bug fixes, and potential feature backporting. The industry trend, as noted, is toward migration to systems with active upstream development and clear governance.

How does this affect products that embed MinIO (OEM use)?

The impact on OEMs is high. You inherit full maintenance and security responsibility. Long-term internal forking becomes likely, and the AGPL licensing implications must be carefully reassessed. This often forces a strategic re-evaluation of your embedded storage layer rather than a simple tactical update.

Related posts

Helm Drivers Explained: Secrets, ConfigMaps, and State Storage in Helm

When working seriously with Helm in production environments, one of the less-discussed but highly impactful topics is how Helm stores and manages release state. This is where Helm drivers come into play. Understanding Helm drivers is not just an academic exercise; it directly affects security, scalability, troubleshooting, and even disaster recovery strategies.

Understanding Helm drivers is critical for production deployments. This is just one of many essential topics covered in our comprehensive Helm package management guide.

What Helm Drivers Are and How They Are Configured

A Helm driver defines the backend storage mechanism Helm uses to persist release information such as manifests, values, and revision history. Every Helm release has state, and that state must live somewhere. The driver determines where and how this data is stored.

Helm drivers are configured using the HELM_DRIVER environment variable. If the variable is not explicitly set, Helm defaults to using Kubernetes Secrets.

export HELM_DRIVER=secrets

This simple configuration choice can have deep operational consequences, especially in regulated environments or large-scale clusters.

Available Helm Drivers

Secrets Driver (Default)

The secrets driver stores release information as Kubernetes Secrets in the target namespace. This has been the default driver since Helm 3 was introduced.

Secrets are base64-encoded and can be encrypted at rest if Kubernetes encryption at rest is enabled. This makes the driver suitable for clusters with moderate security requirements without additional configuration.
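
You can inspect the stored state directly; the namespace and release name below are illustrative. Helm names these Secrets sh.helm.release.v1.<release>.v<revision> and labels them with owner=helm, and the release payload is double base64-encoded, gzip-compressed JSON:

kubectl get secrets -n my-namespace -l owner=helm

kubectl get secret sh.helm.release.v1.my-release.v1 -n my-namespace \
  -o jsonpath='{.data.release}' | base64 -d | base64 -d | gunzip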

ConfigMaps Driver

The configmaps driver stores Helm release state as Kubernetes ConfigMaps. Functionally, it behaves very similarly to the secrets driver but without any form of implicit confidentiality.

export HELM_DRIVER=configmaps

This driver is often used in development or troubleshooting scenarios where human readability is preferred.

Memory Driver

The memory driver stores release information only in memory. Once the Helm process exits, all state is lost.

export HELM_DRIVER=memory

This driver is rarely used outside of testing, CI pipelines, or ephemeral validation workflows.

Evolution of Helm Drivers

Helm drivers were significantly reworked with the release of Helm 3 in late 2019. Helm 2 relied on Tiller and ConfigMaps by default, which introduced security and operational complexity. Helm 3 removed Tiller entirely and introduced pluggable storage backends with Secrets as the secure default.

Since then, improvements have focused on performance, stability, and better error handling rather than introducing new drivers. The core abstraction has remained intentionally small to avoid fragmentation.

Practical Use Cases and When to Use Each Driver

In production Kubernetes clusters, the secrets driver is almost always the right choice. It integrates naturally with RBAC, supports encryption at rest, and aligns with Kubernetes-native security models.

ConfigMaps can be useful when debugging failed upgrades or learning Helm internals, as the stored data is easier to inspect. However, it should be avoided in environments handling sensitive values.

The memory driver shines in CI/CD pipelines where chart validation or rendering is needed without polluting a cluster with state.

Practical Examples

Switching drivers dynamically can be useful when inspecting a release:

HELM_DRIVER=configmaps helm get manifest my-release

Or running a dry validation in CI:

HELM_DRIVER=memory helm upgrade --install test ./chart --dry-run

Final Thoughts

Helm drivers are rarely discussed, yet they influence how reliable, secure, and observable your Helm workflows are. Treating the choice of driver as a deliberate architectural decision rather than a default setting is one of those small details that differentiate mature DevOps practices from ad-hoc automation.

Helm 4.0 Features, Breaking Changes & Migration Guide 2025

Helm is one of the main utilities within the Kubernetes ecosystem, and therefore the release of a new major version, such as Helm 4.0, is something to consider because it is undoubtedly something that will need to be analyzed, evaluated, and managed in the coming months.

Helm 4.0 represents a major milestone in Kubernetes package management. For a complete understanding of Helm from basics to advanced features, explore our comprehensive Helm package management guide.

Due to this, we will see many comments and articles around this topic, so we will try to shed some light.

Helm 4.0 Key Features and Improvements

According to the project itself in its announcement, Helm 4 introduces three major blocks of changes: a new plugin system, better integration with Kubernetes, and internal modernization of the SDK and performance.

New Plugin System (includes WebAssembly)

The plugin system has been completely redesigned, with a special focus on security through the introduction of a new WebAssembly runtime that, while optional, is recommended as it runs in a “sandbox” mode that offers limits and guarantees from a security perspective.

In any case, there is no need to worry excessively, as the “classic” plugins continue to work, but the message is clear: for security and extensibility, the direction is Wasm.

Server-Side Apply and Better Integration with Other Controllers

From this version, Helm 4 supports Server-Side Apply (SSA) through the --server-side flag. SSA has been stable in Kubernetes since v1.22, and it lets object updates be handled on the server side to avoid conflicts between different controllers managing the same resources.

It also incorporates integration with kstatus to determine the state of a component more reliably than the current --wait behavior.

Other Additional Improvements

Additionally, there is another list of improvements that, while of lesser scope, are important qualitative leaps, such as the following:

  • Installation by digest in OCI registries: (helm install myapp oci://...@sha256:<digest>)
  • Multi-document values: you can pass multiple YAML values in a single multi-doc file, facilitating complex environments/overlays.
  • New --set-json argument that allows for easily passing complex structures compared to the current solution using the --set parameter
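
For example, --set-json makes nested structures much easier to pass than repeated --set flags (the chart and values here are illustrative):

helm install my-app ./chart \
  --set-json 'podAnnotations={"prometheus.io/scrape":"true","prometheus.io/port":"9090"}'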

Why a Major (v4) and Not Another Minor of 3.x?

As explained in the official release post, there were features that the team could not introduce in v3 without breaking public SDK APIs and internal architecture:

  • Strong change in the plugin system (WebAssembly, new types, deep integration with the core).
  • Restructuring of Go packages and establishment of a stable SDK at helm.sh/helm/v4, code-incompatible with v3.
  • Introduction and future evolution of Charts v3, which require the SDK to support multiple versions of chart APIs.

With all this, continuing in the 3.x branch would have violated SemVer: the major number change is basically “paying” the accumulated technical debt to be able to move forward.

Additionally, a new evolution of the charts is expected in the future, moving from v2 to a future v3 that is not yet fully defined, and currently, v2 charts run correctly in this new version.

Is Helm 4.0 Migration Required?

The short answer is: yes. And possibly the long answer is: yes, and quickly. In the official Helm 4 announcement, they specify the support schedule for Helm 3:

  • Helm 3 bug fixes until July 8, 2026.
  • Helm 3 security fixes until November 11, 2026.
  • No new features will be backported to Helm 3 during this period; only Kubernetes client libraries will be updated to support new K8s versions.

Practical translation:

  • Organizations have approximately 1 year to plan a smooth Helm 4.0 migration with continued bug support for Helm 3.
  • After November 2026, continuing to use Helm 3 will become increasingly risky from a security and compatibility standpoint.

Best Practices for Migration

To carry out the migration, remember that both versions can be installed on the same machine or agent. This makes a gradual migration possible, so that everything is migrated correctly before support for v3 ends. The following steps are recommended:

  • Conduct an analysis of all Helm commands and usage from the perspective of integration pipelines, upgrade scripts, or even the import of Helm client libraries in Helm-based developments.
  • Especially carefully review all uses of --post-renderer, helm registry login, --atomic, --force.
  • After the analysis, start testing Helm 4 first in non-production environments, reusing the same charts and values, reverting to Helm 3 if a problem is detected until it is resolved.
  • If you have critical plugins, explicitly test them with Helm 4 before making the global change.

Frequently Asked Questions

What are the main new features in Helm 4.0?

Helm 4.0 introduces three major improvements: a redesigned plugin system with WebAssembly support for enhanced security, Server-Side Apply (SSA) integration for better conflict resolution, and internal SDK modernization for improved performance. Additional features include OCI digest installation and multi-document values support.

When does Helm 3 support end?

Helm 3 bug fixes end July 8, 2026 and security fixes end November 11, 2026. No new features will be backported to Helm 3. Organizations should plan migration to Helm 4.0 before November 2026 to avoid security and compatibility risks.

Are Helm 3 charts compatible with Helm 4.0?

Yes, Helm Chart API v2 charts work correctly with Helm 4.0. However, the Go SDK has breaking changes, so applications using Helm libraries need code updates. The CLI commands remain largely compatible for most use cases.

Can I run Helm 3 and Helm 4 simultaneously?

Yes, both versions can be installed on the same machine, enabling gradual migration strategies. This allows teams to test Helm 4.0 in non-production environments while maintaining Helm 3 for critical workloads during the transition period.

What should I test before migrating to Helm 4.0?

Focus on testing critical plugins, post-renderers, and specific flags like --atomic, --force, and helm registry login. Test all charts and values in non-production environments first, and review any custom integrations using Helm SDK libraries.

What is Server-Side Apply in Helm 4.0?

Server-Side Apply (SSA) is enabled with the --server-side flag and handles resource updates on the Kubernetes API server side. This prevents conflicts between different controllers managing the same resources and has been stable since Kubernetes v1.22.

Resolving Kubernetes Ingress Issues: Limitations and Gateway Insights

Introduction

Ingresses have been, since the early versions of Kubernetes, the most common way to expose applications to the outside. Although their initial design was simple and elegant, the success of Kubernetes and the growing complexity of use cases have turned Ingress into a problematic piece: limited, inconsistent between vendors, and difficult to govern in enterprise environments.

In this article, we analyze why Ingresses have become a constant source of friction, how different Ingress Controllers have influenced this situation, and why more and more organizations are considering alternatives like Gateway API.

What Ingresses are and why they were designed this way

The Ingress ecosystem revolves around two main resources:

🏷️ IngressClass

Defines which controller will manage the associated Ingresses. Its scope is cluster-wide, so it is usually managed by the platform team.

🌐 Ingress

It is the resource that developers use to expose a service. It allows defining routes, domains, TLS certificates, and little more.

Its specification is minimal by design, which allowed for rapid adoption, but also laid the foundation for current problems.

The problem: a standard too simple for complex needs

As Kubernetes became an enterprise standard, users wanted to replicate advanced configurations of traditional proxies: rewrites, timeouts, custom headers, CORS, etc.
But Ingress did not provide native support for all this.

Vendors reacted… and chaos was born.

Annotations vs CRDs: two incompatible paths

Different Ingress Controllers have taken very different paths to add advanced capabilities:

📝 Annotations (NGINX, HAProxy…)

Advantages:

  • Flexible and easy to use
  • Directly in the Ingress resource

Disadvantages:

  • Hundreds of proprietary annotations
  • Fragmented documentation
  • Non-portable configurations between vendors

📦 Custom CRDs (Traefik, Kong…)

Advantages:

  • More structured and powerful
  • Better validation and control

Disadvantages:

  • Adds new non-standard objects
  • Requires installation and management
  • Less interoperability

Result?
Infrastructures deeply coupled to a vendor, complicating migrations, audits, and automation.

The complexity for development teams

The design of Ingress implies two very different responsibilities:

  • Platform: defines IngressClass
  • Application: defines Ingress

But the reality is that the developer ends up making decisions that should be the responsibility of the platform area:

  • Certificates
  • Security policies
  • Rewrite rules
  • CORS
  • Timeouts
  • Corporate naming practices

This causes:

  • Inconsistent configurations
  • Bottlenecks in reviews
  • Constant dependency between teams
  • Lack of effective standardization

In large companies, where security and governance are critical, this is especially problematic.

NGINX Ingress: the decommissioning that reignited the debate

The recent decommissioning of the NGINX Ingress Controller has highlighted the fragility of the ecosystem:

  • Thousands of clusters depend on it
  • Multiple projects use its annotations
  • Migrating involves rewriting entire configurations

This has reignited the conversation about the need for a real standard… and there appears Gateway API.

Gateway API: a promising alternative (but not perfect)

Gateway API was born to solve many of the limitations of Ingress:

  • Clear separation of responsibilities (infrastructure vs application)
  • Standardized extensibility
  • More types of routes (HTTPRoute, TCPRoute…)
  • Greater expressiveness without relying on proprietary annotations

But it also brings challenges:

  • Requires gradual adoption
  • Not all vendors implement the same
  • Migration is not trivial

Even so, it is shaping up to be the future of traffic management in Kubernetes.
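
To make the separation of responsibilities concrete, here is a minimal, illustrative sketch: the platform team owns the Gateway, and an application team attaches an HTTPRoute to it. Names, namespaces, hostnames, and the GatewayClass are placeholders:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra
spec:
  gatewayClassName: example-class
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    tls:
      mode: Terminate
      certificateRefs:
      - name: wildcard-cert
    allowedRoutes:
      namespaces:
        from: All
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app
  namespace: team-a
spec:
  parentRefs:
  - name: shared-gateway
    namespace: infra
  hostnames:
  - "app.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: my-app
      port: 80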

Conclusion

Ingresses have been fundamental to the success of Kubernetes, but their own simplicity has led them to become a bottleneck. The lack of interoperability, differences between vendors, and complex governance in enterprise environments make it clear that it is time to adopt more mature models.

Gateway API is not perfect, but it moves in the right direction.
Organizations that want future stability should start planning their transition.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Helm v3.17 Take Ownership Flag: Fix Release Conflicts

Helm has long been the standard for managing Kubernetes applications using packaged charts, bringing a level of reproducibility and automation to the deployment process. However, some operational tasks, such as renaming a release or migrating objects between charts, have traditionally required cumbersome workarounds. With the introduction of the --take-ownership flag in Helm v3.17 (released in January 2025), a long-standing pain point is finally addressed—at least partially.

The take-ownership feature represents the continuing evolution of Helm. Learn about this and other cutting-edge capabilities in our Helm Charts Package Management Guide

In this post, we will explore:

  • What the --take-ownership flag does
  • Why it was needed
  • The caveats and limitations
  • Real-world use cases where it helps
  • When not to use it

Understanding Helm Release Ownership and Object Management

When Helm installs or upgrades a chart, it injects metadata—labels and annotations—into every managed Kubernetes object. These include:

app.kubernetes.io/managed-by: Helm
meta.helm.sh/release-name: my-release
meta.helm.sh/release-namespace: default

This metadata serves an important role: Helm uses it to track and manage resources associated with each release. As a safeguard, Helm does not allow one release to modify objects owned by another, and when you try to do so you will see an error like the one below:

Error: Unable to continue with install: Service "provisioner-agent" in namespace "test-my-ns" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "dp-core-infrastructure11": current value is "dp-core-infrastructure"

While this protects users from accidental overwrites, it creates limitations for advanced use cases.

Why --take-ownership Was Needed

Let’s say you want to:

  • Rename an existing Helm release from api-v1 to api.
  • Move a ConfigMap or Service from one chart to another.
  • Rebuild state during GitOps reconciliation when previous Helm metadata has drifted.

Previously, your only option was to:

  1. Uninstall the existing release.
  2. Reinstall under the new name.

This approach introduces downtime, and in production systems, that’s often not acceptable.

What the Flag Does

helm upgrade my-release ./my-chart --take-ownership

When this flag is passed, Helm will:

  • Skip the ownership validation for existing objects.
  • Override the labels and annotations to associate the object with the current release.

In practice, this allows you to claim ownership of resources that previously belonged to another release, enabling seamless handovers.

⚠️ What It Doesn’t Do

This flag does not:

  • Clean up references from the previous release.
  • Protect you from future uninstalls of the original release (which might still remove shared resources).
  • Allow you to adopt completely unmanaged Kubernetes resources (those not initially created by Helm).

In short, it’s a mechanism for bypassing Helm’s ownership checks, not a full lifecycle manager.

Real-World Helm Take Ownership Use Cases

Let’s go through common scenarios where this feature is useful.

✅ 1. Renaming a Release Without Downtime

Before:

helm uninstall old-name
helm install new-name ./chart

Now:

helm upgrade --install new-name ./chart --take-ownership

✅ 2. Migrating Objects Between Charts

You’re refactoring a large chart into smaller, modular ones and need to reassign certain Service or Secret objects.

This flag allows the new release to take control of the object without deleting or recreating it.

✅ 3. GitOps Drift Reconciliation

If objects were deployed out-of-band or their metadata changed unintentionally, GitOps tooling using Helm can recover without manual intervention using --take-ownership.

Best Practices and Recommendations

  • Use this flag intentionally, and document where it’s applied.
  • If possible, remove the previous release after migration to avoid confusion.
  • Monitor Helm’s behavior closely when managing shared objects.
  • For non-Helm-managed resources, continue to use kubectl annotate or kubectl label to manually align metadata.
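
For that last case, a minimal sketch of manually aligning metadata so Helm will adopt a pre-existing object (the Service, release, and namespace names are illustrative):

kubectl annotate service my-svc \
  meta.helm.sh/release-name=my-release \
  meta.helm.sh/release-namespace=default --overwrite
kubectl label service my-svc app.kubernetes.io/managed-by=Helm --overwrite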

Conclusion

The --take-ownership flag is a welcomed addition to Helm’s CLI arsenal. While not a universal solution, it smooths over many of the rough edges developers and SREs face during release evolution and GitOps adoption.

It brings a subtle but powerful improvement—especially in complex environments where resource ownership isn’t static.

Stay updated with Helm releases, and consider this flag your new ally in advanced release engineering.

Frequently Asked Questions

What does the Helm --take-ownership flag do?

The --take-ownership flag allows Helm to bypass ownership validation and claim control of Kubernetes resources that belong to another release. It updates the meta.helm.sh/release-name annotation to associate objects with the current release, enabling zero-downtime release renames and chart migrations.

When should I use Helm take ownership?

Use --take-ownership when renaming releases without downtime, migrating objects between charts, or fixing GitOps drift. It’s ideal for production environments where uninstall/reinstall cycles aren’t acceptable. Always document usage and clean up previous releases afterward.

What are the limitations of Helm take ownership?

The flag doesn’t clean up references from previous releases or protect against future uninstalls of the original release. It only works with Helm-managed resources, not completely unmanaged Kubernetes objects. Manual cleanup of old releases is still required.

Is Helm take ownership safe for production use?

Yes, but use it intentionally and carefully. The flag bypasses Helm’s safety checks, so ensure you understand the ownership implications. Test in staging first, document all usage, and monitor for conflicts. Remove old releases after successful migration to avoid confusion.

Which Helm version introduced the take ownership flag?

The --take-ownership flag was introduced in Helm v3.17, released in January 2025. This feature addresses long-standing pain points with release renaming and chart migrations that previously required downtime-inducing uninstall/reinstall cycles.

ConfigMap Optional Values in Kubernetes: Avoid CreateContainerConfigError

Kubernetes ConfigMaps are a powerful tool for managing configuration data separately from application code. However, they can sometimes lead to issues during deployment, particularly when a ConfigMap referenced in a Pod specification is missing, causing the application to fail to start. This is a common scenario that can lead to a CreateContainerConfigError and halt your deployment pipeline.

Understanding the Problem

When a ConfigMap is referenced in a Pod’s specification, Kubernetes expects the ConfigMap to be present. If it is not, Kubernetes will not start the Pod, leading to a failed deployment. This can be problematic in situations where certain configuration data is optional or environment-specific, such as proxy settings that are only necessary in certain environments.

Making ConfigMap Values Optional

Kubernetes provides a way to define ConfigMap items as optional, allowing your application to start even if the ConfigMap is not present. This can be particularly useful for environment variables that only need to be set under certain conditions.

Here’s a basic example of how to make a ConfigMap optional:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: example-container
    image: nginx
    env:
    - name: OPTIONAL_ENV_VAR
      valueFrom:
        configMapKeyRef:
          name: example-configmap
          key: optional-key
          optional: true

In this example:

  • name: example-configmap refers to the ConfigMap that might or might not be present.
  • optional: true ensures that the Pod will still start even if example-configmap or the optional-key within it is missing.

Practical Use Case: Proxy Configuration

A common use case for optional ConfigMap values is setting environment variables for proxy configuration. In many enterprise environments, proxy settings are only required in certain deployment environments (e.g., staging, production) but not in others (e.g., local development).

apiVersion: v1
kind: ConfigMap
metadata:
  name: proxy-config
data:
  HTTP_PROXY: "http://proxy.example.com"
  HTTPS_PROXY: "https://proxy.example.com"

In your Pod specification, you could reference these proxy settings as optional:

apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app-container
    image: my-app-image
    env:
    - name: HTTP_PROXY
      valueFrom:
        configMapKeyRef:
          name: proxy-config
          key: HTTP_PROXY
          optional: true
    - name: HTTPS_PROXY
      valueFrom:
        configMapKeyRef:
          name: proxy-config
          key: HTTPS_PROXY
          optional: true

In this setup, if the proxy-config ConfigMap is missing, the application will still start, simply without the proxy settings.

Sample Application

Let’s walk through a simple example to demonstrate this concept. We will create a deployment for an application that uses optional configuration values.

  1. Create the ConfigMap (Optional):
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  GREETING: "Hello, World!"
  2. Deploy the Application:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: busybox
        command: ["sh", "-c", "echo $GREETING"]
        env:
        - name: GREETING
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: GREETING
              optional: true
  3. Deploy and Test:
  • Deploy the application using kubectl apply -f <your-deployment-file>.yaml.
  • If the app-config ConfigMap is present, the Pod will output “Hello, World!”.
  • If the ConfigMap is missing, the Pod will start, but no greeting will be echoed.
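
A quick way to verify both behaviors, assuming the manifests above are saved as configmap.yaml and deployment.yaml (file names are illustrative):

# Without the ConfigMap: the Pod still starts, GREETING is simply empty
kubectl apply -f deployment.yaml
kubectl logs deployment/hello-world-deployment

# With the ConfigMap: recreate the Pods so the env var is re-resolved
kubectl apply -f configmap.yaml
kubectl rollout restart deployment/hello-world-deployment
kubectl logs deployment/hello-world-deployment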

Conclusion

Optional ConfigMap values are a simple yet effective way to make your Kubernetes deployments more resilient and adaptable to different environments. By marking ConfigMap keys as optional, you can prevent deployment failures and allow your applications to handle missing configuration gracefully.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.