ArgoCD Guide: GitOps Continuous Delivery for Kubernetes

ArgoCD has become the de facto standard for GitOps-based continuous delivery in Kubernetes. If you are running production workloads on Kubernetes and still deploying with raw kubectl apply or untracked Helm releases, ArgoCD solves a class of problems you may not even know you have yet. This guide covers everything from core concepts to production-grade configuration.

The Problem ArgoCD Solves

Traditional CI/CD pushes deployments into a cluster. A CI system runs tests, builds an image, and then executes kubectl apply or helm upgrade against the cluster. This model has several structural problems:

  • Drift goes undetected. Someone applies a hotfix directly to the cluster. Now your Git repository no longer reflects reality, and nobody knows it.
  • No single source of truth. The cluster state is authoritative, not Git. Your desired state and actual state can diverge silently.
  • Rollback is painful. Rolling back a bad deployment means re-running old CI pipelines or manually reversing changes, neither of which is fast.
  • Multi-cluster management compounds the problem. Each cluster becomes a snowflake with its own history of undocumented changes.

GitOps inverts this model. Git is the source of truth. The cluster pulls its desired state from Git and continuously reconciles toward it. ArgoCD is the most mature GitOps operator for Kubernetes, implementing this pull-based model with a production-ready feature set.

How ArgoCD Works: Core Architecture

ArgoCD runs as a set of controllers inside your Kubernetes cluster. The core components are:

  • Application Controller — Watches both the Git repository and the live cluster state. Computes the diff and drives reconciliation.
  • API Server — Exposes the gRPC/REST API consumed by the CLI, UI, and external systems.
  • Repository Server — Generates Kubernetes manifests from source (Helm, Kustomize, plain YAML, Jsonnet).
  • Redis — Caches cluster state and repository data to reduce API server load.
  • Dex (optional) — Provides OIDC authentication for SSO integration.

The fundamental unit in ArgoCD is an Application — a CRD that maps a source (a path in a Git repo at a specific revision) to a destination (a namespace in a cluster). ArgoCD continuously compares the desired state from Git with the live state in the cluster and reports on the sync status.

Sync Status vs Health Status

Two orthogonal concepts you need to understand from day one:

  • Sync Status — Does the live state match what Git says it should be? Values: Synced, OutOfSync, Unknown.
  • Health Status — Is the application actually working? Values: Healthy, Progressing, Degraded, Suspended, Missing, Unknown.

An application can be Synced but Degraded — the manifests were applied correctly, but a pod is crash-looping. Conversely, it can be OutOfSync but Healthy — someone applied a change directly to the cluster outside of Git.

Installing ArgoCD

The official installation method uses a single manifest. For production, always pin to a specific version:

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.11.0/manifests/install.yaml

This deploys ArgoCD in the argocd namespace with full cluster-admin access. For a production HA setup, use the manifests/ha/install.yaml variant, which deploys multiple replicas of the API server and application controller.

Accessing the UI and CLI

The initial admin password is auto-generated and stored in a secret:

argocd admin initial-password -n argocd

For local access, port-forward the API server:

kubectl port-forward svc/argocd-server -n argocd 8080:443

Then log in via the CLI:

argocd login localhost:8080 --username admin --password <password> --insecure

For production, expose the ArgoCD server via an Ingress or LoadBalancer with a proper TLS certificate. If you’re using NGINX Ingress Controller:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
  - host: argocd.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              number: 443

Defining Your First Application

Applications can be created via the UI, the CLI, or declaratively with a YAML manifest. The declarative approach is the recommended one — it means your ArgoCD configuration itself is in Git:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/your-app
    targetRevision: HEAD
    path: k8s/overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true

Key fields to understand:

  • targetRevision — Can be a branch name, tag, or commit SHA. For production, pin to a tag rather than HEAD.
  • path — The directory within the repo containing your Kubernetes manifests.
  • automated.prune — Automatically delete resources that are no longer in Git. Required for true GitOps but use carefully — it will delete things.
  • automated.selfHeal — Automatically revert manual changes made directly to the cluster. This is what enforces Git as the single source of truth.
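
Beyond prune and selfHeal, the syncPolicy block also supports retry behavior, so transient failures (a webhook timeout, a briefly unavailable API server) don't leave the application stuck in an error state. A sketch of the relevant fields, with illustrative values:

```yaml
syncPolicy:
  automated:
    prune: true
    selfHeal: true
  retry:
    limit: 5                # give up after five failed sync attempts
    backoff:
      duration: 30s         # initial delay between retries
      factor: 2             # exponential multiplier applied to the delay
      maxDuration: 5m       # upper bound on the delay
```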

Helm Integration

ArgoCD has native Helm support. It can deploy Helm charts directly from chart repositories or from your Git repository. You can override values per environment:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prometheus-stack
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://prometheus-community.github.io/helm-charts
    chart: kube-prometheus-stack
    targetRevision: 58.4.0
    helm:
      releaseName: prometheus-stack
      valuesObject:
        grafana:
        adminPassword: "${GRAFANA_PASSWORD}"  # placeholder only — resolve via a secrets tool, never commit real values
        prometheus:
          prometheusSpec:
            retention: 30d
            storageSpec:
              volumeClaimTemplate:
                spec:
                  storageClassName: fast-ssd
                  resources:
                    requests:
                      storage: 50Gi
  destination:
    server: https://kubernetes.default.svc
    namespace: observability

One important nuance: ArgoCD does not run helm install. It renders the chart with helm template and applies the output like any other manifests. Helm hooks are translated into ArgoCD hook equivalents where possible (pre-install and pre-upgrade map to PreSync, post-install and post-upgrade to PostSync), but the release is not recorded in Helm's release history — running helm list will not show ArgoCD-managed releases.

Projects: Multi-Tenancy and Access Control

ArgoCD Projects provide multi-tenancy within a single ArgoCD instance. They let you restrict which source repositories, destination clusters, and namespaces a team can deploy to. Every Application belongs to a Project.

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: platform-team
  namespace: argocd
spec:
  description: Platform team applications
  sourceRepos:
  - 'https://github.com/your-org/*'
  destinations:
  - namespace: 'platform-*'
    server: https://kubernetes.default.svc
  clusterResourceWhitelist:
  - group: ''
    kind: Namespace
  namespaceResourceBlacklist:
  - group: ''
    kind: ResourceQuota

Projects are where you define the boundaries of what each team can do. The default project has no restrictions — never use it for production workloads. Create dedicated projects per team or per environment.

RBAC Configuration

ArgoCD has its own RBAC system layered on top of Kubernetes RBAC. It is configured via the argocd-rbac-cm ConfigMap. Roles are defined per project or globally:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.default: role:readonly
  policy.csv: |
    # Platform team has full access to platform-team project
    p, role:platform-admin, applications, *, platform-team/*, allow
    p, role:platform-admin, projects, get, platform-team, allow
    p, role:platform-admin, repositories, *, *, allow

    # Dev team can sync but not delete
    p, role:developer, applications, get, */*, allow
    p, role:developer, applications, sync, */*, allow
    p, role:developer, applications, action/*, */*, allow

    # Bind SSO groups to roles
    g, your-org:platform-team, role:platform-admin
    g, your-org:developers, role:developer

The policy.default: role:readonly ensures that any authenticated user who has no explicit role assignment gets read-only access — a safe default for production.

Multi-Cluster Management

ArgoCD can manage multiple Kubernetes clusters from a single control plane. Register external clusters with the CLI:

# First, ensure the target cluster context is in your kubeconfig
argocd cluster add production-eu-west --name production-eu-west

# Verify registration
argocd cluster list

ArgoCD creates a ServiceAccount in the target cluster and stores its credentials as a Kubernetes secret in the ArgoCD namespace. Applications can then target the cluster either by API server URL in the destination.server field or by its registered name in destination.name.
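
For illustration, a destination that references a registered cluster by name rather than by URL (the cluster and namespace names here are hypothetical):

```yaml
destination:
  name: production-eu-west   # the name given during argocd cluster add
  namespace: payments
```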

For large-scale multi-cluster setups, consider the App of Apps pattern or ApplicationSets. ApplicationSets are a controller that generates Applications dynamically based on generators — cluster lists, Git directory structures, or matrix combinations:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-addons
  namespace: argocd
spec:
  generators:
  - clusters:
      selector:
        matchLabels:
          environment: production
  template:
    metadata:
      name: '{{name}}-addons'
    spec:
      project: platform
      source:
        repoURL: https://github.com/your-org/cluster-addons
        targetRevision: HEAD
        path: 'addons/{{metadata.labels.region}}'
      destination:
        server: '{{server}}'
        namespace: kube-system

This single ApplicationSet deploys the appropriate addons to every cluster labeled environment: production, using each cluster’s region label to select the correct path in the repository.

Sync Strategies and Waves

When deploying complex applications with dependencies between resources, you need to control the order of deployment. ArgoCD provides two mechanisms:

Sync Phases

Resources are deployed in three phases: PreSync, Sync, and PostSync. Use Sync Hooks for resources that must complete before the main sync proceeds (database migrations, certificate issuance, etc.):

apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  template:
    spec:
      containers:
      - name: migrate
        image: your-app:v1.2.3
        command: ["./migrate.sh"]
      restartPolicy: Never

Sync Waves

Within the Sync phase, waves control ordering. Resources with a lower wave number are applied and must become healthy before resources with higher wave numbers are applied:

# Applied first
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "1"

# Applied after wave 1 is healthy
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "2"

Notifications and Alerting

ArgoCD Notifications is a standalone controller that sends alerts when Application state changes. It supports Slack, PagerDuty, GitHub commit status, email, and a dozen other providers. Configure it via the argocd-notifications-cm ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
  namespace: argocd
data:
  service.slack: |
    token: $slack-token
  template.app-sync-failed: |
    slack:
      attachments: |
        [{
          "title": "{{.app.metadata.name}}",
          "color": "#E96D76",
          "fields": [{
            "title": "Sync Status",
            "value": "{{.app.status.sync.status}}",
            "short": true
          },{
            "title": "Message",
            "value": "{{range .app.status.conditions}}{{.message}}{{end}}",
            "short": false
          }]
        }]
  trigger.on-sync-failed: |
    - when: app.status.sync.status == 'Unknown'
      send: [app-sync-failed]
    - when: app.status.operationState.phase in ['Error', 'Failed']
      send: [app-sync-failed]
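
The ConfigMap only defines services, templates, and triggers; individual Applications opt in through a subscription annotation. Assuming the on-sync-failed trigger above and a Slack channel named deploy-alerts, the subscription would look like:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
  annotations:
    notifications.argoproj.io/subscribe.on-sync-failed.slack: deploy-alerts
```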

Secret Management with ArgoCD

ArgoCD intentionally has no secret management built in — storing secrets in Git as plain text is never acceptable. The common patterns are:

  • Sealed Secrets (Bitnami) — Encrypts secrets with a cluster-specific key. The encrypted secret can be committed to Git; only the cluster can decrypt it.
  • External Secrets Operator — Syncs secrets from Vault, AWS Secrets Manager, GCP Secret Manager, etc. into Kubernetes secrets. The ArgoCD Application manages the ExternalSecret CRD, not the actual secret value.
  • argocd-vault-plugin — A plugin that replaces placeholder values in manifests with secrets retrieved from Vault at sync time.

The External Secrets Operator approach is the most flexible for teams already using a centralized secrets backend. The Application in ArgoCD deploys ExternalSecret objects, which the ESO controller resolves at runtime without ever touching Git.
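
To make the pattern concrete, here is a minimal sketch of an ExternalSecret that such an Application might deploy (the store name and backend key paths are hypothetical):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-db-credentials
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend          # hypothetical ClusterSecretStore
    kind: ClusterSecretStore
  target:
    name: db-credentials         # the Kubernetes Secret ESO will create
  data:
  - secretKey: password
    remoteRef:
      key: prod/db               # path in the external backend
      property: password
```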

Production Best Practices

  • Run ArgoCD in HA mode. Use manifests/ha/install.yaml with 3 replicas of the API server and multiple application controller shards for large clusters (100+ applications).
  • Pin image versions. Never use latest for the ArgoCD image itself. Pin to a specific version and upgrade deliberately.
  • Use the App of Apps pattern for bootstrapping. A single root Application deploys all other Applications. This makes cluster bootstrapping idempotent and reproducible.
  • Separate ArgoCD config from application config. Store ArgoCD Application manifests in a dedicated gitops repository, separate from application source code.
  • Enable resource tracking via annotations. Use application.resourceTrackingMethod: annotation in argocd-cm instead of the default label-based tracking, which can conflict with Helm’s own labels.
  • Set resource limits on ArgoCD controllers. Application controller CPU and memory scale with the number of resources tracked. Monitor and tune accordingly.
  • Restrict auto-sync in production. Consider requiring manual sync approval for production environments even when using GitOps — or at minimum require a PR approval gate before changes reach the target branch.
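
The App of Apps pattern mentioned above reduces to a single root Application whose source path contains only other Application manifests. A sketch, with placeholder repository details:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/gitops   # repo holding Application manifests
    targetRevision: main
    path: apps                                    # directory of child Application YAML files
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
```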

ArgoCD vs Flux

Flux v2 is the other major GitOps operator. Both are CNCF projects. The main differences in practice:

Feature             ArgoCD                                       Flux v2
UI                  Built-in web UI                              No official UI (use Weave GitOps)
Multi-cluster       Single control plane manages many clusters   Agent per cluster, pull model
ApplicationSets     Native                                       Kustomization + HelmRelease
Secret management   Plugin-based                                 SOPS native integration
Learning curve      Steeper (more concepts)                      Lower (Kubernetes-native CRDs)
CNCF status         Graduated                                    Graduated

ArgoCD wins when you need the UI, multi-cluster management from a central plane, or have a large operations team that benefits from the visual application topology view. Flux wins when you want a simpler, purely Kubernetes-native approach with better SOPS integration for secret management.

FAQ

Can ArgoCD deploy to the cluster it runs in?

Yes. The https://kubernetes.default.svc destination refers to the local cluster. ArgoCD can manage both its own cluster and external clusters simultaneously.

Does ArgoCD support private Git repositories?

Yes. Configure repository credentials via argocd repo add with SSH keys, HTTPS username/password, or GitHub App credentials. Credentials are stored as Kubernetes secrets in the ArgoCD namespace.

How does ArgoCD handle CRD installation?

CRDs can be managed by ArgoCD, but there is a chicken-and-egg problem: if a CRD is not yet installed, ArgoCD cannot validate resources that use it. The recommended pattern is to put CRDs in wave 1 and dependent resources in wave 2, or to use a separate Application for CRDs.
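
One additional mitigation for the validation problem: resources whose CRD arrives in an earlier wave can be annotated so ArgoCD skips its dry-run check for them. A sketch (the resource kind here is illustrative):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-cert
  annotations:
    argocd.argoproj.io/sync-wave: "2"   # the CRD itself ships in wave 1
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
```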

What is the difference between an Application and an AppProject?

An Application is the unit of deployment — it maps a Git source to a cluster destination. An AppProject is a grouping and access control boundary — it restricts what sources and destinations an Application within the project can use. Every Application belongs to exactly one AppProject.

How do I roll back a deployment with ArgoCD?

The GitOps way: revert the commit in Git and let ArgoCD reconcile. ArgoCD also provides a UI-based rollback to any previous sync revision, but this is considered a temporary measure — the Git history should always be updated to match.

Getting Started

The fastest path from zero to a working ArgoCD setup on a local cluster:

# 1. Create a local cluster (kind or minikube)
kind create cluster --name argocd-demo

# 2. Install ArgoCD
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# 3. Wait for pods
kubectl wait --for=condition=Ready pods --all -n argocd --timeout=120s

# 4. Get the initial admin password
argocd admin initial-password -n argocd

# 5. Port-forward and log in
kubectl port-forward svc/argocd-server -n argocd 8080:443 &
argocd login localhost:8080 --username admin --insecure

# 6. Deploy your first application
argocd app create guestbook \
  --repo https://github.com/argoproj/argocd-example-apps.git \
  --path guestbook \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace guestbook \
  --sync-policy automated

From here, the natural next steps are integrating ArgoCD with your existing CI pipeline (CI builds and pushes the image, updates the image tag in Git, ArgoCD detects the change and syncs), configuring SSO via Dex, and setting up the App of Apps pattern for managing multiple applications declaratively.

For teams looking to go deeper on GitOps and ArgoCD in production, the Kubernetes architecture patterns guide covers how ArgoCD fits into a broader platform engineering stack alongside service mesh, policy enforcement, and observability tooling.

Helm Chart Testing in Production: Layers, Tools, and a Minimum CI Pipeline

When a Helm chart fails in production, the impact is immediate and visible. A misconfigured ServiceAccount, a typo in a ConfigMap key, or an untested conditional in templates can trigger incidents that cascade through your entire deployment pipeline. The irony is that most teams invest heavily in testing application code while treating Helm charts as “just configuration.”

Chart testing is fundamental for production-quality Helm deployments. For comprehensive coverage of testing along with all other Helm best practices, visit our complete Helm guide.

Helm charts are infrastructure code. They define how your applications run, scale, and integrate with the cluster. Treating them with less rigor than your application logic is a risk most production environments cannot afford.

The Real Cost of Untested Charts

In late 2024, a medium-sized SaaS company experienced a 4-hour outage because a chart update introduced a breaking change in RBAC permissions. The chart had been tested locally with helm install --dry-run, but the dry-run validation doesn’t interact with the API server’s RBAC layer. The deployment succeeded syntactically but failed operationally.

The incident revealed three gaps in their workflow:

  1. No schema validation against the target Kubernetes version
  2. No integration tests in a live cluster
  3. No policy enforcement for security baselines

These gaps are common. According to a 2024 CNCF survey on GitOps practices, fewer than 40% of organizations systematically test Helm charts before production deployment.

The problem is not a lack of tools—it’s understanding which layer each tool addresses.

Testing Layers: What Each Level Validates

Helm chart testing is not a single operation. It requires validation at multiple layers, each catching different classes of errors.

Layer 1: Syntax and Structure Validation

What it catches: Malformed YAML, invalid chart structure, missing required fields

Tools:

  • helm lint: Built-in, minimal validation following Helm best practices
  • yamllint: Strict YAML formatting rules
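
Since Helm templates are not valid YAML before rendering, yamllint is typically pointed at values files and rendered output rather than at the templates themselves. A minimal configuration sketch (rule thresholds are illustrative):

```yaml
# .yamllint
extends: default
rules:
  line-length:
    max: 120
  indentation:
    spaces: 2
ignore: |
  charts/*/templates/   # raw Go templates fail YAML parsing
```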

Example failure caught:

# Invalid indentation breaks the chart
resources:
  limits:
      cpu: "500m"
    memory: "512Mi"  # Incorrect indentation

Limitation: Does not validate whether the rendered manifests are valid Kubernetes objects.

Layer 2: Schema Validation

What it catches: Manifests that would be rejected by the Kubernetes API

Primary tool: kubeconform

Kubeconform is the actively maintained successor to the deprecated kubeval. It validates against OpenAPI schemas for specific Kubernetes versions and can include custom CRDs.

Project Profile:

  • Maintenance: Active, community-driven
  • Strengths: CRD support, multi-version validation, fast execution
  • Why it matters: helm lint validates chart structure, but not if rendered manifests match Kubernetes schemas

Example failure caught:

apiVersion: apps/v1
kind: Deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: app
        image: nginx:latest
# Missing required field: spec.selector

Configuration example:

helm template my-chart . | kubeconform \
  -kubernetes-version 1.30.0 \
  -schema-location default \
  -schema-location 'https://raw.githubusercontent.com/datreeio/CRDs-catalog/main/{{.Group}}/{{.ResourceKind}}_{{.ResourceAPIVersion}}.json' \
  -summary

Example CI integration:

#!/bin/bash
set -e

KUBE_VERSION="1.30.0"

echo "Rendering chart..."
helm template my-release ./charts/my-chart > manifests.yaml

echo "Validating against Kubernetes $KUBE_VERSION..."
kubeconform \
  -kubernetes-version "$KUBE_VERSION" \
  -schema-location default \
  -summary \
  -output json \
  manifests.yaml | jq -e '.summary.invalid == 0'

Alternative: kubectl --dry-run=server (requires cluster access, validates against actual API server)

Layer 3: Unit Testing

What it catches: Logic errors in templates, incorrect conditionals, wrong value interpolation

Unit tests validate that given a set of input values, the chart produces the expected manifests. This is where template logic is verified before reaching a cluster.

Primary tool: helm-unittest

helm-unittest is the most widely adopted unit testing framework for Helm charts.

Project Profile:

  • GitHub: 3.3k+ stars, ~100 contributors
  • Maintenance: Active (releases every 2-3 months)
  • Primary maintainers: community volunteers (the helm-unittest GitHub organization)
  • Commercial backing: None
  • Bus Factor: Medium-High (no institutional backing, but consistent community engagement)

Strengths:

  • Fast execution (no cluster required)
  • Familiar test syntax (similar to Jest/Mocha)
  • Snapshot testing support
  • Good documentation

Limitations:

  • Doesn’t validate runtime behavior
  • Cannot test interactions with admission controllers
  • No validation against actual Kubernetes API

Example test scenario:

# tests/deployment_test.yaml
suite: test deployment
templates:
  - deployment.yaml
tests:
  - it: should set resource limits when provided
    set:
      resources.limits.cpu: "1000m"
      resources.limits.memory: "1Gi"
    asserts:
      - equal:
          path: spec.template.spec.containers[0].resources.limits.cpu
          value: "1000m"
      - equal:
          path: spec.template.spec.containers[0].resources.limits.memory
          value: "1Gi"

  - it: should not create HPA when autoscaling disabled
    set:
      autoscaling.enabled: false
    template: hpa.yaml
    asserts:
      - hasDocuments:
          count: 0

Alternative: Terratest (Helm module)

Terratest is a Go-based testing framework from Gruntwork that includes first-class Helm support. Unlike helm-unittest, Terratest deploys charts to real clusters and allows programmatic assertions in Go.

Example Terratest test:

func TestHelmChartDeployment(t *testing.T) {
    kubectlOptions := k8s.NewKubectlOptions("", "", "default")
    options := &helm.Options{
        KubectlOptions: kubectlOptions,
        SetValues: map[string]string{
            "replicaCount": "3",
        },
    }
    
    defer helm.Delete(t, options, "my-release", true)
    helm.Install(t, options, "../charts/my-chart", "my-release")
    
    k8s.WaitUntilNumPodsCreated(t, kubectlOptions, metav1.ListOptions{
        LabelSelector: "app=my-app",
    }, 3, 30, 10*time.Second)
}

When to use Terratest vs helm-unittest:

  • Use helm-unittest for fast, template-focused validation in CI
  • Use Terratest when you need full integration testing with Go flexibility

Layer 4: Integration Testing

What it catches: Runtime failures, resource conflicts, actual Kubernetes behavior

Integration tests deploy the chart to a real (or ephemeral) cluster and verify it works end-to-end.

Primary tool: chart-testing (ct)

chart-testing is the official Helm project for testing charts in live clusters.

Project Profile:

  • Ownership: Official Helm project (CNCF)
  • Maintainers: Helm team (contributors from Microsoft, IBM, Google)
  • Governance: CNCF-backed with public roadmap
  • LTS: Aligned with Helm release cycle
  • Bus Factor: Low (institutional backing from CNCF provides strong long-term guarantees)

Strengths:

  • De facto standard for public Helm charts
  • Built-in upgrade testing (validates migrations)
  • Detects which charts changed in a PR (efficient for monorepos)
  • Integration with GitHub Actions via official action

Limitations:

  • Requires a live Kubernetes cluster
  • Initial setup more complex than unit testing
  • Does not include security scanning

What ct validates:

  • Chart installs successfully
  • Upgrades work without breaking state
  • Linting passes
  • Version constraints are respected

Example ct configuration:

# ct.yaml
target-branch: main
chart-dirs:
  - charts
chart-repos:
  - bitnami=https://charts.bitnami.com/bitnami
helm-extra-args: --timeout 600s
check-version-increment: true

Typical GitHub Actions workflow:

name: Lint and Test Charts

on: pull_request

jobs:
  lint-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
        with:
          fetch-depth: 0

      - name: Set up Helm
        uses: azure/setup-helm@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'

      - name: Set up chart-testing
        uses: helm/chart-testing-action@v2

      - name: Run chart-testing (lint)
        run: ct lint --config ct.yaml

      - name: Create kind cluster
        uses: helm/kind-action@v1

      - name: Run chart-testing (install)
        run: ct install --config ct.yaml

When ct is essential:

  • Public chart repositories (expected by community)
  • Charts with complex upgrade paths
  • Multi-chart repositories with CI optimization needs

Layer 5: Security and Policy Validation

What it catches: Security misconfigurations, policy violations, compliance issues

This layer prevents deploying charts that pass functional tests but violate organizational security baselines or contain vulnerabilities.

Policy Enforcement: Conftest (Open Policy Agent)

Conftest is the CLI interface to Open Policy Agent for policy-as-code validation.

Project Profile:

  • Parent: Open Policy Agent (CNCF Graduated Project)
  • Governance: Strong CNCF backing, multi-vendor support
  • Production adoption: Netflix, Pinterest, Goldman Sachs
  • Bus Factor: Low (graduated CNCF project with multi-vendor backing)

Strengths:

  • Policies written in Rego (reusable, composable)
  • Works with any YAML/JSON input (not Helm-specific)
  • Can enforce organizational standards programmatically
  • Integration with admission controllers (Gatekeeper)

Limitations:

  • Rego has a learning curve
  • Does not replace functional testing

Example Conftest policy:

# policy/security.rego
package main

import future.keywords.contains
import future.keywords.if
import future.keywords.in

deny contains msg if {
  input.kind == "Deployment"
  some container in input.spec.template.spec.containers
  not container.resources.limits.memory
  msg := sprintf("Container '%s' must define memory limits", [container.name])
}

deny contains msg if {
  input.kind == "Deployment"
  some container in input.spec.template.spec.containers
  not container.resources.limits.cpu
  msg := sprintf("Container '%s' must define CPU limits", [container.name])
}

Running the validation:

helm template my-chart . | conftest test -p policy/ -

Alternative: Kyverno

Kyverno offers policy enforcement using native Kubernetes manifests instead of Rego. Policies are written in YAML and can validate, mutate, or generate resources.

Example Kyverno policy:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-resource-limits
spec:
  validationFailureAction: Enforce
  rules:
  - name: check-container-limits
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "All containers must have CPU and memory limits"
      pattern:
        spec:
          containers:
          - resources:
              limits:
                memory: "?*"
                cpu: "?*"

Conftest vs Kyverno:

  • Conftest: Policies run in CI, flexible for any YAML
  • Kyverno: Runtime enforcement in-cluster, Kubernetes-native

Both can coexist: Conftest in CI for early feedback, Kyverno in cluster for runtime enforcement.

Vulnerability Scanning: Trivy

Trivy by Aqua Security provides comprehensive security scanning for Helm charts.

Project Profile:

  • Maintainer: Aqua Security (commercial backing with open-source core)
  • Scope: Vulnerability scanning + misconfiguration detection
  • Helm integration: trivy config scans chart directories, rendering templates before checking them
  • Bus Factor: Low (commercial backing + strong open-source adoption)

What Trivy scans in Helm charts:

  1. Vulnerabilities in referenced container images
  2. Misconfigurations (similar to Conftest but pre-built rules)
  3. Secrets accidentally committed in templates

Example scan:

trivy config ./charts/my-chart --severity HIGH,CRITICAL --exit-code 1

Sample output:

myapp/templates/deployment.yaml (helm)
====================================

Tests: 12 (SUCCESSES: 10, FAILURES: 2)
Failures: 2 (HIGH: 1, CRITICAL: 1)

HIGH: Container 'app' of Deployment 'myapp' should set 'securityContext.runAsNonRoot' to true
════════════════════════════════════════════════════════════════════════════════════════════════
Ensure containers run as non-root users

See https://kubernetes.io/docs/concepts/security/pod-security-standards/
────────────────────────────────────────────────────────────────────────────────────────────────
 myapp/templates/deployment.yaml:42

Commercial support:
Aqua Security offers Trivy Enterprise with advanced features (centralized scanning, compliance reporting). For most teams, the open-source version is sufficient.

Other Security Tools

Polaris (Fairwinds)

Polaris scores charts based on security and reliability best practices. Unlike enforcement tools, it provides a health score and actionable recommendations.

Use case: Dashboard for chart quality across a platform

Checkov (Bridgecrew/Palo Alto)

Similar to Trivy but with a broader IaC focus (Terraform, CloudFormation, Kubernetes, Helm). Pre-built policies for compliance frameworks (CIS, PCI-DSS).

When to use Checkov:

  • Multi-IaC environment (not just Helm)
  • Compliance-driven validation requirements

Enterprise Selection Criteria

Bus Factor and Long-Term Viability

For production infrastructure, tool sustainability matters as much as features. Community support channels like Helm CNCF Slack (#helm-users, #helm-dev) and CNCF TAG Security provide valuable insights into which projects have active maintainer communities.

Questions to ask:

  • Is the project backed by a foundation (CNCF, Linux Foundation)?
  • Are multiple companies contributing?
  • Is the project used in production by recognizable organizations?
  • Is there a public roadmap?

Risk Classification:

Tool            Governance         Bus Factor     Notes
chart-testing   CNCF               Low            Helm official project
Conftest/OPA    CNCF (Graduated)   Low            Multi-vendor backing
Trivy           Aqua Security      Low            Commercial backing + OSS
kubeconform     Community          Medium         Active, but single maintainer
helm-unittest   Community          Medium-High    No institutional backing
Polaris         Fairwinds          Medium         Company-sponsored OSS

Kubernetes Version Compatibility

Tools must explicitly support the Kubernetes versions you run in production.

Red flags:

  • No documented compatibility matrix
  • Hard-coded dependencies on old K8s versions
  • No testing against multiple K8s versions in CI

Example compatibility check:

# Does the tool support your K8s version?
kubeconform --help | grep -A5 "kubernetes-version"

For tools like ct, always verify they test against a matrix of Kubernetes versions in their own CI.
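One way to apply the same standard to your own charts is a CI matrix over kind node images. A hypothetical GitHub Actions fragment (action versions and node-image tags are assumptions, not recommendations):

```yaml
# .github/workflows/k8s-matrix.yaml (hypothetical)
name: k8s-matrix
on: [pull_request]
jobs:
  ct-install:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        k8s: ["v1.29.8", "v1.30.4", "v1.31.0"]   # example kind node image tags
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0                # ct needs git history for change detection
      - uses: helm/chart-testing-action@v2
      - uses: helm/kind-action@v1
        with:
          node_image: kindest/node:${{ matrix.k8s }}
      - run: ct install --target-branch main
```

Running the identical test suite against each version surfaces API deprecations before a cluster upgrade does.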

Commercial Support Options

When commercial support matters:

  • Regulatory compliance requirements (SOC2, HIPAA, etc.)
  • Limited internal expertise
  • SLA-driven operations

Available options:

  • Trivy: Aqua Security offers Trivy Enterprise
  • OPA/Conftest: Styra provides OPA Enterprise
  • Terratest: Gruntwork offers consulting and premium modules

Most teams don’t need commercial support for chart testing specifically, but it’s valuable in regulated industries where audits require vendor SLAs.

Security Scanner Integration

For enterprise pipelines, chart testing tools should integrate cleanly with:

  • SIEM/SOAR platforms
  • CI/CD notification systems
  • Security dashboards (e.g., Grafana, Datadog)

Required features:

  • Structured output formats (JSON, SARIF)
  • Exit codes for CI failure
  • Support for custom policies
  • Webhook or API for event streaming

Example: Integrating Trivy with SIEM

# .github/workflows/security.yaml
- name: Run Trivy scan
  run: trivy config ./charts --format json --output trivy-results.json

- name: Send to SIEM
  run: |
    curl -X POST https://siem.company.com/api/events \
      -H "Content-Type: application/json" \
      -d @trivy-results.json

Testing Pipeline Architecture

A production-grade Helm chart pipeline combines multiple layers, from fast static checks through security scanning to full integration tests.

Pipeline efficiency principles:

  1. Fail fast: syntax and schema errors should never reach integration tests
  2. Parallel execution where possible (unit tests + security scans)
  3. Cache ephemeral cluster images to reduce setup time
  4. Skip unchanged charts (ct built-in change detection)
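These principles can be sketched as a staged GitHub Actions layout (job names and chart paths are assumptions; tool installation steps omitted for brevity):

```yaml
# Hypothetical job graph: cheap checks gate expensive ones,
# unit tests and security scans run in parallel.
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: helm lint charts/my-chart && yamllint charts/my-chart
  unit-tests:
    needs: lint                       # only runs if lint passed
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: helm unittest charts/my-chart
  security:
    needs: lint                       # runs in parallel with unit-tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: trivy config charts/my-chart --exit-code 1 --severity CRITICAL,HIGH
  integration:
    needs: [unit-tests, security]     # most expensive layer runs last
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ct install --config ct.yaml
```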

Decision Matrix: When to Use What

Scenario 1: Small Team / Early-Stage Startup

Requirements: Minimal overhead, fast iteration, reasonable safety

Recommended Stack:

Linting:      helm lint + yamllint
Validation:   kubeconform
Security:     trivy config

Optional: helm-unittest (if template logic becomes complex)

Rationale: Zero-dependency baseline that catches 80% of issues without operational complexity.

Scenario 2: Enterprise with Compliance Requirements

Requirements: Auditable, comprehensive validation, commercial support available

Recommended Stack:

Linting:      helm lint + yamllint
Validation:   kubeconform
Unit Tests:   helm-unittest
Security:     Trivy Enterprise + Conftest (custom policies)
Integration:  chart-testing (ct)
Runtime:      Kyverno (admission control)

Optional: Terratest for complex upgrade scenarios

Rationale: Multi-layer defense with both pre-deployment and runtime enforcement. Commercial support available for security components.

Scenario 3: Multi-Tenant Internal Platform

Requirements: Prevent bad charts from affecting other tenants, enforce standards at scale

Recommended Stack:

CI Pipeline:
  • helm lint → kubeconform → helm-unittest → ct
  • Conftest (enforce resource quotas, namespaces, network policies)
  • Trivy (block critical vulnerabilities)

Runtime:
  • Kyverno or Gatekeeper (enforce policies at admission)
  • ResourceQuotas per namespace
  • NetworkPolicies by default

Additional tooling:

  • Polaris dashboard for chart quality scoring
  • Custom admission webhooks for platform-specific rules

Rationale: Multi-tenant environments cannot tolerate “soft” validation. Runtime enforcement is mandatory.
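As an illustration of such runtime enforcement, here is a sketch of a Kyverno ClusterPolicy that blocks Pods whose containers do not declare runAsNonRoot (the policy name and pattern are a minimal assumption, not a drop-in production policy):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root
spec:
  validationFailureAction: Enforce   # block at admission, don't just audit
  rules:
    - name: run-as-non-root
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Containers must set securityContext.runAsNonRoot to true."
        pattern:
          spec:
            containers:
              - securityContext:
                  runAsNonRoot: true
```

Because this runs at admission, it catches charts that slipped past CI as well as out-of-band kubectl changes.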

Scenario 4: Open Source Public Charts

Requirements: Community trust, transparent testing, broad compatibility

Recommended Stack:

Must-have:
  • chart-testing (expected standard)
  • Public CI (GitHub Actions with full logs)
  • Test against multiple K8s versions

Nice-to-have:
  • helm-unittest with high coverage
  • Automated changelog generation
  • Example values for common scenarios

Rationale: Public charts are judged by testing transparency. Missing ct is a red flag for potential users.

The Minimum Viable Testing Stack

For any environment deploying Helm charts to production, this is the baseline:

Layer 1: Pre-Commit (Developer Laptop)

helm lint charts/my-chart
yamllint charts/my-chart
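If you want these Layer 1 checks to run automatically, they can be wired into git hooks with the pre-commit framework; a hypothetical .pre-commit-config.yaml using local hooks (the chart path is an assumption):

```yaml
repos:
  - repo: local
    hooks:
      - id: helm-lint
        name: helm lint
        entry: helm lint charts/my-chart
        language: system
        pass_filenames: false
      - id: yamllint
        name: yamllint
        entry: yamllint charts/my-chart
        language: system
        pass_filenames: false
```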

Layer 2: CI Pipeline (Automated on PR)

# Fast validation
helm template my-chart ./charts/my-chart | kubeconform \
  -kubernetes-version 1.30.0 \
  -summary

# Security baseline
trivy config ./charts/my-chart --exit-code 1 --severity CRITICAL,HIGH

Layer 3: Pre-Production (Staging Environment)

# Integration test with real cluster
ct install --config ct.yaml --charts charts/my-chart
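A minimal ct.yaml sketch to go with that command (option names follow chart-testing's config file; the values are assumptions to adapt):

```yaml
# ct.yaml (hypothetical)
target-branch: main            # diff against this branch to find changed charts
chart-dirs:
  - charts
helm-extra-args: --timeout 600s
check-version-increment: true  # require a Chart.yaml version bump on changes
validate-maintainers: false
```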

Time investment:

  • Initial setup: 4-8 hours
  • Per-PR overhead: 3-5 minutes
  • Maintenance: ~1 hour/month

ROI calculation:

Average production incident caused by untested chart:

  • Detection: 15 minutes
  • Triage: 30 minutes
  • Rollback: 20 minutes
  • Post-mortem: 1 hour
  • Total: ~2 hours of engineering time (often multiplied across the engineers involved)

If chart testing prevents even one incident per quarter, it recovers its setup cost within the first few months.

Common Anti-Patterns to Avoid

Anti-Pattern 1: Only using --dry-run

helm install --dry-run validates syntax but skips:

  • Admission controller logic
  • RBAC validation
  • Actual resource creation

Better: Combine dry-run with kubeconform and at least one integration test.

Anti-Pattern 2: Testing only in production-like clusters

“We test in staging, which is identical to production.”

Problem: Staging clusters rarely match production exactly (node counts, storage classes, network policies). Integration tests should run in isolated, ephemeral environments.

Anti-Pattern 3: Security scanning without enforcement

Running a Trivy scan without failing the build on critical findings is theater.

Better: Set --exit-code 1 and enforce in CI.

Anti-Pattern 4: Ignoring upgrade paths

Most chart failures happen during upgrades, not initial installs. Chart-testing addresses this with ct install --upgrade.

Conclusion: Testing is Infrastructure Maturity

The gap between teams that test Helm charts and those that don’t is not about tooling availability—it’s about treating infrastructure code with the same discipline as application code.

The cost of testing is measured in minutes per PR. The cost of not testing is measured in hours of production incidents, eroded trust in automation, and teams reverting to manual deployments because “Helm is too risky.”

The testing stack you choose matters less than the fact that you have one. Start with the minimal viable stack (lint + schema + security), run it consistently, and expand as your charts become more complex.

By implementing a structured testing pipeline, you catch 95% of chart issues before they reach production. The remaining 5% are edge cases that require production observability, not more testing layers.

Helm chart testing is not about achieving perfection—it’s about eliminating the preventable failures that undermine confidence in your deployment pipeline.

Frequently Asked Questions (FAQ)

What is Helm chart testing and why is it important in production?

Helm chart testing ensures that Kubernetes manifests generated from Helm templates are syntactically correct, schema-compliant, secure, and function correctly when deployed. In production, untested charts can cause outages, security incidents, or failed upgrades, even if application code itself is stable.

Is helm lint enough to validate a Helm chart?

No. helm lint only validates chart structure and basic best practices. It does not validate rendered manifests against Kubernetes API schemas, test template logic, or verify runtime behavior. Production-grade testing requires additional layers such as schema validation, unit tests, and integration tests.

What is the difference between Helm unit tests and integration tests?

Unit tests (e.g., using helm-unittest) validate template logic by asserting expected output for given input values without deploying anything. Integration tests (e.g., using chart-testing or Terratest) deploy charts to a real Kubernetes cluster and validate runtime behavior, upgrades, and interactions with the API server.

Which tools are recommended for validating Helm charts against Kubernetes schemas?

The most commonly recommended tool is kubeconform, which validates rendered manifests against Kubernetes OpenAPI schemas for specific Kubernetes versions and supports CRDs. An alternative is kubectl --dry-run=server, which validates against a live API server.

How can Helm chart testing prevent production outages?

Testing catches common failure modes before deployment, such as missing selectors in Deployments, invalid RBAC permissions, incorrect conditionals, or incompatible API versions. Many production outages originate from configuration and chart logic errors rather than application bugs.

What is the role of security scanning in Helm chart testing?

Security scanning detects misconfigurations, policy violations, and vulnerabilities that functional tests may miss. Tools like Trivy and Conftest (OPA) help enforce security baselines, prevent unsafe defaults, and block deployments that violate organizational or compliance requirements.

Is chart-testing (ct) required for private Helm charts?

While not strictly required, chart-testing is highly recommended for any chart deployed to production. It is considered the de facto standard for integration testing, especially for charts with upgrades, multiple dependencies, or shared cluster environments.

What is the minimum viable Helm testing pipeline for CI?

At a minimum, a production-ready pipeline should include:

  • helm lint for structural validation
  • kubeconform for schema validation
  • trivy config for security scanning

Integration tests can be added as charts grow in complexity or criticality.

Helm Drivers Explained: Secrets, ConfigMaps, and State Storage in Helm

Helm Drivers Explained: Secrets, ConfigMaps, and State Storage in Helm

When working seriously with Helm in production environments, one of the less-discussed but highly impactful topics is how Helm stores and manages release state. This is where Helm drivers come into play. Understanding Helm drivers is not just an academic exercise; it directly affects security, scalability, troubleshooting, and even disaster recovery strategies.

Understanding Helm drivers is critical for production deployments. This is just one of many essential topics covered in our comprehensive Helm package management guide.

What Helm Drivers Are and How They Are Configured

A Helm driver defines the backend storage mechanism Helm uses to persist release information such as manifests, values, and revision history. Every Helm release has state, and that state must live somewhere. The driver determines where and how this data is stored.

Helm drivers are configured using the HELM_DRIVER environment variable. If the variable is not explicitly set, Helm defaults to using Kubernetes Secrets.

export HELM_DRIVER=secret

This simple configuration choice can have deep operational consequences, especially in regulated environments or large-scale clusters.

Available Helm Drivers

Secrets Driver (Default)

The secrets driver stores release information as Kubernetes Secrets in the target namespace. This has been the default driver since Helm 3 was introduced.

Secret payloads are only base64-encoded, but they can be encrypted at rest if the cluster enables Kubernetes encryption at rest. This makes the driver suitable for clusters with moderate security requirements without additional configuration.
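Because release state is just a Secret, it can be inspected directly. Helm 3 stores each revision in a Secret named sh.helm.release.v1.&lt;release&gt;.v&lt;revision&gt;, and the .data.release payload is double base64-encoded, gzipped JSON. A small sketch (the release and namespace names are hypothetical):

```shell
# decode_release: turn Helm's .data.release payload (base64 twice, then gzip)
# back into readable JSON. Reads stdin, writes stdout.
decode_release() {
  base64 -d | base64 -d | gunzip
}

# Usage against a live cluster (hypothetical release "my-release"):
#   kubectl get secret sh.helm.release.v1.my-release.v1 -n my-namespace \
#     -o jsonpath='{.data.release}' | decode_release
```

This is handy when debugging a stuck upgrade, since the decoded JSON contains the manifest, values, and status of that revision.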

ConfigMaps Driver

The configmap driver stores Helm release state as Kubernetes ConfigMaps. Functionally, it behaves very similarly to the secret driver but without any form of implicit confidentiality.

export HELM_DRIVER=configmap

This driver is often used in development or troubleshooting scenarios where human readability is preferred.

Memory Driver

The memory driver stores release information only in memory. Once the Helm process exits, all state is lost.

export HELM_DRIVER=memory

This driver is rarely used outside of testing, CI pipelines, or ephemeral validation workflows.

Evolution of Helm Drivers

Helm drivers were significantly reworked with the release of Helm 3 in late 2019. Helm 2 relied on Tiller and ConfigMaps by default, which introduced security and operational complexity. Helm 3 removed Tiller entirely and introduced pluggable storage backends with Secrets as the secure default.

Since then, improvements have focused on performance, stability, and better error handling rather than introducing new drivers. The core abstraction has remained intentionally small to avoid fragmentation.

Practical Use Cases and When to Use Each Driver

In production Kubernetes clusters, the secrets driver is almost always the right choice. It integrates naturally with RBAC, supports encryption at rest, and aligns with Kubernetes-native security models.

ConfigMaps can be useful when debugging failed upgrades or learning Helm internals, as the stored data is easier to inspect. However, it should be avoided in environments handling sensitive values.

The memory driver shines in CI/CD pipelines where chart validation or rendering is needed without polluting a cluster with state.

Practical Examples

Pointing Helm at a specific driver at invocation time can be useful when inspecting a release stored under that backend:

HELM_DRIVER=configmap helm get manifest my-release

Or running a dry validation in CI:

HELM_DRIVER=memory helm upgrade --install test ./chart --dry-run

Final Thoughts

Helm drivers are rarely discussed, yet they influence how reliable, secure, and observable your Helm workflows are. Treating the choice of driver as a deliberate architectural decision rather than a default setting is one of those small details that differentiate mature DevOps practices from ad-hoc automation.

Helm 4.0 Features, Breaking Changes & Migration Guide 2025

Helm 4.0 Features, Breaking Changes & Migration Guide 2025

Helm is one of the core utilities in the Kubernetes ecosystem, so the release of a new major version such as Helm 4.0 is worth paying attention to: it will need to be analyzed, evaluated, and managed in the coming months.

Helm 4.0 represents a major milestone in Kubernetes package management. For a complete understanding of Helm from basics to advanced features, explore our comprehensive Helm package management guide.

As a result, we will see plenty of commentary and articles on this topic, so let's try to shed some light on it.

Helm 4.0 Key Features and Improvements

According to the project's own announcement, Helm 4 introduces three major blocks of changes: a new plugin system, better integration with Kubernetes, and internal modernization of the SDK and performance.

New Plugin System (includes WebAssembly)

The plugin system has been completely redesigned, with a special focus on security: it introduces a new WebAssembly runtime that, while optional, is recommended because it runs plugins in a sandbox with security limits and guarantees.

There is no need to worry excessively, as "classic" plugins continue to work, but the message is clear: for security and extensibility, the direction is Wasm.

Server-Side Apply and Better Integration with Other Controllers

Starting with this version, Helm 4 supports Server-Side Apply (SSA) through the --server-side flag. SSA has been stable since Kubernetes v1.22 and lets the API server handle object updates, avoiding conflicts between different controllers managing the same resources.

It also integrates with kstatus to determine the readiness of a component more reliably than the traditional --wait behavior.

Other Additional Improvements

Additionally, there is another list of improvements that, while of lesser scope, are important qualitative leaps, such as the following:

  • Installation by digest in OCI registries: (helm install myapp oci://...@sha256:<digest>)
  • Multi-document values: you can pass multiple YAML values in a single multi-doc file, facilitating complex environments/overlays.
  • The --set-json flag for passing complex structures as JSON, which is far easier than escaping them with the --set parameter

Why a Major (v4) and Not Another Minor of 3.x?

As explained in the official release post, there were features that the team could not introduce in v3 without breaking public SDK APIs and internal architecture:

  • Strong change in the plugin system (WebAssembly, new types, deep integration with the core).
  • Restructuring of Go packages and establishment of a stable SDK at helm.sh/helm/v4, code-incompatible with v3.
  • Introduction and future evolution of Charts v3, which require the SDK to support multiple versions of chart APIs.

With all this, continuing in the 3.x branch would have violated SemVer: the major number change is basically “paying” the accumulated technical debt to be able to move forward.

Additionally, a new evolution of the charts is expected in the future, moving from v2 to a future v3 that is not yet fully defined, and currently, v2 charts run correctly in this new version.

Is Helm 4.0 Migration Required?

The short answer is: yes. And possibly the long answer is: yes, and quickly. In the official Helm 4 announcement, they specify the support schedule for Helm 3:

  • Helm 3 bug fixes until July 8, 2026.
  • Helm 3 security fixes until November 11, 2026.
  • No new features will be backported to Helm 3 during this period; only Kubernetes client libraries will be updated to support new K8s versions.

Practical translation:

  • Organizations have approximately 1 year to plan a smooth Helm 4.0 migration with continued bug support for Helm 3.
  • After November 2026, continuing to use Helm 3 will become increasingly risky from a security and compatibility standpoint.

Best Practices for Migration

When planning the migration, remember that it is perfectly feasible to have both versions installed on the same machine or CI agent. This enables a gradual migration, so that everything is moved over before Helm 3 support ends. The following steps are recommended:

  • Inventory all Helm usage: CI/CD pipelines, upgrade scripts, and any code that imports the Helm client libraries.
  • Review especially carefully all uses of --post-renderer, helm registry login, --atomic, and --force.
  • Then start testing Helm 4 in non-production environments, reusing the same charts and values, and fall back to Helm 3 whenever a problem is detected until it is resolved.
  • If you rely on critical plugins, explicitly test them with Helm 4 before making the global switch.

Frequently Asked Questions

What are the main new features in Helm 4.0?

Helm 4.0 introduces three major improvements: a redesigned plugin system with WebAssembly support for enhanced security, Server-Side Apply (SSA) integration for better conflict resolution, and internal SDK modernization for improved performance. Additional features include OCI digest installation and multi-document values support.

When does Helm 3 support end?

Helm 3 bug fixes end July 8, 2026 and security fixes end November 11, 2026. No new features will be backported to Helm 3. Organizations should plan migration to Helm 4.0 before November 2026 to avoid security and compatibility risks.

Are Helm 3 charts compatible with Helm 4.0?

Yes, Helm Chart API v2 charts work correctly with Helm 4.0. However, the Go SDK has breaking changes, so applications using Helm libraries need code updates. The CLI commands remain largely compatible for most use cases.

Can I run Helm 3 and Helm 4 simultaneously?

Yes, both versions can be installed on the same machine, enabling gradual migration strategies. This allows teams to test Helm 4.0 in non-production environments while maintaining Helm 3 for critical workloads during the transition period.

What should I test before migrating to Helm 4.0?

Focus on testing critical plugins, post-renderers, and specific flags like --atomic, --force, and helm registry login. Test all charts and values in non-production environments first, and review any custom integrations using Helm SDK libraries.

What is Server-Side Apply in Helm 4.0?

Server-Side Apply (SSA) is enabled with the --server-side flag and handles resource updates on the Kubernetes API server side. This prevents conflicts between different controllers managing the same resources and has been stable since Kubernetes v1.22.

Helm v3.17 Take Ownership Flag: Fix Release Conflicts

Helm v3.17 Take Ownership Flag: Fix Release Conflicts

Helm has long been the standard for managing Kubernetes applications using packaged charts, bringing a level of reproducibility and automation to the deployment process. However, some operational tasks, such as renaming a release or migrating objects between charts, have traditionally required cumbersome workarounds. With the introduction of the --take-ownership flag in Helm v3.17 (released in January 2025), a long-standing pain point is finally addressed—at least partially.

The take-ownership feature represents the continuing evolution of Helm. Learn about this and other cutting-edge capabilities in our Helm Charts Package Management Guide.

In this post, we will explore:

  • What the --take-ownership flag does
  • Why it was needed
  • The caveats and limitations
  • Real-world use cases where it helps
  • When not to use it

Understanding Helm Release Ownership and Object Management

When Helm installs or upgrades a chart, it injects metadata—labels and annotations—into every managed Kubernetes object. These include:

app.kubernetes.io/managed-by: Helm
meta.helm.sh/release-name: my-release
meta.helm.sh/release-namespace: default

This metadata serves an important role: Helm uses it to track and manage the resources associated with each release. As a safeguard, Helm does not allow one release to modify objects owned by another; when you try, you will see an error like the one below:

Error: Unable to continue with install: Service "provisioner-agent" in namespace "test-my-ns" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "dp-core-infrastructure11": current value is "dp-core-infrastructure"

While this protects users from accidental overwrites, it creates limitations for advanced use cases.

Why --take-ownership Was Needed

Let’s say you want to:

  • Rename an existing Helm release from api-v1 to api.
  • Move a ConfigMap or Service from one chart to another.
  • Rebuild state during GitOps reconciliation when previous Helm metadata has drifted.

Previously, your only option was to:

  1. Uninstall the existing release.
  2. Reinstall under the new name.

This approach introduces downtime, and in production systems, that’s often not acceptable.

What the Flag Does

helm upgrade my-release ./my-chart --take-ownership

When this flag is passed, Helm will:

  • Skip the ownership validation for existing objects.
  • Override the labels and annotations to associate the object with the current release.

In practice, this allows you to claim ownership of resources that previously belonged to another release, enabling seamless handovers.
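Concretely, a successful adoption rewrites the tracking metadata on the object. The release names below are hypothetical:

```yaml
# Service metadata after being claimed by release "my-release"
metadata:
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    meta.helm.sh/release-name: my-release      # previously pointed at another release
    meta.helm.sh/release-namespace: default
```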

⚠️ What It Doesn’t Do

This flag does not:

  • Clean up references from the previous release.
  • Protect you from future uninstalls of the original release (which might still remove shared resources).
  • Allow you to adopt completely unmanaged Kubernetes resources (those not initially created by Helm).

In short, it’s a mechanism for bypassing Helm’s ownership checks, not a full lifecycle manager.

Real-World Helm Take Ownership Use Cases

Let’s go through common scenarios where this feature is useful.

✅ 1. Renaming a Release Without Downtime

Before:

helm uninstall old-name
helm install new-name ./chart

Now:

helm upgrade --install new-name ./chart --take-ownership

✅ 2. Migrating Objects Between Charts

You’re refactoring a large chart into smaller, modular ones and need to reassign certain Service or Secret objects.

This flag allows the new release to take control of the object without deleting or recreating it.

✅ 3. GitOps Drift Reconciliation

If objects were deployed out-of-band or their metadata changed unintentionally, GitOps tooling using Helm can recover without manual intervention using --take-ownership.

Best Practices and Recommendations

  • Use this flag intentionally, and document where it’s applied.
  • If possible, remove the previous release after migration to avoid confusion.
  • Monitor Helm’s behavior closely when managing shared objects.
  • For non-Helm-managed resources, continue to use kubectl annotate or kubectl label to manually align metadata.

Conclusion

The --take-ownership flag is a welcomed addition to Helm’s CLI arsenal. While not a universal solution, it smooths over many of the rough edges developers and SREs face during release evolution and GitOps adoption.

It brings a subtle but powerful improvement—especially in complex environments where resource ownership isn’t static.

Stay updated with Helm releases, and consider this flag your new ally in advanced release engineering.

Frequently Asked Questions

What does the Helm --take-ownership flag do?

The --take-ownership flag allows Helm to bypass ownership validation and claim control of Kubernetes resources that belong to another release. It updates the meta.helm.sh/release-name annotation to associate objects with the current release, enabling zero-downtime release renames and chart migrations.

When should I use Helm take ownership?

Use --take-ownership when renaming releases without downtime, migrating objects between charts, or fixing GitOps drift. It’s ideal for production environments where uninstall/reinstall cycles aren’t acceptable. Always document usage and clean up previous releases afterward.

What are the limitations of Helm take ownership?

The flag doesn’t clean up references from previous releases or protect against future uninstalls of the original release. It only works with Helm-managed resources, not completely unmanaged Kubernetes objects. Manual cleanup of old releases is still required.

Is Helm take ownership safe for production use?

Yes, but use it intentionally and carefully. The flag bypasses Helm’s safety checks, so ensure you understand the ownership implications. Test in staging first, document all usage, and monitor for conflicts. Remove old releases after successful migration to avoid confusion.

Which Helm version introduced the take ownership flag?

The --take-ownership flag was introduced in Helm v3.17, released in January 2025. This feature addresses long-standing pain points with release renaming and chart migrations that previously required downtime-inducing uninstall/reinstall cycles.

Advanced Helm Commands and Flags Every Kubernetes Engineer Should Know

Advanced Helm Commands and Flags Every Kubernetes Engineer Should Know

Managing Kubernetes resources effectively can sometimes feel overwhelming, but Helm, the Kubernetes package manager, offers several commands and flags that make the process smoother and more intuitive. In this article, we’ll dive into some lesser-known Helm commands and flags, explaining their uses, benefits, and practical examples.

These advanced commands are essential for mastering Helm in production. For the complete toolkit including fundamentals, testing, and deployment patterns, visit our Helm package management guide.

1. helm get values: Retrieving Deployed Chart Values

The helm get values command is essential when you need to see the configuration values of a deployed Helm chart. This is particularly useful when you have a chart deployed but lack access to its original configuration file. With this command, you can achieve an “Infrastructure as Code” approach by capturing the current state of your deployment.

Usage:

helm get values <release-name> [flags]

Example:

To get the values of a deployed chart named my-release:

helm get values my-release --namespace my-namespace

This command outputs the user-supplied values of the deployment (add the --all flag to include computed chart defaults as well), which is valuable for documentation, replicating the environment, or modifying deployments.

2. Understanding helm upgrade Flags: --reset-values, --reuse-values, and --reset-then-reuse-values

The helm upgrade command is typically used to upgrade or modify an existing Helm release. However, the behavior of this command can be finely tuned using several flags: --reset-values, --reuse-values, and --reset-then-reuse-values.

  • --reset-values: Ignores the previous release's values and uses only the chart defaults plus the values provided in the current command. Use this flag when you want to discard the existing configuration entirely.

Example Scenario: You are deploying a new version of your application, and you want to ensure that no old values are retained.

  helm upgrade my-release my-chart --reset-values --set newKey=newValue

  • --reuse-values: Reuses the previous release's values and merges them with any new values provided. This flag is useful when you want to keep most of the old configuration but apply a few tweaks.

Example Scenario: You need to add a new environment variable to an existing deployment without affecting the other settings.

  helm upgrade my-release my-chart --reuse-values --set newEnv=production

  • --reset-then-reuse-values: A combination of the two. It starts from the chart's built-in defaults, re-applies the previous release's values, and then merges any command-line overrides on top.

Example Scenario: Useful in complex environments where you want to pick up new chart defaults while retaining your custom values.

  helm upgrade my-release my-chart --reset-then-reuse-values --set version=2.0

3. helm lint: Ensuring Chart Quality in CI/CD Pipelines

The helm lint command checks Helm charts for syntax errors, best practices, and other potential issues. This is especially useful when integrating Helm into a CI/CD pipeline, as it ensures your charts are reliable and adhere to best practices before deployment.

Usage:

helm lint <chart-path> [flags]
  • <chart-path>: Path to the Helm chart you want to validate.

Example:

helm lint ./my-chart/

This command scans the my-chart directory for issues like missing fields, incorrect YAML structure, or deprecated usage. If you're automating deployments, integrating helm lint into your pipeline helps catch problems early: by adding this command to your CI/CD pipeline, you ensure that any syntax or structural issues are caught before proceeding to the build or deployment stages. You can learn more about Helm testing in the linked article.
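For stricter gating, helm lint also accepts a --strict flag that treats lint warnings as failures. A hypothetical CI step:

```yaml
- name: Lint Helm chart
  run: helm lint ./my-chart/ --strict   # fail the build on warnings, not just errors
```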

4. helm rollback: Reverting to a Previous Release

The helm rollback command allows you to revert a release to a previous version. This can be incredibly useful in case of a failed upgrade or deployment, as it provides a way to quickly restore a known good state.

Usage:

helm rollback <release-name> [revision] [flags]
  • [revision]: The revision number to which you want to roll back. If omitted, Helm will roll back to the previous release by default.

Example:

To roll back a release named my-release to its previous version:

helm rollback my-release

To roll back to a specific revision, say revision 3:

helm rollback my-release 3

This command can be a lifesaver when a recent change breaks your application, allowing you to quickly restore service continuity while investigating the issue.

5. helm verify: Validating a Chart Before Use

The helm verify command checks the integrity of a packaged chart before it is used. It validates the chart archive against its accompanying provenance (.prov) file and the signer’s public key, ensuring the package has not been tampered with or corrupted. It’s particularly useful when you are pulling charts from external repositories or using charts shared across multiple teams.

Usage:

helm verify <chart-path>

Example:

To verify a downloaded chart named my-chart:

helm verify ./my-chart.tgz

If the chart passes verification, Helm prints a success message. If it fails, you’ll see details of the problem, which can range from a missing provenance file to a signature or checksum mismatch. Note that verification requires the chart to have been signed when it was packaged (helm package --sign) and operates on the packaged .tgz archive, not an unpacked chart directory.

Conclusion

Leveraging these advanced Helm commands and flags can significantly enhance your Kubernetes management capabilities. Whether you are retrieving existing deployment configurations, fine-tuning your Helm upgrades, or ensuring the quality of your charts in a CI/CD pipeline, these tricks help you maintain a robust and efficient Kubernetes environment.

Helm Multiple Instances Subcharts Explained: Reuse the Same Chart with Aliases


Using multiple instances of the same subchart in your main Helm chart can sound strange at first. We have already covered Helm subcharts and dependencies on this blog, because the usual use case looks like this:

Multiple subchart instances enable powerful architectural patterns in Helm. Learn about this and other advanced deployment techniques in our complete Helm charts guide.

I have a chart that needs another component, and I “import” it as a subchart. That lets me deploy the component and customize its values without creating another copy of the chart, which, as you can imagine, greatly simplifies chart management.

So far, so clear. But what are we talking about now? The use case is having the same subchart defined twice. Imagine that instead of this:

# Chart.yaml
dependencies:
- name: nginx
  version: "1.2.3"
  repository: "https://example.com/charts"
- name: memcached
  version: "3.2.1"
  repository: "https://another.example.com/charts"

We have something like this:

# Chart.yaml
dependencies:
- name: nginx
  version: "1.2.3"
  repository: "https://example.com/charts"
- name: memcached
  alias: memcached-copy1
  version: "3.2.1"
  repository: "https://another.example.com/charts"
- name: memcached
  alias: memcached-copy2
  version: "3.2.1"
  repository: "https://another.example.com/charts"

So we have the option to define more than one “instance” of the same subchart. At this point you may be asking yourself: “In which use cases could I need this?”

That’s understandable: until you need it, you never realize it exists. The same happened to me. So let’s talk about possible use cases.

Use-Cases for a Multi-Instance Helm Dependency

Imagine you’re deploying a Helm chart for a set of microservices that belong to the same application, and all of them share the same technology base, whether that is TIBCO BusinessWorks Container Edition or Golang. Since they share the same base, they can all use the same chart, “bwce-microservice” or “golang-microservice”, but each has its own configuration. For example:

  • Each of them will have its own image name, different from the others.
  • Each of them will have its own configuration values.
  • Each of them will have its own endpoints, probably even connecting to different sources such as databases or external systems.

So, this approach lets us reuse the same technology chart, “bwce”, and instantiate it several times. Each instance gets its own configuration without us creating anything “custom”, and we keep the same maintainability benefits that the Helm dependency approach provides.

How can we implement this?

Now that the use case is clear, the next step is making it a reality. And, to be honest, this is much simpler than you might think. Let’s start with the normal situation: a main chart, let’s call it a “program”, that includes the “bwce” chart as a dependency, as you can see here:

name: multi-bwce
description: Helm Chart to Deploy a TIBCO BusinessWorks Container Edition Application
apiVersion: v2
version: 0.2.0
appVersion: 2.7.2

dependencies:
- name: bwce
  version: ~1.0.0
  repository: "file:///Users/avazquez/Data/Projects/DET/helm-charts/bwce"

And now we are going to move to a multi-instance approach where we need two different microservices, let’s call them serviceA and serviceB, both of which will use the same bwce Helm chart.

So the first thing we will modify is the Chart.yaml as follows:

name: multi-bwce
description: Helm Chart to Deploy a TIBCO BusinessWorks Container Edition Application
apiVersion: v2
version: 0.2.0
appVersion: 2.7.2

dependencies:
- name: bwce
  alias: serviceA
  version: ~0.2.0
  repository: "file:///Users/avazquez/Data/Projects/DET/helm-charts/bwce"
- name: bwce
  alias: serviceB
  version: ~0.2.0
  repository: "file:///Users/avazquez/Data/Projects/DET/helm-charts/bwce"

The important part here is how we declare the dependency. As you can see, we keep the same “name” in both entries, but each has an additional field named “alias”, and this alias is what lets us later identify the properties for each instance. With that, we already have our serviceA and serviceB instance definitions, and we can start using them in the values.yaml as follows:

# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

serviceA:
  image: 
    imageName: 552846087011.dkr.ecr.eu-west-2.amazonaws.com/tibco/serviceA:2.5.2
    pullPolicy: Always
serviceB:  
  image: 
    imageName: 552846087011.dkr.ecr.eu-west-2.amazonaws.com/tibco/serviceB:2.5.2
    pullPolicy: Always
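One thing to keep in mind: values placed under an alias apply only to that instance. If both instances need a shared setting, Helm’s built-in global values are visible to every subchart. As a sketch (assuming the bwce chart reads a hypothetical .Values.global.registry key):

```yaml
# values.yaml: the global section is propagated to all subcharts automatically
global:
  registry: 552846087011.dkr.ecr.eu-west-2.amazonaws.com  # hypothetical shared setting
serviceA:
  image:
    pullPolicy: Always
serviceB:
  image:
    pullPolicy: Always
```

Inside the bwce templates, both instances would then see the same {{ .Values.global.registry }} value, while per-instance keys stay under their alias.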
  

 Conclusion

The main benefit of this approach is that it extends the options for using Helm charts with “complex” applications that require several instances of the same kind of component at the same time.

That doesn’t mean you need one huge Helm chart for your whole project, which would go against the best practices of the containerization and microservices approach. But it does give you the option to define whatever levels of abstraction you want, while keeping all the benefits from a management perspective.

Helm Templates in Files Explained: Customize ConfigMaps and Secrets Content


If you have ever built a Helm chart that includes configuration files, scripts, or property files inside a ConfigMap or Secret, you have probably hit the same wall: the default templating engine only processes YAML files inside the templates/ directory. Everything else is treated as static content.

This is a problem because real-world applications rarely deploy with hardcoded configuration. You need environment-specific values in your .properties files, tokens in your JSON configs, or dynamic hostnames in your shell scripts. Helm provides three core functions to handle this: .Files.Get, .Files.Glob, and tpl. Each solves a different piece of the puzzle, and combining them is where things get powerful.

This guide covers every practical pattern you will need, from the simplest .Files.Get call to the advanced tpl + .Files.Glob combination that gives you full templating inside external files. For a broader look at Helm packaging, see our definitive Helm package management guide.

Understanding the Helm Chart File Structure

Before diving into the functions, it is important to understand what Helm considers a “file” and where it can access them. A typical chart looks like this:

my-chart/
├── Chart.yaml
├── values.yaml
├── templates/
│   ├── configmap.yaml
│   ├── secret.yaml
│   └── deployment.yaml
├── config/
│   ├── app.properties
│   ├── logging.json
│   └── init.sh
└── files/
    └── zones.json

Files inside templates/ go through the full Helm template engine. Files outside it (like config/app.properties or files/zones.json) are accessible via the .Files object, but they are not templated automatically. This is the key distinction that catches most people off guard.

.Files.Get: Reading a Single File

The .Files.Get function reads the content of a specific file by its path, relative to the chart root. This is the simplest way to include file content in a ConfigMap or Secret.

Basic ConfigMap Example with .Files.Get

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-app-config
  namespace: {{ .Release.Namespace }}
data:
  app.properties: |
{{ .Files.Get "config/app.properties" | indent 4 }}

This reads config/app.properties from the chart root and injects it into the ConfigMap, preserving the original content. The indent 4 ensures correct YAML indentation.
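For instance, assuming config/app.properties contains the two illustrative lines shown below, and a hypothetical release named my-release, the rendered manifest would be:

```yaml
# Rendered output of `helm template`, assuming app.properties holds these two lines
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-release-app-config
  namespace: default
data:
  app.properties: |
    server.port=8080
    db.host=postgres
```

Note how indent 4 pushed every line of the file under the | literal block.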

Secret Example with .Files.Get and b64enc

For Secrets, Kubernetes expects base64-encoded values in the data field. Combine .Files.Get with b64enc:

apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-tls-config
  namespace: {{ .Release.Namespace }}
type: Opaque
data:
  tls.crt: {{ .Files.Get "certs/tls.crt" | b64enc }}
  tls.key: {{ .Files.Get "certs/tls.key" | b64enc }}

.Files.Get Path Limitations: Why “../” Does Not Work

A very common question is whether you can use .Files.Get to access files outside the chart directory, for example .Files.Get "../shared/config.yaml". The answer is no. Helm restricts file access to the chart root for security reasons. Any path that tries to escape the chart directory with ../ will silently return an empty string.

If you need to share files between charts, the recommended patterns are:

  • Use a library chart or dependency that includes the shared files
  • Copy shared files into each chart during your CI/CD pipeline before packaging
  • Pass the content through values.yaml using a parent chart

Also note that files inside the templates/ directory and Chart.yaml itself are not accessible through .Files. Only files that are packaged with the chart and not in templates/ can be read.

.Files.Glob: Working with Multiple Files

When you have multiple configuration files to include, .Files.Glob lets you match files using glob patterns and iterate over them. This is especially useful when your chart ships with several config files that all need to end up in the same ConfigMap.

.Files.Glob with .AsConfig Example: ConfigMap from Multiple Files

The .AsConfig helper takes a set of matched files and formats them as ConfigMap data entries, where each filename becomes the key and the file content becomes the value:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-all-configs
  namespace: {{ .Release.Namespace }}
data:
{{ (.Files.Glob "config/*").AsConfig | indent 2 }}

If your config/ directory contains app.properties, logging.json, and init.sh, the result would be:

data:
  app.properties: |
    server.port=8080
    db.host=postgres
  logging.json: |
    {"level": "info", "format": "json"}
  init.sh: |
    #!/bin/bash
    echo "Initializing..."

This is the cleanest approach when you want to include all files from a directory without modifying their content.

.Files.Glob with .AsSecrets Example

.AsSecrets works identically to .AsConfig, but automatically base64-encodes each file’s content for use in Secret resources:

apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-credentials
  namespace: {{ .Release.Namespace }}
type: Opaque
data:
{{ (.Files.Glob "secrets/*").AsSecrets | indent 2 }}

Iterating with range: The Explicit Approach

For more control over how each file is processed, you can iterate explicitly using range:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-scripts
  namespace: {{ .Release.Namespace }}
data:
  {{- range $path, $_ := .Files.Glob "scripts/*.sh" }}
  {{ base $path }}: |
{{ $.Files.Get $path | indent 4 }}
  {{- end }}

Notice two important details here. First, base $path extracts just the filename (e.g., init.sh) from the full path (scripts/init.sh). Second, inside a range loop the context changes, so you must use $. (dollar-dot) to access the root scope when calling $.Files.Get.

The tpl Function: Full Templating Inside External Files

This is where things get really interesting. The .Files.Get and .Files.Glob functions read file content as-is. If your app.properties file contains {{ .Values.database.host }}, it will be included literally as that string, not replaced with the actual value.

The tpl function solves this by passing a string through the Helm template engine. When you combine tpl with .Files.Get, your external files get the same templating power as files inside templates/.

tpl + .Files.Get: Templated ConfigMap

Consider this config/app.properties file in your chart:

# config/app.properties
server.port={{ .Values.app.port | default 8080 }}
server.host={{ .Values.app.host }}
database.url=jdbc:postgresql://{{ .Values.database.host }}:{{ .Values.database.port }}/{{ .Values.database.name }}
database.pool.size={{ .Values.database.poolSize | default 10 }}
logging.level={{ .Values.logging.level | default "INFO" }}

And the corresponding template that processes it:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-app-config
  namespace: {{ .Release.Namespace }}
data:
  app.properties: |
{{ tpl (.Files.Get "config/app.properties") . | indent 4 }}

The tpl function takes two arguments: the string to process and the context (.). It runs the content through the template engine, replacing all {{ .Values.* }} references with actual values. The result is a fully dynamic configuration file.

tpl + .Files.Glob: Templating Multiple Files

To template every file matched by a glob pattern, combine the range iteration with tpl:

apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-dynamic-secrets
  namespace: {{ .Release.Namespace }}
type: Opaque
data:
  {{- range $path, $_ := .Files.Glob "secrets/*.json" }}
  {{ base $path }}: {{ tpl ($.Files.Get $path) $ | b64enc }}
  {{- end }}

This iterates over all .json files in the secrets/ directory, passes each through the template engine (so {{ .Values.* }} references are resolved), base64-encodes the result, and includes it in the Secret. This is the most powerful pattern for managing multiple dynamic configuration files.

Common Pitfalls and Troubleshooting

Working with .Files in Helm can produce confusing errors or silent failures. Here are the most common issues and how to fix them.

.Files.Get Returns Empty String

If .Files.Get returns nothing, check these three things:

  • Wrong path: The path is relative to the chart root, not to the templates directory. Use .Files.Get "config/app.properties", not .Files.Get "../config/app.properties".
  • File excluded by .helmignore: Check your .helmignore file. If it matches the file’s path, the file will not be packaged with the chart and .Files.Get will return empty.
  • File in templates/ directory: Files inside templates/ are not accessible via .Files. Move them to a different directory.
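For example, a .helmignore containing patterns like these (hypothetical entries) would silently exclude files from the packaged chart, making .Files.Get return an empty string for them:

```
# .helmignore (hypothetical entries)
# Excludes any Markdown files from the packaged chart:
*.md
# Excludes backup files; these become invisible to .Files.Get:
*.bak
# Excludes an entire directory from the package:
tmp/
```

A quick way to debug is to run helm package and inspect the resulting .tgz to see which files actually made it in.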

YAML Indentation Errors

The most frequent rendering error is incorrect indentation. Always use indent N (or nindent N) when including file content in YAML. The difference between indent and nindent is that nindent adds a newline before the content, which is cleaner when using {{- to trim whitespace:

# Using indent (requires | on the previous line)
data:
  app.properties: |
{{ .Files.Get "config/app.properties" | indent 4 }}

# Using nindent (cleaner, self-contained)
data:
  app.properties: {{ .Files.Get "config/app.properties" | nindent 4 }}

Chart Size Limit

Helm charts stored in Kubernetes as Secrets or ConfigMaps are subject to the 1 MB limit imposed by etcd. If your chart includes many large files, you may hit this limit during helm install. The error typically reads release: invalid or etcd: request is too large. In that case, consider mounting files via persistent volumes or external config management instead of embedding them in the chart.

Context Issues Inside range Loops

Inside a range block, . refers to the current iteration item, not the root context. This means .Files.Get will fail. Use $. to access the root context:

# Wrong: . is the loop item, not the root context
{{- range $path, $_ := .Files.Glob "config/*" }}
  {{ .Files.Get $path }}  {{/* This will fail */}}
{{- end }}

# Correct: use $ to access root context
{{- range $path, $_ := .Files.Glob "config/*" }}
  {{ $.Files.Get $path }}  {{/* This works */}}
{{- end }}

Real-World Pattern: Multi-Environment Configuration

A practical pattern that combines everything above is organizing environment-specific configuration files and selecting them dynamically:

# Chart structure
my-chart/
├── config/
│   ├── application.yaml      # Shared base config
│   ├── db-pool.properties    # Database pool settings
│   └── logging.xml           # Log4j/Logback config
├── templates/
│   └── configmap.yaml
└── values.yaml

The ConfigMap template processes all config files with templating:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "my-chart.labels" . | nindent 4 }}
data:
  {{- range $path, $_ := .Files.Glob "config/*" }}
  {{ base $path }}: |
{{ tpl ($.Files.Get $path) $ | indent 4 }}
  {{- end }}

This way, every file in config/ is automatically included, templated, and properly formatted. Adding a new configuration file is as simple as dropping it into the directory. No template changes required.

Quick Reference: .Files Functions Summary

Here is a quick reference of all file-related functions available in Helm:

  • .Files.Get: returns the content of a single file as a string. Example: .Files.Get "config/app.yaml"
  • .Files.Glob: returns all files matching a glob pattern. Example: .Files.Glob "config/*.json"
  • .AsConfig: formats matched files as ConfigMap data entries. Example: (.Files.Glob "config/*").AsConfig
  • .AsSecrets: formats matched files as base64-encoded Secret data. Example: (.Files.Glob "secrets/*").AsSecrets
  • tpl: passes a string through the Helm template engine. Example: tpl (.Files.Get "f.yaml") .
  • .Files.Lines: returns file content as a list of lines. Example: .Files.Lines "config/hosts.txt"
  • .Files.GetBytes: returns file content as a byte array, useful for binary files. Example: .Files.GetBytes "bin/tool"

Conclusion

Helm file handling follows a clear escalation path depending on what you need. Use .Files.Get when you need a single file as-is. Use .Files.Glob with .AsConfig or .AsSecrets when you have multiple files that need no modification. And use tpl combined with either function when your files need dynamic values from values.yaml.

The most common mistake is forgetting that .Files only reads files — it does not template them. The moment you need {{ .Values.* }} inside an external file, tpl is the function you are looking for. For more Helm patterns and advanced tips, explore our guides on Helm hooks, Helm loops, and Helm dependencies.

Frequently Asked Questions

Can .Files.Get access files outside the chart directory?

No. Helm restricts .Files.Get to files within the chart root directory. Paths containing ../ will silently return an empty string. This is a security constraint to prevent charts from reading arbitrary files from the filesystem. If you need shared files across charts, use library charts, copy files during CI/CD, or pass content through parent chart values.

What is the difference between .AsConfig and .AsSecrets in Helm?

.AsConfig formats files as plain text ConfigMap data entries (key: filename, value: file content). .AsSecrets does the same but automatically base64-encodes each file’s content, which is required for the data field in Kubernetes Secret resources. Both are called on the result of .Files.Glob.

How do I use Helm values inside a properties file or JSON config?

Use the tpl function. Instead of .Files.Get "config/file.json", use tpl (.Files.Get "config/file.json") .. This passes the file content through the Helm template engine, so any {{ .Values.* }} references in the file will be resolved against your chart’s values.

Why does .Files.Get return an empty string?

Three common causes: the file path is wrong (paths are relative to the chart root, not the templates directory), the file is excluded by .helmignore, or the file is inside the templates/ directory which is not accessible via .Files. Run helm template locally and check the output to debug.

Is there a size limit for files included via .Files.Get?

Helm itself does not impose a file size limit, but the packaged chart is stored as a Kubernetes Secret or ConfigMap which is subject to the etcd 1 MB size limit. If your chart with all its files exceeds this, helm install will fail. For large files, consider external config management or persistent volumes instead of embedding them in the chart.

Helm Dependencies Explained: How Chart Dependencies Work in Helm


Helm dependencies are a critical part of understanding how Helm works, as they are the way to establish relationships between different Helm packages. We have talked a lot here about what Helm is and related topics, and we have even provided some tricks for creating your own charts.

Understanding chart dependencies is crucial for building scalable Helm architectures. Explore more Helm patterns and best practices in our comprehensive Helm guide.

So, as mentioned, a Helm chart is nothing more than a package wrapped around the different Kubernetes objects that need to be deployed for your application to work. The usual comparison is with a software package: when you install an application that depends on several components, all of those components are packaged together, and here it is the same thing.

What is a Helm Dependency?

A Helm dependency is nothing more than the way you declare that your chart needs another chart to work. You can certainly create a Helm chart with everything you need to deploy your application, but sometimes you want to split that work into several charts, either because they are easier to maintain or, in the most common use case, because you want to leverage another Helm chart that is already available.

One use case is a web application that requires a database. You can put all the YAML files to deploy both the web application and the database in your own Helm chart, or you can keep only the YAML files for your web application (Deployment, Services, ConfigMaps, …) and then say: “I need a database, and to provide it I’m going to use this chart.”
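We will cover the exact syntax in a moment, but as a preview, delegating the database to an existing chart could look like this (the Bitnami postgresql chart is used here purely as an illustration; adjust the name, version, and repository to what you actually use):

```yaml
# Chart.yaml of the web application chart
apiVersion: v2
name: my-web-app
version: 0.1.0
dependencies:
- name: postgresql                                 # off-the-shelf database chart
  version: "12.x.x"                                # illustrative version constraint
  repository: "https://charts.bitnami.com/bitnami" # public chart repository
```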

This is similar to how software packages work on UNIX systems: you have a package A that does the job, but for that job to be done it requires library L. To ensure that when you install A, library L is already there (or gets installed if not), you declare that application A depends on library L. Here it is the same thing: you declare that your chart depends on another chart to work. And that leads us to the next point.

How do we declare a Helm Dependency?

Now that we understand what a Helm dependency is conceptually and we have a use case, how can we declare one in our Helm chart?

All the work is done in the Chart.yaml file. If you remember, Chart.yaml is where you declare all the metadata of your Helm chart, such as the name, the chart version, the application version, location URL, icon, and much more. It usually has a structure like this one:

apiVersion: v2
name: MyChart
description: My Chart Description
type: application
version: 0.2.0
appVersion: "1.16.0"

Here we can add a dependencies section, and that section is where we define the charts we depend on, as you can see in the snippet below:

apiVersion: v2
name: MyChart
description: My Chart Description
type: application
version: 0.2.0
appVersion: "1.16.0"
dependencies:
- name: Dependency
  version: 1.0.0
  repository: "file:///location_of_my_chart"

Here we are declaring Dependency as our Helm dependency. We specify the version we want to use (similar to the version field of our own chart), which ensures that dependency resolution provides the same version that has been tested. We also specify the location as a URL: an external URL if it points to a Helm chart available on the internet, or a file:// path if it points to a local resource on your machine.

That does the job of defining the Helm dependency. Once you fetch the dependency with helm dependency update and install your chart with helm install, the dependency is installed along with it.
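After running helm dependency update, the dependency is vendored into the charts/ directory of your chart; the layout below is illustrative:

```
MyChart/
├── Chart.yaml
├── Chart.lock                  # pins the exact resolved dependency versions
├── values.yaml
├── charts/
│   └── Dependency-1.0.0.tgz   # the downloaded dependency package
└── templates/
```

Committing Chart.lock makes dependency resolution reproducible across machines and CI runs.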

How do I declare a Helm Conditional Dependency?

Until now, we learned how to declare a dependency, so every time we provision our application, the dependency is provisioned too. But usually we want a more fine-grained approach. Imagine the same scenario as above: our web application depends on a database, and we have two options. We can provision the database as part of the installation of the web application, or we can point to an external database, in which case it makes no sense to provision the Helm dependency. How can we do that?

Easy: one of the optional parameters you can add to your dependency is condition, and it does exactly that. condition lets you specify a flag in your values.yaml; when it evaluates to true, the dependency is provisioned, and when it is false, that part is skipped, as in the snippet shown below:

apiVersion: v2
name: MyChart
description: My Chart Description
type: application
version: 0.2.0
appVersion: "1.16.0"
dependencies:
- name: Dependency
  version: 1.0.0
  repository: "file:///location_of_my_chart"
  condition: database.enabled 

And with that, we set the enabled parameter under database in our values.yaml to true if we want to provision it.
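The corresponding values.yaml fragment would look like this:

```yaml
# values.yaml of MyChart
database:
  enabled: true   # set to false to skip installing the Dependency chart
```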

How do I declare a Helm Dependency With a Version Range?

As shown in the snippets above, when we declare a Helm dependency we pin the version. That is the safe way to do it, because it ensures that changes to the dependency chart will not affect your package. But at the same time, you miss out on security fixes or patches to that chart that you would like to pick up in your deployment.

To address that, you can define the version more flexibly using the ~ operator in the version definition, as you can see in the snippet below:

apiVersion: v2
name: MyChart
description: My Chart Description
type: application
version: 0.2.0
appVersion: "1.16.0"
dependencies:
- name: Dependency
  version: ~1.0.0
  repository: "file:///location_of_my_chart"
  condition: database.enabled 

This means that any patch to the chart will be accepted: the chart will use the latest 1.0.x version, but it will not jump to 1.1.0. That gives you more flexibility while still keeping things safe in case of a breaking change in the chart you depend on. This is just one way to express it; the flexibility is enormous, because chart versions follow Semantic Versioning. You can read more about the supported constraints here: https://github.com/Masterminds/semver.
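A few other common constraint forms supported by the semver library Helm uses are shown below as comments (the version numbers are illustrative):

```yaml
dependencies:
- name: Dependency
  repository: "file:///location_of_my_chart"
  version: "~1.0.0"        # any 1.0.x patch release
  # Other constraint styles the same field accepts:
  #   version: "^1.0.0"           # any 1.x.x release (caret: minor and patch updates)
  #   version: ">=1.0.0 <2.0.0"   # explicit range
  #   version: "1.0.x"            # wildcard, equivalent to ~1.0.0
```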

Helm Loops Explained: A Practical Helm Hack to Avoid Deployment Issues


Introduction



When working with complex Helm deployments, mastering loops is just one piece of the puzzle. For a comprehensive understanding of Helm from fundamentals to advanced patterns, check out our complete Helm Charts & Kubernetes Package Management Guide.

Helm charts have become the de facto default solution for packaging your Kubernetes deployment so you can distribute it or quickly install it on your system.

Defined many times as “the apt of Kubernetes” for its similarity to the venerable package manager of Debian-like GNU/Linux distributions, it seems to keep growing in popularity every month compared with similar solutions, even ones more tightly integrated into Kubernetes such as Kustomize, as you can see in the Google Trends picture below:

Helm Loops: Helm Charts vs Kustomize

But creating these Helm charts is not as easy as it seems. If you have already done this work, you probably got stuck at some point or spent a lot of time trying to do certain things. If this is your first time creating one, or you are trying to do something advanced, I hope all these tricks will help you on your journey. Today we cover one of the most important ones: Helm loops.

Helm Loops Introduction

If you look at any Helm chart, you will see a lot of conditional blocks. Pretty much everything is wrapped in an if/else structure based on the values.yaml files you create. But things get a little trickier when we talk about loops. The good news is that you can execute a loop inside your Helm charts using the range primitive.

How to create a Helm Loop?

The usage of the range primitive is quite simple, as you only need to specify the element you want to iterate across, as shown in the snippet below:

{{- range .Values.pizzaToppings }}
- {{ . | title | quote }}
{{- end }}    

This is a pretty simple sample where the template iterates over the values you have assigned to the pizzaToppings list in your values.yaml.

There are some concepts to keep in mind in this situation:

  • You can easily access everything inside the structure you are looping across. So, if a pizza topping has additional fields, you can access them with something similar to this:
{{- range .Values.pizzaToppings }}
- {{ .ingredient.name | title | quote }}
{{- end }}

And this will access a structure similar to this one in your values.yml:

pizzaToppings:
  - ingredient:
      name: Pinneaple
      weight: 3

The good thing is that you can access the underlying attributes without replicating the whole parent hierarchy down to the looping structure, because inside the range section the scope has changed: . now refers to the current element we are iterating across.

How to access parent elements inside a Helm Loop?

In the previous section, we covered how easily we can access inner attributes inside the loop thanks to the change of scope, but that also creates a problem: what if I want to access an element in the parent of my values.yaml, or somewhere outside the structure I am looping over?

The good news is that there is also a great answer to that, but to get there we need to understand a little bit about scopes in Helm.

As mentioned, . refers to the root element of the current scope. If you have not entered a range section or another primitive that switches the context, . always refers to the root of your values, which is why in any Helm chart you see structures accessed as .Values.x.y.z. But as we have already seen, inside a range section this changes, so that path is no longer reliable.

To solve that, we have the $ context, which always refers to the root scope no matter what the current scope is. That means that if I have the following values.yaml:

base:
  type: slim
pizzaToppings:
  - ingredient:
      name: Pinneaple
      weight: 3
  - ingredient:
      name: Apple
      weight: 3

And I want to refer to the base type inside the range section as before, I can do it using the following snippet:

{{- range .Values.pizzaToppings }}
- {{ .ingredient.name | title | quote }} {{ $.Values.base.type }}
{{- end }}    

That will generate the following output:

- "Pinneaple" slim
- "Apple" slim

So I hope this Helm chart trick helps you create, modify, or improve your Helm charts in the future using Helm loops without any further concern!