Kubernetes Ingress on OpenShift: Routes Explained and When to Use Them

Introduction
OpenShift, Red Hat’s Kubernetes platform, has its own way of exposing services to external clients. In vanilla Kubernetes, you would typically use an Ingress resource along with an ingress controller to route external traffic to services. OpenShift, however, introduced the concept of a Route and an integrated Router (built on HAProxy) early on, before Kubernetes Ingress even existed. Today, OpenShift supports both Routes and standard Ingress objects, which can sometimes lead to confusion about when to use each and how they relate.

This article explores how OpenShift handles Kubernetes Ingress resources, how they translate to Routes, the limitations of this approach, and guidance on when to use Ingress versus Routes.

OpenShift Routes and the Router: A Quick Overview


OpenShift Routes are OpenShift-specific resources designed to expose services externally. They are served by the OpenShift Router, which is an HAProxy-based proxy running inside the cluster. Routes support advanced features such as:

  • Weighted backends for traffic splitting
  • Sticky sessions (session affinity)
  • Multiple TLS termination modes (edge, passthrough, re-encrypt)
  • Wildcard subdomains
  • Custom certificates and SNI
  • Path-based routing

Because Routes are OpenShift-native, the Router understands these features natively and can be configured accordingly. This tight integration enables powerful and flexible routing capabilities tailored to OpenShift environments.
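
For illustration, a minimal Route that uses a few of these features might look like the following sketch (the host, service name, and annotation values are placeholders):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: frontend
  annotations:
    haproxy.router.openshift.io/balance: roundrobin   # load-balancing algorithm
    haproxy.router.openshift.io/timeout: 30s          # per-route HAProxy timeout
spec:
  host: frontend.apps.example.com
  to:
    kind: Service
    name: frontend
    weight: 100
  tls:
    termination: edge                                 # TLS terminated at the router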

Using Kubernetes Ingress in OpenShift (Default Behavior)


Starting with OpenShift Container Platform (OCP) 3.10, Kubernetes Ingress resources are supported. When you create an Ingress, OpenShift automatically translates it into an equivalent Route behind the scenes. This means you can use standard Kubernetes Ingress manifests, and OpenShift will handle exposing your services externally by creating Routes accordingly.

Example: Kubernetes Ingress and Resulting Route

Here is a simple Ingress manifest:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test-service
            port:
              number: 80

OpenShift will create a Route similar to:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example-route
spec:
  host: www.example.com
  path: /testpath
  to:
    kind: Service
    name: test-service
    weight: 100
  port:
    targetPort: 80
  tls:
    termination: edge

This automatic translation simplifies migration and supports basic use cases without requiring Route-specific manifests.
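
If you want to confirm the translation on your own cluster, the generated Route can be inspected with the oc CLI (the names below match the example above; the generated Route name may differ, since OpenShift typically derives it from the Ingress name plus a suffix):

oc get ingress example-ingress
oc get routes
oc describe route <generated-route-name>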

Tuning Behavior with Annotations (Ingress ➝ Route)

When you use Ingress on OpenShift, only OpenShift-aware annotations are honored during the Ingress ➝ Route translation. Controller-specific annotations for other ingress controllers (e.g., nginx.ingress.kubernetes.io/*) are ignored by the OpenShift Router. The following annotations are commonly used and supported by the OpenShift router to tweak the generated Route:

  • TLS termination: route.openshift.io/termination (edge, reencrypt, passthrough) sets the Route’s spec.tls.termination to the chosen mode.
  • HTTP→HTTPS redirect (edge): route.openshift.io/insecureEdgeTerminationPolicy (Redirect, Allow, None) controls spec.tls.insecureEdgeTerminationPolicy; Redirect is the common choice.
  • Backend load-balancing: haproxy.router.openshift.io/balance (roundrobin, leastconn, source) sets the HAProxy balancing algorithm for the Route.
  • Per-route timeout: haproxy.router.openshift.io/timeout (a duration such as 60s or 5m) configures the HAProxy request timeout for that Route.
  • HSTS header: haproxy.router.openshift.io/hsts_header (e.g. max-age=31536000;includeSubDomains;preload) injects the HSTS header on responses (edge/re-encrypt).

Note: Advanced features like weighted backends/canary or wildcard hosts are not expressible via standard Ingress. Use a Route directly for those.

Example: Ingress with OpenShift router annotations

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress-https
  annotations:
    route.openshift.io/termination: edge
    route.openshift.io/insecureEdgeTerminationPolicy: Redirect
    haproxy.router.openshift.io/balance: leastconn
    haproxy.router.openshift.io/timeout: 60s
    haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-service
            port:
              number: 80

This Ingress will be realized as a Route with edge TLS and an automatic HTTP→HTTPS redirect, using least connections balancing and a 60s route timeout. The HSTS header will be added by the router on HTTPS responses.

Limitations of Using Ingress to Generate Routes
While convenient, using Ingress to generate Routes has limitations:

  • Missing advanced features: Weighted backends and sticky sessions require Route-specific annotations and are not supported via Ingress.
  • TLS passthrough and re-encrypt modes: These cannot be expressed with portable, standard Ingress fields; they require the OpenShift-specific route.openshift.io/termination annotation or a Route defined directly.
  • Ingress without host: An Ingress without a hostname will not create a Route; Routes require a host.
  • Wildcard hosts: Wildcard hosts (e.g., *.example.com) are only supported via Routes, not Ingress.
  • Annotation compatibility: Some OpenShift Route annotations do not have equivalents in Ingress, leading to configuration gaps.
  • Protocol support: Ingress supports only HTTP/HTTPS protocols, while Routes can handle non-HTTP protocols with passthrough TLS.
  • Config drift risk: Because Routes created from Ingress are managed by OpenShift, manual edits to the generated Route may be overwritten or cause inconsistencies.

These limitations mean that for advanced routing configurations or OpenShift-specific features, using Routes directly is preferable.
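
As an example of a capability only a Route can express, a weighted (canary-style) split between two services might look like this sketch (service names and weights are illustrative):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: app-canary
spec:
  host: app.apps.example.com
  to:
    kind: Service
    name: app-stable
    weight: 90                 # roughly 90% of traffic
  alternateBackends:
    - kind: Service
      name: app-canary
      weight: 10               # roughly 10% of traffic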

When to Use Ingress vs. When to Use Routes
Choosing between Ingress and Routes depends on your requirements:

  • Use Ingress if:
    • You want portability across Kubernetes platforms.
    • You have existing Ingress manifests and want to minimize changes.
    • Your application uses only basic HTTP or HTTPS routing.
    • You prefer platform-neutral manifests for CI/CD pipelines.
  • Use Routes if:
    • You need advanced routing features like weighted backends, sticky sessions, or multiple TLS termination modes.
    • Your deployment is OpenShift-specific and can leverage OpenShift-native features.
    • You require stability and full support for OpenShift routing capabilities.
    • You need to expose non-HTTP protocols or use TLS passthrough/re-encrypt modes.
    • You want to use wildcard hosts or custom annotations not supported by Ingress.

In many cases, teams use a combination: Ingress for portability and Routes for advanced or OpenShift-specific needs.

Conclusion


On OpenShift, Kubernetes Ingress resources are automatically converted into Routes, enabling basic external service exposure with minimal effort. This allows users to leverage existing Kubernetes manifests and maintain portability. However, for advanced routing scenarios and to fully utilize OpenShift’s powerful Router features, using Routes directly is recommended.

Both Ingress and Routes coexist seamlessly on OpenShift, allowing you to choose the right tool for your application’s requirements.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Talos: A Modern Kubernetes-Optimized Linux Distribution

Introduction

Managing a Kubernetes cluster can quickly become overwhelming, particularly when your operating system adds unnecessary complexity. Enter Talos Linux—a groundbreaking, container-optimized, immutable OS designed explicitly for Kubernetes environments. It is API-driven, locked down by default, and strips away traditional management methods, including SSH and package managers.

Talos Linux revolutionizes node management by drastically simplifying operations and enhancing security. In this deep dive, we’ll explore why Talos is capturing attention, its core architecture, and the practical implications for Kubernetes teams.

What is Talos Linux?

Talos Linux is a specialized open-source Linux distribution meticulously crafted to run Kubernetes securely and efficiently. Unlike general-purpose operating systems, Talos discards all irrelevant features and focuses exclusively on Kubernetes, ensuring:

  • Immutable Design: Changes are handled through atomic upgrades rather than manual interventions.
  • API-Driven Management: Administrators use talosctl, a CLI that interacts securely with nodes through a gRPC API.
  • Security by Default: No SSH access, comprehensive kernel hardening, TPM integration, disk encryption, and secure boot features.
  • Minimal and Predictable: Talos minimizes resource usage and reduces operational overhead by eliminating unnecessary services and processes.

Maintainers and Backing

Talos is maintained by Sidero Labs, renowned for their expertise in Kubernetes tooling and bare-metal provisioning. An active, open-source community of cloud-native engineers and SREs continuously contributes to its growth and evolution.

Talos Architecture Deep Dive

Talos Linux employs a radical design that prioritizes security, simplicity, and performance:

  • API-Only Interaction: There is no traditional shell access, eliminating many common vulnerabilities associated with SSH.
  • Atomic Upgrades: System updates are atomic—new versions boot directly into a stable, validated state.
  • Resource Efficiency: Talos’s stripped-down design reduces its footprint significantly, ensuring optimum resource utilization and faster startup times.
  • Enhanced Security Measures: It incorporates kernel-level protections, secure boot, disk encryption, and TPM-based security, aligning with stringent compliance requirements.

Kubernetes Distribution based on Talos Linux

Sidero Labs also offers a complete Kubernetes distribution built directly upon Talos Linux, known as “Talos Kubernetes.” This streamlined distribution combines the benefits of Talos Linux with pre-configured Kubernetes components, making it easier and faster to deploy highly secure, production-ready Kubernetes clusters. This simplifies cluster management further by reducing the overhead and complexity typically associated with installing and maintaining Kubernetes separately.

Real-World Use-Cases

Talos shines particularly well in scenarios demanding heightened security, predictability, and streamlined operations:

  • Security-Conscious Clusters: Zero-trust architectures greatly benefit from Talos’s immutable and restricted-access model.
  • Edge Computing and IoT: Its minimal resource consumption and robust management via API make it ideal for edge deployments, where remote management is essential.
  • CI/CD and GitOps Pipelines: The declarative configuration, compatible with YAML and GitOps methodologies, enables automated and reproducible Kubernetes environments.

How to Download and Try Talos Linux

Talos Linux is easy to test and evaluate. You can download it directly from the official Talos GitHub releases. Sidero Labs provides comprehensive documentation and straightforward quick-start guides for deploying Talos Linux on various platforms, including bare-metal servers, virtual machines, and cloud environments such as AWS, Azure, and GCP. For a quick test-drive, running it within a local virtual machine or container is a convenient option.
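
As a quick local test (assuming Docker is available and talosctl is installed), the Talos CLI can spin up and tear down a small local cluster; the cluster name below is arbitrary:

# Spin up a small local Talos cluster in Docker
talosctl cluster create --name talos-demo

# Retrieve a kubeconfig for the new cluster if kubectl is not already pointed at it
talosctl kubeconfig

# Tear the test cluster down when finished
talosctl cluster destroy --name talos-demo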

Talos Compared to Traditional OS Choices

Talos presents distinct advantages compared to more familiar options like Ubuntu, CoreOS, or Flatcar:

  • SSH Access: Talos ❌ (API only) · Ubuntu ✅ · Flatcar ✅
  • Package Manager: Talos ❌ · Ubuntu ✅ (apt) · Flatcar ❌
  • Kubernetes Native: Talos ✅ Built-in · Ubuntu/Flatcar ✅ (via tools)
  • Security Defaults: Talos 🔒 High · Ubuntu Moderate · Flatcar High
  • Immutable OS: Talos ✅ · Ubuntu ❌ · Flatcar ✅
  • Resource Efficiency: Talos ✅ High · Ubuntu Moderate · Flatcar High
  • API-driven Management: Talos ✅ · Ubuntu ❌ · Flatcar Limited

What You Cannot Do with Talos Linux

Talos Linux’s specialized design intentionally restricts certain traditional operating system functionalities. Notably:

  • No SSH Access: Direct shell access to nodes is disabled. All interactions must occur through talosctl.
  • No Package Managers: Traditional tools like apt, yum, or similar are absent; changes are done through immutable updates.
  • No Additional Applications: It doesn’t support running additional, non-Kubernetes services or workloads directly on Talos nodes.

These restrictions enforce best practices, significantly enhance security, and ensure a predictable, consistent operational environment.
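
In practice, day-to-day tasks that would normally require SSH are performed through the API instead; a few representative talosctl invocations (the node IP is a placeholder):

talosctl --nodes 10.0.0.5 dmesg          # kernel log from the node
talosctl --nodes 10.0.0.5 logs kubelet   # logs of the kubelet service
talosctl --nodes 10.0.0.5 containers     # containers running on the node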

Conclusion

Talos Linux represents a substantial shift in Kubernetes node management—secure, lean, and entirely Kubernetes-focused. For organizations prioritizing security, compliance, operational simplicity, and efficiency, Talos provides a robust and future-ready foundation.

If your Kubernetes strategy values minimalism, security, and simplicity, Talos Linux offers compelling reasons to consider adoption.

Frequently Asked Questions

What is Talos Linux?

Talos Linux is a minimal, immutable Linux distribution designed specifically to run Kubernetes. It offers a declarative API for management and focuses on security and consistency.

What are the main advantages of Talos Linux?

Talos Linux provides an immutable system with atomic updates, removes traditional SSH access, and maintains a minimal attack surface, making it ideal for production Kubernetes environments.

How do I install Talos Linux?

To install Talos Linux, download the appropriate image, prepare a machine configuration file, and boot your node from the image. Then use the Talos CLI to bootstrap your Kubernetes cluster.

How does Talos Linux differ from other distributions like CoreOS?

Unlike CoreOS, Talos Linux focuses exclusively on Kubernetes, offering an immutable OS managed entirely via API. CoreOS has been discontinued, whereas Talos Linux is actively maintained and supported.

Is Talos Linux suitable for production use?

Yes. Talos Linux is optimized for running Kubernetes in production, provides advanced security features, and has both active community and commercial support options.

References
Talos Documentation
Sidero Labs
Talos GitHub Repository

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

XSLTPlayground.com: Test, Optimize, and Debug XSLT Online in Real Time

Working with XSLT in modern data pipelines and XML-driven systems has always been powerful… but not always easy. Tools are often heavyweight, outdated, or require local setup and complex environments. That’s why I’m thrilled to announce the launch of XSLTPlayground.com — a free, open-source, browser-based XSLT editor designed specifically for real-world use cases.

This article is part of my comprehensive TIBCO Integration Platform Guide where you can find more patterns and best practices for TIBCO integration platforms.

No installations. No complexity. Just open your browser and transform.

🚀 Why XSLT Playground?

🔁 Real-time XSLT Transformations for Real-World Scenarios

Unlike legacy tools or limited web demos, XSLT Playground supports complex transformations involving multiple XML sources, parameterized templates, and real feedback. Whether you work on data integration, API gateways, XML-based reporting, or legacy system upgrades, this tool helps you test and iterate quickly.

🧩 Multi-Input Parameter Support

One of the biggest pain points in XSLT testing is simulating real environments. With XSLTPlayground.com, you can define multiple input sources (e.g., data feeds, configuration, or metadata), and pass them into your XSLT in a synchronized way — just like a production data pipeline.

⚙️ Automatic Parameter Synchronization

When you load a stylesheet with required parameters, the Playground automatically detects them and creates input fields for you on the side. All you need to do is fill in the values. This smart feature removes the guesswork and helps avoid runtime errors.

⚡ Performance & Optimization Insights

Need to know if your optimization is working? We display execution time for each transformation, helping you compare versions and choose the faster approach — all without deploying full systems or instrumenting code. While it’s not a benchmarking tool, the feedback is invaluable for real-time tuning.

🌐 100% Free, Web-based, and Open Source

No need to install bulky tools like Oxygen XML or run Eclipse plugins just to test a stylesheet. XSLTPlayground.com is entirely web-based, free, and built to be open and extensible. Want to contribute or host your own version? The source is on GitHub.

🖱️ Drag & Drop Support

Upload your XML or XSLT files by simply dragging them into the browser. All components — inputs, stylesheets, outputs — support drag and drop for faster iteration.

🎨 Pretty Print and Export Options

Your output is automatically pretty-printed for readability, and with just one click you can download your XSLT and transformation result, making it easy to share, archive, or import into larger projects.

🔗 Try it now: https://xsltplayground.com

Whether you’re a developer, data engineer, or working with legacy systems, this is the tool you’ve been waiting for. Say goodbye to the complexity of setting up XSLT tests and say hello to instant transformations — anywhere, anytime.

Want to contribute or follow development? Star the project on GitHub or send feedback directly from the site.

Helm v3.17 Take Ownership Flag: Fix Release Conflicts

Helm has long been the standard for managing Kubernetes applications using packaged charts, bringing a level of reproducibility and automation to the deployment process. However, some operational tasks, such as renaming a release or migrating objects between charts, have traditionally required cumbersome workarounds. With the introduction of the --take-ownership flag in Helm v3.17 (released in January 2025), a long-standing pain point is finally addressed—at least partially.

The take-ownership feature represents the continuing evolution of Helm. Learn about this and other cutting-edge capabilities in our Helm Charts Package Management Guide.

In this post, we will explore:

  • What the --take-ownership flag does
  • Why it was needed
  • The caveats and limitations
  • Real-world use cases where it helps
  • When not to use it

Understanding Helm Release Ownership and Object Management

When Helm installs or upgrades a chart, it injects metadata—labels and annotations—into every managed Kubernetes object. These include:

app.kubernetes.io/managed-by: Helm
meta.helm.sh/release-name: my-release
meta.helm.sh/release-namespace: default

This metadata serves an important role: Helm uses it to track and manage resources associated with each release. As a safeguard, Helm does not allow one release to modify objects owned by another; when you try, you will see an error message like the one below:

Error: Unable to continue with install: Service "provisioner-agent" in namespace "test-my-ns" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "dp-core-infrastructure11": current value is "dp-core-infrastructure"

While this protects users from accidental overwrites, it creates limitations for advanced use cases.
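
You can inspect this metadata on any object Helm manages; for example, for the Service named in the error above (adjust the object name and namespace to your environment):

kubectl get service provisioner-agent -n test-my-ns -o yaml | grep -E "meta.helm.sh|managed-by"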

Why --take-ownership Was Needed

Let’s say you want to:

  • Rename an existing Helm release from api-v1 to api.
  • Move a ConfigMap or Service from one chart to another.
  • Rebuild state during GitOps reconciliation when previous Helm metadata has drifted.

Previously, your only option was to:

  1. Uninstall the existing release.
  2. Reinstall under the new name.

This approach introduces downtime, and in production systems, that’s often not acceptable.

What the Flag Does

helm upgrade my-release ./my-chart --take-ownership

When this flag is passed, Helm will:

  • Skip the ownership validation for existing objects.
  • Override the labels and annotations to associate the object with the current release.

In practice, this allows you to claim ownership of resources that previously belonged to another release, enabling seamless handovers.

⚠️ What It Doesn’t Do

This flag does not:

  • Clean up references from the previous release.
  • Protect you from future uninstalls of the original release (which might still remove shared resources).
  • Allow you to adopt completely unmanaged Kubernetes resources (those not initially created by Helm).

In short, it’s a mechanism for bypassing Helm’s ownership checks, not a full lifecycle manager.

Real-World Helm Take Ownership Use Cases

Let’s go through common scenarios where this feature is useful.

✅ 1. Renaming a Release Without Downtime

Before:

helm uninstall old-name
helm install new-name ./chart

Now:

helm upgrade --install new-name ./chart --take-ownership

✅ 2. Migrating Objects Between Charts

You’re refactoring a large chart into smaller, modular ones and need to reassign certain Service or Secret objects.

This flag allows the new release to take control of the object without deleting or recreating it.

✅ 3. GitOps Drift Reconciliation

If objects were deployed out-of-band or their metadata changed unintentionally, GitOps tooling using Helm can recover without manual intervention using --take-ownership.

Best Practices and Recommendations

  • Use this flag intentionally, and document where it’s applied.
  • If possible, remove the previous release after migration to avoid confusion.
  • Monitor Helm’s behavior closely when managing shared objects.
  • For non-Helm-managed resources, continue to use kubectl annotate or kubectl label to manually align metadata, as shown in the sketch below.
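
A sketch of that manual alignment for a hypothetical Service my-svc that should be adopted by release my-release in the default namespace:

# Mark the object as Helm-managed and point it at the adopting release
kubectl -n default label    service my-svc app.kubernetes.io/managed-by=Helm
kubectl -n default annotate service my-svc meta.helm.sh/release-name=my-release
kubectl -n default annotate service my-svc meta.helm.sh/release-namespace=default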

Conclusion

The --take-ownership flag is a welcome addition to Helm’s CLI arsenal. While not a universal solution, it smooths over many of the rough edges developers and SREs face during release evolution and GitOps adoption.

It brings a subtle but powerful improvement—especially in complex environments where resource ownership isn’t static.

Stay updated with Helm releases, and consider this flag your new ally in advanced release engineering.

Frequently Asked Questions

What does the Helm --take-ownership flag do?

The --take-ownership flag allows Helm to bypass ownership validation and claim control of Kubernetes resources that belong to another release. It updates the meta.helm.sh/release-name annotation to associate objects with the current release, enabling zero-downtime release renames and chart migrations.

When should I use Helm take ownership?

Use --take-ownership when renaming releases without downtime, migrating objects between charts, or fixing GitOps drift. It’s ideal for production environments where uninstall/reinstall cycles aren’t acceptable. Always document usage and clean up previous releases afterward.

What are the limitations of Helm take ownership?

The flag doesn’t clean up references from previous releases or protect against future uninstalls of the original release. It only works with Helm-managed resources, not completely unmanaged Kubernetes objects. Manual cleanup of old releases is still required.

Is Helm take ownership safe for production use?

Yes, but use it intentionally and carefully. The flag bypasses Helm’s safety checks, so ensure you understand the ownership implications. Test in staging first, document all usage, and monitor for conflicts. Remove old releases after successful migration to avoid confusion.

Which Helm version introduced the take ownership flag?

The --take-ownership flag was introduced in Helm v3.17, released in January 2025. This feature addresses long-standing pain points with release renaming and chart migrations that previously required downtime-inducing uninstall/reinstall cycles.

Extending Kyverno Policies: Creating Custom Rules for Kubernetes Security

Kyverno offers a robust, declarative approach to enforcing security and compliance standards within Kubernetes clusters by allowing users to define and enforce custom policies. For an in-depth look at Kyverno’s functionality, including core concepts and benefits, see my detailed article here. In this guide, we’ll focus on extending Kyverno policies, providing a structured walkthrough of its data model, and illustrating use cases to make the most of Kyverno in a Kubernetes environment.

Understanding the Kyverno Policy Data Model

Kyverno policies consist of several components that define how the policy should behave, which resources it should affect, and the specific rules that apply. Let’s dive into the main parts of the Kyverno policy model:

  1. Policy Definition: This is the root configuration where you define the policy’s metadata, including name, type, and scope. Policies can be created at the namespace level for specific areas or as cluster-wide rules to enforce uniform standards across the entire Kubernetes cluster.
  2. Rules: Policies are made up of rules that dictate what conditions Kyverno should enforce. Each rule can include logic for validation, mutation, or generation based on your needs.
  3. Match and Exclude Blocks: These sections allow fine-grained control over which resources the policy applies to. You can specify resources by their kinds (e.g., Pods, Deployments), namespaces, labels, and even specific names. This flexibility is crucial for creating targeted policies that impact only the resources you want to manage.
    1. Match block: Defines the conditions under which the rule applies to specific resources.
    2. Exclude block: Used to explicitly omit resources that match certain conditions, ensuring that unaffected resources are not inadvertently included.
  4. Validation, Mutation, and Generation Actions: Each rule can take different types of actions:
    1. Validation: Ensures resources meet specific criteria and blocks deployment if they don’t.
    2. Mutation: Adjusts resource configurations to align with predefined standards, which is useful for auto-remediation.
    3. Generation: Creates or manages additional resources based on existing resource configurations.

Example: Restricting Container Image Sources to Docker Hub

A common security requirement is to limit container images to trusted registries. The example below demonstrates a policy that only permits images from Docker Hub.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-dockerhub-images
spec:
  rules:
    - name: only-dockerhub-images
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Only Docker Hub images are allowed."
        pattern:
          spec:
            containers:
              - image: "docker.io/*"

This policy targets all Pod resources in the cluster and enforces a validation rule that restricts the image source to docker.io. If a Pod uses an image outside Docker Hub, Kyverno denies its deployment, reinforcing secure sourcing practices.
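
If you use the Kyverno CLI, you can dry-run a policy against a manifest before applying it to the cluster; the file names here are assumptions:

# Evaluate the policy against a local Pod manifest without touching the cluster
kyverno apply restrict-dockerhub-images.yaml --resource test-pod.yaml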

Practical Use-Cases for Kyverno Policies

Kyverno policies can handle a variety of Kubernetes management tasks through validation, mutation, and generation. Let’s explore examples for each type to illustrate Kyverno’s versatility:

1. Validation Policies

Validation policies in Kyverno ensure that resources comply with specific configurations or security standards, stopping any non-compliant resources from deploying.

Use-Case: Enforcing Resource Limits for Containers

This example prevents deployments that lack resource limits, ensuring all Pods specify CPU and memory constraints.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enforce-resource-limits
spec:
  rules:
    - name: require-resource-limits
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Resource limits (CPU and memory) are required for all containers."
        pattern:
          spec:
            containers:
              - resources:
                  limits:
                    cpu: "?*"
                    memory: "?*"

By enforcing resource limits, this policy helps prevent resource contention in the cluster, fostering stable and predictable performance.

2. Mutation Policies

Mutation policies allow Kyverno to automatically adjust configurations in resources to meet compliance requirements. This approach is beneficial for consistent configurations without manual intervention.

Use-Case: Adding Default Labels to Pods

This policy adds a default label, environment: production, to all new Pods that lack this label, ensuring that resources align with organization-wide labeling standards.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-label
spec:
  rules:
    - name: add-environment-label
      match:
        resources:
          kinds:
            - Pod
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              environment: "production"

This mutation policy is an example of how Kyverno can standardize resource configurations at scale by dynamically adding missing information, reducing human error and ensuring labeling consistency.

3. Generation Policies

Generation policies in Kyverno are used to create or update related resources, enhancing Kubernetes automation by responding to specific configurations or needs in real-time.

Use-Case: Automatically Creating a ConfigMap for Each New Namespace

This example policy generates a ConfigMap in every new namespace, setting default configuration values for all resources in that namespace.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: generate-configmap
spec:
  rules:
    - name: add-default-configmap
      match:
        resources:
          kinds:
            - Namespace
      generate:
        kind: ConfigMap
        name: default-config
        namespace: "{{request.object.metadata.name}}"
        data:
          default-key: "default-value"

This generation policy is triggered whenever a new namespace is created, automatically provisioning a ConfigMap with default settings. This approach is especially useful in multi-tenant environments, ensuring new namespaces have essential configurations in place.
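
You can verify the behavior by creating a namespace and checking for the generated ConfigMap (the namespace name is arbitrary):

kubectl create namespace team-a
kubectl get configmap default-config -n team-a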

Conclusion

Extending Kyverno policies enables Kubernetes administrators to establish and enforce tailored security and operational practices within their clusters. By leveraging Kyverno’s capabilities in validation, mutation, and generation, you can automate compliance, streamline operations, and reinforce security standards seamlessly.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Kyverno: A Detailed Way of Enforcing Standard and Custom Policies

In the Kubernetes ecosystem, security and governance are key aspects that need continuous attention. While Kubernetes offers some out-of-the-box (OOTB) security features such as Pod Security Admission (PSA), these might not be sufficient for complex environments with varying compliance requirements. This is where Kyverno comes into play, providing a powerful yet flexible solution for managing and enforcing policies across your cluster.

In this post, we will explore the key differences between Kyverno and PSA, explain how Kyverno can be used in different use cases, and show you how to install and deploy policies with it. Although custom policy creation will be covered in a separate post, we will reference some pre-built policies you can use right away.

What is Pod Security Admission (PSA)?

Kubernetes introduced Pod Security Admission (PSA) as a replacement for the now deprecated PodSecurityPolicy (PSP). PSA focuses on enforcing three predefined levels of security: Privileged, Baseline, and Restricted. These levels control what pods are allowed to run in a namespace based on their security context configurations.

  • Privileged: Minimal restrictions, allowing privileged containers and host access.
  • Baseline: Applies standard restrictions, disallowing privileged containers and limiting host access.
  • Restricted: The strictest level, ensuring secure defaults and enforcing best practices for running containers.

While PSA is effective for basic security requirements, it lacks flexibility when enforcing fine-grained or custom policies. We have a full article covering this topic that you can read here.

Kyverno vs. PSA: Key Differences

Kyverno extends beyond the capabilities of PSA by offering more granular control and flexibility. Here’s how it compares:

  1. Policy Types: While PSA focuses solely on security, Kyverno allows the creation of policies for validation, mutation, and generation of resources. This means you can modify or generate new resources, not just enforce security rules.
  2. Customizability: Kyverno supports custom policies that can enforce your organization’s compliance requirements. You can write policies that govern specific resource types, such as ensuring that all deployments have certain labels or that container images come from a trusted registry.
  3. Policy as Code: Kyverno policies are written in YAML, allowing for easy integration with CI/CD pipelines and GitOps workflows. This makes policy management declarative and version-controlled, which is not the case with PSA.
  4. Audit and Reporting: With Kyverno, you can generate detailed audit logs and reports on policy violations, giving administrators a clear view of how policies are enforced and where violations occur. PSA lacks this built-in reporting capability.
  5. Enforcement and Mutation: While PSA primarily enforces restrictions on pods, Kyverno allows not only validation of configurations but also modification of resources (mutation) when required. This adds an additional layer of flexibility, such as automatically adding annotations or labels.

When to Use Kyverno Over PSA

While PSA might be sufficient for simpler environments, Kyverno becomes a valuable tool in scenarios requiring:

  • Custom Compliance Rules: For example, enforcing that all containers use a specific base image or restricting specific container capabilities across different environments.
  • CI/CD Integrations: Kyverno can integrate into your CI/CD pipelines, ensuring that resources comply with organizational policies before they are deployed.
  • Complex Governance: When managing large clusters with multiple teams, Kyverno’s policy hierarchy and scope allow for finer control over who can deploy what and how resources are configured.

If your organization needs a more robust and flexible security solution, Kyverno is a better fit compared to PSA’s more generic approach.

Installing Kyverno

To start using Kyverno, you’ll need to install it in your Kubernetes cluster. This is a straightforward process using Helm, which makes it easy to manage and update.

Step-by-Step Installation

  1. Add the Kyverno Helm repository:

helm repo add kyverno https://kyverno.github.io/kyverno/

  2. Update Helm repositories:

helm repo update

  3. Install Kyverno in your Kubernetes cluster:

helm install kyverno kyverno/kyverno --namespace kyverno --create-namespace

  4. Verify the installation:

kubectl get pods -n kyverno

After installation, Kyverno will begin enforcing policies across your cluster, but you’ll need to deploy some policies to get started.

Deploying Policies with Kyverno

Kyverno policies are written in YAML, just like Kubernetes resources, which makes them easy to read and manage. You can find several ready-to-use policies from the Kyverno Policy Library, or create your own to match your requirements.

Here is an example of a simple validation policy that ensures all pods use trusted container images from a specific registry:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-trusted-registry
spec:
  validationFailureAction: enforce
  rules:
  - name: check-registry
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Only images from 'myregistry.com' are allowed."
      pattern:
        spec:
          containers:
          - image: "myregistry.com/*"

This policy will automatically block the deployment of any pod that uses an image from a registry other than myregistry.com.

Applying the Policy

To apply the above policy, save it to a YAML file (e.g., trusted-registry-policy.yaml) and run the following command:

kubectl apply -f trusted-registry-policy.yaml

Once applied, Kyverno will enforce this policy across your cluster.

Viewing Kyverno Policy Reports

Kyverno generates detailed reports on policy violations, which are useful for audits and tracking policy compliance. To check the reports, you can use the following commands:

List all Kyverno policy reports:

kubectl get clusterpolicyreport

Describe a specific policy report to get more details:

kubectl describe clusterpolicyreport <report-name>

These reports can be integrated into your monitoring tools to trigger alerts when critical violations occur.

Conclusion

Kyverno offers a flexible and powerful way to enforce policies in Kubernetes, making it an essential tool for organizations that need more than the basic capabilities provided by PSA. Whether you need to ensure compliance with internal security standards, automate resource modifications, or integrate policies into CI/CD pipelines, Kyverno’s extensive feature set makes it a go-to choice for Kubernetes governance.

For now, start with the out-of-the-box policies available in Kyverno’s library. In future posts, we’ll dive deeper into creating custom policies tailored to your specific needs.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Kubernetes Pod Security Admission Explained: Enforcing PSA Policies the Right Way

In Kubernetes, security is a key concern, especially as containers and microservices grow in complexity. One of the essential features of Kubernetes for policy enforcement is Pod Security Admission (PSA), which replaces the deprecated Pod Security Policies (PSP). PSA provides a more straightforward and flexible approach to enforce security policies, helping administrators safeguard clusters by ensuring that only compliant pods are allowed to run.

This article will guide you through PSA, the available Pod Security Standards, how to configure them, and how to apply security policies to specific namespaces using labels.

What is Pod Security Admission (PSA)?

PSA is a built-in admission controller introduced in Kubernetes 1.23 to replace Pod Security Policies (PSPs). PSPs had a steep learning curve and could become cumbersome when scaling security policies across various environments. PSA simplifies this process by applying Kubernetes Pod Security Standards based on predefined security levels without needing custom logic for each policy.

With PSA, cluster administrators can restrict the permissions of pods by using labels that correspond to specific Pod Security Standards. PSA operates at the namespace level, enabling better granularity in controlling security policies for different workloads.

Pod Security Standards

Kubernetes provides three key Pod Security Standards in the PSA framework:

  • Privileged: No restrictions; permits all features and is the least restrictive mode. This is not recommended for production workloads but can be used in controlled environments or for workloads requiring elevated permissions.
  • Baseline: Provides a good balance between usability and security, restricting the most dangerous aspects of pod privileges while allowing common configurations. It is suitable for most applications that don’t need special permissions.
  • Restricted: The most stringent level of security. This level is intended for workloads that require the highest level of isolation and control, such as multi-tenant clusters or workloads exposed to the internet.

Each standard includes specific rules to limit pod privileges, such as disallowing privileged containers, restricting access to the host network, and preventing changes to certain security contexts.

Setting Up Pod Security Admission (PSA)

To enable PSA, you need to label your namespaces based on the security level you want to enforce. The label format is as follows:

kubectl label --overwrite ns <namespace> pod-security.kubernetes.io/enforce=<level>

For example, to enforce a restricted security policy on the production namespace, you would run:

kubectl label --overwrite ns production pod-security.kubernetes.io/enforce=restricted

In this example, Kubernetes will automatically apply the rules associated with the restricted policy to all pods deployed in the production namespace.

Additional PSA Modes

PSA also provides additional modes for greater control:

  • Audit: Logs a policy violation but allows the pod to be created.
  • Warn: Issues a warning but permits the pod creation.
  • Enforce: Blocks pod creation if it violates the policy.

To configure these modes, use the following labels:

kubectl label --overwrite ns <namespace> \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/audit=restricted \
  pod-security.kubernetes.io/warn=baseline

This setup enforces the baseline standard while issuing warnings and logging violations for restricted-level rules.

Example: Configuring Pod Security in a Namespace

Let’s walk through an example of configuring baseline security for the dev namespace. First, you need to apply the PSA labels:

kubectl create namespace dev
kubectl label --overwrite ns dev pod-security.kubernetes.io/enforce=baseline

Now, any pod deployed in the dev namespace will be checked against the baseline security standard. If a pod violates the baseline policy (for instance, by attempting to run a privileged container), it will be blocked from starting.
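
To see enforcement in action, you can try to create a pod that violates the baseline standard; the names below are only illustrative. A privileged container should be rejected by the admission controller:

apiVersion: v1
kind: Pod
metadata:
  name: privileged-test
  namespace: dev
spec:
  containers:
    - name: shell
      image: busybox
      command: ["sleep", "3600"]
      securityContext:
        privileged: true   # disallowed by the baseline standard

Applying this manifest with kubectl apply should fail with a message explaining that the pod violates the "baseline" Pod Security Standard in the dev namespace.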

You can also combine warn and audit modes to track violations without blocking pods:

kubectl label --overwrite ns dev \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/audit=privileged

In this case, PSA will allow pods to run if they meet the baseline policy, but it will issue warnings for restricted-level violations and log any privileged-level violations.

Applying Policies by Default

One of the strengths of PSA is its simplicity in applying policies at the namespace level, but administrators might wonder if there’s a way to apply a default policy across new namespaces automatically. As of now, Kubernetes does not natively provide an option to apply PSA policies globally by default. However, you can use admission webhooks or automation tools such as OPA Gatekeeper or Kyverno to enforce default policies for new namespaces.

Conclusion

Pod Security Admission (PSA) simplifies policy enforcement in Kubernetes clusters, making it easier to ensure compliance with security standards across different environments. By configuring Pod Security Standards at the namespace level and using labels, administrators can control the security level of workloads with ease. The flexibility of PSA allows for efficient security management without the complexity associated with the older Pod Security Policies (PSPs).

For more details on configuring PSA and Pod Security Standards, check the official Kubernetes PSA documentation and Pod Security Standards documentation.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Helm Hooks Explained: Complete Guide to Using Hooks in Helm Charts

Helm hooks are a powerful—but often misunderstood—feature of Helm, the Kubernetes package manager. They allow you to execute Kubernetes resources at specific points in the Helm release lifecycle, enabling advanced deployment workflows, validations, migrations, and cleanups.

In this complete guide to Helm hooks, you’ll learn:

  • What Helm hooks are and how they work internally
  • All available Helm hooks and when to use each one
  • Real-world use cases with practical examples
  • Best practices and common pitfalls when working with Helm hooks in Kubernetes

If you build or maintain Helm charts in production, understanding Helm hooks is essential.

What Are Helm Hooks?

Helm hooks are Kubernetes resources annotated with special metadata that instruct Helm to execute them at specific lifecycle events, such as:

  • Before or after an install
  • Before or after an upgrade
  • Before or after a rollback
  • During deletion
  • When running tests

From a technical perspective, Helm hooks are implemented using annotations on standard Kubernetes resources (most commonly Job objects).

Helm evaluates these annotations during a Helm operation and executes the hooked resources outside the normal install/upgrade flow, giving you fine-grained lifecycle control.

Available Helm Hooks and Use Cases

Helm provides several hooks that correspond to different lifecycle stages. Below is a detailed breakdown of all Helm hooks, including execution timing and common use cases.

1. pre-install

Execution timing
After templates are rendered, but before any Kubernetes resources are created.

Typical use cases

  • Creating prerequisites (ConfigMaps, Secrets)
  • Performing environment validation
  • Preparing external dependencies

apiVersion: batch/v1
kind: Job
metadata:
  name: setup-config
  annotations:
    "helm.sh/hook": pre-install
spec:
  template:
    spec:
      containers:
        - name: config-creator
          image: busybox
          command: ['sh', '-c', 'echo "config data" > /config/config.txt']
      restartPolicy: Never

2. post-install

Execution timing
After all resources have been successfully created.

Typical use cases

  • Database initialization
  • Data seeding
  • Post-deployment verification

apiVersion: batch/v1
kind: Job
metadata:
  name: init-database
  annotations:
    "helm.sh/hook": post-install
spec:
  template:
    spec:
      containers:
        - name: db-init
          image: busybox
          command: ['sh', '-c', 'init-db-command']
      restartPolicy: Never

3. pre-delete

Execution timing
Triggered before Helm deletes any resources.

Typical use cases

  • Backups
  • Graceful shutdowns
  • External cleanup preparation

apiVersion: batch/v1
kind: Job
metadata:
  name: backup-before-delete
  annotations:
    "helm.sh/hook": pre-delete
spec:
  template:
    spec:
      containers:
        - name: backup
          image: busybox
          command: ['sh', '-c', 'backup-command']
      restartPolicy: Never

4. post-delete

Execution timing
After all release resources have been deleted.

Typical use cases

  • Cleaning up cloud resources
  • Removing external state
  • Audit logging

apiVersion: batch/v1
kind: Job
metadata:
  name: cleanup
  annotations:
    "helm.sh/hook": post-delete
spec:
  template:
    spec:
      containers:
        - name: cleanup
          image: busybox
          command: ['sh', '-c', 'cleanup-command']
      restartPolicy: Never

5. pre-upgrade

Execution timing
Before Helm applies any upgrade changes.

Typical use cases

  • Schema validation
  • Pre-upgrade checks
  • Compatibility verification

apiVersion: batch/v1
kind: Job
metadata:
  name: pre-upgrade-check
  annotations:
    "helm.sh/hook": pre-upgrade
spec:
  template:
    spec:
      containers:
        - name: upgrade-check
          image: busybox
          command: ['sh', '-c', 'upgrade-check-command']
      restartPolicy: Never

6. post-upgrade

Execution timing
After all upgraded resources are applied.

Typical use cases

  • Data migrations
  • Smoke tests
  • Post-upgrade validation

apiVersion: batch/v1
kind: Job
metadata:
  name: post-upgrade-verify
  annotations:
    "helm.sh/hook": post-upgrade
spec:
  template:
    spec:
      containers:
        - name: verification
          image: busybox
          command: ['sh', '-c', 'verify-upgrade']
      restartPolicy: Never

7. pre-rollback

Execution timing
Before Helm reverts to a previous release revision.

Typical use cases

  • Data snapshots
  • Notifications
  • Rollback preparation

apiVersion: batch/v1
kind: Job
metadata:
  name: pre-rollback-backup
  annotations:
    "helm.sh/hook": pre-rollback
spec:
  template:
    spec:
      containers:
        - name: backup
          image: busybox
          command: ['sh', '-c', 'rollback-backup']
      restartPolicy: Never

8. post-rollback

Execution timing
After rollback resources are restored.

Typical use cases

  • State verification
  • Alerting
  • Post-incident actions

apiVersion: batch/v1
kind: Job
metadata:
  name: post-rollback-verify
  annotations:
    "helm.sh/hook": post-rollback
spec:
  template:
    spec:
      containers:
        - name: verify
          image: busybox
          command: ['sh', '-c', 'verify-rollback']
      restartPolicy: Never

9. test

Execution timing
Executed only when running helm test.

Typical use cases

  • Integration tests
  • Health checks
  • End-to-end validation

apiVersion: batch/v1
kind: Job
metadata:
  name: test-application
  annotations:
    "helm.sh/hook": test
spec:
  template:
    spec:
      containers:
        - name: test
          image: busybox
          command: ['sh', '-c', 'run-tests']
      restartPolicy: Never
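
With such a test hook packaged in the chart, the tests are executed on demand against an installed release (release and chart names are placeholders):

helm install my-release ./my-chart
helm test my-release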

Helm Hook Annotations Explained

Helm provides additional annotations to control hook behavior:

  • helm.sh/hook-weight
    Controls execution order. Lower values run first.
  • helm.sh/hook-delete-policy
    Determines when hook resources are deleted:
    • hook-succeeded
    • hook-failed
    • before-hook-creation
  • helm.sh/resource-policy: keep
    Prevents Helm from deleting the resource, useful for debugging.

These annotations are critical for avoiding orphaned jobs and unexpected hook behavior.
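
As a sketch of how these annotations combine on a single hook (the Job name and command are placeholders), a pre-upgrade Job that always re-runs and cleans itself up after success might carry:

apiVersion: batch/v1
kind: Job
metadata:
  name: pre-upgrade-migrate
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "0"                                        # lower weights run first
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded # re-run cleanly, remove on success
spec:
  template:
    spec:
      containers:
        - name: migrate
          image: busybox
          command: ['sh', '-c', 'run-migrations']                     # placeholder command
      restartPolicy: Never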

Best Practices for Using Helm Hooks

✔ Use hooks sparingly — avoid overloading charts with logic
✔ Prefer idempotent hook jobs
✔ Always define restartPolicy: Never for Jobs
✔ Clean up hook resources with hook-delete-policy
✔ Avoid using hooks for core application logic

Conclusion

Helm hooks give you precise control over the Kubernetes deployment lifecycle, making them invaluable for advanced Helm charts and production workflows. When used correctly, they enable safer upgrades, cleaner rollbacks, and more reliable deployments.

FAQ & Takeaways

What are Helm hooks?

Helm hooks are Kubernetes resources annotated so that Helm executes them at specific points in the release lifecycle (e.g., before or after install, upgrade, delete). They allow you to prepare prerequisites, run jobs, or clean up resources.

How do I use Helm hooks in my Helm charts?

You add helm.sh/hook annotations to Kubernetes manifests in your chart. These annotations tell Helm when to run the resource (pre-install, post-install, pre-delete, etc.). Jobs are commonly used to implement hook tasks.

When should I use pre-install vs post-install hooks?

Use a pre-install hook when you need to create prerequisites (like ConfigMaps or Secrets) or validate the environment before deploying. Use a post-install hook when you need to initialize a database, seed data, or run verification jobs after the chart is installed.

Are Helm hooks removed automatically?

By default, hook resources are not removed immediately after execution; Helm applies the before-hook-creation policy, which deletes the previous hook resource the next time the hook runs. You can change this with the helm.sh/hook-delete-policy annotation (e.g., hook-succeeded, hook-failed, before-hook-creation) or keep resources for debugging with helm.sh/resource-policy: keep.

What is the difference between Helm hooks and Helm tests?

Helm hooks run automatically at specified lifecycle events (install, upgrade, delete, rollback), whereas Helm tests run only when you invoke helm test. Tests are used to validate the health or functionality of your deployment.

To deepen your Helm expertise, check out our
👉 comprehensive Helm charts guide

              Advanced Helm Commands and Flags Every Kubernetes Engineer Should Know

              Advanced Helm Commands and Flags Every Kubernetes Engineer Should Know

Managing Kubernetes resources effectively can sometimes feel overwhelming, but Helm, the Kubernetes package manager, offers several commands and flags that make the process smoother and more intuitive. In this article, we’ll dive into some lesser-known Helm commands and flags, explaining their uses and benefits with practical examples.

              These advanced commands are essential for mastering Helm in production. For the complete toolkit including fundamentals, testing, and deployment patterns, visit our Helm package management guide.

              1. helm get values: Retrieving Deployed Chart Values

              The helm get values command is essential when you need to see the configuration values of a deployed Helm chart. This is particularly useful when you have a chart deployed but lack access to its original configuration file. With this command, you can achieve an “Infrastructure as Code” approach by capturing the current state of your deployment.

              Usage:

              helm get values <release-name> [flags]

              Example:

              To get the values of a deployed chart named my-release:

              helm get values my-release --namespace my-namespace

              This command outputs the current values used for the deployment, which is valuable for documentation, replicating the environment, or modifying deployments.
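A couple of variations are worth knowing (my-release, my-namespace, and the output file name are placeholders):

# Include computed defaults as well as user-supplied values, and save them as YAML
helm get values my-release --namespace my-namespace --all -o yaml > my-release-values.yaml

# Inspect the values that were used for an earlier revision
helm get values my-release --namespace my-namespace --revision 2

Committing the exported file to version control gives you a reproducible record of the release configuration.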

2. Understanding helm upgrade Flags: --reset-values, --reuse-values, and --reset-then-reuse-values

The helm upgrade command is typically used to upgrade or modify an existing Helm release. However, its behavior can be finely tuned using several flags: --reset-values, --reuse-values, and --reset-then-reuse-values.

              • --reset-values: Ignores the previous values and uses only the values provided in the current command. Use this flag when you want to override the existing configuration entirely.

              Example Scenario: You are deploying a new version of your application, and you want to ensure that no old values are retained.

                helm upgrade my-release my-chart --reset-values --set newKey=newValue
              • --reuse-values: Reuses the previous release’s values and merges them with any new values provided. This flag is useful when you want to keep most of the old configuration but apply a few tweaks.

              Example Scenario: You need to add a new environment variable to an existing deployment without affecting the other settings.

                helm upgrade my-release my-chart --reuse-values --set newEnv=production
• --reset-then-reuse-values: A combination of the two, available in newer Helm releases. It first resets to the chart’s built-in default values, then reapplies the previous release’s values and merges in any new overrides, letting you start from a clean slate while retaining specific configurations.

              Example Scenario: Useful in complex environments where you want to ensure the chart is using the original default settings but retain some custom values.

  helm upgrade my-release my-chart --reset-then-reuse-values --set version=2.0
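To make the difference concrete, here is a quick sketch of how the first two flags behave across successive upgrades (my-release, my-chart, and the value names are placeholders):

# Initial install with a custom value
helm install my-release my-chart --set logLevel=debug

# --reuse-values keeps logLevel=debug and adds replicaCount=3
helm upgrade my-release my-chart --reuse-values --set replicaCount=3

# --reset-values drops logLevel=debug; only replicaCount=3 is applied on top of the chart defaults
helm upgrade my-release my-chart --reset-values --set replicaCount=3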

              3. helm lint: Ensuring Chart Quality in CI/CD Pipelines

              The helm lint command checks Helm charts for syntax errors, best practices, and other potential issues. This is especially useful when integrating Helm into a CI/CD pipeline, as it ensures your charts are reliable and adhere to best practices before deployment.

              Usage:

              helm lint <chart-path> [flags]
              • <chart-path>: Path to the Helm chart you want to validate.

              Example:

              helm lint ./my-chart/

This command scans the my-chart directory for issues such as missing fields, incorrect YAML structure, or deprecated usage. If you’re automating deployments, adding helm lint to your CI/CD pipeline catches syntax and structural problems before the build or deployment stages. You can learn more about Helm testing in the linked article.
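A couple of handy variations for CI/CD use (values-production.yaml is a placeholder file name):

# Treat lint warnings as failures so the pipeline stops early
helm lint ./my-chart/ --strict

# Lint the chart with the same values file you deploy with
helm lint ./my-chart/ -f values-production.yaml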

              4. helm rollback: Reverting to a Previous Release

              The helm rollback command allows you to revert a release to a previous version. This can be incredibly useful in case of a failed upgrade or deployment, as it provides a way to quickly restore a known good state.

              Usage:

              helm rollback <release-name> [revision] [flags]
              • [revision]: The revision number to which you want to roll back. If omitted, Helm will roll back to the previous release by default.

              Example:

              To roll back a release named my-release to its previous version:

              helm rollback my-release

              To roll back to a specific revision, say revision 3:

              helm rollback my-release 3

              This command can be a lifesaver when a recent change breaks your application, allowing you to quickly restore service continuity while investigating the issue.
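In practice, you would usually check the release history first to pick a known good revision; the --wait flag makes the rollback block until the restored resources are ready:

# List past revisions of the release
helm history my-release --namespace my-namespace

# Roll back to revision 3 and wait for the rolled-back resources to become ready
helm rollback my-release 3 --namespace my-namespace --wait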

              5. helm verify: Validating a Chart Before Use

The helm verify command checks the integrity and provenance of a packaged chart before it is used. It confirms that the chart archive matches its accompanying provenance (.prov) file and that the provenance was signed with a key in your keyring, ensuring the package has not been tampered with or corrupted. It’s particularly useful when you are pulling charts from external repositories or using charts shared across multiple teams.

              Usage:

              helm verify <chart-path>

              Example:

              To verify a downloaded chart named my-chart:

              helm verify ./my-chart.tgz

If the chart passes the verification, Helm will output a success message. If it fails, you’ll see details of the issue, such as a missing provenance file, an untrusted signing key, or a checksum mismatch.
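Verification relies on a provenance (.prov) file shipped alongside the chart archive and a matching public key in your local keyring; you can also verify as part of an install:

# Verify against an explicit keyring (the default is ~/.gnupg/pubring.gpg)
helm verify ./my-chart.tgz --keyring ~/.gnupg/pubring.gpg

# Verify the chart automatically before installing it
helm install my-release ./my-chart.tgz --verify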

              Conclusion

              Leveraging these advanced Helm commands and flags can significantly enhance your Kubernetes management capabilities. Whether you are retrieving existing deployment configurations, fine-tuning your Helm upgrades, or ensuring the quality of your charts in a CI/CD pipeline, these tricks help you maintain a robust and efficient Kubernetes environment.

              Exposing TCP Ports with Istio Ingress Gateway in Kubernetes (Step-by-Step Guide)

              Exposing TCP Ports with Istio Ingress Gateway in Kubernetes (Step-by-Step Guide)

              Istio has become an essential tool for managing HTTP traffic within Kubernetes clusters, offering advanced features such as Canary Deployments, mTLS, and end-to-end visibility. However, some tasks, like exposing a TCP port using the Istio IngressGateway, can be challenging if you’ve never done it before. This article will guide you through the process of exposing TCP ports with Istio Ingress Gateway, complete with real-world examples and practical use cases.

              Understanding the Context

              Istio is often used to manage HTTP traffic in Kubernetes, providing powerful capabilities such as traffic management, security, and observability. The Istio IngressGateway serves as the entry point for external traffic into the Kubernetes cluster, typically handling HTTP and HTTPS traffic. However, Istio also supports TCP traffic, which is necessary for use cases like exposing databases or other non-HTTP services running in the cluster to external consumers.

              Exposing a TCP port through Istio involves configuring the IngressGateway to handle TCP traffic and route it to the appropriate service. This setup is particularly useful in scenarios where you need to expose services like TIBCO EMS or Kubernetes-based databases to other internal or external applications.

              Steps to Expose a TCP Port with Istio IngressGateway

              1.- Modify the Istio IngressGateway Service:

Before configuring the Gateway, you must ensure that the Istio IngressGateway service itself listens on the new TCP port. This step is crucial when the IngressGateway is exposed as a NodePort or LoadBalancer service, because the new port must also be opened on the external load balancer in front of the cluster.

apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  ports:
  - name: http2
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  - name: tcp
    port: 31400
    targetPort: 31400
    protocol: TCP
              

Update the Istio IngressGateway service to include the new port 31400 for TCP traffic.

2.- Configure the Istio IngressGateway:

After modifying the service, configure the Istio IngressGateway to listen on the desired TCP port.

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: tcp-ingress-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*"
              

              In this example, the IngressGateway is configured to listen on port 31400 for TCP traffic.

              3.- Create a Service and VirtualService:

              After configuring the gateway, you need to create a Service that represents the backend application and a VirtualService to route the TCP traffic.

apiVersion: v1
kind: Service
metadata:
  name: tcp-service
  namespace: default
spec:
  ports:
  - port: 31400
    targetPort: 8080
    protocol: TCP
  selector:
    app: tcp-app
              

The Service above exposes port 31400 and forwards it to port 8080 on the backend application pods.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: tcp-virtual-service
  namespace: default
spec:
  hosts:
  - "*"
  gateways:
  - istio-system/tcp-ingress-gateway   # the Gateway lives in the istio-system namespace
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: tcp-service
        port:
          number: 31400                # the port declared on tcp-service
              

The VirtualService routes TCP traffic arriving on port 31400 of the gateway to port 31400 of tcp-service, which in turn forwards it to port 8080 on the backend pods. Because the Gateway lives in the istio-system namespace, it is referenced as istio-system/tcp-ingress-gateway from the VirtualService in the default namespace.

              4.- Apply the Configuration

              Apply the above configurations using kubectl to create the necessary Kubernetes resources.

              kubectl apply -f istio-ingressgateway-service.yaml
              kubectl apply -f tcp-ingress-gateway.yaml
              kubectl apply -f tcp-service.yaml
              kubectl apply -f tcp-virtual-service.yaml
              

              After applying these configurations, the Istio IngressGateway will expose the TCP port to external traffic.
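To sanity-check the result, confirm that the new port appears on the IngressGateway service and test the connection from outside the cluster (replace <INGRESS_HOST> with the external IP or hostname of the istio-ingressgateway service):

# The tcp port (31400) should appear in the service's port list
kubectl -n istio-system get svc istio-ingressgateway

# Attempt a raw TCP connection to the exposed port
nc -vz <INGRESS_HOST> 31400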

              Practical Use Cases

              • Exposing TIBCO EMS Server: One common scenario is exposing a TIBCO EMS (Enterprise Message Service) server running within a Kubernetes cluster to other internal applications or external consumers. By configuring the Istio IngressGateway to handle TCP traffic, you can securely expose EMS’s TCP port, allowing it to communicate with services outside the Kubernetes environment.
              • Exposing Databases: Another use case is exposing a database running within Kubernetes to external services or different clusters. By exposing the database’s TCP port through the Istio IngressGateway, you enable other applications to interact with it, regardless of their location.
              • Exposing a Custom TCP-Based Service: Suppose you have a custom application running within Kubernetes that communicates over TCP, such as a game server or a custom TCP-based API service. You can use the Istio IngressGateway to expose this service to external users, making it accessible from outside the cluster.

              Conclusion

              Exposing TCP ports using the Istio IngressGateway can be a powerful technique for managing non-HTTP traffic in your Kubernetes cluster. With the steps outlined in this article, you can confidently expose services like TIBCO EMS, databases, or custom TCP-based applications to external consumers, enhancing the flexibility and connectivity of your applications.