Helm v3.17 Introduces --take-ownership: What It Solves and When To Use It

Helm has long been the standard for managing Kubernetes applications using packaged charts, bringing a level of reproducibility and automation to the deployment process. However, some operational tasks, such as renaming a release or migrating objects between charts, have traditionally required cumbersome workarounds. With the introduction of the --take-ownership flag in Helm v3.17 (released in January 2025), a long-standing pain point is finally addressed—at least partially.

In this post, we will explore:

  • What the --take-ownership flag does
  • Why it was needed
  • The caveats and limitations
  • Real-world use cases where it helps
  • When not to use it

Understanding Helm Object Ownership

When Helm installs or upgrades a chart, it injects metadata—labels and annotations—into every managed Kubernetes object. These include:

app.kubernetes.io/managed-by: Helm
meta.helm.sh/release-name: my-release
meta.helm.sh/release-namespace: default

This metadata serves an important role: Helm uses it to track and manage resources associated with each release. As a safeguard, Helm does not allow one release to modify objects owned by another; if you try, you will see an error like the one below:

Error: Unable to continue with install: Service "provisioner-agent" in namespace "test-my-ns" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "dp-core-infrastructure11": current value is "dp-core-infrastructure"

While this protects users from accidental overwrites, it creates limitations for advanced use cases.

Why --take-ownership Was Needed

Let’s say you want to:

  • Rename an existing Helm release from api-v1 to api.
  • Move a ConfigMap or Service from one chart to another.
  • Rebuild state during GitOps reconciliation when previous Helm metadata has drifted.

Previously, your only option was to:

  1. Uninstall the existing release.
  2. Reinstall under the new name.

This approach introduces downtime, and in production systems, that’s often not acceptable.

What the Flag Does

helm upgrade my-release ./my-chart --take-ownership

When this flag is passed, Helm will:

  • Skip the ownership validation for existing objects.
  • Override the labels and annotations to associate the object with the current release.

In practice, this allows you to claim ownership of resources that previously belonged to another release, enabling seamless handovers.
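For example, you can check which release Helm now considers the owner of an object by inspecting its annotations. The command below reuses the Service name and namespace from the earlier error message purely as an illustration:

kubectl -n test-my-ns get service provisioner-agent \
  -o jsonpath='{.metadata.annotations.meta\.helm\.sh/release-name}'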

⚠️ What It Doesn’t Do

This flag does not:

  • Clean up references from the previous release.
  • Protect you from future uninstalls of the original release (which might still remove shared resources).
  • Allow you to adopt completely unmanaged Kubernetes resources (those not initially created by Helm).

In short, it’s a mechanism for bypassing Helm’s ownership checks, not a full lifecycle manager.

Real-World Use Cases

Let’s go through common scenarios where this feature is useful.

✅ 1. Renaming a Release Without Downtime

Before:

helm uninstall old-name
helm install new-name ./chart

Now (since the new-name release does not exist yet, the --install flag is also needed):

helm upgrade --install new-name ./chart --take-ownership

✅ 2. Migrating Objects Between Charts

You’re refactoring a large chart into smaller, modular ones and need to reassign certain Service or Secret objects.

This flag allows the new release to take control of the object without deleting or recreating it.

✅ 3. GitOps Drift Reconciliation

If objects were deployed out-of-band or their metadata changed unintentionally, GitOps tooling using Helm can recover without manual intervention using --take-ownership.

Best Practices and Recommendations

  • Use this flag intentionally, and document where it’s applied.
  • If possible, remove the previous release after migration to avoid confusion.
  • Monitor Helm’s behavior closely when managing shared objects.
  • For non-Helm-managed resources, continue to use kubectl annotate or kubectl label to manually align metadata, as sketched below.
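As a rough sketch of that last point, an existing unmanaged Service can be aligned like this before running helm install or helm upgrade; my-svc, my-release, and default are placeholder names:

kubectl -n default annotate service my-svc \
  meta.helm.sh/release-name=my-release \
  meta.helm.sh/release-namespace=default
kubectl -n default label service my-svc app.kubernetes.io/managed-by=Helm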

Conclusion

The --take-ownership flag is a welcomed addition to Helm’s CLI arsenal. While not a universal solution, it smooths over many of the rough edges developers and SREs face during release evolution and GitOps adoption.

It brings a subtle but powerful improvement—especially in complex environments where resource ownership isn’t static.

Stay updated with Helm releases, and consider this flag your new ally in advanced release engineering.

Extending Kyverno Policies: Creating Custom Rules for Enhanced Kubernetes Security

Kyverno offers a robust, declarative approach to enforcing security and compliance standards within Kubernetes clusters by allowing users to define and enforce custom policies. For an in-depth look at Kyverno’s functionality, including core concepts and benefits, see my detailed article here. In this guide, we’ll focus on extending Kyverno policies, providing a structured walkthrough of its data model, and illustrating use cases to make the most of Kyverno in a Kubernetes environment.

Understanding the Kyverno Policy Data Model

Kyverno policies consist of several components that define how the policy should behave, which resources it should affect, and the specific rules that apply. Let’s dive into the main parts of the Kyverno policy model:

  1. Policy Definition: This is the root configuration where you define the policy’s metadata, including name, type, and scope. Policies can be created at the namespace level for specific areas or as cluster-wide rules to enforce uniform standards across the entire Kubernetes cluster.
  2. Rules: Policies are made up of rules that dictate what conditions Kyverno should enforce. Each rule can include logic for validation, mutation, or generation based on your needs.
  3. Match and Exclude Blocks: These sections allow fine-grained control over which resources the policy applies to. You can specify resources by their kinds (e.g., Pods, Deployments), namespaces, labels, and even specific names. This flexibility is crucial for creating targeted policies that impact only the resources you want to manage.
    1. Match block: Defines the conditions under which the rule applies to specific resources.
    2. Exclude block: Used to explicitly omit resources that match certain conditions, ensuring that unaffected resources are not inadvertently included (a combined match/exclude sketch follows this list).
  4. Validation, Mutation, and Generation Actions: Each rule can take different types of actions:
    1. Validation: Ensures resources meet specific criteria and blocks deployment if they don’t.
    2. Mutation: Adjusts resource configurations to align with predefined standards, which is useful for auto-remediation.
    3. Generation: Creates or manages additional resources based on existing resource configurations.
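To make the match and exclude blocks from point 3 concrete, here is a minimal rule fragment in the same style used throughout this guide. It targets Deployments in the production namespace while exempting anything carrying a hypothetical policy-exempt: "true" label:

match:
  resources:
    kinds:
      - Deployment
    namespaces:
      - production
exclude:
  resources:
    selector:
      matchLabels:
        policy-exempt: "true"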

Example: Restricting Container Image Sources to Docker Hub

A common security requirement is to limit container images to trusted registries. The example below demonstrates a policy that only permits images from Docker Hub.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-dockerhub-images
spec:
  validationFailureAction: Enforce
  rules:
    - name: only-dockerhub-images
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Only Docker Hub images are allowed."
        pattern:
          spec:
            containers:
              - image: "docker.io/*"

This policy targets all Pod resources in the cluster and enforces a validation rule that restricts the image source to docker.io. If a Pod uses an image outside Docker Hub, Kyverno denies its deployment, reinforcing secure sourcing practices.

Practical Use-Cases for Kyverno Policies

Kyverno policies can handle a variety of Kubernetes management tasks through validation, mutation, and generation. Let’s explore examples for each type to illustrate Kyverno’s versatility:

1. Validation Policies

Validation policies in Kyverno ensure that resources comply with specific configurations or security standards, stopping any non-compliant resources from deploying.

Use-Case: Enforcing Resource Limits for Containers

This example prevents deployments that lack resource limits, ensuring all Pods specify CPU and memory constraints.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enforce-resource-limits
spec:
  validationFailureAction: Enforce
  rules:
    - name: require-resource-limits
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Resource limits (CPU and memory) are required for all containers."
        pattern:
          spec:
            containers:
              - resources:
                  limits:
                    cpu: "?*"
                    memory: "?*"

By enforcing resource limits, this policy helps prevent resource contention in the cluster, fostering stable and predictable performance.
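For reference, a Pod like the following sketch would satisfy the policy, since every container declares both CPU and memory limits:

apiVersion: v1
kind: Pod
metadata:
  name: limited-pod
spec:
  containers:
    - name: app
      image: nginx
      resources:
        limits:
          cpu: "500m"
          memory: "256Mi"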

2. Mutation Policies

Mutation policies allow Kyverno to automatically adjust configurations in resources to meet compliance requirements. This approach is beneficial for consistent configurations without manual intervention.

Use-Case: Adding Default Labels to Pods

This policy adds a default label, environment: production, to all new Pods that lack this label, ensuring that resources align with organization-wide labeling standards.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-label
spec:
  rules:
    - name: add-environment-label
      match:
        resources:
          kinds:
            - Pod
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              environment: "production"

This mutation policy is an example of how Kyverno can standardize resource configurations at scale by dynamically adding missing information, reducing human error and ensuring labeling consistency.
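Assuming the add-default-label policy above is installed, you can verify the mutation by creating a throwaway Pod and checking its labels:

kubectl run test-pod --image=nginx
kubectl get pod test-pod --show-labels
# The output should now include environment=production alongside the default run=test-pod label.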

3. Generation Policies

Generation policies in Kyverno are used to create or update related resources, enhancing Kubernetes automation by responding to specific configurations or needs in real-time.

Use-Case: Automatically Creating a ConfigMap for Each New Namespace

This example policy generates a ConfigMap in every new namespace, setting default configuration values for all resources in that namespace.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: generate-configmap
spec:
  rules:
    - name: add-default-configmap
      match:
        resources:
          kinds:
            - Namespace
      generate:
        apiVersion: v1
        kind: ConfigMap
        name: default-config
        namespace: "{{request.object.metadata.name}}"
        data:
          data:
            default-key: "default-value"

This generation policy is triggered whenever a new namespace is created, automatically provisioning a ConfigMap with default settings. This approach is especially useful in multi-tenant environments, ensuring new namespaces have essential configurations in place.
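A quick way to confirm the behavior, assuming the policy above is installed, is to create a test namespace and look for the generated ConfigMap:

kubectl create namespace team-a
kubectl get configmap default-config -n team-a -o yaml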

Conclusion

Extending Kyverno policies enables Kubernetes administrators to establish and enforce tailored security and operational practices within their clusters. By leveraging Kyverno’s capabilities in validation, mutation, and generation, you can automate compliance, streamline operations, and reinforce security standards seamlessly.

Kyverno: A Detailed Way of Enforcing Standard and Custom Policies

In the Kubernetes ecosystem, security and governance are key aspects that need continuous attention. While Kubernetes offers some out-of-the-box (OOTB) security features such as Pod Security Admission (PSA), these might not be sufficient for complex environments with varying compliance requirements. This is where Kyverno comes into play, providing a powerful yet flexible solution for managing and enforcing policies across your cluster.

In this post, we will explore the key differences between Kyverno and PSA, explain how Kyverno can be used in different use cases, and show you how to install and deploy policies with it. Although custom policy creation will be covered in a separate post, we will reference some pre-built policies you can use right away.

What is Pod Security Admission (PSA)?

Kubernetes introduced Pod Security Admission (PSA) as a replacement for the now deprecated PodSecurityPolicy (PSP). PSA focuses on enforcing three predefined levels of security: Privileged, Baseline, and Restricted. These levels control what pods are allowed to run in a namespace based on their security context configurations.

  • Privileged: Minimal restrictions, allowing privileged containers and host access.
  • Baseline: Applies standard restrictions, disallowing privileged containers and limiting host access.
  • Restricted: The strictest level, ensuring secure defaults and enforcing best practices for running containers.

While PSA is effective for basic security requirements, it lacks flexibility when enforcing fine-grained or custom policies. We have a full article covering this topic that you can read here.

Kyverno vs. PSA: Key Differences

Kyverno extends beyond the capabilities of PSA by offering more granular control and flexibility. Here’s how it compares:

  1. Policy Types: While PSA focuses solely on security, Kyverno allows the creation of policies for validation, mutation, and generation of resources. This means you can modify or generate new resources, not just enforce security rules.
  2. Customizability: Kyverno supports custom policies that can enforce your organization’s compliance requirements. You can write policies that govern specific resource types, such as ensuring that all deployments have certain labels or that container images come from a trusted registry.
  3. Policy as Code: Kyverno policies are written in YAML, allowing for easy integration with CI/CD pipelines and GitOps workflows. This makes policy management declarative and version-controlled, which is not the case with PSA.
  4. Audit and Reporting: With Kyverno, you can generate detailed audit logs and reports on policy violations, giving administrators a clear view of how policies are enforced and where violations occur. PSA lacks this built-in reporting capability.
  5. Enforcement and Mutation: While PSA primarily enforces restrictions on pods, Kyverno allows not only validation of configurations but also modification of resources (mutation) when required. This adds an additional layer of flexibility, such as automatically adding annotations or labels.

When to Use Kyverno Over PSA

While PSA might be sufficient for simpler environments, Kyverno becomes a valuable tool in scenarios requiring:

  • Custom Compliance Rules: For example, enforcing that all containers use a specific base image or restricting specific container capabilities across different environments.
  • CI/CD Integrations: Kyverno can integrate into your CI/CD pipelines, ensuring that resources comply with organizational policies before they are deployed.
  • Complex Governance: When managing large clusters with multiple teams, Kyverno’s policy hierarchy and scope allow for finer control over who can deploy what and how resources are configured.

If your organization needs a more robust and flexible security solution, Kyverno is a better fit compared to PSA’s more generic approach.

Installing Kyverno

To start using Kyverno, you’ll need to install it in your Kubernetes cluster. This is a straightforward process using Helm, which makes it easy to manage and update.

Step-by-Step Installation

  1. Add the Kyverno Helm repository:

     helm repo add kyverno https://kyverno.github.io/kyverno/

  2. Update Helm repositories:

     helm repo update

  3. Install Kyverno in your Kubernetes cluster:

     helm install kyverno kyverno/kyverno --namespace kyverno --create-namespace

  4. Verify the installation:

     kubectl get pods -n kyverno

After installation, Kyverno will begin enforcing policies across your cluster, but you’ll need to deploy some policies to get started.

          Deploying Policies with Kyverno

          Kyverno policies are written in YAML, just like Kubernetes resources, which makes them easy to read and manage. You can find several ready-to-use policies from the Kyverno Policy Library, or create your own to match your requirements.

          Here is an example of a simple validation policy that ensures all pods use trusted container images from a specific registry:

          apiVersion: kyverno.io/v1
          kind: ClusterPolicy
          metadata:
            name: require-trusted-registry
          spec:
            validationFailureAction: enforce
            rules:
            - name: check-registry
              match:
                resources:
                  kinds:
                  - Pod
              validate:
                message: "Only images from 'myregistry.com' are allowed."
                pattern:
                  spec:
                    containers:
                    - image: "myregistry.com/*"

          This policy will automatically block the deployment of any pod that uses an image from a registry other than myregistry.com.

          Applying the Policy

          To apply the above policy, save it to a YAML file (e.g., trusted-registry-policy.yaml) and run the following command:

          kubectl apply -f trusted-registry-policy.yaml

          Once applied, Kyverno will enforce this policy across your cluster.
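As a quick sanity check, you can try to create a Pod whose image does not come from myregistry.com; with the policy in enforce mode, the request should be rejected with the policy's message:

kubectl run untrusted-test --image=nginx:latest
# Expected: the admission request is denied because the image does not match "myregistry.com/*".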

          Viewing Kyverno Policy Reports

          Kyverno generates detailed reports on policy violations, which are useful for audits and tracking policy compliance. To check the reports, you can use the following commands:

List all Kyverno policy reports:

  kubectl get clusterpolicyreport

Describe a specific policy report to get more details:

  kubectl describe clusterpolicyreport <report-name>

These reports can be integrated into your monitoring tools to trigger alerts when critical violations occur.

              Conclusion

              Kyverno offers a flexible and powerful way to enforce policies in Kubernetes, making it an essential tool for organizations that need more than the basic capabilities provided by PSA. Whether you need to ensure compliance with internal security standards, automate resource modifications, or integrate policies into CI/CD pipelines, Kyverno’s extensive feature set makes it a go-to choice for Kubernetes governance.

              For now, start with the out-of-the-box policies available in Kyverno’s library. In future posts, we’ll dive deeper into creating custom policies tailored to your specific needs.

              Kubernetes Policy Enforcement: Understanding Pod Security Admission (PSA)

              In Kubernetes, security is a key concern, especially as containers and microservices grow in complexity. One of the essential features of Kubernetes for policy enforcement is Pod Security Admission (PSA), which replaces the deprecated Pod Security Policies (PSP). PSA provides a more straightforward and flexible approach to enforce security policies, helping administrators safeguard clusters by ensuring that only compliant pods are allowed to run.

              This article will guide you through PSA, the available Pod Security Standards, how to configure them, and how to apply security policies to specific namespaces using labels.

              What is Pod Security Admission (PSA)?

              PSA is a built-in admission controller introduced in Kubernetes 1.23 to replace Pod Security Policies (PSPs). PSPs had a steep learning curve and could become cumbersome when scaling security policies across various environments. PSA simplifies this process by applying Kubernetes Pod Security Standards based on predefined security levels without needing custom logic for each policy.

              With PSA, cluster administrators can restrict the permissions of pods by using labels that correspond to specific Pod Security Standards. PSA operates at the namespace level, enabling better granularity in controlling security policies for different workloads.

              Pod Security Standards

              Kubernetes provides three key Pod Security Standards in the PSA framework:

              • Privileged: No restrictions; permits all features and is the least restrictive mode. This is not recommended for production workloads but can be used in controlled environments or for workloads requiring elevated permissions.
              • Baseline: Provides a good balance between usability and security, restricting the most dangerous aspects of pod privileges while allowing common configurations. It is suitable for most applications that don’t need special permissions.
              • Restricted: The most stringent level of security. This level is intended for workloads that require the highest level of isolation and control, such as multi-tenant clusters or workloads exposed to the internet.

              Each standard includes specific rules to limit pod privileges, such as disallowing privileged containers, restricting access to the host network, and preventing changes to certain security contexts.

              Setting Up Pod Security Admission (PSA)

              To enable PSA, you need to label your namespaces based on the security level you want to enforce. The label format is as follows:

kubectl label --overwrite ns <namespace> pod-security.kubernetes.io/enforce=<value>

              For example, to enforce a restricted security policy on the production namespace, you would run:

              kubectl label --overwrite ns production pod-security.kubernetes.io/enforce=restricted

              In this example, Kubernetes will automatically apply the rules associated with the restricted policy to all pods deployed in the production namespace.

              Additional PSA Modes

              PSA also provides additional modes for greater control:

              • Audit: Logs a policy violation but allows the pod to be created.
              • Warn: Issues a warning but permits the pod creation.
              • Enforce: Blocks pod creation if it violates the policy.

              To configure these modes, use the following labels:

kubectl label --overwrite ns <namespace> \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/audit=restricted \
  pod-security.kubernetes.io/warn=restricted

This setup enforces the baseline standard while issuing warnings and recording audit log entries for violations of the restricted rules.

              Example: Configuring Pod Security in a Namespace

              Let’s walk through an example of configuring baseline security for the dev namespace. First, you need to apply the PSA labels:

              kubectl create namespace dev
              kubectl label --overwrite ns dev pod-security.kubernetes.io/enforce=baseline

              Now, any pod deployed in the dev namespace will be checked against the baseline security standard. If a pod violates the baseline policy (for instance, by attempting to run a privileged container), it will be blocked from starting.
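For example, a Pod manifest like the following sketch would be rejected by the enforce=baseline label, because privileged containers are not allowed at the baseline level. Applying it with kubectl should return a "forbidden" error that names the violated baseline rule:

apiVersion: v1
kind: Pod
metadata:
  name: privileged-test
  namespace: dev
spec:
  containers:
    - name: app
      image: nginx
      securityContext:
        privileged: true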

              You can also combine warn and audit modes to track violations without blocking pods:

kubectl label --overwrite ns dev \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/audit=restricted

In this case, PSA will allow pods that meet the baseline policy to run, but it will issue warnings and record audit log entries for any restricted-level violations.

              Applying Policies by Default

One of the strengths of PSA is its simplicity in applying policies at the namespace level, but administrators might wonder whether a default policy can be applied to new namespaces automatically. Kubernetes does not offer a label-based way to do this; cluster administrators who control the API server can configure cluster-wide defaults through the PodSecurity AdmissionConfiguration file, and otherwise you can use admission webhooks or automation tools such as OPA Gatekeeper or Kyverno to enforce default policies for new namespaces.

              Conclusion

              Pod Security Admission (PSA) simplifies policy enforcement in Kubernetes clusters, making it easier to ensure compliance with security standards across different environments. By configuring Pod Security Standards at the namespace level and using labels, administrators can control the security level of workloads with ease. The flexibility of PSA allows for efficient security management without the complexity associated with the older Pod Security Policies (PSPs).

              For more details on configuring PSA and Pod Security Standards, check the official Kubernetes PSA documentation and Pod Security Standards documentation.

              Understanding Helm Hooks: A Guide to Using Hooks in Your Helm Charts

              Helm, the package manager for Kubernetes, is a powerful tool for managing complex Kubernetes applications. One of its advanced features is Helm hooks. Hooks allow you to intervene at different points of a Helm operation, such as before or after an install, upgrade, or delete. In this article, we’ll dive into the available hooks, explore how to define them in your Helm charts, and provide use cases for each.

              What Are Helm Hooks?

              Helm hooks are special annotations that can be added to resources in a Helm chart. These hooks trigger specific actions during the release lifecycle of a Helm chart. They enable developers to perform tasks like setting up prerequisites, cleaning up after an operation, or validating a deployment.

              Available Helm Hooks and Use Cases

              Helm provides several hooks that can be used at different stages of a Helm operation. Here’s a breakdown of the available hooks:

              1. pre-install

              Execution Timing: After templates are rendered but before any resources are created in Kubernetes.
              Use Case: Imagine you need to create a ConfigMap or secret that your application relies on but isn’t part of the main chart. The pre-install hook can create this resource before the rest of the chart is deployed.

apiVersion: batch/v1
kind: Job
metadata:
  name: setup-config
  annotations:
    "helm.sh/hook": pre-install
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: config-creator
          image: busybox
          command: ['sh', '-c', 'echo "config data" > /config/config.txt']
              

              2. post-install

              Execution Timing: After all resources are created in Kubernetes.
              Use Case: If your application requires some initial data to be loaded into a database after it’s up and running, a post-install hook could be used to run a job that populates this data.

apiVersion: batch/v1
kind: Job
metadata:
  name: init-database
  annotations:
    "helm.sh/hook": post-install
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: db-init
          image: busybox
          command: ['sh', '-c', 'init-db-command']
              

              3. pre-delete

              Execution Timing: On a deletion request, before any resources are deleted from Kubernetes.
              Use Case: Use a pre-delete hook to gracefully shut down services or to back up important data before deleting a Helm release.

apiVersion: batch/v1
kind: Job
metadata:
  name: backup-before-delete
  annotations:
    "helm.sh/hook": pre-delete
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: backup
          image: busybox
          command: ['sh', '-c', 'backup-command']
              

              4. post-delete

              Execution Timing: After all the release’s resources have been deleted.
              Use Case: A post-delete hook could be used to clean up resources external to Kubernetes that were created by the Helm chart, such as cloud resources or database entries.

apiVersion: batch/v1
kind: Job
metadata:
  name: cleanup
  annotations:
    "helm.sh/hook": post-delete
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: cleanup
          image: busybox
          command: ['sh', '-c', 'cleanup-command']
              

              5. pre-upgrade

              Execution Timing: On an upgrade request, after templates are rendered, but before any resources are updated.
              Use Case: Suppose you need to validate some preconditions before applying an upgrade. A pre-upgrade hook can run a job to check the environment or validate the new configuration.

apiVersion: batch/v1
kind: Job
metadata:
  name: pre-upgrade-check
  annotations:
    "helm.sh/hook": pre-upgrade
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: upgrade-check
          image: busybox
          command: ['sh', '-c', 'upgrade-check-command']
              

              6. post-upgrade

              Execution Timing: After all resources have been upgraded.
              Use Case: After a successful upgrade, a post-upgrade hook might be used to trigger a job that verifies the application’s functionality or migrates data to a new schema.

apiVersion: batch/v1
kind: Job
metadata:
  name: post-upgrade-verify
  annotations:
    "helm.sh/hook": post-upgrade
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: verification
          image: busybox
          command: ['sh', '-c', 'verify-upgrade']
              

              7. pre-rollback

              Execution Timing: On a rollback request, after templates are rendered, but before any resources are rolled back.
              Use Case: This hook can be used to prepare the system for rollback, such as backing up data or notifying other systems of the impending rollback.

apiVersion: batch/v1
kind: Job
metadata:
  name: pre-rollback-backup
  annotations:
    "helm.sh/hook": pre-rollback
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: backup
          image: busybox
          command: ['sh', '-c', 'rollback-backup']
              

              8. post-rollback

              Execution Timing: After all resources have been modified by the rollback.
              Use Case: Use a post-rollback hook to verify the state of the application after a rollback or to notify external systems of the rollback’s completion.

apiVersion: batch/v1
kind: Job
metadata:
  name: post-rollback-verify
  annotations:
    "helm.sh/hook": post-rollback
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: verify
          image: busybox
          command: ['sh', '-c', 'verify-rollback']
              

              9. test

              Execution Timing: Executes when the helm test subcommand is invoked.
              Use Case: This hook is ideal for running tests against a Helm release to verify that it is functioning as expected.

apiVersion: batch/v1
kind: Job
metadata:
  name: test-application
  annotations:
    "helm.sh/hook": test
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: test
          image: busybox
          command: ['sh', '-c', 'run-tests']
              

              Annotations in Helm Hooks

              In addition to defining hooks, Helm allows you to control hook execution using annotations:

              • helm.sh/resource-policy: Prevents resources from being deleted after the hook is executed. Possible value:
              • keep: Keeps the resource after the hook is executed, useful for debugging or retaining logs.
              • helm.sh/hook-weight: Specifies the order in which hooks should be executed. Hooks with lower weights are executed before those with higher weights.
              • helm.sh/hook-delete-policy: Controls when the hook resources should be deleted. Possible values include:
              • hook-succeeded: Deletes the resource if the hook execution is successful.
              • hook-failed: Deletes the resource if the hook execution fails.
              • before-hook-creation: Deletes previous hook resources before creating new ones, ensuring only one instance of the hook is active.
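For example, the post-install Job from earlier could combine these annotations as follows, running at weight 5 and cleaning itself up once it succeeds:

apiVersion: batch/v1
kind: Job
metadata:
  name: init-database
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "5"
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: db-init
          image: busybox
          command: ['sh', '-c', 'init-db-command']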

              Conclusion

              Helm hooks are powerful tools that provide fine-grained control over the deployment lifecycle of your applications in Kubernetes. By understanding and leveraging these hooks, you can ensure that your Helm deployments are both robust and reliable. Make sure to use the appropriate annotations to further fine-tune the behavior of your hooks, optimizing them for your specific use case.

              Advanced Helm Tips and Tricks: Uncommon Commands and Flags for Better Kubernetes Management

              Managing Kubernetes resources effectively can sometimes feel overwhelming, but Helm, the Kubernetes package manager, offers several commands and flags that make the process smoother and more intuitive. In this article, we’ll dive into some lesser-known Helm commands and flags, explaining their uses, benefits, and practical examples.

              1. helm get values: Retrieving Deployed Chart Values

              The helm get values command is essential when you need to see the configuration values of a deployed Helm chart. This is particularly useful when you have a chart deployed but lack access to its original configuration file. With this command, you can achieve an “Infrastructure as Code” approach by capturing the current state of your deployment.

              Usage:

helm get values <release-name> [flags]
              • <release-name>: The name of your Helm release.

              Example:

              To get the values of a deployed chart named my-release:

              helm get values my-release --namespace my-namespace

              This command outputs the current values used for the deployment, which is valuable for documentation, replicating the environment, or modifying deployments.
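If you also want the chart's default values merged in, or a machine-readable dump to store alongside your code, the following variations can help:

# Include all computed values (chart defaults merged with user-supplied values)
helm get values my-release --namespace my-namespace --all

# Export the user-supplied values as YAML for version control
helm get values my-release --namespace my-namespace -o yaml > my-release-values.yaml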

2. Understanding helm upgrade Flags: --reset-values, --reuse-values, and --reset-then-reuse-values

The helm upgrade command is typically used to upgrade or modify an existing Helm release. However, the behavior of this command can be finely tuned using several flags: --reset-values, --reuse-values, and --reset-then-reuse-values.

              • --reset-values: Ignores the previous values and uses only the values provided in the current command. Use this flag when you want to override the existing configuration entirely.

              Example Scenario: You are deploying a new version of your application, and you want to ensure that no old values are retained.

                helm upgrade my-release my-chart --reset-values --set newKey=newValue
              • --reuse-values: Reuses the previous release’s values and merges them with any new values provided. This flag is useful when you want to keep most of the old configuration but apply a few tweaks.

              Example Scenario: You need to add a new environment variable to an existing deployment without affecting the other settings.

                helm upgrade my-release my-chart --reuse-values --set newEnv=production
• --reset-then-reuse-values: A combination of the two. It resets to the chart's default values and then merges the previous release's values back in, allowing you to start with a clean slate while retaining specific configurations.

              Example Scenario: Useful in complex environments where you want to ensure the chart is using the original default settings but retain some custom values.

  helm upgrade my-release my-chart --reset-then-reuse-values --set version=2.0

              3. helm lint: Ensuring Chart Quality in CI/CD Pipelines

              The helm lint command checks Helm charts for syntax errors, best practices, and other potential issues. This is especially useful when integrating Helm into a CI/CD pipeline, as it ensures your charts are reliable and adhere to best practices before deployment.

              Usage:

              helm lint <chart-path> [flags]
              • <chart-path>: Path to the Helm chart you want to validate.

              Example:

              helm lint ./my-chart/

              This command scans the my-chart directory for issues like missing fields, incorrect YAML structure, or deprecated usage. If you’re automating deployments, integrating helm lint into your pipeline helps catch problems early.

              Integrating helm lint in a CI/CD Pipeline:

              In a Jenkins pipeline, for example, you could add the following stage:

pipeline {
  agent any
  stages {
    stage('Lint Helm Chart') {
      steps {
        script {
          sh 'helm lint ./my-chart/'
        }
      }
    }
    // Other stages like build, test, deploy
  }
}

              By adding this stage, you ensure that any syntax or structural issues are caught before proceeding to build or deployment stages.

              4. helm rollback: Reverting to a Previous Release

              The helm rollback command allows you to revert a release to a previous version. This can be incredibly useful in case of a failed upgrade or deployment, as it provides a way to quickly restore a known good state.

              Usage:

              helm rollback <release-name> [revision] [flags]
              • <release-name>: The name of your Helm release.
              • [revision]: The revision number to which you want to roll back. If omitted, Helm will roll back to the previous release by default.

              Example:

              To roll back a release named my-release to its previous version:

              helm rollback my-release

              To roll back to a specific revision, say revision 3:

              helm rollback my-release 3

              This command can be a lifesaver when a recent change breaks your application, allowing you to quickly restore service continuity while investigating the issue.
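To find out which revision number to target, helm history lists the revisions of a release along with their status and chart version:

helm history my-release --namespace my-namespace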

              5. helm verify: Validating a Chart Before Use

              The helm verify command checks the integrity and validity of a chart before it is deployed. This command ensures that the chart’s package file has not been tampered with or corrupted. It’s particularly useful when you are pulling charts from external repositories or using charts shared across multiple teams.

              Usage:

              helm verify <chart-path>
              • <chart-path>: Path to the Helm chart archive (.tgz file).

              Example:

              To verify a downloaded chart named my-chart:

              helm verify ./my-chart.tgz

              If the chart passes the verification, Helm will output a success message. If it fails, you’ll see details of the issues, which could range from missing files to checksum mismatches.
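Note that verification relies on a provenance (.prov) file generated when the chart is signed. As a rough sketch, a publisher signs the package and a consumer verifies it; the key name, keyring path, and file names below are placeholders:

# Publisher: package and sign the chart with a local PGP key
helm package --sign --key 'release-signing-key' --keyring ~/.gnupg/secring.gpg ./my-chart

# Consumer: verify the downloaded archive, or verify at install time
helm verify ./my-chart-1.0.0.tgz
helm install my-release ./my-chart-1.0.0.tgz --verify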

              Conclusion

              Leveraging these advanced Helm commands and flags can significantly enhance your Kubernetes management capabilities. Whether you are retrieving existing deployment configurations, fine-tuning your Helm upgrades, or ensuring the quality of your charts in a CI/CD pipeline, these tricks help you maintain a robust and efficient Kubernetes environment.

              Exposing TCP Ports Using Istio Ingress Gateway

              Istio has become an essential tool for managing HTTP traffic within Kubernetes clusters, offering advanced features such as Canary Deployments, mTLS, and end-to-end visibility. However, some tasks, like exposing a TCP port using the Istio IngressGateway, can be challenging if you’ve never done it before. This article will guide you through the process of exposing TCP ports with Istio Ingress Gateway, complete with real-world examples and practical use cases.

              Understanding the Context

              Istio is often used to manage HTTP traffic in Kubernetes, providing powerful capabilities such as traffic management, security, and observability. The Istio IngressGateway serves as the entry point for external traffic into the Kubernetes cluster, typically handling HTTP and HTTPS traffic. However, Istio also supports TCP traffic, which is necessary for use cases like exposing databases or other non-HTTP services running in the cluster to external consumers.

              Exposing a TCP port through Istio involves configuring the IngressGateway to handle TCP traffic and route it to the appropriate service. This setup is particularly useful in scenarios where you need to expose services like TIBCO EMS or Kubernetes-based databases to other internal or external applications.

              Steps to Expose a TCP Port with Istio IngressGateway

              1.- Modify the Istio IngressGateway Service:

              Before configuring the Gateway, you must ensure that the Istio IngressGateway service is configured to listen on the new TCP port. This step is crucial if you’re using a NodePort service, as this port needs to be opened on the Load Balancer.

apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  ports:
  - name: http2
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  - name: tcp
    port: 31400
    targetPort: 31400
    protocol: TCP
              

2.- Configure the Istio IngressGateway:

After updating the service to include the new port 31400 for TCP traffic, configure a Gateway resource so that the IngressGateway listens on the desired TCP port.

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: tcp-ingress-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*"
              

              In this example, the IngressGateway is configured to listen on port 31400 for TCP traffic.

              3.- Create a Service and VirtualService:

              After configuring the gateway, you need to create a Service that represents the backend application and a VirtualService to route the TCP traffic.

apiVersion: v1
kind: Service
metadata:
  name: tcp-service
  namespace: default
spec:
  ports:
  - port: 31400
    targetPort: 8080
    protocol: TCP
  selector:
    app: tcp-app
              

              The Service above maps port 31400 on the IngressGateway to port 8080 on the backend application.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: tcp-virtual-service
  namespace: default
spec:
  hosts:
  - "*"
  gateways:
  - istio-system/tcp-ingress-gateway
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: tcp-service
        port:
          number: 8080
              

              The VirtualService routes TCP traffic coming to port 31400 on the gateway to the tcp-service on port 8080.

              4.- Apply the Configuration

              Apply the above configurations using kubectl to create the necessary Kubernetes resources.

              kubectl apply -f istio-ingressgateway-service.yaml
              kubectl apply -f tcp-ingress-gateway.yaml
              kubectl apply -f tcp-service.yaml
              kubectl apply -f tcp-virtual-service.yaml
              

              After applying these configurations, the Istio IngressGateway will expose the TCP port to external traffic.
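A simple way to check the result is to resolve the external address of the ingress gateway and test the TCP port with a tool such as nc (this assumes the istio-ingressgateway service is of type LoadBalancer and exposes an IP address):

export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# -z only checks that the connection can be opened; -v prints the result
nc -zv "$INGRESS_HOST" 31400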

              Practical Use Cases

              • Exposing TIBCO EMS Server: One common scenario is exposing a TIBCO EMS (Enterprise Message Service) server running within a Kubernetes cluster to other internal applications or external consumers. By configuring the Istio IngressGateway to handle TCP traffic, you can securely expose EMS’s TCP port, allowing it to communicate with services outside the Kubernetes environment.
              • Exposing Databases: Another use case is exposing a database running within Kubernetes to external services or different clusters. By exposing the database’s TCP port through the Istio IngressGateway, you enable other applications to interact with it, regardless of their location.
              • Exposing a Custom TCP-Based Service: Suppose you have a custom application running within Kubernetes that communicates over TCP, such as a game server or a custom TCP-based API service. You can use the Istio IngressGateway to expose this service to external users, making it accessible from outside the cluster.

              Conclusion

              Exposing TCP ports using the Istio IngressGateway can be a powerful technique for managing non-HTTP traffic in your Kubernetes cluster. With the steps outlined in this article, you can confidently expose services like TIBCO EMS, databases, or custom TCP-based applications to external consumers, enhancing the flexibility and connectivity of your applications.

              ConfigMap with Optional Values in Kubernetes

              Kubernetes ConfigMaps are a powerful tool for managing configuration data separately from application code. However, they can sometimes lead to issues during deployment, particularly when a ConfigMap referenced in a Pod specification is missing, causing the application to fail to start. This is a common scenario that can lead to a CreateContainerConfigError and halt your deployment pipeline.

              Understanding the Problem

              When a ConfigMap is referenced in a Pod’s specification, Kubernetes expects the ConfigMap to be present. If it is not, Kubernetes will not start the Pod, leading to a failed deployment. This can be problematic in situations where certain configuration data is optional or environment-specific, such as proxy settings that are only necessary in certain environments.

              Making ConfigMap Values Optional

              Kubernetes provides a way to define ConfigMap items as optional, allowing your application to start even if the ConfigMap is not present. This can be particularly useful for environment variables that only need to be set under certain conditions.

              Here’s a basic example of how to make a ConfigMap optional:

              apiVersion: v1
              kind: Pod
              metadata:
                name: example-pod
              spec:
                containers:
                - name: example-container
                  image: nginx
                  env:
                  - name: OPTIONAL_ENV_VAR
                    valueFrom:
                      configMapKeyRef:
                        name: example-configmap
                        key: optional-key
                        optional: true
              

              In this example:

              • name: example-configmap refers to the ConfigMap that might or might not be present.
              • optional: true ensures that the Pod will still start even if example-configmap or the optional-key within it is missing.

              Practical Use Case: Proxy Configuration

              A common use case for optional ConfigMap values is setting environment variables for proxy configuration. In many enterprise environments, proxy settings are only required in certain deployment environments (e.g., staging, production) but not in others (e.g., local development).

              apiVersion: v1
              kind: ConfigMap
              metadata:
                name: proxy-config
              data:
                HTTP_PROXY: "http://proxy.example.com"
                HTTPS_PROXY: "https://proxy.example.com"
              

              In your Pod specification, you could reference these proxy settings as optional:

              apiVersion: v1
              kind: Pod
              metadata:
                name: app-pod
              spec:
                containers:
                - name: app-container
                  image: my-app-image
                  env:
                  - name: HTTP_PROXY
                    valueFrom:
                      configMapKeyRef:
                        name: proxy-config
                        key: HTTP_PROXY
                        optional: true
                  - name: HTTPS_PROXY
                    valueFrom:
                      configMapKeyRef:
                        name: proxy-config
                        key: HTTPS_PROXY
                        optional: true
              

              In this setup, if the proxy-config ConfigMap is missing, the application will still start, simply without the proxy settings.
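If you want every key of the ConfigMap exposed as an environment variable, the same optional behavior also works with envFrom, as in this sketch:

apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app-container
    image: my-app-image
    envFrom:
    - configMapRef:
        name: proxy-config
        optional: true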

              Sample Application

              Let’s walk through a simple example to demonstrate this concept. We will create a deployment for an application that uses optional configuration values.

              1. Create the ConfigMap (Optional):
              apiVersion: v1
              kind: ConfigMap
              metadata:
                name: app-config
              data:
                GREETING: "Hello, World!"
              
2. Deploy the Application:
              apiVersion: apps/v1
              kind: Deployment
              metadata:
                name: hello-world-deployment
              spec:
                replicas: 1
                selector:
                  matchLabels:
                    app: hello-world
                template:
                  metadata:
                    labels:
                      app: hello-world
                  spec:
                    containers:
                    - name: hello-world
                      image: busybox
                      command: ["sh", "-c", "echo $GREETING"]
                      env:
                      - name: GREETING
                        valueFrom:
                          configMapKeyRef:
                            name: app-config
                            key: GREETING
                            optional: true
              
3. Deploy and Test:
• Deploy the application using kubectl apply -f <your-deployment-file>.yaml.
• If the app-config ConfigMap is present, the Pod will output “Hello, World!”.
• If the ConfigMap is missing, the Pod will start, but no greeting will be echoed.

              Conclusion

              Optional ConfigMap values are a simple yet effective way to make your Kubernetes deployments more resilient and adaptable to different environments. By marking ConfigMap keys as optional, you can prevent deployment failures and allow your applications to handle missing configuration gracefully.

              TIBCO BW ECS Logging Support

TIBCO BW ECS logging support is an increasingly requested feature, driven by the growing adoption of the Elastic Common Schema (ECS) in log aggregation solutions built on the Elastic Stack (previously known as the ELK stack).

We have already written a lot about the importance of log aggregation solutions and their benefits, especially in the context of container architectures. Because of that, today we will focus on how to adapt our BW applications to support this logging format.

The first thing to know is this: yes, it can be done, and it can be done independently of your deployment model. The solution provided here works both for on-premises installations and for container deployments using BWCE.

              TIBCO BW Logging Background

TIBCO BusinessWorks (container edition or not) relies on the Logback library for its logging capabilities, and this library is configured using a file named logback.xml that holds the configuration you need, as you can see in the picture below:

              BW ECS Logging: Sample of Logback.xml default config

Logback is a well-known library in Java-based development, with an architecture based on a core solution plus plug-ins that extend its capabilities. It is this plug-in approach that we are going to use to add ECS support.

The official ECS documentation even covers how to enable this logging configuration when using Logback, as you can see in the picture below and in this official link:

              BW ECS Logging: ECS Java Dependency Information

In our case, we don’t need to declare the dependency anywhere; we just need to download it, because we will include it in the existing OSGi bundles of the TIBCO BW installation. We only need the following two files:

              • ecs-logging-core-1.5.0.jar
              • logback-ecs-encoder-1.5.0.jar

At the time of writing, the current version of each is 1.5.0, but make sure you use a recent version of these libraries to avoid problems with support and vulnerabilities.

Once we have these libraries, we need to add them to the BW system installation, and the procedure differs depending on whether we are using an on-premises TIBCO installation or a TIBCO BWCE base image. To be precise, the changes we need to make are the same; only the process of applying them differs.

In the end, what we need to do is a simple task: include these JAR files as part of the Logback OSGi bundle that TIBCO BW loads. So, let’s see how we can do that, starting with an on-premises installation. We will use TIBCO BWCE 2.8.2 as an example, but similar steps are required for other versions.

The on-premises installation is the easier of the two, simply because it has fewer steps than doing the same in a TIBCO BWCE base image. In this case, we will go to the following location: <TIBCO_HOME>/bwce/2.8/system/shared/com.tibco.tpcl.logback_1.2.1600.002/

• We will place the downloaded JARs in that folder.
BW ECS Logging: JAR location
• We will open META-INF/MANIFEST.MF and make the following modifications:
  • Add those JARs to the Bundle-ClassPath section:
BW ECS Logging: Bundle-ClassPath changes
  • Include the package co.elastic.logging.logback as part of the exported packages by adding it to the Export-Package section:
BW ECS Logging: Export-Package changes
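Schematically, the resulting MANIFEST.MF entries look like the lines below; the placeholders stand for whatever entries your bundle already contains, and only the appended parts are new:

Bundle-ClassPath: .,<existing entries>,ecs-logging-core-1.5.0.jar,logback-ecs-encoder-1.5.0.jar
Export-Package: <existing packages>,co.elastic.logging.logback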

Once this is done, our TIBCO BW installation supports the ECS format, and we just need to configure logback.xml to use it, following the official documentation on the ECS page. We need to include the following encoder, as shown below:

<encoder class="co.elastic.logging.logback.EcsEncoder">
  <serviceName>my-application</serviceName>
  <serviceVersion>my-application-version</serviceVersion>
  <serviceEnvironment>my-application-environment</serviceEnvironment>
  <serviceNodeName>my-application-cluster-node</serviceNodeName>
</encoder>
              

              For example, if we modify the default logback.xml configuration file with this information, we will have something like this:

              <?xml version="1.0" encoding="UTF-8"?>
              <configuration scan="true">
                
                <!-- *=============================================================* -->
                <!-- *  APPENDER: Console Appender                                 * -->
                <!-- *=============================================================* -->  
                <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
                  <encoder class="co.elastic.logging.logback.EcsEncoder">
                    <serviceName>a</serviceName>
                    <serviceVersion>b</serviceVersion>
                    <serviceEnvironment>c</serviceEnvironment>
                    <serviceNodeName>d</serviceNodeName>
                </encoder>
                </appender>
              
              
              
                <!-- *=============================================================* -->
                <!-- * LOGGER: Thor Framework loggers                              * -->
                <!-- *=============================================================* -->
                <logger name="com.tibco.thor.frwk">
                  <level value="INFO"/>
                </logger>
                
                
                
                <!-- *=============================================================* -->
                <!-- * LOGGER: BusinessWorks Framework loggers                     * -->
                <!-- *=============================================================* -->
                <logger name="com.tibco.bw.frwk">
                  <level value="WARN"/>
                </logger>  
                
                <logger name="com.tibco.bw.frwk.engine">
                  <level value="INFO"/>
                </logger>   
                
                
                <!-- *=============================================================* -->
                <!-- * LOGGER: BusinessWorks Engine loggers                        * -->
                <!-- *=============================================================* --> 
                <logger name="com.tibco.bw.core">
                  <level value="WARN"/>
                </logger>
                
                <logger name="com.tibco.bx">
                  <level value="ERROR"/>
                </logger>
              
                <logger name="com.tibco.pvm">
                  <level value="ERROR"/>
                </logger>
                
                <logger name="configuration.management.logger">
                  <level value="INFO"/>
                </logger>
                
                
                <!-- *=============================================================* -->
                <!-- * LOGGER: BusinessWorks Palette and Activity loggers          * -->
                <!-- *=============================================================* -->
                
                <!-- Default Log activity logger -->
                <logger name="com.tibco.bw.palette.generalactivities.Log">
                  <level value="DEBUG"/>
                </logger>
                
                <logger name="com.tibco.bw.palette">
                  <level value="ERROR"/>
                </logger>
              
                
                
                <!-- *=============================================================* -->
                <!-- * LOGGER: BusinessWorks Binding loggers                       * -->
                <!-- *=============================================================* -->
                
                <!-- SOAP Binding logger -->
                <logger name="com.tibco.bw.binding.soap">
                  <level value="ERROR"/>
                </logger>
                
                <!-- REST Binding logger -->
                <logger name="com.tibco.bw.binding.rest">
                  <level value="ERROR"/>
                </logger>
                
                
                
                <!-- *=============================================================* -->
                <!-- * LOGGER: BusinessWorks Shared Resource loggers               * -->
                <!-- *=============================================================* --> 
                <logger name="com.tibco.bw.sharedresource">
                  <level value="ERROR"/>
                </logger>
                
                
                 
                <!-- *=============================================================* -->
                <!-- * LOGGER: BusinessWorks Schema Cache loggers                  * -->
                <!-- *=============================================================* -->
                <logger name="com.tibco.bw.cache.runtime.xsd">
                  <level value="ERROR"/>
                </logger> 
                
                <logger name="com.tibco.bw.cache.runtime.wsdl">
                  <level value="ERROR"/>
                </logger> 
                
                  
                
                <!-- *=============================================================* -->
                <!-- * LOGGER: BusinessWorks Governance loggers                    * -->
                <!-- *=============================================================* -->  
                <!-- Governance: Policy Director logger1 --> 
                <logger name="com.tibco.governance">
                  <level value="ERROR"/>
                </logger>
                 
                <logger name="com.tibco.amx.governance">
                  <level value="WARN"/>
                </logger>
                 
                <!-- Governance: Policy Director logger2 -->
                <logger name="com.tibco.governance.pa.action.runtime.PolicyProperties">
                  <level value="ERROR"/>
                </logger>
                
                <!-- Governance: SPM logger1 -->
                <logger name="com.tibco.governance.spm">
                  <level value="ERROR"/>
                </logger>
                
                <!-- Governance: SPM logger2 -->
                <logger name="rta.client">
                  <level value="ERROR"/>
                </logger>
                
                
                  
                <!-- *=============================================================* -->
                <!-- * LOGGER: BusinessWorks Miscellaneous Loggers                 * -->
                <!-- *=============================================================* --> 
                <logger name="com.tibco.bw.platformservices">
                  <level value="INFO"/>
                </logger>
                
                <logger name="com.tibco.bw.core.runtime.statistics">
                  <level value="ERROR"/>
                </logger>
                
              
                
                <!-- *=============================================================* -->
                <!-- * LOGGER: Other loggers                                       * -->
                <!-- *=============================================================* -->  
                <logger name="org.apache.axis2">
                  <level value="ERROR"/>
                </logger>
              
                <logger name="org.eclipse">
                  <level value="ERROR"/>
                </logger>
                
                <logger name="org.quartz">
                  <level value="ERROR"/>
                </logger>
                
                <logger name="org.apache.commons.httpclient.util.IdleConnectionHandler">
                  <level value="ERROR"/>
                </logger>
                
                
                
                <!-- *=============================================================* -->
                <!-- * LOGGER: User loggers.  User's custom loggers should be      * -->
                <!-- *         configured in this section.                         * -->
                <!-- *=============================================================* -->
              
                <!-- *=============================================================* -->
                <!-- * ROOT                                                        * -->
                <!-- *=============================================================* --> 
                <root level="ERROR">
                 <appender-ref ref="STDOUT" />
                </root>
                
              </configuration>
              
              

You can also apply more advanced customizations based on the information available on the ECS encoder configuration page here.

              How to enable TIBCO BW ECS Logging Support?

For BWCE, the steps are similar, but we need to be aware that all the runtime components are packaged inside the base-runtime-version.zip that we download from the TIBCO eDelivery site, so we will need a tool to open that ZIP and make the following modifications:

• We will place the downloaded JARs in the folder /tibco.home/bwce/2.8/system/shared/com.tibco.tpcl.logback_1.2.1600.004:
              BW ECS Logging: JAR location
• We will open the META-INF/MANIFEST.MF and make the following modifications:
  • Add those JARs to the Bundle-ClassPath header:
              BW ECS Logging: Bundle-Classpath changes
• Include the package co.elastic.logging.logback in the exported packages by adding it to the Export-Package header:
BW ECS Logging: Export-Package changes
• Additionally, we will need to modify the bwappnode file located in /tibco.home/bwce/2.8/bin so that the JAR files are also added to the classpath the BWCE base image uses at runtime, ensuring they are loaded (a shell sketch of the whole repackaging flow follows this list):
              BW ECS Logging: bwappnode change
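
As a minimal sketch of the repackaging flow, assuming a Linux shell and that the manifest and bwappnode edits are done with your editor of choice (the ZIP file name is a placeholder; use the actual file downloaded from eDelivery, and note that paths inside the ZIP follow the /tibco.home/... layout described above):

# unpack the base runtime ZIP downloaded from TIBCO eDelivery
RUNTIME_ZIP=base-runtime-version.zip
unzip "$RUNTIME_ZIP" -d bwce-runtime

# copy the two ECS JARs into the Logback bundle folder
cp ecs-logging-core-1.5.0.jar logback-ecs-encoder-1.5.0.jar \
   bwce-runtime/tibco.home/bwce/2.8/system/shared/com.tibco.tpcl.logback_1.2.1600.004/

# ...edit META-INF/MANIFEST.MF (Bundle-ClassPath, Export-Package) and bin/bwappnode here...

# repackage the runtime (output name is arbitrary) so it can be used to build the BWCE base image as usual
(cd bwce-runtime && zip -r ../base-runtime-ecs.zip .)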

Now we can build our BWCE base image as usual and modify logback.xml as explained above. Here you can see the log output of a sample application using this configuration:

              {"@timestamp":"2023-08-28T12:49:08.524Z","log.level": "INFO","message":"TIBCO BusinessWorks version 2.8.2, build V17, 2023-05-19","ecs.version": "1.2.0","service.name":"a","service.version":"b","service.environment":"c","service.node.name":"d","event.dataset":"a","process.thread.name":"main","log.logger":"com.tibco.thor.frwk"}
              
              <>@BWEclipseAppNode> {"@timestamp":"2023-08-28T12:49:25.435Z","log.level": "INFO","message":"Started by BusinessStudio.","ecs.version": "1.2.0","service.name":"a","service.version":"b","service.environment":"c","service.node.name":"d","event.dataset":"a","process.thread.name":"main","log.logger":"com.tibco.thor.frwk.Deployer"}
              {"@timestamp":"2023-08-28T12:49:32.795Z","log.level": "INFO","message":"TIBCO-BW-FRWK-300002: BW Engine [Main] started successfully.","ecs.version": "1.2.0","service.name":"a","service.version":"b","service.environment":"c","service.node.name":"d","event.dataset":"a","process.thread.name":"main","log.logger":"com.tibco.bw.frwk.engine.BWEngine"}
              {"@timestamp":"2023-08-28T12:49:34.338Z","log.level": "INFO","message":"TIBCO-THOR-FRWK-300001: Started OSGi Framework of AppNode [BWEclipseAppNode] in AppSpace [BWEclipseAppSpace] of Domain [BWEclipseDomain]","ecs.version": "1.2.0","service.name":"a","service.version":"b","service.environment":"c","service.node.name":"d","event.dataset":"a","process.thread.name":"Framework Event Dispatcher: Equinox Container: 1395256a-27a2-4e91-b774-310e85b0b87c","log.logger":"com.tibco.thor.frwk.Deployer"}
              {"@timestamp":"2023-08-28T12:49:34.456Z","log.level": "INFO","message":"TIBCO-THOR-FRWK-300018: Deploying BW Application [t3:1.0].","ecs.version": "1.2.0","service.name":"a","service.version":"b","service.environment":"c","service.node.name":"d","event.dataset":"a","process.thread.name":"Framework Event Dispatcher: Equinox Container: 1395256a-27a2-4e91-b774-310e85b0b87c","log.logger":"com.tibco.thor.frwk.Application"}
              {"@timestamp":"2023-08-28T12:49:34.524Z","log.level": "INFO","message":"TIBCO-THOR-FRWK-300021: All Application dependencies are resolved for Application [t3:1.0]","ecs.version": "1.2.0","service.name":"a","service.version":"b","service.environment":"c","service.node.name":"d","event.dataset":"a","process.thread.name":"Framework Event Dispatcher: Equinox Container: 1395256a-27a2-4e91-b774-310e85b0b87c","log.logger":"com.tibco.thor.frwk.Application"}
              {"@timestamp":"2023-08-28T12:49:34.541Z","log.level": "INFO","message":"Started by BusinessStudio, ignoring .enabled settings.","ecs.version": "1.2.0","service.name":"a","service.version":"b","service.environment":"c","service.node.name":"d","event.dataset":"a","process.thread.name":"Framework Event Dispatcher: Equinox Container: 1395256a-27a2-4e91-b774-310e85b0b87c","log.logger":"com.tibco.thor.frwk.Application"}
              {"@timestamp":"2023-08-28T12:49:35.842Z","log.level": "INFO","message":"TIBCO-THOR-FRWK-300006: Started BW Application [t3:1.0]","ecs.version": "1.2.0","service.name":"a","service.version":"b","service.environment":"c","service.node.name":"d","event.dataset":"a","process.thread.name":"EventAdminThread #1","log.logger":"com.tibco.thor.frwk.Application"}
              {"@timestamp":"2023-08-28T12:49:35.954Z","log.level": "INFO","message":"aaaaaaa&#10;","ecs.version": "1.2.0","service.name":"a","service.version":"b","service.environment":"c","service.node.name":"d","event.dataset":"a","process.thread.name":"bwEngThread:In-Memory Process Worker-1","log.logger":"com.tibco.bw.palette.generalactivities.Log.t3.module.Log"}
              gosh: stopping shell
              

              Boosting Kubernetes Security: Exploring KubeSec – A Must-Have Tool for Safeguarding Your Cluster

KubeSec is another tool that helps improve the security of our Kubernetes clusters. So many agencies are focusing on security these days, which highlights how important this topic is in modern architectures and deployments. Security is now a key component, probably the most crucial one. We all need to step up our game on this front, and that's why it is essential to have tools in our toolset that help with the task without requiring us to be full security experts in every technology, Kubernetes in this case.

KubeSec is an open-source tool developed by ControlPlane, a cloud-native and open-source security consultancy, that helps us perform a security risk analysis of Kubernetes resources.

              How Does KubeSec Work?

KubeSec works on the Kubernetes manifest files you use to deploy your resources, so you need to provide the YAML file to one of the execution modes this tool supports. The phrase "one of the execution modes" matters here, because KubeSec supports several different modes that cover different use cases.

You can run KubeSec in the following modes (command sketches for each mode follow the list):

• HTTP Mode: KubeSec listens for HTTP requests containing the YAML content and returns a report based on it. This is useful when you need server-style execution, such as in CI/CD pipelines or as a shared security service for teams like DevOps or Platform Engineering. Another critical use case of this mode is running it behind a Kubernetes admission controller on your cluster, so the checks are enforced when developers deploy resources to the platform.
• SaaS Mode: Similar to HTTP mode, but without needing to host it yourself; the service is available behind kubesec.io. Choose this mode when it fits your preference and you're not handling sensitive information in those manifests.
• CLI Mode: To run it yourself as part of your local tests, a CLI command is available: kubesec scan k8s-deployment.yaml
• Docker Mode: Similar to CLI mode but packaged as a Docker image, which also makes it a good fit for CI/CD pipelines based on containerized workloads.
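
As a rough sketch of what each mode looks like from a shell (the port, file names, and image tag are illustrative assumptions; double-check the exact invocations against the project's current documentation):

# CLI mode: scan a local manifest
kubesec scan k8s-deployment.yaml

# HTTP mode: start a local server and POST the manifest to it
kubesec http 8080 &
curl -sSX POST --data-binary @k8s-deployment.yaml http://localhost:8080/scan

# SaaS mode: the same request against the hosted service
curl -sSX POST --data-binary @k8s-deployment.yaml https://v2.kubesec.io/scan

# Docker mode: run the scan from the published container image
docker run -i kubesec/kubesec:v2 scan /dev/stdin < k8s-deployment.yaml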

KubeSec Output Report

What you get out of running KubeSec in any of its forms is a JSON report that scores the security level of your Kubernetes resources and lists ways to improve it. Using JSON as the output format also simplifies the tool's use in automated workflows such as CI/CD pipelines. Here you can see a sample of the output report you will get:

              kubesec sample output

The important thing about the output is the kind of information you receive. As you can see in the picture above, it is separated into two sections per object. The first is the score, which reflects the security-related settings the object already implements and quantifies how secure it is. The second is an advice section that lists settings and configurations you can apply to raise that score and, with it, the overall security of the Kubernetes object itself.
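
For orientation, a trimmed-down report looks roughly like the snippet below; the field names (object, score, scoring.advise, and so on) are recalled from the project's documentation and the selector and reason values are illustrative, so treat this as a sketch rather than the exact schema of your kubesec version:

[
  {
    "object": "Deployment/my-app.default",
    "valid": true,
    "score": 3,
    "scoring": {
      "advise": [
        {
          "selector": "containers[] .securityContext .readOnlyRootFilesystem == true",
          "reason": "An immutable root filesystem helps prevent tampering with the container at runtime",
          "points": 1
        }
      ]
    }
  }
]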

KubeSec also leverages another tool we have already covered on this site, Kubeconform, so you can specify the target Kubernetes version you're deploying to and get a much more precise report for your specific Kubernetes manifest. To do that, pass the --kubernetes-version argument when launching the command, as you can see in the picture below:

              kubesec command with kubernetes-version option
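
In plain text, that invocation would look something like the line below (the version value is just an example):

kubesec scan k8s-deployment.yaml --kubernetes-version 1.27.0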

How To Install KubeSec?

Installation also comes in different flavors, so you can pick whatever works best for you. Here are some of the options available at the time of writing this article:
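
As a hedged illustration of the most common routes (based on the project's public distribution channels at the time of writing; verify image tags and release URLs against the official repository):

# pull the published container image
docker pull kubesec/kubesec:v2

# or grab a release binary from the project's GitHub releases page
# https://github.com/controlplaneio/kubesec/releases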

              Conclusion

Security is of paramount importance in today's intricate architectures, and KubeSec emerges as a vital asset for bolstering the protection of Kubernetes clusters. Developed by ControlPlane, this open-source tool facilitates comprehensive security risk assessments of Kubernetes resources. Offering versatility through multiple operational modes (HTTP, SaaS, CLI, and Docker), KubeSec provides tailored support for diverse scenarios. Its JSON-based output streamlines integration into automated workflows, while its synergy with Kubeconform ensures precise analysis against a specific Kubernetes version. KubeSec's user-friendly approach empowers both security experts and novices, raising the standard of Kubernetes security across the board.