Kyverno: A Detailed Way of Enforcing Standard and Custom Policies

In the Kubernetes ecosystem, security and governance are key aspects that need continuous attention. While Kubernetes offers some out-of-the-box (OOTB) security features such as Pod Security Admission (PSA), these might not be sufficient for complex environments with varying compliance requirements. This is where Kyverno comes into play, providing a powerful yet flexible solution for managing and enforcing policies across your cluster.

In this post, we will explore the key differences between Kyverno and PSA, explain how Kyverno can be used in different use cases, and show you how to install and deploy policies with it. Although custom policy creation will be covered in a separate post, we will reference some pre-built policies you can use right away.

What is Pod Security Admission (PSA)?

Kubernetes introduced Pod Security Admission (PSA) as a replacement for the now deprecated PodSecurityPolicy (PSP). PSA focuses on enforcing three predefined levels of security: Privileged, Baseline, and Restricted. These levels control what pods are allowed to run in a namespace based on their security context configurations.

  • Privileged: Minimal restrictions, allowing privileged containers and host access.
  • Baseline: Applies standard restrictions, disallowing privileged containers and limiting host access.
  • Restricted: The strictest level, ensuring secure defaults and enforcing best practices for running containers.

While PSA is effective for basic security requirements, it lacks flexibility when enforcing fine-grained or custom policies. We cover PSA in more depth in a dedicated article.

Kyverno vs. PSA: Key Differences

Kyverno extends beyond the capabilities of PSA by offering more granular control and flexibility. Here’s how it compares:

  1. Policy Types: While PSA focuses solely on security, Kyverno allows the creation of policies for validation, mutation, and generation of resources. This means you can modify or generate new resources, not just enforce security rules.
  2. Customizability: Kyverno supports custom policies that can enforce your organization’s compliance requirements. You can write policies that govern specific resource types, such as ensuring that all deployments have certain labels or that container images come from a trusted registry.
  3. Policy as Code: Kyverno policies are written in YAML, allowing for easy integration with CI/CD pipelines and GitOps workflows. This makes policy management declarative and version-controlled, which is not the case with PSA.
  4. Audit and Reporting: With Kyverno, you can generate detailed audit logs and reports on policy violations, giving administrators a clear view of how policies are enforced and where violations occur. PSA lacks this built-in reporting capability.
  5. Enforcement and Mutation: While PSA primarily enforces restrictions on pods, Kyverno allows not only validation of configurations but also modification of resources (mutation) when required. This adds an additional layer of flexibility, such as automatically adding annotations or labels.
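As an illustration of mutation, the sketch below shows a Kyverno ClusterPolicy that adds a label to every new Pod; the label key and value (`team: platform`) are hypothetical placeholders:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-team-label
spec:
  rules:
  - name: add-team-label
    match:
      resources:
        kinds:
        - Pod
    mutate:
      patchStrategicMerge:
        metadata:
          labels:
            # the +( ) anchor makes the patch conditional:
            # the label is only added if it is not already present
            +(team): platform
```

With this policy in place, any pod admitted without a `team` label is mutated on the way in, while pods that already set one are left untouched.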

When to Use Kyverno Over PSA

While PSA might be sufficient for simpler environments, Kyverno becomes a valuable tool in scenarios requiring:

  • Custom Compliance Rules: For example, enforcing that all containers use a specific base image or restricting specific container capabilities across different environments.
  • CI/CD Integrations: Kyverno can integrate into your CI/CD pipelines, ensuring that resources comply with organizational policies before they are deployed.
  • Complex Governance: When managing large clusters with multiple teams, Kyverno’s policy hierarchy and scope allow for finer control over who can deploy what and how resources are configured.

If your organization needs a more robust and flexible security solution, Kyverno is a better fit compared to PSA’s more generic approach.

Installing Kyverno

To start using Kyverno, you’ll need to install it in your Kubernetes cluster. This is a straightforward process using Helm, which makes it easy to manage and update.

Step-by-Step Installation

1. Add the Kyverno Helm repository:

    helm repo add kyverno https://kyverno.github.io/kyverno/

2. Update the Helm repositories:

    helm repo update

3. Install Kyverno in your Kubernetes cluster:

    helm install kyverno kyverno/kyverno --namespace kyverno --create-namespace

4. Verify the installation:

    kubectl get pods -n kyverno

After installation, Kyverno will begin enforcing policies across your cluster, but you’ll need to deploy some policies to get started.

          Deploying Policies with Kyverno

          Kyverno policies are written in YAML, just like Kubernetes resources, which makes them easy to read and manage. You can find several ready-to-use policies from the Kyverno Policy Library, or create your own to match your requirements.

          Here is an example of a simple validation policy that ensures all pods use trusted container images from a specific registry:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-trusted-registry
spec:
  validationFailureAction: Enforce
  rules:
  - name: check-registry
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Only images from 'myregistry.com' are allowed."
      pattern:
        spec:
          containers:
          - image: "myregistry.com/*"
          This policy will automatically block the deployment of any pod that uses an image from a registry other than myregistry.com.
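For example, a pod like the following (a hypothetical manifest) would be rejected at admission time because its image comes from Docker Hub rather than myregistry.com:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-example
spec:
  containers:
  - name: app
    # docker.io/... does not match the "myregistry.com/*" pattern,
    # so Kyverno denies the request with the policy's message
    image: docker.io/library/nginx:latest
```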

          Applying the Policy

          To apply the above policy, save it to a YAML file (e.g., trusted-registry-policy.yaml) and run the following command:

          kubectl apply -f trusted-registry-policy.yaml

          Once applied, Kyverno will enforce this policy across your cluster.

          Viewing Kyverno Policy Reports

          Kyverno generates detailed reports on policy violations, which are useful for audits and tracking policy compliance. To check the reports, you can use the following commands:

List all Kyverno policy reports:

    kubectl get clusterpolicyreport

Describe a specific policy report to get more details:

    kubectl describe clusterpolicyreport <report-name>

These reports can be integrated into your monitoring tools to trigger alerts when critical violations occur.

              Conclusion

              Kyverno offers a flexible and powerful way to enforce policies in Kubernetes, making it an essential tool for organizations that need more than the basic capabilities provided by PSA. Whether you need to ensure compliance with internal security standards, automate resource modifications, or integrate policies into CI/CD pipelines, Kyverno’s extensive feature set makes it a go-to choice for Kubernetes governance.

              For now, start with the out-of-the-box policies available in Kyverno’s library. In future posts, we’ll dive deeper into creating custom policies tailored to your specific needs.

              📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Kubernetes Pod Security Admission Explained: Enforcing PSA Policies the Right Way

              In Kubernetes, security is a key concern, especially as containers and microservices grow in complexity. One of the essential features of Kubernetes for policy enforcement is Pod Security Admission (PSA), which replaces the deprecated Pod Security Policies (PSP). PSA provides a more straightforward and flexible approach to enforce security policies, helping administrators safeguard clusters by ensuring that only compliant pods are allowed to run.

              This article will guide you through PSA, the available Pod Security Standards, how to configure them, and how to apply security policies to specific namespaces using labels.

              What is Pod Security Admission (PSA)?

PSA is a built-in admission controller, enabled by default since Kubernetes 1.23 (and stable since 1.25), that replaces Pod Security Policies (PSPs). PSPs had a steep learning curve and could become cumbersome when scaling security policies across various environments. PSA simplifies this process by applying Kubernetes Pod Security Standards based on predefined security levels without needing custom logic for each policy.

              With PSA, cluster administrators can restrict the permissions of pods by using labels that correspond to specific Pod Security Standards. PSA operates at the namespace level, enabling better granularity in controlling security policies for different workloads.

              Pod Security Standards

              Kubernetes provides three key Pod Security Standards in the PSA framework:

              • Privileged: No restrictions; permits all features and is the least restrictive mode. This is not recommended for production workloads but can be used in controlled environments or for workloads requiring elevated permissions.
              • Baseline: Provides a good balance between usability and security, restricting the most dangerous aspects of pod privileges while allowing common configurations. It is suitable for most applications that don’t need special permissions.
              • Restricted: The most stringent level of security. This level is intended for workloads that require the highest level of isolation and control, such as multi-tenant clusters or workloads exposed to the internet.

              Each standard includes specific rules to limit pod privileges, such as disallowing privileged containers, restricting access to the host network, and preventing changes to certain security contexts.
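For reference, a pod that passes the restricted standard typically needs a security context along these lines (the pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-compliant
spec:
  securityContext:
    runAsNonRoot: true            # restricted requires non-root execution
    seccompProfile:
      type: RuntimeDefault        # restricted requires a seccomp profile
  containers:
  - name: app
    image: nginx:1.27
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]             # restricted requires dropping all capabilities
```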

              Setting Up Pod Security Admission (PSA)

              To enable PSA, you need to label your namespaces based on the security level you want to enforce. The label format is as follows:

kubectl label --overwrite ns <namespace> pod-security.kubernetes.io/enforce=<value>

              For example, to enforce a restricted security policy on the production namespace, you would run:

              kubectl label --overwrite ns production pod-security.kubernetes.io/enforce=restricted

              In this example, Kubernetes will automatically apply the rules associated with the restricted policy to all pods deployed in the production namespace.

              Additional PSA Modes

              PSA also provides additional modes for greater control:

              • Audit: Logs a policy violation but allows the pod to be created.
              • Warn: Issues a warning but permits the pod creation.
              • Enforce: Blocks pod creation if it violates the policy.

              To configure these modes, use the following labels:

kubectl label --overwrite ns <namespace> \
    pod-security.kubernetes.io/enforce=baseline \
    pod-security.kubernetes.io/audit=restricted \
    pod-security.kubernetes.io/warn=baseline

This setup enforces the baseline standard while recording audit-log entries for violations of the stricter restricted standard.

              Example: Configuring Pod Security in a Namespace

              Let’s walk through an example of configuring baseline security for the dev namespace. First, you need to apply the PSA labels:

              kubectl create namespace dev
              kubectl label --overwrite ns dev pod-security.kubernetes.io/enforce=baseline

              Now, any pod deployed in the dev namespace will be checked against the baseline security standard. If a pod violates the baseline policy (for instance, by attempting to run a privileged container), it will be blocked from starting.

              You can also combine warn and audit modes to track violations without blocking pods:

kubectl label --overwrite ns dev \
    pod-security.kubernetes.io/enforce=baseline \
    pod-security.kubernetes.io/warn=restricted \
    pod-security.kubernetes.io/audit=restricted

In this case, PSA will allow pods to run as long as they meet the baseline policy, while issuing warnings and recording audit-log entries for restricted-level violations. (Auditing against the privileged level would log nothing, since privileged permits everything.)

              Applying Policies by Default

              One of the strengths of PSA is its simplicity in applying policies at the namespace level, but administrators might wonder if there’s a way to apply a default policy across new namespaces automatically. As of now, Kubernetes does not natively provide an option to apply PSA policies globally by default. However, you can use admission webhooks or automation tools such as OPA Gatekeeper or Kyverno to enforce default policies for new namespaces.
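For instance, a Kyverno mutation policy along these lines can label every newly created namespace with default PSA levels unless the labels are already set. This is a sketch; the chosen levels are placeholders you would adapt to your environment:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: default-psa-labels
spec:
  rules:
  - name: add-psa-labels
    match:
      resources:
        kinds:
        - Namespace
    mutate:
      patchStrategicMerge:
        metadata:
          labels:
            # the +( ) anchor adds each label only when it is not already present,
            # so namespaces that explicitly opt in to another level are untouched
            +(pod-security.kubernetes.io/enforce): baseline
            +(pod-security.kubernetes.io/warn): restricted
```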

              Conclusion

              Pod Security Admission (PSA) simplifies policy enforcement in Kubernetes clusters, making it easier to ensure compliance with security standards across different environments. By configuring Pod Security Standards at the namespace level and using labels, administrators can control the security level of workloads with ease. The flexibility of PSA allows for efficient security management without the complexity associated with the older Pod Security Policies (PSPs).

              For more details on configuring PSA and Pod Security Standards, check the official Kubernetes PSA documentation and Pod Security Standards documentation.

              📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Helm Hooks Explained: Complete Guide to Using Hooks in Helm Charts

Helm hooks are a powerful—but often misunderstood—feature of Helm, the Kubernetes package manager. They allow you to execute Kubernetes resources at specific points in the Helm release lifecycle, enabling advanced deployment workflows, validations, migrations, and cleanups.

              In this complete guide to Helm hooks, you’ll learn:

              • What Helm hooks are and how they work internally
              • All available Helm hooks and when to use each one
              • Real-world use cases with practical examples
              • Best practices and common pitfalls when working with Helm hooks in Kubernetes

              If you build or maintain Helm charts in production, understanding Helm hooks is essential.

              What Are Helm Hooks?

              Helm hooks are Kubernetes resources annotated with special metadata that instruct Helm to execute them at specific lifecycle events, such as:

              • Before or after an install
              • Before or after an upgrade
              • Before or after a rollback
              • During deletion
              • When running tests

              From a technical perspective, Helm hooks are implemented using annotations on standard Kubernetes resources (most commonly Job objects).

              Helm evaluates these annotations during a Helm operation and executes the hooked resources outside the normal install/upgrade flow, giving you fine-grained lifecycle control.

              Available Helm Hooks and Use Cases

              Helm provides several hooks that correspond to different lifecycle stages. Below is a detailed breakdown of all Helm hooks, including execution timing and common use cases.

              1. pre-install

              Execution timing
              After templates are rendered, but before any Kubernetes resources are created.

              Typical use cases

              • Creating prerequisites (ConfigMaps, Secrets)
              • Performing environment validation
              • Preparing external dependencies

apiVersion: batch/v1
              kind: Job
              metadata:
                name: setup-config
                annotations:
                  "helm.sh/hook": pre-install
              spec:
                template:
                  spec:
                    containers:
                      - name: config-creator
                        image: busybox
                        command: ['sh', '-c', 'echo "config data" > /config/config.txt']
                    restartPolicy: Never

              2. post-install

              Execution timing
              After all resources have been successfully created.

              Typical use cases

              • Database initialization
              • Data seeding
              • Post-deployment verification

apiVersion: batch/v1
              kind: Job
              metadata:
                name: init-database
                annotations:
                  "helm.sh/hook": post-install
              spec:
                template:
                  spec:
                    containers:
                      - name: db-init
                        image: busybox
                        command: ['sh', '-c', 'init-db-command']
                    restartPolicy: Never

              3. pre-delete

              Execution timing
              Triggered before Helm deletes any resources.

              Typical use cases

              • Backups
              • Graceful shutdowns
              • External cleanup preparation

apiVersion: batch/v1
              kind: Job
              metadata:
                name: backup-before-delete
                annotations:
                  "helm.sh/hook": pre-delete
              spec:
                template:
                  spec:
                    containers:
                      - name: backup
                        image: busybox
                        command: ['sh', '-c', 'backup-command']
                    restartPolicy: Never

              4. post-delete

              Execution timing
              After all release resources have been deleted.

              Typical use cases

              • Cleaning up cloud resources
              • Removing external state
              • Audit logging

apiVersion: batch/v1
              kind: Job
              metadata:
                name: cleanup
                annotations:
                  "helm.sh/hook": post-delete
              spec:
                template:
                  spec:
                    containers:
                      - name: cleanup
                        image: busybox
                        command: ['sh', '-c', 'cleanup-command']
                    restartPolicy: Never

              5. pre-upgrade

              Execution timing
              Before Helm applies any upgrade changes.

              Typical use cases

              • Schema validation
              • Pre-upgrade checks
              • Compatibility verification

apiVersion: batch/v1
              kind: Job
              metadata:
                name: pre-upgrade-check
                annotations:
                  "helm.sh/hook": pre-upgrade
              spec:
                template:
                  spec:
                    containers:
                      - name: upgrade-check
                        image: busybox
                        command: ['sh', '-c', 'upgrade-check-command']
                    restartPolicy: Never

              6. post-upgrade

              Execution timing
              After all upgraded resources are applied.

              Typical use cases

              • Data migrations
              • Smoke tests
              • Post-upgrade validation

apiVersion: batch/v1
              kind: Job
              metadata:
                name: post-upgrade-verify
                annotations:
                  "helm.sh/hook": post-upgrade
              spec:
                template:
                  spec:
                    containers:
                      - name: verification
                        image: busybox
                        command: ['sh', '-c', 'verify-upgrade']
                    restartPolicy: Never

              7. pre-rollback

              Execution timing
              Before Helm reverts to a previous release revision.

              Typical use cases

              • Data snapshots
              • Notifications
              • Rollback preparation

apiVersion: batch/v1
              kind: Job
              metadata:
                name: pre-rollback-backup
                annotations:
                  "helm.sh/hook": pre-rollback
              spec:
                template:
                  spec:
                    containers:
                      - name: backup
                        image: busybox
                        command: ['sh', '-c', 'rollback-backup']
                    restartPolicy: Never

              8. post-rollback

              Execution timing
              After rollback resources are restored.

              Typical use cases

              • State verification
              • Alerting
              • Post-incident actions

apiVersion: batch/v1
              kind: Job
              metadata:
                name: post-rollback-verify
                annotations:
                  "helm.sh/hook": post-rollback
              spec:
                template:
                  spec:
                    containers:
                      - name: verify
                        image: busybox
                        command: ['sh', '-c', 'verify-rollback']
                    restartPolicy: Never

              9. test

              Execution timing
              Executed only when running helm test.

              Typical use cases

              • Integration tests
              • Health checks
              • End-to-end validation

apiVersion: batch/v1
              kind: Job
              metadata:
                name: test-application
                annotations:
                  "helm.sh/hook": test
              spec:
                template:
                  spec:
                    containers:
                      - name: test
                        image: busybox
                        command: ['sh', '-c', 'run-tests']
                    restartPolicy: Never

              Helm Hook Annotations Explained

              Helm provides additional annotations to control hook behavior:

              • helm.sh/hook-weight
                Controls execution order. Lower values run first.
              • helm.sh/hook-delete-policy
                Determines when hook resources are deleted:
                • hook-succeeded
                • hook-failed
                • before-hook-creation
              • helm.sh/resource-policy: keep
                Prevents Helm from deleting the resource, useful for debugging.

              These annotations are critical for avoiding orphaned jobs and unexpected hook behavior.
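Putting these annotations together, the sketch below (the job name, image, and command are illustrative) runs a migration job before each upgrade, ahead of other pre-upgrade hooks, and cleans itself up on success:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "-5"   # lower weight: runs before other pre-upgrade hooks
    # delete any leftover job before creating a new one, and delete it on success;
    # a failed job is kept so you can inspect its logs
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  template:
    spec:
      containers:
        - name: migrate
          image: busybox
          command: ['sh', '-c', 'run-migration']
      restartPolicy: Never
```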

              Best Practices for Using Helm Hooks

              ✔ Use hooks sparingly — avoid overloading charts with logic
              ✔ Prefer idempotent hook jobs
              ✔ Always define restartPolicy: Never for Jobs
              ✔ Clean up hook resources with hook-delete-policy
              ✔ Avoid using hooks for core application logic

              Conclusion

              Helm hooks give you precise control over the Kubernetes deployment lifecycle, making them invaluable for advanced Helm charts and production workflows. When used correctly, they enable safer upgrades, cleaner rollbacks, and more reliable deployments.

              FAQ & Takeaways

              What are Helm hooks?

              Helm hooks are Kubernetes resources annotated so that Helm executes them at specific points in the release lifecycle (e.g., before or after install, upgrade, delete). They allow you to prepare prerequisites, run jobs, or clean up resources.

              How do I use Helm hooks in my Helm charts?

              You add helm.sh/hook annotations to Kubernetes manifests in your chart. These annotations tell Helm when to run the resource (pre-install, post-install, pre-delete, etc.). Jobs are commonly used to implement hook tasks.

              When should I use pre-install vs post-install hooks?

              Use a pre-install hook when you need to create prerequisites (like ConfigMaps or Secrets) or validate the environment before deploying. Use a post-install hook when you need to initialize a database, seed data, or run verification jobs after the chart is installed.

              Are Helm hooks removed automatically?

              By default, hook resources are deleted after execution, but you can control this with the helm.sh/hook-delete-policy annotation (e.g., hook-succeeded, hook-failed, before-hook-creation) or keep them for debugging with helm.sh/resource-policy: keep.

              What is the difference between Helm hooks and Helm tests?

              Helm hooks run automatically at specified lifecycle events (install, upgrade, delete, rollback), whereas Helm tests run only when you invoke helm test. Tests are used to validate the health or functionality of your deployment.

              To deepen your Helm expertise, check out our
              👉 comprehensive Helm charts guide

              Advanced Helm Commands and Flags Every Kubernetes Engineer Should Know

              Advanced Helm Commands and Flags Every Kubernetes Engineer Should Know

              Managing Kubernetes resources effectively can sometimes feel overwhelming, but Helm, the Kubernetes package manager, offers several commands and flags that make the process smoother and more intuitive. In this article, we’ll dive into some lesser-known Helm commands and flags, explaining their uses, benefits, and practical examples.

              These advanced commands are essential for mastering Helm in production. For the complete toolkit including fundamentals, testing, and deployment patterns, visit our Helm package management guide.

              1. helm get values: Retrieving Deployed Chart Values

              The helm get values command is essential when you need to see the configuration values of a deployed Helm chart. This is particularly useful when you have a chart deployed but lack access to its original configuration file. With this command, you can achieve an “Infrastructure as Code” approach by capturing the current state of your deployment.

              Usage:

              helm get values <release-name> [flags]

              Example:

              To get the values of a deployed chart named my-release:

              helm get values my-release --namespace my-namespace

              This command outputs the current values used for the deployment, which is valuable for documentation, replicating the environment, or modifying deployments.

2. Understanding helm upgrade Flags: --reset-values, --reuse-values, and --reset-then-reuse-values

The helm upgrade command is typically used to upgrade or modify an existing Helm release. However, the behavior of this command can be finely tuned using several flags: --reset-values, --reuse-values, and --reset-then-reuse-values.

              • --reset-values: Ignores the previous values and uses only the values provided in the current command. Use this flag when you want to override the existing configuration entirely.

              Example Scenario: You are deploying a new version of your application, and you want to ensure that no old values are retained.

                helm upgrade my-release my-chart --reset-values --set newKey=newValue
              • --reuse-values: Reuses the previous release’s values and merges them with any new values provided. This flag is useful when you want to keep most of the old configuration but apply a few tweaks.

              Example Scenario: You need to add a new environment variable to an existing deployment without affecting the other settings.

                helm upgrade my-release my-chart --reuse-values --set newEnv=production
• --reset-then-reuse-values: A combination of the two, available since Helm 3.14. It first resets to the chart’s built-in default values, then re-applies the previous release’s values on top, and finally merges in any overrides provided on the command line.

Example Scenario: Useful in complex environments where you want to pick up the chart’s updated default settings while retaining your custom values.

  helm upgrade my-release my-chart --reset-then-reuse-values --set version=2.0
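To make the difference concrete, here is a hypothetical illustration of how the two most common flags combine values (the keys and releases are made up):

```yaml
# Previous release's values:        Chart defaults:
#   replicas: 2                       replicas: 1
#   image.tag: "1.0"                  image.tag: "1.2"
#
# helm upgrade my-release my-chart --reuse-values --set image.tag=1.1
#   -> replicas: 2, image.tag: "1.1"   (previous values kept, override merged in)
#
# helm upgrade my-release my-chart --reset-values --set image.tag=1.1
#   -> replicas: 1, image.tag: "1.1"   (chart defaults, override merged in)
```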

              3. helm lint: Ensuring Chart Quality in CI/CD Pipelines

              The helm lint command checks Helm charts for syntax errors, best practices, and other potential issues. This is especially useful when integrating Helm into a CI/CD pipeline, as it ensures your charts are reliable and adhere to best practices before deployment.

              Usage:

              helm lint <chart-path> [flags]

• <chart-path>: Path to the Helm chart you want to validate.

              Example:

              helm lint ./my-chart/

This command scans the my-chart directory for issues like missing fields, incorrect YAML structure, or deprecated usage. If you’re automating deployments, integrating helm lint into your pipeline helps catch problems early: adding it to your CI/CD pipeline ensures that any syntax or structural issues are caught before the build or deployment stages. You can learn more about Helm testing in the linked article.

              4. helm rollback: Reverting to a Previous Release

              The helm rollback command allows you to revert a release to a previous version. This can be incredibly useful in case of a failed upgrade or deployment, as it provides a way to quickly restore a known good state.

              Usage:

              helm rollback <release-name> [revision] [flags]

• [revision]: The revision number to which you want to roll back. If omitted, Helm will roll back to the previous release by default.

              Example:

              To roll back a release named my-release to its previous version:

              helm rollback my-release

              To roll back to a specific revision, say revision 3:

              helm rollback my-release 3

              This command can be a lifesaver when a recent change breaks your application, allowing you to quickly restore service continuity while investigating the issue.

              5. helm verify: Validating a Chart Before Use

The helm verify command checks the integrity of a packaged chart before it is used. It verifies that the chart archive has an accompanying provenance (.prov) file and that the signature and checksum match, ensuring the package has not been tampered with or corrupted. It’s particularly useful when you are pulling charts from external repositories or using charts shared across multiple teams.

              Usage:

              helm verify <chart-path>

              Example:

              To verify a downloaded chart named my-chart:

              helm verify ./my-chart.tgz

If the chart passes verification, Helm will output a success message. If it fails, you’ll see details of the issue, such as a missing provenance file or a checksum mismatch.

              Conclusion

              Leveraging these advanced Helm commands and flags can significantly enhance your Kubernetes management capabilities. Whether you are retrieving existing deployment configurations, fine-tuning your Helm upgrades, or ensuring the quality of your charts in a CI/CD pipeline, these tricks help you maintain a robust and efficient Kubernetes environment.

              Exposing TCP Ports with Istio Ingress Gateway in Kubernetes (Step-by-Step Guide)

              Exposing TCP Ports with Istio Ingress Gateway in Kubernetes (Step-by-Step Guide)

              Istio has become an essential tool for managing HTTP traffic within Kubernetes clusters, offering advanced features such as Canary Deployments, mTLS, and end-to-end visibility. However, some tasks, like exposing a TCP port using the Istio IngressGateway, can be challenging if you’ve never done it before. This article will guide you through the process of exposing TCP ports with Istio Ingress Gateway, complete with real-world examples and practical use cases.

              Understanding the Context

              Istio is often used to manage HTTP traffic in Kubernetes, providing powerful capabilities such as traffic management, security, and observability. The Istio IngressGateway serves as the entry point for external traffic into the Kubernetes cluster, typically handling HTTP and HTTPS traffic. However, Istio also supports TCP traffic, which is necessary for use cases like exposing databases or other non-HTTP services running in the cluster to external consumers.

              Exposing a TCP port through Istio involves configuring the IngressGateway to handle TCP traffic and route it to the appropriate service. This setup is particularly useful in scenarios where you need to expose services like TIBCO EMS or Kubernetes-based databases to other internal or external applications.

              Steps to Expose a TCP Port with Istio IngressGateway

              1.- Modify the Istio IngressGateway Service:

              Before configuring the Gateway, you must ensure that the Istio IngressGateway service is configured to listen on the new TCP port. This step is crucial if you’re using a NodePort service, as this port needs to be opened on the Load Balancer.

apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  ports:
  - name: http2
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  - name: tcp
    port: 31400
    targetPort: 31400
    protocol: TCP
              

Update the Istio IngressGateway service to include the new port 31400 for TCP traffic.

2.- Configure the Istio IngressGateway:

After modifying the service, configure the Istio IngressGateway to listen on the desired TCP port.

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: tcp-ingress-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*"
              

              In this example, the IngressGateway is configured to listen on port 31400 for TCP traffic.

              3.- Create a Service and VirtualService:

              After configuring the gateway, you need to create a Service that represents the backend application and a VirtualService to route the TCP traffic.

apiVersion: v1
kind: Service
metadata:
  name: tcp-service
  namespace: default
spec:
  ports:
  - port: 31400
    targetPort: 8080
    protocol: TCP
  selector:
    app: tcp-app
              

The Service above exposes port 31400 and forwards it to port 8080 on the backend application pods selected by app: tcp-app.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: tcp-virtual-service
  namespace: default
spec:
  hosts:
  - "*"
  gateways:
  - istio-system/tcp-ingress-gateway
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: tcp-service
        port:
          number: 31400


The VirtualService routes TCP traffic arriving on port 31400 of the gateway to tcp-service on port 31400 (the destination port refers to the Service port); the Service then forwards it to port 8080 on the backend pods. Note the istio-system/ prefix on the gateway name, which is required because the Gateway lives in a different namespace than the VirtualService.

              4.- Apply the Configuration

              Apply the above configurations using kubectl to create the necessary Kubernetes resources.

              kubectl apply -f istio-ingressgateway-service.yaml
              kubectl apply -f tcp-ingress-gateway.yaml
              kubectl apply -f tcp-service.yaml
              kubectl apply -f tcp-virtual-service.yaml
              

              After applying these configurations, the Istio IngressGateway will expose the TCP port to external traffic.
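Once everything is applied, it can be handy to smoke-test the exposed port with a raw TCP client before pointing real consumers at it. The sketch below is a minimal Python client; since we cannot assume a live cluster here, it is demonstrated against a local echo server standing in for the backend (the real check would target your load balancer address, shown as a hypothetical host in the comment):

```python
import socket
import threading


def run_echo_server(host: str, port: int, ready: threading.Event) -> None:
    """Stand-in for the backend behind the IngressGateway: echoes one message."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(1024))  # echo the payload back
    srv.close()


def tcp_smoke_test(host: str, port: int, payload: bytes) -> bytes:
    """Open a plain TCP connection to the exposed port and round-trip a payload."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(payload)
        return sock.recv(1024)


if __name__ == "__main__":
    # Against a real cluster you would use the LoadBalancer/NodePort address, e.g.:
    #   tcp_smoke_test("ingress.example.com", 31400, b"ping")  # hypothetical host
    ready = threading.Event()
    t = threading.Thread(target=run_echo_server, args=("127.0.0.1", 31400, ready))
    t.start()
    ready.wait()
    print(tcp_smoke_test("127.0.0.1", 31400, b"ping").decode())  # prints: ping
    t.join()
```

If the connection times out against the real gateway, recheck that the port is declared in all three places: the IngressGateway Service, the Gateway, and the VirtualService.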

              Practical Use Cases

              • Exposing TIBCO EMS Server: One common scenario is exposing a TIBCO EMS (Enterprise Message Service) server running within a Kubernetes cluster to other internal applications or external consumers. By configuring the Istio IngressGateway to handle TCP traffic, you can securely expose EMS’s TCP port, allowing it to communicate with services outside the Kubernetes environment.
              • Exposing Databases: Another use case is exposing a database running within Kubernetes to external services or different clusters. By exposing the database’s TCP port through the Istio IngressGateway, you enable other applications to interact with it, regardless of their location.
              • Exposing a Custom TCP-Based Service: Suppose you have a custom application running within Kubernetes that communicates over TCP, such as a game server or a custom TCP-based API service. You can use the Istio IngressGateway to expose this service to external users, making it accessible from outside the cluster.

              Conclusion

              Exposing TCP ports using the Istio IngressGateway can be a powerful technique for managing non-HTTP traffic in your Kubernetes cluster. With the steps outlined in this article, you can confidently expose services like TIBCO EMS, databases, or custom TCP-based applications to external consumers, enhancing the flexibility and connectivity of your applications.

              ConfigMap Optional Values in Kubernetes: Avoid CreateContainerConfigError

              ConfigMap Optional Values in Kubernetes: Avoid CreateContainerConfigError

              Kubernetes ConfigMaps are a powerful tool for managing configuration data separately from application code. However, they can sometimes lead to issues during deployment, particularly when a ConfigMap referenced in a Pod specification is missing, causing the application to fail to start. This is a common scenario that can lead to a CreateContainerConfigError and halt your deployment pipeline.

              Understanding the Problem

              When a ConfigMap is referenced in a Pod’s specification, Kubernetes expects the ConfigMap to be present. If it is not, Kubernetes will not start the Pod, leading to a failed deployment. This can be problematic in situations where certain configuration data is optional or environment-specific, such as proxy settings that are only necessary in certain environments.

              Making ConfigMap Values Optional

              Kubernetes provides a way to define ConfigMap items as optional, allowing your application to start even if the ConfigMap is not present. This can be particularly useful for environment variables that only need to be set under certain conditions.

              Here’s a basic example of how to make a ConfigMap optional:

              apiVersion: v1
              kind: Pod
              metadata:
                name: example-pod
              spec:
                containers:
                - name: example-container
                  image: nginx
                  env:
                  - name: OPTIONAL_ENV_VAR
                    valueFrom:
                      configMapKeyRef:
                        name: example-configmap
                        key: optional-key
                        optional: true
              

              In this example:

              • name: example-configmap refers to the ConfigMap that might or might not be present.
              • optional: true ensures that the Pod will still start even if example-configmap or the optional-key within it is missing.

              Practical Use Case: Proxy Configuration

              A common use case for optional ConfigMap values is setting environment variables for proxy configuration. In many enterprise environments, proxy settings are only required in certain deployment environments (e.g., staging, production) but not in others (e.g., local development).

              apiVersion: v1
              kind: ConfigMap
              metadata:
                name: proxy-config
              data:
                HTTP_PROXY: "http://proxy.example.com"
                HTTPS_PROXY: "https://proxy.example.com"
              

              In your Pod specification, you could reference these proxy settings as optional:

              apiVersion: v1
              kind: Pod
              metadata:
                name: app-pod
              spec:
                containers:
                - name: app-container
                  image: my-app-image
                  env:
                  - name: HTTP_PROXY
                    valueFrom:
                      configMapKeyRef:
                        name: proxy-config
                        key: HTTP_PROXY
                        optional: true
                  - name: HTTPS_PROXY
                    valueFrom:
                      configMapKeyRef:
                        name: proxy-config
                        key: HTTPS_PROXY
                        optional: true
              

              In this setup, if the proxy-config ConfigMap is missing, the application will still start, simply without the proxy settings.
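The same tolerance should exist on the application side: the code must treat the proxy variables as optional. A minimal Python sketch of that behavior (the variable names match the ConfigMap above; the fallback to None is the point):

```python
import os


def get_proxy_settings() -> dict:
    """Read proxy settings from the environment, tolerating their absence.

    When the proxy-config ConfigMap is missing, Kubernetes simply does not
    inject HTTP_PROXY/HTTPS_PROXY, so both lookups fall back to None.
    """
    return {
        "http": os.environ.get("HTTP_PROXY"),    # None if the ConfigMap was absent
        "https": os.environ.get("HTTPS_PROXY"),
    }


if __name__ == "__main__":
    proxies = get_proxy_settings()
    if proxies["http"] is None and proxies["https"] is None:
        print("No proxy configured; connecting directly.")
    else:
        print(f"Using proxies: {proxies}")
```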

              Sample Application

              Let’s walk through a simple example to demonstrate this concept. We will create a deployment for an application that uses optional configuration values.

              1. Create the ConfigMap (Optional):
              apiVersion: v1
              kind: ConfigMap
              metadata:
                name: app-config
              data:
                GREETING: "Hello, World!"
              
2. Deploy the Application:
              apiVersion: apps/v1
              kind: Deployment
              metadata:
                name: hello-world-deployment
              spec:
                replicas: 1
                selector:
                  matchLabels:
                    app: hello-world
                template:
                  metadata:
                    labels:
                      app: hello-world
                  spec:
                    containers:
                    - name: hello-world
                      image: busybox
                      command: ["sh", "-c", "echo $GREETING"]
                      env:
                      - name: GREETING
                        valueFrom:
                          configMapKeyRef:
                            name: app-config
                            key: GREETING
                            optional: true
              
3. Deploy and Test:
• Deploy the application using kubectl apply -f <your-deployment-file>.yaml.
• If the app-config ConfigMap is present, the Pod will output “Hello, World!”.
• If the ConfigMap is missing, the Pod will start, but no greeting will be echoed.

              Conclusion

              Optional ConfigMap values are a simple yet effective way to make your Kubernetes deployments more resilient and adaptable to different environments. By marking ConfigMap keys as optional, you can prevent deployment failures and allow your applications to handle missing configuration gracefully.

              📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

              Enable ECS Logging in TIBCO BusinessWorks with Logback

              Enable ECS Logging in TIBCO BusinessWorks with Logback

ECS logging support in TIBCO BW is an increasingly demanded feature, driven by the growing adoption of the Elastic Common Schema (ECS) in log aggregation solutions built on the Elastic Stack (previously known as the ELK Stack).

              This article is part of my comprehensive TIBCO Integration Platform Guide where you can find more patterns and best practices for TIBCO integration platforms.

We have already discussed at length the importance of log aggregation solutions and their benefits, especially in the context of container architectures. So today we will focus on how to adapt our BW applications to support this logging format.

The first thing to know is this: yes, it can be done, and it can be done independently of your deployment model. The solution described here works both for on-premises installations and for container deployments using BWCE.

              TIBCO BW Logging Background

TIBCO BusinessWorks (containerized or not) relies on the Logback library for its logging capabilities, and this library is configured through a file named logback.xml that holds the configuration you need, as you can see in the picture below:

              BW ECS Logging: Sample of Logback.xml default config

Logback is a well-known library in the Java ecosystem, with an architecture based on a core plus plug-ins that extend its capabilities. It is this plug-in mechanism that we will use to add ECS support.

The official ECS documentation even covers how to enable this logging configuration when using Logback, as you can see in the picture below and in this official link:

              BW ECS Logging: ECS Java Dependency Information

In our case, we don’t need to declare the dependency anywhere; we just need to download it, since we will include it in the existing OSGi bundles of the TIBCO BW installation. We only need the following two files:

              • ecs-logging-core-1.5.0.jar
              • logback-ecs-encoder-1.5.0.jar

At the time of writing this article, the current version of both is 1.5.0, but keep an eye out to make sure you’re using a recent version of this software to avoid any problems with support and vulnerabilities.

Once we have these libraries, we need to add them to the BW system installation, and the procedure differs depending on whether we are using a TIBCO on-premises installation or a TIBCO BWCE base image. The changes themselves are the same; only the process of applying them differs.

In the end, the task is simple: include these JAR files in the Logback OSGi bundle that TIBCO BW loads. Let’s see how to do that, starting with an on-premises installation. We will use TIBCO BWCE 2.8.2 as an example, but similar steps apply to other versions.

The on-premises installation is the easiest, simply because it has fewer steps than the TIBCO BWCE base image. In this case, we go to the following location: <TIBCO_HOME>/bwce/2.8/system/shared/com.tibco.tpcl.logback_1.2.1600.002/

• We will place the downloaded JARs in that folder:
BW ECS Logging: JAR location
• We will open META-INF/MANIFEST.MF and make the following modifications:
  • Add those JARs to the Bundle-Classpath section:
BW ECS Logging: Bundle-Classpath changes
• Include the package co.elastic.logging.logback in the exported packages by adding it to the Export-Package section:
BW ECS Logging: Export-Package changes

Once this is done, our TIBCO BW installation supports the ECS format, and we just need to configure logback.xml to use it, following the official documentation on the ECS page. We need to include the following encoder, as shown below:

               <encoder class="co.elastic.logging.logback.EcsEncoder">
                  <serviceName>my-application</serviceName>
                  <serviceVersion>my-application-version</serviceVersion>
                  <serviceEnvironment>my-application-environment</serviceEnvironment>
                  <serviceNodeName>my-application-cluster-node</serviceNodeName>
              </encoder>
              

              For example, if we modify the default logback.xml configuration file with this information, we will have something like this:

              <?xml version="1.0" encoding="UTF-8"?>
              <configuration scan="true">
                
                <!-- *=============================================================* -->
                <!-- *  APPENDER: Console Appender                                 * -->
                <!-- *=============================================================* -->  
                <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
                  <encoder class="co.elastic.logging.logback.EcsEncoder">
                    <serviceName>a</serviceName>
                    <serviceVersion>b</serviceVersion>
                    <serviceEnvironment>c</serviceEnvironment>
                    <serviceNodeName>d</serviceNodeName>
                </encoder>
                </appender>
              
              
              
                <!-- *=============================================================* -->
                <!-- * LOGGER: Thor Framework loggers                              * -->
                <!-- *=============================================================* -->
                <logger name="com.tibco.thor.frwk">
                  <level value="INFO"/>
                </logger>
                
                
                
                <!-- *=============================================================* -->
                <!-- * LOGGER: BusinessWorks Framework loggers                     * -->
                <!-- *=============================================================* -->
                <logger name="com.tibco.bw.frwk">
                  <level value="WARN"/>
                </logger>  
                
                <logger name="com.tibco.bw.frwk.engine">
                  <level value="INFO"/>
                </logger>   
                
                
                <!-- *=============================================================* -->
                <!-- * LOGGER: BusinessWorks Engine loggers                        * -->
                <!-- *=============================================================* --> 
                <logger name="com.tibco.bw.core">
                  <level value="WARN"/>
                </logger>
                
                <logger name="com.tibco.bx">
                  <level value="ERROR"/>
                </logger>
              
                <logger name="com.tibco.pvm">
                  <level value="ERROR"/>
                </logger>
                
                <logger name="configuration.management.logger">
                  <level value="INFO"/>
                </logger>
                
                
                <!-- *=============================================================* -->
                <!-- * LOGGER: BusinessWorks Palette and Activity loggers          * -->
                <!-- *=============================================================* -->
                
                <!-- Default Log activity logger -->
                <logger name="com.tibco.bw.palette.generalactivities.Log">
                  <level value="DEBUG"/>
                </logger>
                
                <logger name="com.tibco.bw.palette">
                  <level value="ERROR"/>
                </logger>
              
                
                
                <!-- *=============================================================* -->
                <!-- * LOGGER: BusinessWorks Binding loggers                       * -->
                <!-- *=============================================================* -->
                
                <!-- SOAP Binding logger -->
                <logger name="com.tibco.bw.binding.soap">
                  <level value="ERROR"/>
                </logger>
                
                <!-- REST Binding logger -->
                <logger name="com.tibco.bw.binding.rest">
                  <level value="ERROR"/>
                </logger>
                
                
                
                <!-- *=============================================================* -->
                <!-- * LOGGER: BusinessWorks Shared Resource loggers               * -->
                <!-- *=============================================================* --> 
                <logger name="com.tibco.bw.sharedresource">
                  <level value="ERROR"/>
                </logger>
                
                
                 
                <!-- *=============================================================* -->
                <!-- * LOGGER: BusinessWorks Schema Cache loggers                  * -->
                <!-- *=============================================================* -->
                <logger name="com.tibco.bw.cache.runtime.xsd">
                  <level value="ERROR"/>
                </logger> 
                
                <logger name="com.tibco.bw.cache.runtime.wsdl">
                  <level value="ERROR"/>
                </logger> 
                
                  
                
                <!-- *=============================================================* -->
                <!-- * LOGGER: BusinessWorks Governance loggers                    * -->
                <!-- *=============================================================* -->  
                <!-- Governance: Policy Director logger1 --> 
                <logger name="com.tibco.governance">
                  <level value="ERROR"/>
                </logger>
                 
                <logger name="com.tibco.amx.governance">
                  <level value="WARN"/>
                </logger>
                 
                <!-- Governance: Policy Director logger2 -->
                <logger name="com.tibco.governance.pa.action.runtime.PolicyProperties">
                  <level value="ERROR"/>
                </logger>
                
                <!-- Governance: SPM logger1 -->
                <logger name="com.tibco.governance.spm">
                  <level value="ERROR"/>
                </logger>
                
                <!-- Governance: SPM logger2 -->
                <logger name="rta.client">
                  <level value="ERROR"/>
                </logger>
                
                
                  
                <!-- *=============================================================* -->
                <!-- * LOGGER: BusinessWorks Miscellaneous Loggers                 * -->
                <!-- *=============================================================* --> 
                <logger name="com.tibco.bw.platformservices">
                  <level value="INFO"/>
                </logger>
                
                <logger name="com.tibco.bw.core.runtime.statistics">
                  <level value="ERROR"/>
                </logger>
                
              
                
                <!-- *=============================================================* -->
                <!-- * LOGGER: Other loggers                                       * -->
                <!-- *=============================================================* -->  
                <logger name="org.apache.axis2">
                  <level value="ERROR"/>
                </logger>
              
                <logger name="org.eclipse">
                  <level value="ERROR"/>
                </logger>
                
                <logger name="org.quartz">
                  <level value="ERROR"/>
                </logger>
                
                <logger name="org.apache.commons.httpclient.util.IdleConnectionHandler">
                  <level value="ERROR"/>
                </logger>
                
                
                
                <!-- *=============================================================* -->
                <!-- * LOGGER: User loggers.  User's custom loggers should be      * -->
                <!-- *         configured in this section.                         * -->
                <!-- *=============================================================* -->
              
                <!-- *=============================================================* -->
                <!-- * ROOT                                                        * -->
                <!-- *=============================================================* --> 
                <root level="ERROR">
                 <appender-ref ref="STDOUT" />
                </root>
                
              </configuration>
              
              

              You can also do more custom configurations based on the information available on the ECS encoder configuration page here.

How to enable TIBCO BWCE ECS Logging Support?

For BWCE, the steps are similar, but be aware that all the runtime components are packaged inside the base-runtime-version.zip that we download from the TIBCO eDelivery site, so we will need a tool to open that ZIP and make the following modifications:

• We will place the downloaded JARs in the folder /tibco.home/bwce/2.8/system/shared/com.tibco.tpcl.logback_1.2.1600.004:
BW ECS Logging: JAR location
• We will open META-INF/MANIFEST.MF and make the following modifications:
  • Add those JARs to the Bundle-Classpath section:
BW ECS Logging: Bundle-Classpath changes
• Include the package co.elastic.logging.logback in the exported packages by adding it to the Export-Package section:
BW ECS Logging: Export-package changes
• Additionally, we will need to modify the bwappnode script in /tibco.home/bwce/2.8/bin to add the JAR files to the classpath that the BWCE base image uses at runtime, so they are loaded:
              BW ECS Logging: bwappnode change

              Now we can build our BWCE base image as usual and modify the logback.xml as explained above. Here you can see a sample application using this configuration:

              {"@timestamp":"2023-08-28T12:49:08.524Z","log.level": "INFO","message":"TIBCO BusinessWorks version 2.8.2, build V17, 2023-05-19","ecs.version": "1.2.0","service.name":"a","service.version":"b","service.environment":"c","service.node.name":"d","event.dataset":"a","process.thread.name":"main","log.logger":"com.tibco.thor.frwk"}
              
              <>@BWEclipseAppNode> {"@timestamp":"2023-08-28T12:49:25.435Z","log.level": "INFO","message":"Started by BusinessStudio.","ecs.version": "1.2.0","service.name":"a","service.version":"b","service.environment":"c","service.node.name":"d","event.dataset":"a","process.thread.name":"main","log.logger":"com.tibco.thor.frwk.Deployer"}
              {"@timestamp":"2023-08-28T12:49:32.795Z","log.level": "INFO","message":"TIBCO-BW-FRWK-300002: BW Engine [Main] started successfully.","ecs.version": "1.2.0","service.name":"a","service.version":"b","service.environment":"c","service.node.name":"d","event.dataset":"a","process.thread.name":"main","log.logger":"com.tibco.bw.frwk.engine.BWEngine"}
              {"@timestamp":"2023-08-28T12:49:34.338Z","log.level": "INFO","message":"TIBCO-THOR-FRWK-300001: Started OSGi Framework of AppNode [BWEclipseAppNode] in AppSpace [BWEclipseAppSpace] of Domain [BWEclipseDomain]","ecs.version": "1.2.0","service.name":"a","service.version":"b","service.environment":"c","service.node.name":"d","event.dataset":"a","process.thread.name":"Framework Event Dispatcher: Equinox Container: 1395256a-27a2-4e91-b774-310e85b0b87c","log.logger":"com.tibco.thor.frwk.Deployer"}
              {"@timestamp":"2023-08-28T12:49:34.456Z","log.level": "INFO","message":"TIBCO-THOR-FRWK-300018: Deploying BW Application [t3:1.0].","ecs.version": "1.2.0","service.name":"a","service.version":"b","service.environment":"c","service.node.name":"d","event.dataset":"a","process.thread.name":"Framework Event Dispatcher: Equinox Container: 1395256a-27a2-4e91-b774-310e85b0b87c","log.logger":"com.tibco.thor.frwk.Application"}
              {"@timestamp":"2023-08-28T12:49:34.524Z","log.level": "INFO","message":"TIBCO-THOR-FRWK-300021: All Application dependencies are resolved for Application [t3:1.0]","ecs.version": "1.2.0","service.name":"a","service.version":"b","service.environment":"c","service.node.name":"d","event.dataset":"a","process.thread.name":"Framework Event Dispatcher: Equinox Container: 1395256a-27a2-4e91-b774-310e85b0b87c","log.logger":"com.tibco.thor.frwk.Application"}
              {"@timestamp":"2023-08-28T12:49:34.541Z","log.level": "INFO","message":"Started by BusinessStudio, ignoring .enabled settings.","ecs.version": "1.2.0","service.name":"a","service.version":"b","service.environment":"c","service.node.name":"d","event.dataset":"a","process.thread.name":"Framework Event Dispatcher: Equinox Container: 1395256a-27a2-4e91-b774-310e85b0b87c","log.logger":"com.tibco.thor.frwk.Application"}
              {"@timestamp":"2023-08-28T12:49:35.842Z","log.level": "INFO","message":"TIBCO-THOR-FRWK-300006: Started BW Application [t3:1.0]","ecs.version": "1.2.0","service.name":"a","service.version":"b","service.environment":"c","service.node.name":"d","event.dataset":"a","process.thread.name":"EventAdminThread #1","log.logger":"com.tibco.thor.frwk.Application"}
              {"@timestamp":"2023-08-28T12:49:35.954Z","log.level": "INFO","message":"aaaaaaa&#10;","ecs.version": "1.2.0","service.name":"a","service.version":"b","service.environment":"c","service.node.name":"d","event.dataset":"a","process.thread.name":"bwEngThread:In-Memory Process Worker-1","log.logger":"com.tibco.bw.palette.generalactivities.Log.t3.module.Log"}
              gosh: stopping shell
              
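Because each ECS entry is a single JSON object per line, downstream tooling can consume these logs trivially. As an illustration (not part of the TIBCO setup itself), here is a small Python sketch that parses one of the lines above and pulls out its ECS fields:

```python
import json

# One of the ECS-formatted lines emitted by the BW engine above.
LOG_LINE = (
    '{"@timestamp":"2023-08-28T12:49:32.795Z","log.level": "INFO",'
    '"message":"TIBCO-BW-FRWK-300002: BW Engine [Main] started successfully.",'
    '"ecs.version": "1.2.0","service.name":"a","service.version":"b",'
    '"service.environment":"c","service.node.name":"d","event.dataset":"a",'
    '"process.thread.name":"main","log.logger":"com.tibco.bw.frwk.engine.BWEngine"}'
)


def parse_ecs_line(line: str) -> dict:
    """Parse a single ECS JSON log line into a flat dict of dotted field names."""
    return json.loads(line)


if __name__ == "__main__":
    event = parse_ecs_line(LOG_LINE)
    # ECS uses dotted keys such as log.level, service.name, log.logger.
    print(event["log.level"], event["service.name"], event["log.logger"])
```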

              KubeSec Explained: How to Scan and Improve Kubernetes Security with YAML Analysis

              KubeSec Explained: How to Scan and Improve Kubernetes Security with YAML Analysis

KubeSec is another tool that helps improve the security of our Kubernetes clusters. The number of agencies focusing on security highlights this topic’s importance in modern architectures and deployments. Security is now a key component, probably the most crucial one, and we all need to step up our game on it. That’s why it is essential to have tools in our toolset that help with this task without requiring us to be full security experts in every technology, Kubernetes included.

              KubeSec is an open-source tool developed by a cloud-native and open-source security consultancy named ControlPlane that helps us perform a security risk analysis on Kubernetes resources.

              How Does KubeSec Work?

KubeSec works on the Kubernetes manifest files you use to deploy your resources, so you need to provide the YAML file to one of the running modes this tool supports. "One of the running modes" is an important point, because KubeSec supports several different modes that cover different use cases.

You can run KubeSec in the following modes:

              • HTTP Mode: KubeSec will be listening to HTTP requests with the content of the YAML and provide a report based on that. This is useful in cases needing server mode execution, such as CICD pipelines, or just security servers to be used by some teams, such as DevOps or Platform Engineering. Also, another critical use-case of this mode is to be part of a Kubernetes Admission Controller on your Kubernetes Cluster so that you can enforce this when developers are deploying resources into the platform itself.
• SaaS Mode: Similar to HTTP mode but without needing to host it yourself; everything is available behind kubesec.io. Choose this mode when SaaS is your preference and you're not handling sensitive information in those manifests.
• CLI Mode: To run it yourself as part of your local tests, you have a CLI command available: kubesec scan k8s-deployment.yaml
• Docker Mode: Similar to CLI mode, but packaged as a Docker image, which also makes it compatible with CICD pipelines based on containerized workloads.
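As a quick reference, the commands for the CLI, Docker, and HTTP modes might look like the sketch below. The kubesec/kubesec:v2 image name and the /scan endpoint are assumptions based on the project's published distribution, so verify them against the current documentation:

```shell
# CLI mode: scan a manifest directly
kubesec scan k8s-deployment.yaml

# Docker mode: the same scan through the published container image
docker run -i kubesec/kubesec:v2 scan /dev/stdin < k8s-deployment.yaml

# HTTP mode: start a local server and POST the manifest to it
kubesec http 8080 &
curl -sSX POST --data-binary @k8s-deployment.yaml http://localhost:8080/scan
```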

KubeSec Output Report

What you get out of executing KubeSec in any of its forms is a JSON report that you can use to score and improve the security level of your Kubernetes resources. Using JSON as the output format also simplifies the tool's usage in automated workflows such as CICD pipelines. Here you can see a sample of the output report you will get:

              kubesec sample output

The important thing about the output is the kind of information you receive from it. As you can see in the picture above, it is separated into two sections per object. The first one is the "score": the security-related settings already implemented, which contribute to the object's security score. There is also an advice section listing settings and configurations you can apply to improve that score and, with it, the overall security of the Kubernetes object itself.
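To make those two sections concrete, here is a minimal sketch that extracts the score from a kubesec-style JSON report and turns it into a pass/fail verdict, the kind of gate you might add to a CICD pipeline. The report content is illustrative, not real scanner output; the score and scoring.advise field names follow the report structure described above.

```shell
# Illustrative kubesec-style report for a single object (a sketch, not real
# scanner output; field names follow kubesec's JSON report structure)
report='[{"object":"Deployment/my-app.default","valid":true,"score":-30,
"scoring":{"advise":[{"selector":"containers[] .securityContext .runAsNonRoot == true",
"reason":"Force the running image to run as a non-root user"}]}}]'

# Pull out the numeric score of the first object
score=$(printf '%s' "$report" | grep -oE '"score":-?[0-9]+' | head -n1 | cut -d: -f2)

# Gate: anything below the threshold fails the pipeline step
threshold=0
if [ "$score" -lt "$threshold" ]; then verdict="FAIL"; else verdict="PASS"; fi
echo "kubesec score: $score -> $verdict"
```

In a real pipeline you would replace the embedded string with the scanner's actual output and exit non-zero on FAIL.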

KubeSec also leverages another tool we covered not long ago on this site, Kubeconform, so you can specify the target Kubernetes version you're deploying to and get a much more precise report for your specific Kubernetes manifest. To do that, pass the --kubernetes-version argument when launching the command, as you can see in the picture below:

              kubesec command with kubernetes-version option

How To Install KubeSec?

Installation also comes in different ways and flavors, so you can pick what works best for you. Here are some of the options available at the time of writing this article:
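The sketch below covers the usual distribution channels. The GitHub release URL, version number, and archive name are assumptions based on the project's repository (controlplaneio/kubesec), so check the releases page for the current version before copying:

```shell
# Option 1: download a release binary from GitHub (version shown is illustrative)
wget https://github.com/controlplaneio/kubesec/releases/download/v2.13.0/kubesec_linux_amd64.tar.gz
tar -xzf kubesec_linux_amd64.tar.gz
sudo mv kubesec /usr/local/bin/

# Option 2: pull the container image and run it without installing anything locally
docker pull kubesec/kubesec:v2
```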

              Conclusion

              Emphasizing the paramount importance of security in today’s intricate architectures, KubeSec emerges as a vital asset for bolstering the protection of Kubernetes clusters. Developed by ControlPlane, this open-source tool facilitates comprehensive security risk assessments of Kubernetes resources. Offering versatility through multiple operational modes—such as HTTP, SaaS, CLI, and Docker—KubeSec provides tailored support for diverse scenarios. Its JSON-based output streamlines integration into automated workflows, while its synergy with Kubeconform ensures precise analysis of Kubernetes Manifests. KubeSec’s user-friendly approach empowers security experts and novices, catalyzing an elevated standard of Kubernetes security across the board.

              📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

              Enable SwaggerUI in TIBCO BusinessWorks When Offloading SSL (BWCE Fix)

              Enable SwaggerUI in TIBCO BusinessWorks When Offloading SSL (BWCE Fix)

SwaggerUI is one of the features available by default in every REST service developed with TIBCO BusinessWorks. As you probably know, SwaggerUI is just an HTML page with a graphical representation of the Swagger definition file (or OpenAPI specification, to be more accurate with the current version of the standard) that helps you understand the operations and capabilities exposed by the service, and also provides an easy way to test it, as you can see in the picture below:

              This article is part of my comprehensive TIBCO Integration Platform Guide where you can find more patterns and best practices for TIBCO integration platforms.

              How To Enable SwaggerUI TIBCO BusinessWorks when Offloading SSL Certificate: SwaggerUI view from TIBCO BWCE app

This interface is provided out of the box for any REST service developed using TIBCO BusinessWorks. It is served on a separate port (7777 by default) for on-premises deployments, or on the /swagger endpoint in the case of TIBCO BusinessWorks Container Edition.

How Does SwaggerUI Load the Swagger Specification?

SwaggerUI works in a particular way. When you reach the SwaggerUI URL, there is another URL, usually shown in a text field inside the web page, that holds the link to the JSON or YAML document storing the actual specification, as you can see in the picture below:

              How To Enable SwaggerUI TIBCO BusinessWorks when Offloading SSL Certificate: SwaggerUI highlighting the 2 URL loaded in the process

So you can think of this as a two-call process:

• The first call loads the SwaggerUI as a graphical container.
• Then, based on the internal URL provided there, a second call retrieves the document specification.
• With that information, SwaggerUI renders everything in its usual format.

The issue arises when the SwaggerUI is exposed behind a load balancer, because the second URL needs to use the advertised URL: the backend server is not reached directly by the client browsing the SwaggerUI. In the case of TIBCO BWCE, this is solved out of the box with Kubernetes capabilities; for on-premises deployments, there are two properties to handle it, as follows:

              # ------------------------------------------------------------------------------
              # Section:  BW REST Swagger Configuration.  The properties in this section
              # are applicable to the Swagger framework that is utilized by the BW REST 
              # Binding.
              #
              # Note: There are additional BW REST Swagger configuration properties that
              # can be specified in the BW AppNode configuration file "config.ini".  Refer to
              # the BW AppNode configuration file's section "BW REST Swagger configuration" 
              # for details. 
              # ------------------------------------------------------------------------------
              # Swagger framework reverse proxy host name.  This property is optional and 
              # it specifies the reverse proxy host name on which Swagger framework serves 
              # the API's, documentation  endpoint, api-docs, etc.. 
              bw.rest.docApi.reverseProxy.hostName=localhost
              
              # Swagger framework port.  This property is optional and it specifies the 
              # reverse proxy port on which Swagger framework serves the API's, documentation
              # endpoint, api-docs, etc.
              bw.rest.docApi.reverseProxy.port=0000
              

              You can browse the official documentation page for more detailed information.

That solves the main issue regarding the hostname and the port to be reached by the final user. Still, there is one remaining component of the URL that can cause problems: the protocol; in a nutshell, whether this is exposed using HTTP or HTTPS.

              How to Handle Swagger URL when offloading SSL?

Until the release of TIBCO BWCE 2.8.3, the protocol depended on the HTTP Connector configuration used to expose the Swagger component. If you use an HTTP connector without an SSL configuration, it will try to reach the endpoint over HTTP; if the connector has an SSL configuration, it will try HTTPS. That seems fine, but some use cases can generate a problem:

SSL Certificate offloaded at the Load Balancer: If we offload the SSL configuration to the load balancer, as is common in traditional on-premises deployments and in some Kubernetes configurations, the consumer establishes an HTTPS connection to the load balancer, but internally the communication with BWCE is done over HTTP. This generates a mismatch: on the second call, SwaggerUI assumes that, since the BWCE HTTP Connector is not using HTTPS, the URL should be reached over HTTP. But that is not the case, because the communication goes through the load balancer, which is handling the security.

Service Mesh Service Exposition: Similar to the previous case, but closer to the Kubernetes deployment. Suppose we are using a service mesh such as Istio. In that case, security is one of the things the mesh handles, so the situation is the same as in the scenario above: BWCE doesn't know about the security configuration, yet that configuration affects the default endpoint generated.

              How To Enable SwaggerUI TIBCO BusinessWorks when Offloading SSL Certificates?

Since BWCE 2.8.3, there is a new JVM property we can use to force the generated endpoint to be HTTPS even if the HTTP Connector used by the BWCE application doesn't have any security configuration, which solves this issue in the cases above and similar scenarios. The property can be added like any other JVM property using the BW_JAVA_OPTS environment variable, and its value is: bw.rest.enable.secure.swagger.url=true
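For example, in a containerized deployment the property could be passed through BW_JAVA_OPTS as sketched below. The -D prefix is the standard JVM system-property syntax; the property name is the one introduced in BWCE 2.8.3.

```shell
# Force SwaggerUI to generate HTTPS URLs even when the HTTP Connector
# itself has no SSL configuration (BWCE >= 2.8.3)
export BW_JAVA_OPTS="-Dbw.rest.enable.secure.swagger.url=true"
echo "$BW_JAVA_OPTS"
```

In Kubernetes, the same value would typically be set as an env entry on the BWCE container spec.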

              Increase HTTP Logs in TIBCO BusinessWorks for Debugging and Troubleshooting

              Increase HTTP Logs in TIBCO BusinessWorks for Debugging and Troubleshooting

Increasing the HTTP logs in TIBCO BusinessWorks when debugging or troubleshooting an HTTP-based integration, whether a REST or a SOAP service, is one of the most common and helpful things you can do when developing with TIBCO BusinessWorks.

              This article is part of my comprehensive TIBCO Integration Platform Guide where you can find more patterns and best practices for TIBCO integration platforms.

The primary purpose of increasing the HTTP logs is to get complete knowledge of the information you are sending to and receiving from your communication partner, to help understand an error or unexpected behavior.

              What are the primary use cases for increasing the HTTP logs?

In the end, all the different use cases are variations of the primary one: "Get full knowledge of the HTTP exchange between both parties." Still, some more specific ones are listed below:

              • Understand why a backend server is rejecting a call that could be related to Authentication or Authorization, and you need to see the detailed response by the backend server.
              • Verify the value of each HTTP Header you are sending that could affect the communication’s compression or accepted content type.
              • See why you’re rejecting a call from a consumer

              Splitting the communication based on the source

The most important thing to understand is that the loggers depend on the library in use, and the library used to expose an HTTP-based server is not the same one used to consume an HTTP-based service such as REST or SOAP.

Starting with what you expose: this is the easiest part, because it is defined by the HTTP Connector resources you're using, as you can see in the picture below:

              HTTP Shared Resources Connector in BW

All HTTP Connector resources that you can use to expose REST and SOAP services are based on the Jetty server implementation, which means the loggers whose configuration you need to change are the ones related to the Jetty server itself.

More complex, in theory, is the client side: when our TIBCO BusinessWorks application consumes an HTTP-based service provided by a backend, each of these communications has its own HTTP Client shared resource. The configuration of each one can differ, because one of the settings available is the Implementation Library, and that has a direct effect on how to change the log configuration:

              HTTP Client Resource in BW that shows the different implementation libraries to detect the logger to Increasing HTTP Logs in TIBCO BusinessWorks

              You have three options when you define an HTTP Client Resource, as you can see in the picture above:

• Apache HttpComponents: The default one; supports HTTP/1.1 and both SOAP and REST services.
• Jetty HTTP Client: This client supports both HTTP/1.1 and HTTP/2, and it is the primary option when you're working with HTTP/2 flows.
• Apache Commons: Similar to the first one, but currently deprecated; if any client component still uses this configuration, you should migrate it to Apache HttpComponents when you can.

              So, if we’re consuming a SOAP and REST service, it is clear that we will be using the implementation library Apache HttpComponents, and that will give us the logger we need to use.

For Apache HttpComponents, we can rely on the logger "org.apache.http"; if we want to extend the server side, or we're using the Jetty HTTP Client, we can use "org.eclipse.jetty.http".

We need to be aware that we cannot raise the level for just a single HTTP Client resource, because the configuration is based on the Implementation Library. If we set the DEBUG level for the Apache HttpComponents library, it will affect all shared resources using that implementation library, and you'll need to differentiate based on the data inside the log as part of your analysis.

              How to set HTTP Logs in TIBCO BusinessWorks?

Now that we have the loggers, we must set them to a DEBUG (or TRACE) level. There are several ways to do this, depending on how you would like to work and what access you have. The scope of this article is TIBCO BusinessWorks Container Edition, but you can easily extrapolate most of this knowledge to an on-premises TIBCO BusinessWorks installation.

TIBCO BusinessWorks (container or not) relies for its logging capabilities on the logback library, which is configured through a file named logback.xml that holds the configuration you need, as you can see in the picture below:

              logback.xml configuration with the default structure in TIBCO BW

So if we want to add a new logging configuration, we need to add a new logger element to the file with the following structure:

  <logger name="%LOGGER_WE_WANT_TO_SEE%">
    <level value="%LEVEL_WE_WANT_TO_SEE%"/>
  </logger>
              

The logger comes straight from the previous section, and the level depends on how much information you want to see. The log levels are the following: ERROR, WARN, INFO, DEBUG, TRACE, with DEBUG and TRACE showing the most information.

              In our case, DEBUG should be enough to get the full HTTP Request and HTTP Response, but you can also apply it to other things where you could need a different log level.

              Now you need to add that to the logback.xml file, and to do that, you have several options, as commented:

• You can find the logback.xml inside the BWCE container (or the AppNode configuration folder) and modify its content. The default location of this file is: /tmp/tibco.home/bwce/<VERSION>/config/logback.xml. To do this, you need access to run kubectl exec on the BWCE container, and any change will be temporary and lost on the next restart. That could be good or bad, depending on your goal.
• If you want it to be permanent, or you don't have access to the container, you have two options. The first one is to include a custom copy of the logback.xml in the /resources/custom-logback/ folder in the BWCE base image and set the environment variable CUSTOM_LOGBACK to TRUE, which overrides the default logback.xml configuration with the content of this file. As noted, this is "permanent" and applies from the first deployment of the app with this configuration. You can find more info in the official doc here.
• There is also an additional option, available since BWCE 2.7.0, that lets you change the logback.xml content without a new copy or a change to the base image: the environment property BW_LOGGER_OVERRIDES, with content in the form logger=level. In our case it would be something like org.apache.http=DEBUG, and on the next deployment you will get this configuration. Similar to the previous option, this is permanent but doesn't require adding a file to the base image.
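As a sketch, the third option from the list can be wired in with nothing more than an environment variable; the variable name and the logger=level syntax come from the BWCE 2.7.0+ mechanism described above:

```shell
# Raise the Apache HttpComponents logger to DEBUG via BWCE's override
# mechanism, without touching logback.xml or the base image
export BW_LOGGER_OVERRIDES="org.apache.http=DEBUG"
echo "$BW_LOGGER_OVERRIDES"
```

In Kubernetes, this would typically go in the env section of the BWCE container spec so it survives restarts.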

              So, as you can see, you have different options depending on your needs and access levels.

              Conclusion

              In conclusion, enhancing HTTP logs within TIBCO BusinessWorks during debugging and troubleshooting is a vital strategy. Elevating log levels provides a comprehensive grasp of information exchange, aiding in analyzing errors and unexpected behaviors. Whether discerning backend rejection causes, scrutinizing HTTP header effects, or isolating consumer call rejections, amplified logs illuminate complex integration scenarios. Adaptations vary based on library usage, encompassing server exposure and service consumption. Configuration through the logback library involves tailored logger and level adjustments. This practice empowers developers to unravel integration intricacies efficiently, ensuring robust and seamless HTTP-based interactions across systems.