Integrate Kyverno CLI into CI/CD Pipelines with GitHub Actions for Kubernetes Policy Checks

Introduction

As Kubernetes clusters become an integral part of infrastructure, maintaining compliance with security and configuration policies is crucial. Kyverno, a policy engine designed for Kubernetes, can be integrated into your CI/CD pipelines to enforce configuration standards and automate policy checks. In this article, we’ll walk through integrating Kyverno CLI with GitHub Actions, providing a seamless workflow for validating Kubernetes manifests before they reach your cluster.

What is Kyverno CLI?

Kyverno is a Kubernetes-native policy management tool, enabling users to enforce best practices, security protocols, and compliance across clusters. Kyverno CLI is a command-line interface that lets you apply, test, and validate policies against YAML manifests locally or in CI/CD pipelines. By integrating Kyverno CLI with GitHub Actions, you can automate these policy checks, ensuring code quality and compliance before deploying resources to Kubernetes.

Benefits of Using Kyverno CLI in CI/CD Pipelines

Integrating Kyverno into your CI/CD workflow provides several advantages:

  1. Automated Policy Validation: Detect policy violations early in the CI/CD pipeline, preventing misconfigured resources from deployment.
  2. Enhanced Security Compliance: Kyverno enables checks for security best practices and compliance frameworks.
  3. Faster Development: Early feedback on policy violations streamlines the process, allowing developers to fix issues promptly.

Setting Up Kyverno CLI in GitHub Actions

Step 1: Install Kyverno CLI

To use Kyverno in your pipeline, you need to install the Kyverno CLI in your GitHub Actions workflow. You can specify the Kyverno version required for your project or use the latest version.

Here’s a sample GitHub Actions YAML configuration to install Kyverno CLI:

name: CI Pipeline with Kyverno Policy Checks

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  kyverno-policy-check:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Code
        uses: actions/checkout@v2

      - name: Install Kyverno CLI
        run: |
          curl -LO https://github.com/kyverno/kyverno/releases/download/v<version>/kyverno-cli-linux.tar.gz
          tar -xzf kyverno-cli-linux.tar.gz
          sudo mv kyverno /usr/local/bin/

Replace <version> with the Kyverno CLI version you wish to use; note that the exact archive name can vary between releases, so check the assets listed on the Kyverno releases page. To always fetch the most recent release without pinning a version, use GitHub’s releases/latest/download/ URL form, shown below.
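
As a minimal sketch, the same step tracking the latest release could look like this; the archive and binary names simply reuse the ones from the example above, so verify them against the actual assets on the releases page:

- name: Install Kyverno CLI (latest release)
  run: |
    # 'releases/latest/download' always resolves to the most recent release
    curl -LO https://github.com/kyverno/kyverno/releases/latest/download/kyverno-cli-linux.tar.gz
    tar -xzf kyverno-cli-linux.tar.gz
    sudo mv kyverno /usr/local/bin/
    kyverno version  # sanity check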

Step 2: Define Policies for Validation

Create a directory in your repository to store Kyverno policies. These policies define the standards that your Kubernetes resources should comply with. For example, create a directory structure as follows:

.
└── .github
    └── policies
        ├── disallow-latest-tag.yaml
        └── require-requests-limits.yaml

Each policy is defined in YAML format and can be customized to meet specific requirements. Below are examples of policies that might be used:

  • Disallow latest Tag in Images: Prevents the use of the latest tag to ensure version consistency (see the example policy right after this list).
  • Enforce CPU/Memory Limits: Ensures resource limits are set for containers, which can prevent resource abuse.
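
For illustration, disallow-latest-tag.yaml could look like the following sketch; the wildcard pattern is one common way to express this rule, so adapt the scope and message to your needs:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  validationFailureAction: enforce
  rules:
    - name: disallow-latest-tag
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Using the ':latest' image tag is not allowed."
        pattern:
          spec:
            containers:
              # '!*:latest' rejects any image whose tag is 'latest'
              - image: "!*:latest"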

Step 3: Add a GitHub Actions Step to Validate Manifests

In this step, you’ll use Kyverno CLI to validate Kubernetes manifests against the policies defined in the .github/policies directory. If a manifest fails validation, the pipeline will halt, preventing non-compliant resources from being deployed.

Here’s the YAML configuration to validate manifests:

- name: Validate Kubernetes Manifests
  run: |
    kyverno apply .github/policies -r manifests/

Replace manifests/ with the path to your Kubernetes manifests in the repository. This command applies all policies in .github/policies against each YAML file in the manifests directory, stopping the pipeline if any non-compliant configurations are detected.

Step 4: Handle Validation Results

To make the output of Kyverno CLI more readable, you can use additional GitHub Actions steps to format and handle the results. For instance, you might set up a conditional step to notify the team if any manifest is non-compliant:

- name: Check for Policy Violations
  if: failure()
  run: echo "Policy violation detected. Please review the failed validation."

Alternatively, you could configure notifications to alert your team through Slack, email, or other integrations whenever a policy violation is identified.
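
As an example of such a notification, the step below posts a message to a Slack incoming webhook when a previous step fails; the SLACK_WEBHOOK_URL secret is an assumption and must be created in your repository settings first:

- name: Notify Slack of Policy Violation
  if: failure()
  run: |
    # Slack incoming webhooks accept a simple JSON payload with a 'text' field
    curl -X POST -H 'Content-type: application/json' \
      --data "{\"text\":\"Kyverno policy violation detected in ${{ github.repository }} (run ${{ github.run_id }})\"}" \
      "${{ secrets.SLACK_WEBHOOK_URL }}"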

Example: Validating a Kubernetes Manifest

Suppose you have a manifest defining a Kubernetes deployment as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest  # Should trigger a violation

The policy disallow-latest-tag.yaml checks if any container image uses the latest tag and rejects it. When this manifest is processed, Kyverno CLI flags the image and halts the CI/CD pipeline with an error, preventing the deployment of this manifest until corrected.

Conclusion

Integrating Kyverno CLI into a GitHub Actions CI/CD pipeline offers a robust, automated solution for enforcing Kubernetes policies. With this setup, you can ensure Kubernetes resources are compliant with best practices and security standards before they reach production, enhancing the stability and security of your deployments.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Extending Kyverno Policies: Creating Custom Rules for Kubernetes Security

Kyverno offers a robust, declarative approach to enforcing security and compliance standards within Kubernetes clusters by allowing users to define and enforce custom policies. For an in-depth look at Kyverno’s functionality, including core concepts and benefits, see my detailed article here. In this guide, we’ll focus on extending Kyverno policies, providing a structured walkthrough of its data model, and illustrating use cases to make the most of Kyverno in a Kubernetes environment.

Understanding the Kyverno Policy Data Model

Kyverno policies consist of several components that define how the policy should behave, which resources it should affect, and the specific rules that apply. Let’s dive into the main parts of the Kyverno policy model:

  1. Policy Definition: This is the root configuration where you define the policy’s metadata, including name, type, and scope. Policies can be created at the namespace level for specific areas or as cluster-wide rules to enforce uniform standards across the entire Kubernetes cluster.
  2. Rules: Policies are made up of rules that dictate what conditions Kyverno should enforce. Each rule can include logic for validation, mutation, or generation based on your needs.
  3. Match and Exclude Blocks: These sections allow fine-grained control over which resources the policy applies to. You can specify resources by their kinds (e.g., Pods, Deployments), namespaces, labels, and even specific names. This flexibility is crucial for creating targeted policies that impact only the resources you want to manage.
    1. Match block: Defines the conditions under which the rule applies to specific resources.
    2. Exclude block: Used to explicitly omit resources that match certain conditions, ensuring that unaffected resources are not inadvertently included.
  4. Validation, Mutation, and Generation Actions: Each rule can take different types of actions:
    1. Validation: Ensures resources meet specific criteria and blocks deployment if they don’t.
    2. Mutation: Adjusts resource configurations to align with predefined standards, which is useful for auto-remediation.
    3. Generation: Creates or manages additional resources based on existing resource configurations.

Example: Restricting Container Image Sources to Docker Hub

A common security requirement is to limit container images to trusted registries. The example below demonstrates a policy that only permits images from Docker Hub.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-dockerhub-images
spec:
  rules:
    - name: only-dockerhub-images
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Only Docker Hub images are allowed."
        pattern:
          spec:
            containers:
              - image: "docker.io/*"

This policy targets all Pod resources in the cluster and enforces a validation rule that restricts the image source to docker.io. If a Pod uses an image outside Docker Hub, Kyverno denies its deployment, reinforcing secure sourcing practices.
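
You can also dry-run such a policy with the Kyverno CLI before installing it in a cluster. A minimal sketch, assuming the policy is saved as restrict-dockerhub-images.yaml and pod.yaml is a Pod manifest you want to check:

kyverno apply restrict-dockerhub-images.yaml --resource pod.yaml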

Practical Use-Cases for Kyverno Policies

Kyverno policies can handle a variety of Kubernetes management tasks through validation, mutation, and generation. Let’s explore examples for each type to illustrate Kyverno’s versatility:

1. Validation Policies

Validation policies in Kyverno ensure that resources comply with specific configurations or security standards, stopping any non-compliant resources from deploying.

Use-Case: Enforcing Resource Limits for Containers

This example prevents deployments that lack resource limits, ensuring all Pods specify CPU and memory constraints.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enforce-resource-limits
spec:
  rules:
    - name: require-resource-limits
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Resource limits (CPU and memory) are required for all containers."
        pattern:
          spec:
            containers:
              - resources:
                  limits:
                    cpu: "?*"
                    memory: "?*"

By enforcing resource limits, this policy helps prevent resource contention in the cluster, fostering stable and predictable performance.
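
To see the rule in action, consider this minimal Pod manifest; it declares no resources.limits section, so the policy above would reject it until CPU and memory limits are added:

apiVersion: v1
kind: Pod
metadata:
  name: no-limits-pod
spec:
  containers:
    - name: app
      image: nginx:1.25
      # Missing resources.limits: fails the enforce-resource-limits policy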

2. Mutation Policies

Mutation policies allow Kyverno to automatically adjust configurations in resources to meet compliance requirements. This approach is beneficial for consistent configurations without manual intervention.

Use-Case: Adding Default Labels to Pods

This policy adds a default label, environment: production, to all new Pods that lack this label, ensuring that resources align with organization-wide labeling standards.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-label
spec:
  rules:
    - name: add-environment-label
      match:
        resources:
          kinds:
            - Pod
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              environment: "production"

This mutation policy is an example of how Kyverno can standardize resource configurations at scale by dynamically adding missing information, reducing human error and ensuring labeling consistency.
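
To illustrate the effect, a Pod submitted without the label, such as the minimal one below, would be admitted with the environment: production label added by Kyverno during admission:

apiVersion: v1
kind: Pod
metadata:
  name: unlabeled-pod  # Kyverno adds 'environment: production' on admission
spec:
  containers:
    - name: app
      image: nginx:1.25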

3. Generation Policies

Generation policies in Kyverno are used to create or update related resources, enhancing Kubernetes automation by responding to specific configurations or needs in real-time.

Use-Case: Automatically Creating a ConfigMap for Each New Namespace

This example policy generates a ConfigMap in every new namespace, setting default configuration values for all resources in that namespace.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: generate-configmap
spec:
  rules:
    - name: add-default-configmap
      match:
        resources:
          kinds:
            - Namespace
      generate:
        apiVersion: v1
        kind: ConfigMap
        name: default-config
        namespace: "{{request.object.metadata.name}}"
        data:
          data:              # the outer 'data' holds the generated resource's content
            default-key: "default-value"

This generation policy is triggered whenever a new namespace is created, automatically provisioning a ConfigMap with default settings. This approach is especially useful in multi-tenant environments, ensuring new namespaces have essential configurations in place.
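
A quick way to verify the behavior after installing the policy is to create a throwaway namespace and check that the ConfigMap appears in it; a sketch, assuming the policy above is active (the namespace name is arbitrary):

kubectl create namespace generate-test
kubectl get configmap default-config -n generate-test -o yaml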

Conclusion

Extending Kyverno policies enables Kubernetes administrators to establish and enforce tailored security and operational practices within their clusters. By leveraging Kyverno’s capabilities in validation, mutation, and generation, you can automate compliance, streamline operations, and reinforce security standards seamlessly.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Kyverno: A Detailed Way of Enforcing Standard and Custom Policies

In the Kubernetes ecosystem, security and governance are key aspects that need continuous attention. While Kubernetes offers some out-of-the-box (OOTB) security features such as Pod Security Admission (PSA), these might not be sufficient for complex environments with varying compliance requirements. This is where Kyverno comes into play, providing a powerful yet flexible solution for managing and enforcing policies across your cluster.

In this post, we will explore the key differences between Kyverno and PSA, explain how Kyverno can be used in different use cases, and show you how to install and deploy policies with it. Although custom policy creation will be covered in a separate post, we will reference some pre-built policies you can use right away.

What is Pod Security Admission (PSA)?

Kubernetes introduced Pod Security Admission (PSA) as a replacement for the now deprecated PodSecurityPolicy (PSP). PSA focuses on enforcing three predefined levels of security: Privileged, Baseline, and Restricted. These levels control what pods are allowed to run in a namespace based on their security context configurations.

  • Privileged: Minimal restrictions, allowing privileged containers and host access.
  • Baseline: Applies standard restrictions, disallowing privileged containers and limiting host access.
  • Restricted: The strictest level, ensuring secure defaults and enforcing best practices for running containers.

While PSA is effective for basic security requirements, it lacks flexibility when enforcing fine-grained or custom policies. We have a full article covering this topic that you can read here.

Kyverno vs. PSA: Key Differences

Kyverno extends beyond the capabilities of PSA by offering more granular control and flexibility. Here’s how it compares:

  1. Policy Types: While PSA focuses solely on security, Kyverno allows the creation of policies for validation, mutation, and generation of resources. This means you can modify or generate new resources, not just enforce security rules.
  2. Customizability: Kyverno supports custom policies that can enforce your organization’s compliance requirements. You can write policies that govern specific resource types, such as ensuring that all deployments have certain labels or that container images come from a trusted registry.
  3. Policy as Code: Kyverno policies are written in YAML, allowing for easy integration with CI/CD pipelines and GitOps workflows. This makes policy management declarative and version-controlled, which is not the case with PSA.
  4. Audit and Reporting: With Kyverno, you can generate detailed audit logs and reports on policy violations, giving administrators a clear view of how policies are enforced and where violations occur. PSA lacks this built-in reporting capability.
  5. Enforcement and Mutation: While PSA primarily enforces restrictions on pods, Kyverno allows not only validation of configurations but also modification of resources (mutation) when required. This adds an additional layer of flexibility, such as automatically adding annotations or labels.

When to Use Kyverno Over PSA

While PSA might be sufficient for simpler environments, Kyverno becomes a valuable tool in scenarios requiring:

  • Custom Compliance Rules: For example, enforcing that all containers use a specific base image or restricting specific container capabilities across different environments.
  • CI/CD Integrations: Kyverno can integrate into your CI/CD pipelines, ensuring that resources comply with organizational policies before they are deployed.
  • Complex Governance: When managing large clusters with multiple teams, Kyverno’s policy hierarchy and scope allow for finer control over who can deploy what and how resources are configured.

If your organization needs a more robust and flexible security solution, Kyverno is a better fit compared to PSA’s more generic approach.

Installing Kyverno

To start using Kyverno, you’ll need to install it in your Kubernetes cluster. This is a straightforward process using Helm, which makes it easy to manage and update.

Step-by-Step Installation

  1. Add the Kyverno Helm repository:

helm repo add kyverno https://kyverno.github.io/kyverno/

  2. Update Helm repositories:

helm repo update

  3. Install Kyverno in your Kubernetes cluster:

helm install kyverno kyverno/kyverno --namespace kyverno --create-namespace

  4. Verify the installation:

kubectl get pods -n kyverno

After installation, Kyverno will begin enforcing policies across your cluster, but you’ll need to deploy some policies to get started.

Deploying Policies with Kyverno

Kyverno policies are written in YAML, just like Kubernetes resources, which makes them easy to read and manage. You can find several ready-to-use policies from the Kyverno Policy Library, or create your own to match your requirements.

Here is an example of a simple validation policy that ensures all pods use trusted container images from a specific registry:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-trusted-registry
spec:
  validationFailureAction: enforce
  rules:
  - name: check-registry
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Only images from 'myregistry.com' are allowed."
      pattern:
        spec:
          containers:
          - image: "myregistry.com/*"

This policy will automatically block the deployment of any pod that uses an image from a registry other than myregistry.com.

Applying the Policy

To apply the above policy, save it to a YAML file (e.g., trusted-registry-policy.yaml) and run the following command:

kubectl apply -f trusted-registry-policy.yaml

Once applied, Kyverno will enforce this policy across your cluster.
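
You can confirm the policy works with a quick negative test. A sketch, assuming the policy above is active and no other policy interferes; the Pod should be rejected because nginx does not come from myregistry.com:

kubectl run test-pod --image=nginx:1.25
# Expected: the request is denied by the Kyverno admission webhook,
# citing the message "Only images from 'myregistry.com' are allowed."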

Viewing Kyverno Policy Reports

Kyverno generates detailed reports on policy violations, which are useful for audits and tracking policy compliance. To check the reports, you can use the following commands:

  1. List all Kyverno policy reports:

kubectl get clusterpolicyreport

  2. Describe a specific policy report to get more details:

kubectl describe clusterpolicyreport <report-name>

These reports can be integrated into your monitoring tools to trigger alerts when critical violations occur.

Conclusion

Kyverno offers a flexible and powerful way to enforce policies in Kubernetes, making it an essential tool for organizations that need more than the basic capabilities provided by PSA. Whether you need to ensure compliance with internal security standards, automate resource modifications, or integrate policies into CI/CD pipelines, Kyverno’s extensive feature set makes it a go-to choice for Kubernetes governance.

For now, start with the out-of-the-box policies available in Kyverno’s library. In future posts, we’ll dive deeper into creating custom policies tailored to your specific needs.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Kubernetes Pod Security Admission Explained: Enforcing PSA Policies the Right Way

In Kubernetes, security is a key concern, especially as containers and microservices grow in complexity. One of the essential features of Kubernetes for policy enforcement is Pod Security Admission (PSA), which replaces the deprecated Pod Security Policies (PSP). PSA provides a more straightforward and flexible approach to enforce security policies, helping administrators safeguard clusters by ensuring that only compliant pods are allowed to run.

This article will guide you through PSA, the available Pod Security Standards, how to configure them, and how to apply security policies to specific namespaces using labels.

What is Pod Security Admission (PSA)?

PSA is a built-in admission controller introduced in Kubernetes 1.23 to replace Pod Security Policies (PSPs). PSPs had a steep learning curve and could become cumbersome when scaling security policies across various environments. PSA simplifies this process by applying Kubernetes Pod Security Standards based on predefined security levels without needing custom logic for each policy.

With PSA, cluster administrators can restrict the permissions of pods by using labels that correspond to specific Pod Security Standards. PSA operates at the namespace level, enabling better granularity in controlling security policies for different workloads.

Pod Security Standards

Kubernetes provides three key Pod Security Standards in the PSA framework:

  • Privileged: No restrictions; permits all features and is the least restrictive mode. This is not recommended for production workloads but can be used in controlled environments or for workloads requiring elevated permissions.
  • Baseline: Provides a good balance between usability and security, restricting the most dangerous aspects of pod privileges while allowing common configurations. It is suitable for most applications that don’t need special permissions.
  • Restricted: The most stringent level of security. This level is intended for workloads that require the highest level of isolation and control, such as multi-tenant clusters or workloads exposed to the internet.

Each standard includes specific rules to limit pod privileges, such as disallowing privileged containers, restricting access to the host network, and preventing changes to certain security contexts.

Setting Up Pod Security Admission (PSA)

To enable PSA, you need to label your namespaces based on the security level you want to enforce. The label format is as follows:

kubectl label --overwrite ns <namespace> pod-security.kubernetes.io/enforce=<level>

For example, to enforce a restricted security policy on the production namespace, you would run:

kubectl label --overwrite ns production pod-security.kubernetes.io/enforce=restricted

In this example, Kubernetes will automatically apply the rules associated with the restricted policy to all pods deployed in the production namespace.

Additional PSA Modes

PSA also provides additional modes for greater control:

  • Audit: Logs a policy violation but allows the pod to be created.
  • Warn: Issues a warning but permits the pod creation.
  • Enforce: Blocks pod creation if it violates the policy.

To configure these modes, use the following labels:

kubectl label --overwrite ns <namespace> \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/audit=restricted \
  pod-security.kubernetes.io/warn=restricted

This setup enforces the baseline standard while issuing warnings and recording audit log entries for restricted-level violations.

Example: Configuring Pod Security in a Namespace

Let’s walk through an example of configuring baseline security for the dev namespace. First, you need to apply the PSA labels:

kubectl create namespace dev
kubectl label --overwrite ns dev pod-security.kubernetes.io/enforce=baseline

Now, any pod deployed in the dev namespace will be checked against the baseline security standard. If a pod violates the baseline policy (for instance, by attempting to run a privileged container), it will be blocked from starting.
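
As a concrete illustration, the minimal Pod below requests privileged mode, which the baseline standard forbids, so the dev namespace configured above would reject it at admission (the names and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod
  namespace: dev
spec:
  containers:
    - name: app
      image: nginx:1.25
      securityContext:
        privileged: true  # violates the baseline Pod Security Standard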

You can also combine warn and audit modes to track violations without blocking pods:

kubectl label --overwrite ns dev \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/audit=restricted

In this case, PSA will allow pods to run as long as they meet the baseline policy, but it will issue warnings and record audit log entries for restricted-level violations.

Applying Policies by Default

One of the strengths of PSA is its simplicity in applying policies at the namespace level, but administrators might wonder if there’s a way to apply a default policy across new namespaces automatically. The only native option is a cluster-wide default configured through the API server’s AdmissionConfiguration file, which is often not accessible on managed Kubernetes offerings. Alternatively, you can use admission webhooks or automation tools such as OPA Gatekeeper or Kyverno to enforce default policies for new namespaces, as sketched below.
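
For instance, a Kyverno mutation policy along the following lines could stamp PSA labels onto every newly created namespace. This is a sketch, assuming Kyverno is installed and that baseline enforcement (with restricted warnings) is the default you want:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: default-psa-labels
spec:
  rules:
    - name: add-psa-labels
      match:
        resources:
          kinds:
            - Namespace
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              # Default PSA levels; adjust to your organization's baseline
              pod-security.kubernetes.io/enforce: baseline
              pod-security.kubernetes.io/warn: restricted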

Conclusion

Pod Security Admission (PSA) simplifies policy enforcement in Kubernetes clusters, making it easier to ensure compliance with security standards across different environments. By configuring Pod Security Standards at the namespace level and using labels, administrators can control the security level of workloads with ease. The flexibility of PSA allows for efficient security management without the complexity associated with the older Pod Security Policies (PSPs).

For more details on configuring PSA and Pod Security Standards, check the official Kubernetes PSA documentation and Pod Security Standards documentation.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

KubeSec Explained: How to Scan and Improve Kubernetes Security with YAML Analysis

KubeSec is another tool that helps improve the security of a Kubernetes cluster. With so many organizations now focusing on security, the importance of this topic in modern architectures and deployments is clear: security is a key component, probably the most crucial one. We all need to step up our game here, which is why it is essential to have tools in our toolset that support this task without requiring us to be full security experts in every technology, such as Kubernetes in this case.

KubeSec is an open-source tool developed by ControlPlane, a cloud-native and open-source security consultancy, that helps us perform a security risk analysis on Kubernetes resources.

How Does KubeSec Work?

KubeSec works on the Kubernetes manifest files you use to deploy your resources, so you need to provide the YAML file to one of the running modes this tool supports. This is an important point, “one of the running modes,” because KubeSec supports several execution modes that cover different use cases.

You can run KubeSec in the following ways:

  • HTTP Mode: KubeSec listens for HTTP requests carrying the YAML content and returns a report based on it. This is useful for cases needing server-mode execution, such as CI/CD pipelines, or shared security servers used by teams such as DevOps or Platform Engineering. Another critical use case of this mode is to act as part of a Kubernetes Admission Controller on your cluster, so you can enforce these checks when developers deploy resources to the platform itself.
  • SaaS Mode: Similar to HTTP mode but without needing to host it yourself, available behind kubesec.io. This is convenient when SaaS fits your preference and you’re not submitting sensitive information to those components.
  • CLI Mode: To run it yourself as part of your local tests, you have a CLI command available: kubesec scan k8s-deployment.yaml
  • Docker Mode: Similar to CLI mode but packaged as a Docker image, which also makes it compatible with CI/CD pipelines based on containerized workloads.

KubeSec Output Report

What you get out of a KubeSec execution, in any of its forms, is a JSON report that you can use to score and improve the security level of your Kubernetes resources. Using JSON as the output format also simplifies the tool’s usage in automated workflows such as CI/CD pipelines. Here you can see a sample of the output report you will get:

(Figure: sample KubeSec output report)

The important thing about the output is the kind of information you receive from it. The report is separated into two different sections per object. The first one is the “score,” which reflects the security-related settings already implemented and contributes to the security score of the object. You also get an “advise” section that lists settings and configurations you can apply to improve that score and, with it, the overall security of the Kubernetes object itself.
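
To give an idea of its shape, here is an abbreviated, hand-written sketch of the JSON report; the field names follow the structure described above, but the exact selectors, reasons, and scores depend on your manifest and the KubeSec version:

[
  {
    "object": "Deployment/k8s-deployment.default",
    "valid": true,
    "score": 4,
    "scoring": {
      "passed": [
        {
          "selector": ".spec .serviceAccountName",
          "reason": "Service accounts restrict Kubernetes API access"
        }
      ],
      "advise": [
        {
          "selector": "containers[] .securityContext .readOnlyRootFilesystem == true",
          "reason": "An immutable root filesystem can prevent malicious binaries being added to PATH"
        }
      ]
    }
  }
]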

KubeSec also leverages another tool we have covered on this site, Kubeconform, so you can specify the target Kubernetes version you’re running against to get a much more precise report for your specific Kubernetes manifest. To do that, pass the --kubernetes-version argument when launching the command, as shown in the picture below:

(Figure: kubesec command with the --kubernetes-version option)

How To Install KubeSec?

Installation also comes in different flavors, so you can pick whichever works best for you. Here are some of the options available at the time of writing this article, sketched below.
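
The exact artifact and image names change between releases, so treat the following as an assumption-laden sketch and check the KubeSec GitHub releases page and Docker Hub for the current names:

# Option 1: download a release binary from GitHub
# (the archive name is an assumption; match it to the actual release assets)
curl -sLO https://github.com/controlplaneio/kubesec/releases/latest/download/kubesec_linux_amd64.tar.gz
tar -xzf kubesec_linux_amd64.tar.gz
sudo mv kubesec /usr/local/bin/

# Option 2: run the Docker image against a manifest on stdin
# (the image tag is an assumption; check Docker Hub for available tags)
docker run -i kubesec/kubesec:v2 scan /dev/stdin < k8s-deployment.yaml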

Conclusion

Emphasizing the paramount importance of security in today’s intricate architectures, KubeSec emerges as a vital asset for bolstering the protection of Kubernetes clusters. Developed by ControlPlane, this open-source tool facilitates comprehensive security risk assessments of Kubernetes resources. Offering versatility through multiple operational modes, such as HTTP, SaaS, CLI, and Docker, KubeSec provides tailored support for diverse scenarios. Its JSON-based output streamlines integration into automated workflows, while its synergy with Kubeconform ensures precise analysis of Kubernetes manifests. KubeSec’s user-friendly approach empowers security experts and novices alike, catalyzing an elevated standard of Kubernetes security across the board.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Enable SwaggerUI in TIBCO BusinessWorks When Offloading SSL (BWCE Fix)

SwaggerUI in TIBCO BusinessWorks is one of the features available by default for every REST service developed with TIBCO BusinessWorks. As you probably know, SwaggerUI is an HTML page with a graphical representation of the Swagger definition file (or OpenAPI specification, to be more accurate with the current version of the standard) that helps you understand the operations and capabilities exposed by the service, and also provides an easy way to test the service, as you can see in the picture below:

This article is part of my comprehensive TIBCO Integration Platform Guide where you can find more patterns and best practices for TIBCO integration platforms.

(Figure: SwaggerUI view from a TIBCO BWCE app)

This interface is provided out of the box for any REST service developed using TIBCO BusinessWorks. It is served on a separate port (7777 by default) for on-premises deployments, or on the /swagger endpoint for TIBCO BusinessWorks Container Edition.

How does SwaggerUI load the Swagger specification?

SwaggerUI works in a particular way. When you reach the URL of the SwaggerUI, there is another URL, usually shown in a text field inside the web page, that holds the link to the JSON or YAML document storing the actual specification, as you can see in the picture below:

(Figure: SwaggerUI highlighting the two URLs loaded in the process)

So you can think of this as a two-call process:

  • The first call loads the SwaggerUI as a graphical container.
  • Then, based on the internal URL provided there, a second call retrieves the specification document.
  • With that information, the page renders the specification in the SwaggerUI format.

The issue arises when the SwaggerUI is exposed behind a load balancer, because the second URL needs to use the advertised URL: the backend server is not reached directly by the client browsing the SwaggerUI. This is solved out of the box with Kubernetes capabilities in the case of TIBCO BWCE; for on-premises deployments, two properties are offered to handle it, as follows:

# ------------------------------------------------------------------------------
# Section:  BW REST Swagger Configuration.  The properties in this section
# are applicable to the Swagger framework that is utilized by the BW REST
# Binding.
#
# Note: There are additional BW REST Swagger configuration properties that
# can be specified in the BW AppNode configuration file "config.ini".  Refer to
# the BW AppNode configuration file's section "BW REST Swagger configuration"
# for details.
# ------------------------------------------------------------------------------
# Swagger framework reverse proxy host name.  This property is optional and
# it specifies the reverse proxy host name on which Swagger framework serves
# the API's, documentation endpoint, api-docs, etc.
bw.rest.docApi.reverseProxy.hostName=localhost

# Swagger framework port.  This property is optional and it specifies the
# reverse proxy port on which Swagger framework serves the API's, documentation
# endpoint, api-docs, etc.
bw.rest.docApi.reverseProxy.port=0000

You can browse the official documentation page for more detailed information.

That solves the main issue regarding the hostname and the port to be reached by the final user. Still, there is one outstanding component of the URL that can cause trouble: the protocol, that is, whether the endpoint is exposed using HTTP or HTTPS.

How to Handle the Swagger URL when Offloading SSL?

Until the release of TIBCO BWCE 2.8.3, the protocol depended on the HTTP Connector configuration you used to expose the Swagger component. If you use an HTTP connector without SSL configuration, it will try to reach the endpoint using an HTTP connection; if you use an HTTP connector with SSL, it will try to use HTTPS. That seems fine, but some use cases can generate a problem:

SSL certificate offloaded at the load balancer: If we offload the SSL configuration to the load balancer, as is common in traditional on-premises deployments and in some Kubernetes configurations, the consumer establishes an HTTPS connection to the load balancer, but internally the communication with BWCE happens over HTTP. This generates a mismatch: on the second call, because the BWCE HTTP connector is not using HTTPS, SwaggerUI assumes the URL should be reached over HTTP, but that is not the case, as the communication goes through the load balancer that handles the security.

Service mesh service exposition: Similar to the previous case, but closer to the Kubernetes deployment. Suppose we are using a service mesh such as Istio. In that case, security is one of the things the mesh handles, so the situation is the same as in the scenario above: BWCE doesn’t know about the security configuration, yet that configuration impacts the default endpoint generated.

How To Enable SwaggerUI in TIBCO BusinessWorks when Offloading SSL Certificates?

Since BWCE 2.8.3, there is a new JVM property that forces the generated endpoint to be HTTPS even if the HTTP connector used by the BWCE application doesn’t have any security configuration, which solves this issue in the cases above and similar scenarios. The property can be added like any other JVM property using the BW_JAVA_OPTS environment variable, and the value is: bw.rest.enable.secure.swagger.url=true
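
As a minimal sketch of how this could be wired into a Kubernetes deployment of a BWCE application (the container name and image below are placeholders):

spec:
  containers:
  - name: my-bwce-app        # placeholder container name
    image: my-bwce-app:1.0   # placeholder image
    env:
    - name: BW_JAVA_OPTS
      value: "-Dbw.rest.enable.secure.swagger.url=true"  # forces HTTPS in the generated Swagger URL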

ReadOnlyRootFilesystem for TIBCO BWCE: Securing Containers with Kubernetes Best Practices

This article will cover how to enhance the security of your TIBCO BWCE images by creating a ReadOnlyRootFilesystem image for TIBCO BWCE. In previous articles, we have commented on the benefits this kind of image provides in terms of security, focusing on aspects such as reducing the attack surface by limiting what any user can do, even if they gain access to running containers.

This article is part of my comprehensive TIBCO Integration Platform Guide where you can find more patterns and best practices for TIBCO integration platforms.

The same applies to any malware that makes it into your image: without write access to most of the container filesystem, the actions it can take are limited.

How does ReadOnlyRootFilesystem affect a TIBCO BWCE image?

This has a clear impact, as the TIBCO BWCE image needs to write to several folders as part of the expected behavior of the application. That is mandatory, regardless of the scripts you used to build your image.

As you probably know, TIBCO BWCE ships two sets of scripts to build the Docker base image: the main ones and the ones included in the reducedStartupTime folder, as you can see on the GitHub page, but also inside your docker folder in the TIBCO-HOME after the installation, as you can see in the picture below.

(Figure: the two sets of BWCE Docker build scripts)

The main difference between them is where the unzip of the bwce-runtime is done. With the default scripts, the unzip happens during the startup of the image; with reducedStartupTime, it happens while building the image itself. So, you might assume that only the default scripts need write access, as they need to unzip the file inside the container, and that’s true.

But the reducedStartupTime scripts also require write access to run the application: several activities still take place, such as unzipping the EAR file, managing the properties file, and additional internal tasks. So, no matter which scripts you’re using, you must provide a writable folder for this activity.

By default, all these activities are limited to a single folder. If you keep the defaults, this is the /tmp folder, so you must provide a volume for that folder.

How to deploy a TIBCO BWCE application with a ReadOnlyRootFilesystem?

Now that it is clear you need a volume for the /tmp folder, you need to define the kind of volume you want to use. As you know, there are several kinds of volumes you can choose from, depending on your requirements.

In this case, the only requirement is write access; there is no need for storage or persistence, so we can use an emptyDir volume. The content of an emptyDir is erased when the pod is removed, which is similar to the default behavior, but it allows write access to its content.

To show what the YAML would look like, we will use the default one available in the documentation here:

apiVersion: v1
kind: Pod
metadata:
  name: bookstore-demo
  labels:
    app: bookstore-demo
spec:
  containers:
  - name: bookstore-demo
    image: bookstore-demo:2.4.4
    imagePullPolicy: Never
    envFrom:
    - configMapRef:
        name: name

So, we will change that to include the volume, as you can see here:

apiVersion: v1
kind: Pod
metadata:
  name: bookstore-demo
  labels:
    app: bookstore-demo
spec:
  containers:
  - name: bookstore-demo
    image: bookstore-demo:2.4.4
    imagePullPolicy: Never
    securityContext:
      readOnlyRootFilesystem: true
    envFrom:
    - configMapRef:
        name: name
    volumeMounts:
    - name: tmp
      mountPath: /tmp
  volumes:
  - name: tmp
    emptyDir: {}

The changes are the following:

  • Include the volumes section with a single volume definition named tmp with an emptyDir definition.
  • Include a volumeMounts section for the tmp volume, mounted on the /tmp path, to allow writing to that specific path. This enables the unzip of the bwce-runtime as well as all the additional activities that are required.
  • To trigger this behavior, include the readOnlyRootFilesystem flag in the securityContext section.

Conclusion

Incorporating a ReadOnlyRootFilesystem approach into your TIBCO BWCE images is a proactive strategy to fortify your application’s security posture. By curbing unnecessary write access and minimizing the potential avenues for unauthorized actions, you’re taking a vital step towards safeguarding your containerized environment.

This guide has unveiled the critical aspects of implementing such a security-enhancing measure, walking you through the process with clear instructions and practical examples. With a focus on reducing attack vectors and bolstering isolation, you can confidently deploy your TIBCO BWCE applications, knowing that you’ve fortified their runtime environment against potential threats.

ReadOnlyRootFilesystem Explained: Strengthening Container Security in Kubernetes

Introduction

In the rapidly evolving software development and deployment landscape, containers have emerged as a revolutionary technology. Offering portability, efficiency, and scalability, containers have become the go-to solution for packaging and delivering applications. However, with these benefits come specific security challenges that must be addressed to ensure the integrity of your containerized applications. One such important security feature is the use of ReadOnlyRootFilesystem, a powerful tool that can significantly enhance the security posture of your containers.

A ReadOnlyRootFilesystem is precisely what it sounds like: a filesystem that can only be read from, not written to. In containerization, the contents of a container’s filesystem are locked in a read-only state, preventing any modifications or alterations during runtime.

Advantages of Using ReadOnlyRootFilesystem

  • Reduced Attack Surface: One of the fundamental principles of cybersecurity is reducing the attack surface, the potential points of entry for malicious actors. Enforcing a ReadOnlyRootFilesystem eliminates the possibility of an attacker gaining write access to your container. This simple yet effective measure significantly limits their ability to inject malicious code, tamper with critical files, or install malware.
  • Immutable Infrastructure: Immutable infrastructure is a concept where components are never changed once deployed. This approach ensures consistency and repeatability, as any changes are made by deploying a new instance rather than modifying an existing one. By applying a ReadOnlyRootFilesystem, you’re essentially embracing the principles of immutable infrastructure within your containers, making them more resistant to unauthorized modifications.
  • Malware Mitigation: Malware often relies on gaining write access to a system to carry out its malicious activities. By employing a ReadOnlyRootFilesystem, you erect a significant barrier against malware attempting to establish persistence or exfiltrate sensitive data. Even if an attacker manages to compromise a container, their ability to install and execute malicious code is severely restricted.
  • Enhanced Forensics and Auditing: In the unfortunate event of a security breach, having a ReadOnlyRootFilesystem in place can assist in forensic analysis. Since the filesystem remains unaltered, investigators can more accurately trace the attack vector, determine the extent of the breach, and identify the vulnerable entry points.

Implementation Considerations

Implementing a ReadOnlyRootFilesystem in your containerized applications requires a deliberate approach:

  • Image Design: Build your container images with the ReadOnlyRootFilesystem concept in mind. Make sure to separate read-only and writable areas of the filesystem. This might involve creating volumes for writable data or using environment variables to customize runtime behavior.
  • Runtime Configuration: Containers often require write access for temporary files, logs, or other runtime necessities. Carefully design your application to use designated directories or volumes for these purposes while keeping the critical components read-only.
  • Testing and Validation: Thoroughly test your containerized application with the ReadOnlyRootFilesystem configuration to ensure it functions as intended. Pay attention to any runtime errors, permission issues, or unexpected behavior that may arise.

How to Define a Pod to be ReadOnlyRootFilesystem?

To define a Pod as ReadOnlyRootFilesystem, you set one of the flags that belong to the securityContext section of the container, as you can see in the sample below:

apiVersion: v1
kind: Pod
metadata:
  name: <Pod name>
spec:
  containers:
  - name: <container name>
    image: <image>
    securityContext:
      readOnlyRootFilesystem: true

Conclusion

As the adoption of containers continues to surge, so does the importance of robust security measures. Incorporating a ReadOnlyRootFilesystem into your container strategy is a proactive step towards safeguarding your applications and data. By reducing the attack surface, fortifying against malware, and enabling better forensics, you’re enhancing the overall security posture of your containerized environment.

As you embrace immutable infrastructure within your containers, you’ll be better prepared to face the ever-evolving landscape of cybersecurity threats. Remember, when it comes to container security, a ReadOnlyRootFilesystem can be the shield that protects your digital assets from potential harm.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Prevent Server Information Disclosure in Kubernetes with Istio Service Mesh

In today’s digital landscape, where data breaches and cyber threats are becoming increasingly sophisticated, ensuring the security of your servers is paramount. One of the critical security concerns that organizations must address is “Server Information Disclosure.” Server Information Disclosure occurs when sensitive information about a server’s configuration, technology stack, or internal structure is inadvertently exposed to unauthorized parties. Hackers can exploit this vulnerability to gain insights into potential weak points and launch targeted attacks. Such breaches can lead to data theft, service disruption, and reputation damage.

Information Disclosure and Istio Service Mesh

One example is the Server HTTP header, included in most HTTP responses, which identifies the server producing the response. The values vary depending on the stack, but values such as Jetty, Tomcat, or similar ones are commonly seen. If you are using a service mesh such as Istio, you will see the header with a value of istio-envoy, as you can see here:

(Figure: information disclosure of the server implementation when using Istio Service Mesh)

As mentioned, this matters for several levels of security, such as:

  • Data Privacy: Server information leakage can expose confidential data, undermining user trust and violating data privacy regulations such as GDPR and HIPAA.
  • Reduced Attack Surface: By concealing server details, you minimize the attack surface available to potential attackers.
  • Security by Obscurity: While not a foolproof approach, limiting disclosure adds an extra layer of security, making it harder for hackers to gather intelligence.

How to mitigate that with Istio Service Mesh?

When using Istio, we can define rules to add and remove HTTP headers based on our needs, as described in the documentation here: https://discuss.istio.io/t/remove-header-operation/1692. This is done with simple clauses in the definition of your VirtualService, as you can see here:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: k8snode-virtual-service
spec:
  hosts:
  - "example.com"
  gateways:
  - k8snode-gateway
  http:
  - headers:
      response:
        remove:
        - "x-my-fault-source"
    route:
    - destination:
        host: k8snode-service
        subset: version-1

Unfortunately, this is not possible for all HTTP headers, especially the “main” ones: not the custom headers added by your workloads, but the standard ones defined in the HTTP W3C specifications (https://www.w3.org/Protocols/).

The Server HTTP header is one of these, so removing it is a little more complex, and you need to use an EnvoyFilter, one of the most sophisticated objects in the Istio Service Mesh. In the words of the official Istio documentation, an EnvoyFilter provides a mechanism to customize the Envoy configuration generated by Istio Pilot. You can use an EnvoyFilter to modify values for certain fields, add specific filters, or even add entirely new listeners, clusters, and so on.

EnvoyFilter Implementation to Remove the Header

Now that we know we need to create a custom EnvoyFilter, let’s look at the one required to remove the Server header and at how it works, to gain more knowledge about this component. Here is the EnvoyFilter for that job:

---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: gateway-response-remove-headers
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: "envoy.filters.network.http_connection_manager"
    patch:
      operation: MERGE
      value:
        typed_config:
          "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
          server_header_transformation: PASS_THROUGH
  - applyTo: ROUTE_CONFIGURATION
    match:
      context: GATEWAY
    patch:
      operation: MERGE
      value:
        response_headers_to_remove:
        - "server"

Let’s focus on the parts of the EnvoyFilter specification. First, we have the usual workloadSelector, which determines where this component is applied; in this case, it is the Istio ingressgateway. Then we enter the configPatches section, which holds the customizations we need to make, and in our case there are two of them.

Both act on the context GATEWAY and apply to two different objects: NETWORK_FILTER and ROUTE_CONFIGURATION. (You can also apply filters to sidecars to affect their behavior.) The first patch configures the http_connection_manager network filter, which allows manipulation of the HTTP context, including, for our purposes, how the Server header is treated (server_header_transformation: PASS_THROUGH). The second patch acts on the ROUTE_CONFIGURATION, removing the server header via the response_headers_to_remove option.
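
Once the filter is applied, a quick way to verify the result is to inspect the response headers from outside the mesh. A sketch, assuming the manifest above is saved as gateway-response-remove-headers.yaml and your ingress gateway serves example.com:

kubectl apply -f gateway-response-remove-headers.yaml
# The 'server: istio-envoy' header should no longer appear in responses
curl -sI https://example.com/ | grep -i '^server' || echo "server header removed"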

Conclusion

As you can see, this is not easy to implement. Still, at the same time, it is evidence of the power and low-level capabilities that you have when using a robust service mesh such as Istio to interact with and modify the behavior of any tiny detail that you want for your benefit and, in this case, also to improve and increase the security of your workloads deployed behind the Service Mesh scope.

In the ever-evolving landscape of cybersecurity threats, safeguarding your servers against information disclosure is crucial to protect sensitive data and maintain your organization’s integrity. Istio empowers you to fortify your server security by providing robust tools for traffic management, encryption, and access control.

Remember, the key to adequate server security is a proactive approach that addresses vulnerabilities before they can be exploited. Take the initiative to implement Istio and elevate your server protection.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

              Istio Security Policies Explained: PeerAuthentication, RequestAuthentication, and AuthorizationPolicy

              Istio Security Policies are crucial in securing microservices within a service mesh environment. We have already discussed Istio and the capabilities it can bring to your Kubernetes workloads; today we’ll go into more detail about the objects and resources that help make those workloads much more secure and that enforce the communication between them: PeerAuthentication, RequestAuthentication, and AuthorizationPolicy.

              PeerAuthentication: Enforcing security on pod-to-pod communication

              PeerAuthentication focuses on securing communication between services by enforcing mutual TLS (Transport Layer Security) authentication. It enables administrators to define authentication policies for workloads based on the source of the requests, such as specific namespaces or service accounts. Configuring PeerAuthentication ensures that only authenticated and authorized services can communicate, preventing unauthorized access and man-in-the-middle attacks. The behavior depends on the mode you define on the object: STRICT allows only mTLS communication, PERMISSIVE accepts both mTLS and plain-text traffic, DISABLE forbids mTLS and keeps the traffic insecure, and UNSET inherits the mode from the parent (mesh-level or namespace-level) configuration. This is a sample definition of the object:

              apiVersion: security.istio.io/v1beta1
              kind: PeerAuthentication
              metadata:
                name: default
                namespace: foo
              spec:
                mtls:
                  mode: PERMISSIVE
              
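              Beyond the namespace-wide PERMISSIVE sample above, here is a minimal sketch of a workload-scoped STRICT policy with a port-level exception (the policy name, app label, and port number are hypothetical):

              apiVersion: security.istio.io/v1beta1
              kind: PeerAuthentication
              metadata:
                name: strict-with-port-exception
                namespace: foo
              spec:
                selector:
                  matchLabels:
                    app: my-app          # hypothetical workload label
                mtls:
                  mode: STRICT           # require mTLS on all ports...
                portLevelMtls:
                  8080:
                    mode: PERMISSIVE     # ...except this one, which accepts plain text too

              Note that Istio only honors portLevelMtls when a workload selector is present, as in this sketch.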

              RequestAuthentication: Defining authentication methods for Istio Workloads

              RequestAuthentication, on the other hand, provides fine-grained control over the authentication of inbound requests. It allows administrators to specify the rules and requirements that incoming requests must satisfy to be considered authenticated; in Istio this is expressed through JWT (JSON Web Token) validation rules. With RequestAuthentication, service owners can declare the authentication mechanisms for different workloads, ensuring that authenticated clients can be identified when accessing protected resources. Here you can see a sample of a RequestAuthentication object:

              apiVersion: security.istio.io/v1beta1
              kind: RequestAuthentication
              metadata:
                name: jwt-auth-policy
                namespace: my-namespace
              spec:
                selector:
                  matchLabels:
                    app: my-app
                jwtRules:
                  - issuer: "issuer.example.com"
                    jwksUri: "https://example.com/.well-known/jwks.json"
              

              As mentioned, JWT validation is the most common approach, as JWT tokens have become the de facto industry standard for validating incoming requests and underpin the OAuth 2.0 authorization protocol. Here you define the rules a JWT needs to meet for the request to be considered valid (an expanded sketch follows below). However, RequestAuthentication only describes the authentication methods supported by the workloads; it doesn’t enforce them or provide any details regarding authorization.
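
              For reference, here is a minimal sketch of a more detailed rule that constrains the accepted audiences and where the token is read from (the issuer, audience, and header settings are hypothetical):

              apiVersion: security.istio.io/v1beta1
              kind: RequestAuthentication
              metadata:
                name: jwt-auth-policy-detailed
                namespace: my-namespace
              spec:
                selector:
                  matchLabels:
                    app: my-app
                jwtRules:
                  - issuer: "issuer.example.com"
                    jwksUri: "https://example.com/.well-known/jwks.json"
                    audiences: ["my-api"]        # hypothetical audience the token must target
                    fromHeaders:                 # read the token from the Authorization header
                      - name: Authorization
                        prefix: "Bearer "
                    forwardOriginalToken: true   # keep the token on the forwarded request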

              That means that if you configure a workload to use JWT authentication, a request sent with a token will have that token validated: Istio checks that it is not expired and that it meets all the rules you have specified in the object definition. However, requests with no token at all will still be allowed through, because you are only defining what the workload supports, not enforcing it. To enforce it, we need to introduce the last object of this set, the AuthorizationPolicy object.
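
              As a quick preview, here is a minimal sketch of that enforcement (the names and labels are hypothetical): an AuthorizationPolicy can deny any request that does not carry a valid request principal, i.e., any request without a validated JWT:

              apiVersion: security.istio.io/v1beta1
              kind: AuthorizationPolicy
              metadata:
                name: require-jwt          # hypothetical name
                namespace: my-namespace
              spec:
                selector:
                  matchLabels:
                    app: my-app
                action: DENY
                rules:
                  - from:
                      - source:
                          notRequestPrincipals: ["*"]   # matches requests with no valid JWT principal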

              AuthorizationPolicy: Fine-grained Authorization Policy Definition for Istio Policies

              AuthorizationPolicy offers powerful access control capabilities to regulate traffic flow within the service mesh. It allows administrators to define rules and conditions based on attributes like source, destination, headers, and even request payload to determine whether a request should be allowed or denied. AuthorizationPolicy helps enforce fine-grained authorization rules, granting or denying access to specific resources or actions based on the defined policies. Only authorized clients with appropriate permissions can access specific endpoints or perform particular operations within the service mesh. Here you can see a sample of an Authorization Policy object:

              apiVersion: security.istio.io/v1beta1
              kind: AuthorizationPolicy
              metadata:
                name: rbac-policy
                namespace: my-namespace
              spec:
                selector:
                  matchLabels:
                    app: my-app
                rules:
                  - from:
                      - source:
                          principals: ["cluster.local/ns/my-namespace/sa/my-service-account"]
                    to:
                      - operation:
                          methods: ["GET"]
              

              Here you can go as detailed as you need: you can apply rules on the source of the request to ensure that only certain requests go through (for example, requests carrying a JWT token, used in combination with the RequestAuthentication object), but also rules on the target, such as whether the request is going to a specific host, path, or method, or a combination of them (see the sketch after the list below). You can also define ALLOW rules or DENY rules (or even CUSTOM ones) and combine several of them, and all of them will be enforced as a whole. The evaluation order is determined by the following rules, as stated in the official Istio documentation:

              • If there are any CUSTOM policies that match the request, evaluate and deny the request if the evaluation result is deny.
              • If there are any DENY policies that match the request, deny the request.
              • If there are no ALLOW policies for the workload, allow the request.
              • If any of the ALLOW policies match the request, allow the request.
              • Otherwise, deny the request.
              Authorization Policy validation flow from: https://istio.io/latest/docs/concepts/security/
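
              To make the DENY semantics concrete, here is a minimal sketch that blocks write operations on an administrative path (the policy name, namespace, label, and path are hypothetical). Per the evaluation order above, this DENY wins over any matching ALLOW policy:

              apiVersion: security.istio.io/v1beta1
              kind: AuthorizationPolicy
              metadata:
                name: deny-admin-writes
                namespace: my-namespace
              spec:
                selector:
                  matchLabels:
                    app: my-app
                action: DENY
                rules:
                  - to:
                      - operation:
                          paths: ["/admin/*"]           # hypothetical admin path prefix
                          methods: ["POST", "DELETE"]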

              Together, these objects cover everything you need to fully define the security policies your workloads require.

              Conclusion

              In conclusion, Istio’s Security Policies provide robust mechanisms for enhancing the security of microservices within a service mesh. The PeerAuthentication, RequestAuthentication, and AuthorizationPolicy objects offer a comprehensive toolkit for enforcing authentication and authorization controls, ensuring secure communication and access control across the mesh. By leveraging these policies, organizations can strengthen the security posture of their microservices, safeguarding sensitive data and preventing unauthorized access or malicious activity.