How To Enable SwaggerUI TIBCO BusinessWorks when Offloading SSL Certificate

SwaggerUI TIBCO BusinessWorks is one of the features available by default for any REST service developed with TIBCO BusinessWorks. As you probably know, SwaggerUI is just an HTML page with a graphical representation of the Swagger definition file (or OpenAPI specification, to be more accurate with the current version of the standard in use) that helps you understand the operations and capabilities exposed by the service and also provides an easy way to test it, as you can see in the picture below:

How To Enable SwaggerUI TIBCO BusinessWorks when Offloading SSL Certificate: SwaggerUI view from TIBCO BWCE app

This interface is provided out of the box for any REST service developed using TIBCO BusinessWorks: it is exposed on a separate port (7777 by default) for on-premises deployments, or on the /swagger endpoint in the case of TIBCO BusinessWorks Container Edition (BWCE).

How does SwaggerUI work to load the Swagger Specification?

SwaggerUI works in a particular way. When you reach the URL of the SwaggerUI, there is another URL, usually shown in a text field inside the web page, that holds the link to the JSON or YAML document storing the actual specification, as you can see in the picture below:

How To Enable SwaggerUI TIBCO BusinessWorks when Offloading SSL Certificate: SwaggerUI highlighting the two URLs loaded in the process

So you can think of this as a two-call process:

  • The first call loads the SwaggerUI as a graphical container.
  • Then, based on the internal URL provided there, a second call retrieves the specification document.
  • With that information, SwaggerUI renders it in its usual format.

An issue arises when the SwaggerUI is exposed behind a Load Balancer, because the second URL needs to use the advertised URL: the backend server is not reached directly by the client browsing the SwaggerUI. This is solved out of the box with Kubernetes capabilities in the case of TIBCO BWCE, and for on-premises deployments there are two properties to handle it, as follows:

# ------------------------------------------------------------------------------
# Section:  BW REST Swagger Configuration.  The properties in this section
# are applicable to the Swagger framework that is utilized by the BW REST 
# Binding.
#
# Note: There are additional BW REST Swagger configuration properties that
# can be specified in the BW AppNode configuration file "config.ini".  Refer to
# the BW AppNode configuration file's section "BW REST Swagger configuration" 
# for details. 
# ------------------------------------------------------------------------------
# Swagger framework reverse proxy host name.  This property is optional and 
# it specifies the reverse proxy host name on which Swagger framework serves 
# the API's, documentation  endpoint, api-docs, etc.. 
bw.rest.docApi.reverseProxy.hostName=localhost

# Swagger framework port.  This property is optional and it specifies the 
# reverse proxy port on which Swagger framework serves the API's, documentation
# endpoint, api-docs, etc.
bw.rest.docApi.reverseProxy.port=0000

You can browse the official documentation page for more detailed information.

That solves the main issue regarding the hostname and the port the final user needs to reach. Still, there is one outstanding component of the URL that could generate an issue, and that’s the protocol: in a nutshell, whether this is exposed using HTTP or HTTPS.

How to Handle Swagger URL when offloading SSL?

Until the release of TIBCO BWCE 2.8.3, the protocol depended on the HTTP Connector configuration you used to expose the Swagger component. So, if you use an HTTP Connector without SSL configuration, it will try to reach the endpoint using an HTTP connection; if you use an HTTP Connector with an SSL configuration, it will try to use an HTTPS connection. That seems fine, but some use cases can generate a problem:

SSL Certificate offloaded in the Load Balancer: If we offload the SSL configuration to the Load Balancer, as is common in traditional on-premises deployments and some Kubernetes configurations, the consumer establishes an HTTPS connection to the Load Balancer, but internally the communication with BWCE is done using HTTP. This generates a mismatch: on the second call, because the HTTP Connector in BWCE is not using HTTPS, SwaggerUI guesses that the URL should be reached using HTTP. But that’s not the case, as the communication goes through the Load Balancer that handles the security.

Service Mesh Service Exposition: Similar to the previous case, but closer to a Kubernetes deployment. Suppose we are using a Service Mesh such as Istio or others; in that case, security is one of the things the mesh handles. The situation is the same as in the scenario above: BWCE doesn’t know about the security configuration, yet that configuration affects the default endpoint generated.

How To Enable SwaggerUI TIBCO BusinessWorks when Offloading SSL Certificates?

Since BWCE 2.8.3, there is a new JVM property that we can use to force the generated endpoint to be HTTPS, even if the HTTP Connector used by the BWCE application doesn’t have any security configuration. That solves this issue in the cases above and similar scenarios. The property can be added like any other JVM property using the BW_JAVA_OPTS environment variable, and the value is this: bw.rest.enable.secure.swagger.url=true
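As a hedged sketch of how that could look in a Kubernetes deployment (the Deployment name and image are hypothetical; only the BW_JAVA_OPTS variable and the property itself come from the documentation above, passed with the usual -D JVM convention):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: bookstore-demo
spec:
  selector:
    matchLabels:
      app: bookstore-demo
  template:
    metadata:
      labels:
        app: bookstore-demo
    spec:
      containers:
      - name: bookstore-demo
        image: bookstore-demo:2.4.4
        env:
        # Pass the JVM property through BW_JAVA_OPTS so generated Swagger URLs use HTTPS
        - name: BW_JAVA_OPTS
          value: "-Dbw.rest.enable.secure.swagger.url=true"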

Increasing HTTP Logs in TIBCO BusinessWorks in 5 Minutes

Increasing the HTTP logs in TIBCO BusinessWorks is one of the most used and helpful things you can do when debugging or troubleshooting an HTTP-based integration, whether it is related to a REST or a SOAP service.

The primary purpose of increasing the HTTP logs is to get complete knowledge about what information you are sending and what you are receiving from the other party, to help you understand an error or unexpected behavior.

What are the primary use cases for increasing the HTTP logs?

In the end, all the different use cases are variations of the primary one: “Get full knowledge about the HTTP exchange between both parties.” Still, some more specific ones are listed below:

  • Understand why a backend server is rejecting a call, which could be related to Authentication or Authorization, when you need to see the detailed response from the backend server.
  • Verify the value of each HTTP header you are sending that could affect the communication, such as compression or the accepted content type.
  • See why you’re rejecting a call from a consumer.

Splitting the communication based on the source

The most important thing to understand is that the loggers depend on the library you are using, and the library used to expose an HTTP-based server is not the same as the library used to consume an HTTP-based service such as a REST or SOAP service.

Starting with what you expose, this is the easiest part because it is defined by the HTTP Connector resources you’re using, as you can see in the picture below:

HTTP Shared Resources Connector in BW

All HTTP Connector resources that you can use to expose REST and SOAP services are based on the Jetty server implementation, which means that the loggers whose configuration you need to change are related to the Jetty server itself.

More complex, in theory, are the ones related to client communication, when our TIBCO BusinessWorks application consumes an HTTP-based service provided by a backend, because each of these communications has its own HTTP Client Shared Resource. The configuration of each of them can be different, because one of the settings we can choose here is the Implementation Library, and that has a direct effect on how to change the log configuration:

HTTP Client Resource in BW showing the different implementation libraries that determine the logger for increasing HTTP logs in TIBCO BusinessWorks

You have three options when you define an HTTP Client Resource, as you can see in the picture above:

  • Apache HttpComponents: The default one; it supports HTTP1, SOAP and REST services.
  • Jetty HTTP Client: This client only supports HTTP flows such as HTTP1 and HTTP2, and it is the primary option when you’re working with HTTP2 flows.
  • Apache Commons: Similar to the first one, but currently deprecated; to be honest, if you have some client component using this configuration, you should change it to Apache HttpComponents when you can.

So, if we’re consuming a SOAP or REST service, it is clear that we will be using the Apache HttpComponents implementation library, and that tells us which logger we need to use.

For Apache HttpComponents, we can rely on the following logger: “org.apache.http”. In case we want to extend the server side, or we’re using the Jetty HTTP Client, we can use this one: “org.eclipse.jetty.http”.

We need to be aware that we cannot enable it for just a single HTTP Client resource, because the configuration is based on the Implementation Library. So if we set the DEBUG level for the Apache HttpComponents library, it will affect all Shared Resources using this Implementation Library, and you’ll need to differentiate based on the data inside the log as part of your analysis.

How to set HTTP Logs in TIBCO BusinessWorks?

Now that we have the loggers, we must set them to a DEBUG (or TRACE) level. We need to know how to do it, and we have several options depending on how we would like to do it and what access we have. The scope of this article is TIBCO BusinessWorks Container Edition, but you can easily extrapolate part of this knowledge to an on-premises TIBCO BusinessWorks installation.

TIBCO BusinessWorks (container or not) relies on the Logback library for its logging capabilities, and this library is configured using a file named logback.xml that holds the configuration you need, as you can see in the picture below:

logback.xml configuration with the default structure in TIBCO BW

So if we want to add a new logging configuration, we need to add a new logger element to the file with the following structure:

  <logger name="%LOGGER_WE_WANT_TO_SEE%">
    <level value="%LEVEL_WE_WANT_TO_SEE%"/>
  </logger>

The logger name is the one we identified in the previous section, and the level will depend on how much information you want to see. The log levels are the following: ERROR, WARN, INFO, DEBUG, TRACE, where DEBUG and TRACE are the ones that show more information.

In our case, DEBUG should be enough to get the full HTTP request and HTTP response, but you can also apply the same approach to other loggers where you could need a different log level.
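Putting both pieces together, the entry for the Apache HttpComponents logger could look like this (a sketch following the structure above):

  <!-- Enable full wire logging for all HTTP Client resources based on Apache HttpComponents -->
  <logger name="org.apache.http">
    <level value="DEBUG"/>
  </logger>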

Now you need to add that to the logback.xml file, and to do that, you have several options, as commented:

  • You can find the logback.xml inside the BWCE container (or the AppNode configuration folder) and modify its content. The default location of this file is /tmp/tibco.home/bwce/<VERSION>/config/logback.xml. To do this, you need access to run a kubectl exec on the BWCE container, and a change made this way is temporary and lost on the next restart. That could be good or bad, depending on your goal.
  • If you want it to be permanent, or you don’t have access to the container, you have two options. The first one is to include a custom copy of the logback.xml in the /resources/custom-logback/ folder of the BWCE base image and set the environment variable CUSTOM_LOGBACK to TRUE; that will override the default logback.xml configuration with the content of this file. As commented, this will be “permanent” and will apply from the first deployment of the app with this configuration. You can find more info in the official doc here.
  • There is also an additional option, available since BWCE 2.7.0, that allows you to change the logback.xml content without a new copy or a change to the base image: the environment property BW_LOGGER_OVERRIDES, with content in the form logger=value. In our case it would be something like org.apache.http=DEBUG, and on the next deployment you will get this configuration, as shown in the sketch below. Similar to the previous option, this is permanent but doesn’t require adding a file to the base image.
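As a sketch of that last option in a Kubernetes pod spec (the container name and image are hypothetical; the variable and its logger=value format come from the documentation above):

spec:
  containers:
  - name: bwce-app
    image: bwce-app:1.0.0
    env:
    # Override the Apache HttpComponents logger to DEBUG on the next deployment
    - name: BW_LOGGER_OVERRIDES
      value: "org.apache.http=DEBUG"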

So, as you can see, you have different options depending on your needs and access levels.

Conclusion

In conclusion, enhancing HTTP logs within TIBCO BusinessWorks during debugging and troubleshooting is a vital strategy. Elevating log levels provides a comprehensive grasp of information exchange, aiding in analyzing errors and unexpected behaviors. Whether discerning backend rejection causes, scrutinizing HTTP header effects, or isolating consumer call rejections, amplified logs illuminate complex integration scenarios. Adaptations vary based on library usage, encompassing server exposure and service consumption. Configuration through the logback library involves tailored logger and level adjustments. This practice empowers developers to unravel integration intricacies efficiently, ensuring robust and seamless HTTP-based interactions across systems.

How To Create a ReadOnlyFileSystem Image for TIBCO BWCE

This article will cover how to enhance the security of your TIBCO BWCE images by creating a ReadOnlyFileSystem image for TIBCO BWCE. In previous articles, we have commented on the benefits this kind of image provides in terms of security, focusing on aspects such as reducing the attack surface by limiting what any user can do, even if they gain access to running containers.

The same applies to any malware your image may contain: without write access to most of the container, the actions it can perform are limited.

How does ReadOnlyFileSystem affect a TIBCO BWCE image?

This has a clear impact, as the TIBCO BWCE image needs to write to several folders as part of the expected behavior of the application. That’s mandatory, regardless of the scripts you used to build your image.

As you probably know, TIBCO BWCE ships two sets of scripts to build the Docker base image: the main ones and the ones included in the reducedStartupTime folder, which you can find on the GitHub page but also inside the docker folder in the TIBCO-HOME after the installation, as you can see in the picture below.

The main difference between them is where the bwce-runtime is unzipped. With the default scripts, the unzip is done in the startup process of the image; with reducedStartupTime, it is done while building the image itself. So you might start thinking that the default scripts need write access, as they need to unzip the file inside the container, and that’s true.

But the reducedStartupTime scripts also require write access to run the application: several activities still take place, such as unzipping the EAR file, managing the properties files, and additional internal tasks. So, no matter which scripts you’re using, you must provide a write-access folder for these activities.

By default, all these activities are limited to a single folder. If you keep the defaults, this is the /tmp folder, so you must provide a volume for that folder.

How to deploy a TIBCO BWCE application with the ReadOnlyRootFilesystem flag?

Now that it is clear you need a volume for the /tmp folder, you need to define the kind of volume you want to use. As you know, there are several kinds of volumes you can choose from, depending on your requirements.

In this case, the only requirement is write access; there is no need for storage or persistence, so we can use an emptyDir volume. emptyDir content is erased when a pod is removed, which is similar to the default behavior, but it allows write permission on its content.

To show what the YAML would look like, we will use the default one available in the documentation here:

apiVersion: v1
kind: Pod
metadata:
  name: bookstore-demo
  labels:
    app: bookstore-demo
spec:
  containers:
  - name: bookstore-demo
    image: bookstore-demo:2.4.4
    imagePullPolicy: Never
    envFrom:
    - configMapRef:
        name: name

So, we will change that to include the volume, as you can see here:

apiVersion: v1
kind: Pod
metadata:
  name: bookstore-demo
  labels:
    app: bookstore-demo
spec:
  containers:
  - name: bookstore-demo
    image: bookstore-demo:2.4.4
    imagePullPolicy: Never
    securityContext:
      readOnlyRootFilesystem: true
    envFrom:
    - configMapRef:
        name: name
    volumeMounts:
    - name: tmp
      mountPath: /tmp
  volumes:
  - name: tmp
    emptyDir: {}

The changes are the following:

  • Include the volumes section with a single volume definition named tmp with an emptyDir definition.
  • Include a volumeMounts section for the tmp volume, mounted on the /tmp path, to allow writing on that specific path: this enables the unzip of the bwce-runtime as well as all the additional activities that are required.
  • To trigger this behavior, include the readOnlyRootFilesystem flag in the securityContext section.
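To verify the behavior, a quick hedged check (assuming the manifest above is saved as bookstore-demo.yaml): any write outside /tmp should now be rejected.

kubectl apply -f bookstore-demo.yaml
kubectl exec bookstore-demo -- touch /test.txt       # fails: Read-only file system
kubectl exec bookstore-demo -- touch /tmp/test.txt   # succeeds: /tmp is backed by the emptyDir volume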

Conclusion

Incorporating a ReadOnlyFileSystem approach into your TIBCO BWCE images is a proactive strategy to fortify your application’s security posture. By curbing unnecessary write access and minimizing the potential avenues for unauthorized actions, you’re taking a vital step towards safeguarding your containerized environment.

This guide has unveiled the critical aspects of implementing such a security-enhancing measure, walking you through the process with clear instructions and practical examples. With a focus on reducing attack vectors and bolstering isolation, you can confidently deploy your TIBCO BWCE applications, knowing that you’ve fortified their runtime environment against potential threats.

Enhancing Container Security: The Vital Role of ReadOnlyRootFilesystem

Introduction

In the rapidly evolving software development and deployment landscape, containers have emerged as a revolutionary technology. Offering portability, efficiency, and scalability, containers have become the go-to solution for packaging and delivering applications. However, with these benefits come specific security challenges that must be addressed to ensure the integrity of your containerized applications. One such important security feature is the use of ReadOnlyRootFilesystem, a powerful tool that can significantly enhance the security posture of your containers.

A ReadOnlyRootFilesystem is precisely what it sounds like: a filesystem that can only be read from, not written to. In containerization, the contents of a container’s filesystem are locked in a read-only state, preventing any modifications or alterations during runtime.

Advantages of Using ReadOnlyRootFilesystem

  • Reduced Attack Surface: One of the fundamental principles of cybersecurity is reducing the attack surface – the potential points of entry for malicious actors. Enforcing a ReadOnlyRootFilesystem eliminates the possibility of an attacker gaining write access to your container. This simple yet effective measure significantly limits their ability to inject malicious code, tamper with critical files, or install malware.
  • Immutable Infrastructure: Immutable infrastructure is a concept where components are never changed once deployed. This approach ensures consistency and repeatability, as any changes are made by deploying a new instance rather than modifying an existing one. By applying a ReadOnlyRootFilesystem, you’re essentially embracing the principles of immutable infrastructure within your containers, making them more resistant to unauthorized modifications.
  • Malware Mitigation: Malware often relies on gaining write access to a system to carry out its malicious activities. By employing a ReadOnlyRootFilesystem, you erect a significant barrier against malware attempting to establish persistence or exfiltrate sensitive data. Even if an attacker manages to compromise a container, their ability to install and execute malicious code is severely restricted.
  • Enhanced Forensics and Auditing: In the unfortunate event of a security breach, having a ReadOnlyRootFilesystem in place can assist in forensic analysis. Since the filesystem remains unaltered, investigators can more accurately trace the attack vector, determine the extent of the breach, and identify the vulnerable entry points.

Implementation Considerations

Implementing a ReadOnlyRootFilesystem in your containerized applications requires a deliberate approach:

  • Image Design: Build your container images with the ReadOnlyRootFilesystem concept in mind. Make sure to separate read-only and writable areas of the filesystem. This might involve creating volumes for writable data or using environment variables to customize runtime behavior.
  • Runtime Configuration: Containers often require write access for temporary files, logs, or other runtime necessities. Carefully design your application to use designated directories or volumes for these purposes while keeping the critical components read-only.
  • Testing and Validation: Thoroughly test your containerized application with the ReadOnlyRootFilesystem configuration to ensure it functions as intended. Pay attention to any runtime errors, permission issues, or unexpected behavior that may arise.

How to Define a Pod to be ReadOnlyRootFilesystem?

To define a Pod as “ReadOnlyRootFilesystem,” you use the flag of the same name, which belongs to the securityContext section of the pod, as you can see in the sample below:

apiVersion: v1
kind: Pod
metadata:
  name: <Pod name>
spec:
  containers:
  - name: <container name>
    image: <image>
    securityContext:
      readOnlyRootFilesystem: true

Conclusion

As the adoption of containers continues to surge, so does the importance of robust security measures. Incorporating a ReadOnlyRootFilesystem into your container strategy is a proactive step towards safeguarding your applications and data. By reducing the attack surface, fortifying against malware, and enabling better forensics, you’re enhancing the overall security posture of your containerized environment.

As you embrace immutable infrastructure within your containers, you’ll be better prepared to face the ever-evolving landscape of cybersecurity threats. Remember, when it comes to container security, a ReadOnlyRootFilesystem can be the shield that protects your digital assets from potential harm.

Exploring Ephemeral Containers in Kubernetes: Unveiling a Powerful Debugging Tool

In the dynamic and ever-evolving world of container orchestration, Kubernetes continues to reign as the ultimate choice for managing, deploying, and scaling containerized applications. As Kubernetes evolves, so do its features and capabilities, and one such fascinating addition is the concept of Ephemeral Containers. In this blog post, we will delve into the world of Ephemeral Containers, understanding what they are, exploring their primary use cases, and learning how to implement them, all with guidance from the Kubernetes official documentation.

What Are Ephemeral Containers?

Ephemeral Containers, introduced as an alpha feature in Kubernetes 1.16 and reaching stable status in Kubernetes 1.25, offer a powerful toolset for debugging, diagnosing, and troubleshooting issues within your Kubernetes pods without requiring you to alter your pod’s original configuration. Unlike regular containers that are part of the pod’s main definition, ephemeral containers are dynamically added to a running pod for a short-lived duration, providing you with a temporary environment to execute diagnostic tasks.

The good thing about ephemeral containers is that they allow you to have all the required tools to do the job (debugging, data recovery, or anything else that could be required) without adding more tools to the base containers and increasing the security risk through that addition.

Main Use-Cases of Ephemeral Containers

  • Troubleshooting and Debugging: Ephemeral Containers shine brightest when troubleshooting and debugging. They allow you to inject a new container into a problematic pod to gather logs, examine files, run commands, or even install diagnostic tools on the fly. This is particularly valuable when encountering issues that are difficult to reproduce or diagnose in a static environment.
  • Log Collection and Analysis: When a pod encounters issues, inspecting its logs is often essential. Ephemeral Containers make this process seamless by enabling you to spin up a temporary container with log analysis tools, giving you instant access to log files and aiding in identifying the root cause of problems.
  • Data Recovery and Repair: Ephemeral Containers can also be used for data recovery and repair scenarios. Imagine a situation where a database pod faces corruption. With an ephemeral container, you can mount the pod’s storage volume, perform data recovery operations, and potentially repair the data without compromising the running pod.
  • Resource Monitoring and Analysis: Performance bottlenecks or resource constraints can sometimes affect a pod’s functionality. Ephemeral Containers allow you to analyze resource utilization, run diagnostics, and profile the pod’s environment, helping you optimize its performance.

Implementing Ephemeral Containers

Thanks to Kubernetes’ user-friendly approach, implementing ephemeral containers is straightforward. Kubernetes provides the kubectl debug command, which streamlines the process of attaching ephemeral containers to pods. This command allows you to specify the pod and namespace and even choose the debugging container image to be injected.

kubectl debug <pod-name> -n <namespace> --image=<debug-container-image>

You can go even further: instead of adding the ephemeral container to the running pod, you can do the same to a copy of the pod, as you can see in the following command:

kubectl debug myapp -it --image=ubuntu --share-processes --copy-to=myapp-debug

Finally, once you have finished your work, you can permanently remove the debug copy using a kubectl delete command, and that’s it.
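As a sketch, reusing the myapp-debug copy created in the previous command:

kubectl delete pod myapp-debug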

It’s essential to note that all these actions require direct access to the environment, and even though temporary, they generate a mismatch with the “infrastructure-as-code” deployment, as we’re manipulating the runtime state. Hence, this approach is much more challenging to fit in if you use GitOps practices or tools such as Rancher Fleet or ArgoCD.

Conclusion

Ephemeral Containers, a stable feature since the Kubernetes 1.25 release, offer impressive capabilities for debugging and diagnosing issues within your Kubernetes pods. By allowing you to inject temporary containers into running pods dynamically, they empower you to troubleshoot problems, collect logs, recover data, and optimize performance without disrupting your application’s core functionality. As Kubernetes continues to evolve, adding features like Ephemeral Containers demonstrates its commitment to providing developers with tools to simplify the management and maintenance of containerized applications. So, the next time you encounter a stubborn issue within your Kubernetes environment, remember that Ephemeral Containers might be the debugging superhero you need!

For more detailed information and usage examples, check out the Kubernetes official documentation on Ephemeral Containers. Happy debugging!

Safeguarding Your Servers: Preventing Information Disclosure with Istio Service Mesh

In today’s digital landscape, where data breaches and cyber threats are becoming increasingly sophisticated, ensuring the security of your servers is paramount. One of the critical security concerns that organizations must address is “Server Information Disclosure.” Server Information Disclosure occurs when sensitive information about a server’s configuration, technology stack, or internal structure is inadvertently exposed to unauthorized parties. Hackers can exploit this vulnerability to gain insights into potential weak points and launch targeted attacks. Such breaches can lead to data theft, service disruption, and reputation damage.

Information Disclosure and Istio Service Mesh

One example is the Server HTTP header, usually included in most HTTP responses, which identifies the server providing the response. The values vary depending on the stack, but values such as Jetty, Tomcat, or similar ones are usually seen. Also, if you are using a Service Mesh such as Istio, you will see the header with a value of istio-envoy, as you can see here:

Information Disclosure of Server Implementation using Istio Service mesh

As commented, this matters at several levels of security, such as:

  • Data Privacy: Server information leakage can expose confidential data, undermining user trust and violating data privacy regulations such as GDPR and HIPAA.
  • Reduced Attack Surface: By concealing server details, you minimize the attack surface available to potential attackers.
  • Security by Obscurity: While not a foolproof approach, limiting disclosure adds an extra layer of security, making it harder for hackers to gather intelligence.

How to mitigate that with Istio Service Mesh?

When using Istio, we can define different rules to add and remove HTTP headers based on our needs, as you can see in the following documentation: https://discuss.istio.io/t/remove-header-operation/1692, using simple clauses in the definition of your VirtualService, as you can see here:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: k8snode-virtual-service
spec:
  hosts:
  - "example.com"
  gateways:
  - k8snode-gateway
  http:
  - headers:
      response:
        remove:
        - "x-my-fault-source"
    route:
    - destination:
        host: k8snode-service
        subset: version-1

Unfortunately, this is not valid for all HTTP headers, especially the “main” ones: not the custom headers added by your workloads, but the commonly used ones defined in the HTTP standard (https://www.w3.org/Protocols/).

So, the case of the Server HTTP header is a little more complex, and you need to use an EnvoyFilter, one of the most sophisticated objects in the Istio Service Mesh. In the words of the official Istio documentation, an EnvoyFilter provides a mechanism to customize the Envoy configuration generated by Istio Pilot. You can use an EnvoyFilter to modify values for certain fields, add specific filters, or even add entirely new listeners, clusters, etc.

EnvoyFilter Implementation to Remove Header

Now that we know we need to create a custom EnvoyFilter, let’s see which one we need to remove the Server header and how it is built, to get more knowledge about this component. Here you can see the EnvoyFilter for that job:

---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: gateway-response-remove-headers
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: "envoy.filters.network.http_connection_manager"
    patch:
      operation: MERGE
      value:
        typed_config:
          "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
          server_header_transformation: PASS_THROUGH
  - applyTo: ROUTE_CONFIGURATION
    match:
      context: GATEWAY
    patch:
      operation: MERGE
      value:
        response_headers_to_remove:
        - "server"

So let’s focus on the parts of the EnvoyFilter specification. On one side we have the usual workloadSelector, which defines where this component will be applied: in this case, the istio ingressgateway. Then we enter the configPatches section, which holds the customizations we need to apply, and in our case we have two of them.

Both act on the context GATEWAY and apply to two different objects: NETWORK_FILTER and ROUTE_CONFIGURATION. (You can also apply filters on sidecars to affect their behavior.) The first patch merges configuration into the http_connection_manager filter, which allows the manipulation of the HTTP context, setting server_header_transformation to PASS_THROUGH so Envoy does not stamp its own value on the Server header. The second patch acts on the ROUTE_CONFIGURATION, removing the server header by using the response_headers_to_remove option.

Conclusion

As you can see, this is not easy to implement. Still, at the same time, it is evidence of the power and low-level capabilities that you have when using a robust service mesh such as Istio to interact and modify the behavior of any tiny detail that you want for your benefit and, in this case, also to improve and increase the security of your workloads deployed behind the Service Mesh scope.

In the ever-evolving landscape of cybersecurity threats, safeguarding your servers against information disclosure is crucial to protect sensitive data and maintain your organization’s integrity. Istio empowers you to fortify your server security by providing robust tools for traffic management, encryption, and access control.

Remember, the key to adequate server security is a proactive approach that addresses vulnerabilities before they can be exploited. Take the initiative to implement Istio and elevate your server protection.

Enhancing Service Mesh DNS Resolution with Istio’s Proxy DNS Capability: Benefits and Use-Cases

Istio is a popular open-source service mesh that provides a range of powerful features for managing and securing microservices-based architectures. We have talked a lot about its capabilities and components, but today we will talk about how we can use Istio to help with the DNS resolution mechanism.

As you already know, in a typical Istio deployment, each service is accompanied by a sidecar proxy, Envoy, which intercepts and manages the traffic between services. The Proxy DNS capability of Istio leverages this proxy to handle DNS resolution requests more intelligently and efficiently.

Traditionally, when a service within a microservices architecture needs to communicate with another service, it relies on DNS resolution to discover the IP address of the target service. However, traditional DNS resolution can be challenging to manage in complex and dynamic environments, such as those found in Kubernetes clusters. This is where the Proxy DNS capability of Istio comes into play.

Istio Proxy DNS Capabilities

With Proxy DNS, Istio intercepts and controls DNS resolution requests from services and performs the resolution on their behalf. Instead of relying on external DNS servers, the sidecar proxies handle the DNS resolution within the service mesh. This enables Istio to provide several valuable benefits:

  • Service discovery and load balancing: Istio’s Proxy DNS allows for more advanced service discovery mechanisms. It can dynamically discover services and their corresponding IP addresses within the mesh and perform load balancing across instances of a particular service. This eliminates the need for individual services to manage DNS resolution and load balancing.
  • Security and observability: Istio gains visibility into the traffic between services by handling DNS resolution within the mesh. It can apply security policies, such as access control and traffic encryption, at the DNS level. Additionally, Istio can collect DNS-related telemetry data for monitoring and observability, providing insights into service-to-service communication patterns.
  • Traffic management and control: Proxy DNS enables Istio to implement advanced traffic management features, such as routing rules and fault injection, at the DNS resolution level. This allows for sophisticated traffic control mechanisms within the service mesh, enabling A/B testing, canary deployments, circuit breaking, and other traffic management strategies.

Istio Proxy DNS Use-Cases

There are moments when you cannot, or don’t want to, rely on normal DNS resolution. Why is that? To start with, DNS is a great protocol, but it lacks some capabilities, such as location awareness. If you have the same DNS name assigned to three IPs, it will return each of them in a round-robin fashion, and you cannot rely on the location.

Or you have several IPs and want to block some of them for a specific service; these are the kinds of things you can do with Istio Proxy DNS.

Istio Proxy DNS Enablement

You need to know that Istio Proxy DNS capabilities are not enabled by default, so you must enable them if you want to use them. The good thing is that you can do that at different levels, from the full mesh level down to a single pod, so you can choose what is best for each case.

For example, if we want to enable it at the pod level, we need to inject the following configuration into the Istio proxy through a pod annotation:

proxy.istio.io/config: |
  proxyMetadata:
    # Enable basic DNS proxying
    ISTIO_META_DNS_CAPTURE: "true"
    # Enable automatic address allocation, optional
    ISTIO_META_DNS_AUTO_ALLOCATE: "true"

The same configuration can be applied at the mesh level as part of the operator installation, as documented on the Istio official page.
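As a sketch of that mesh-wide option, assuming you install Istio through the IstioOperator resource, the same proxyMetadata keys go under meshConfig.defaultConfig:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        # Same flags as the pod-level annotation, applied to every proxy in the mesh
        ISTIO_META_DNS_CAPTURE: "true"
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"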

Conclusion

In summary, the Proxy DNS capability of Istio enhances the DNS resolution mechanism within the service mesh environment, providing advanced service discovery, load balancing, security, observability, and traffic management features. Istio centralizes and controls DNS resolution by leveraging the sidecar proxies, simplifying the management and optimization of service-to-service communication in complex microservices architectures.

Unlocking Flexibility and Reusability: Harnessing the Power of Helm Multiple Instances Subcharts

Using multiple instances of the same subchart as part of your main chart is something that, at first, can sound strange. We have already commented on Helm sub-charts and dependencies on this blog, because the usual use case goes like this:

I have a chart that needs another component, and I “import” it as a sub-chart, which gives me the possibility to deploy that component and customize its values without needing to create another copy of the chart, which, as you can imagine, simplifies a lot of the chart management.

So, I think that’s totally clear, but what are we talking about now? The use case is having the same sub-chart defined twice. So, imagine this scenario: instead of this:

# Chart.yaml
dependencies:
- name: nginx
  version: "1.2.3"
  repository: "https://example.com/charts"
- name: memcached
  version: "3.2.1"
  repository: "https://another.example.com/charts"

We have something like this:

# Chart.yaml
dependencies:
- name: nginx
  version: "1.2.3"
  repository: "https://example.com/charts"
# Same chart imported twice, distinguished by alias
- name: memcached
  alias: memcached-copy1
  version: "3.2.1"
  repository: "https://another.example.com/charts"
- name: memcached
  alias: memcached-copy2
  version: "3.2.1"
  repository: "https://another.example.com/charts"

So we have the option to define more than one “instance” of the same sub-chart. And I guess, at this moment, you may start asking yourself: “What are the use cases where I could need this?”

That’s quite understandable: unless you need it, you never think about it. The same happened to me. So let’s talk a bit about possible use cases.

Use-Cases For Multi Instance Helm Dependency

Imagine that you’re deploying a helm chart for a set of microservices that belong to the scope of the same application, and each of the microservices has the same technology base; that can be TIBCO BusinessWorks Container Edition or Golang microservices. All of them share the same base, so they can use the same “bwce-microservice” or “golang-microservice” chart, but each of them has its own configuration, for example:

  • Each of them will have its own image name that would differ from one to the other.
  • Each of them will have its own configuration that will differ from one to the other.
  • Each of them will have its own endpoints that will differ and probably even connecting to different sources such as databases or external systems.

So, this approach helps us reuse the same technology helm chart, “bwce”, and instantiate it several times, so each instance can have its own configuration without the need to create something “custom”, keeping the same maintainability benefits that the helm dependency approach provides.

How can we implement this?

Now that we have a clear use case to support, the next step is how to make this a reality. And, to be honest, it is much simpler than you might think at first. Let’s start with the normal situation, where we have a main chart, let’s call it “program,” that includes a “bwce” template as a dependency, as you can see here:

name: multi-bwce
description: Helm Chart to Deploy a TIBCO BusinessWorks Container Edition Application
apiVersion: v1
version: 0.2.0
icon: 
appVersion: 2.7.2

dependencies:
- name: bwce
  version: ~1.0.0
  repository: "file:///Users/avazquez/Data/Projects/DET/helm-charts/bwce"

And now, we are going to move to a multi-instance approach where we require two different microservices, let’s call them serviceA and serviceB, and both of them will use the same bwce helm chart.

So the first thing we will modify is the Chart.yaml as follows:

name: multi-bwce
description: Helm Chart to Deploy a TIBCO BusinessWorks Container Edition Application
apiVersion: v1
version: 0.2.0
icon: 
appVersion: 2.7.2

dependencies:
- name: bwce
  alias: serviceA
  version: ~0.2.0
  repository: "file:///Users/avazquez/Data/Projects/DET/helm-charts/bwce"
- name: bwce
  alias: serviceB
  version: ~0.2.0
  repository: "file:///Users/avazquez/Data/Projects/DET/helm-charts/bwce"

The important part here is how we declare the dependency. As you can see, we still keep the same name, but each entry has an additional field named alias, and this alias is what will later help us identify the properties for each of the instances as required. With that, we already have our two instance definitions, serviceA and serviceB, and we can start using them in the values.yaml as follows:

# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

serviceA:
  image: 
    imageName: 552846087011.dkr.ecr.eu-west-2.amazonaws.com/tibco/serviceA:2.5.2
    pullPolicy: Always
serviceB:  
  image: 
    imageName: 552846087011.dkr.ecr.eu-west-2.amazonaws.com/tibco/serviceB:2.5.2
    pullPolicy: Always
  
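From here, the deployment flow is the usual one; a hedged sketch, assuming the main chart lives in a local ./multi-bwce folder:

helm dependency update ./multi-bwce
helm install multi-bwce ./multi-bwce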

Conclusion

The main benefit of this is that it expands the options for using a helm chart for “complex” applications that require different instances of the same kind of component at the same time.

That doesn’t mean you need a huge helm chart for your project, because that would go against all the best practices of the containerization and microservices approach, but at least it gives you the option to define the levels of abstraction you want, keeping all the benefits from a management perspective.

Maximizing Kubernetes Configuration Quality with Kubeconform: A Powerful Utility for Seamless Kubernetes Validation and Management

The Kubernetes API changes quite a lot, and we know that every new version adds new capabilities while deprecating old ones, so it is in constant evolution, as we already stated in previous articles regarding Autoscaling v2 and Vertical Autoscaling.

Some of these changes are related to shifts in the apiVersion of some objects, and you have probably already suffered from a v1alpha1 going to v1beta1, or just moving to a final v1 and deprecating the previous one. So, in the end, it is crucial to ensure that your manifests are in sync with the target version you’re deploying to, and some tools can help us with that, including Kubeconform.

What is Kubeconform?

Kubeconform is a powerful utility designed to assist in Kubernetes configuration management and validation. As Kubernetes continues to gain popularity as the go-to container orchestration platform, ensuring the correctness and consistency of configuration files becomes crucial. Kubeconform addresses this need by providing a comprehensive toolset to validate Kubernetes configuration files against predefined standards or custom rules.

Kubeconform supports multiple versions of Kubernetes, allowing you to validate configuration files against different API versions. This flexibility is beneficial when working with clusters running different Kubernetes versions or migrating applications across clusters with varying configurations.

Another great feature of Kubeconform is its ability to enforce best practices and standards across Kubernetes configurations. It allows you to define rules, such as enforcing proper labels, resource limits, or security policies, and then validates your configuration files against these rules. This helps catch potential issues early on and ensures that your deployments comply with established guidelines.

How to install Kubeconform?

Kubeconform can be installed from different sources, the most usual being the standard package manager for your environment, such as brew, apt, or similar, or just by getting the binaries from its GitHub releases page: https://github.com/yannh/kubeconform/releases.

How to launch Kubeconform from the Command Line?

Kubeconform ships as a small binary targeted at the CLI, and it tries to keep its interface minimal to ensure compatibility. It receives as an argument the file or folder with the manifest files that you want to check, as you can see here:
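A minimal invocation sketch, assuming your manifests live in a manifests/ folder:

kubeconform -summary manifests/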

Then you have several options to do other things, such as the ones shown below:

  • -ignore-filename-pattern value: regular expression specifying paths to ignore (can be specified multiple times)
  • -ignore-missing-schemas: skip files with missing schemas instead of failing
  • -kubernetes-version string: version of Kubernetes to validate against, e.g.: 1.18.0 (default “master”)
  • -output string: output format, one of json, junit, pretty, tap, text (default “text”)
  • -reject string: comma-separated list of kinds or GVKs to reject
  • -skip string: comma-separated list of kinds or GVKs to ignore
  • -strict: disallow additional properties not in schema or duplicated keys
  • -summary: print a summary at the end (ignored for junit output)

Use-cases of Kubeconform

There are different use cases where Kubeconform can play a good role. One is Kubernetes upgrades: sometimes you need to ensure that your current manifests will still work in the new release the cluster will be upgraded to, and with this tool we can check that our YAML is still compatible with the target version and validate it properly.
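For that upgrade scenario, a hedged sketch reusing the flags listed above (the target version and folder are examples):

kubeconform -kubernetes-version 1.25.0 -strict -summary manifests/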

Another notable aspect of Kubeconform is its seamless integration into existing CI/CD pipelines. You can easily incorporate kubeconform as a step in your pipeline to automatically validate Kubernetes configuration files before deploying them. By doing so, you can catch configuration errors early in the development process, reduce the risk of deployment failures, and maintain high configuration consistency.

In addition to its validation capabilities, kubeconform provides helpful feedback and suggestions for improving your Kubernetes configuration files. It highlights specific issues or deviations from the defined rules and offers guidance on addressing them. This simplifies the troubleshooting process and helps developers and administrators become more familiar with best practices and Kubernetes configuration standards.

Conclusion

Kubeconform is an invaluable utility for Kubernetes users who strive for reliable and consistent deployments. It empowers teams to maintain a high standard of configuration quality, reduces the likelihood of misconfigurations, and improves the overall stability and security of Kubernetes-based applications.

Exploring Istio Security Policies for Enhanced Service Mesh Protection with 3 Objects

Istio Security Policies are crucial to securing microservices within a service mesh environment. We have discussed Istio and the capabilities it can bring to your Kubernetes workloads; today we’re going to go into more detail on the different objects and resources that help us make our workloads much more secure and enforce how they communicate. These are the PeerAuthentication, RequestAuthentication, and AuthorizationPolicy objects.

PeerAuthentication: Enforcing security on pod-to-pod communication

PeerAuthentication focuses on securing communication between services by enforcing mutual TLS (Transport Layer Security) authentication and authorization. It enables administrators to define authentication policies for workloads based on the source of the requests, such as specific namespaces or service accounts. Configuring PeerAuthentication ensures that only authenticated and authorized services can communicate, preventing unauthorized access and man-in-the-middle attacks. The behavior depends on the mode value defined in the object: STRICT allows only mTLS communication, PERMISSIVE allows both kinds of communication, DISABLE turns off mTLS and keeps the traffic insecure, and UNSET inherits the option from the parent scope. This is a sample of the definition of the object:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: foo
spec:
  mtls:
    mode: PERMISSIVE

RequestAuthentication: Defining authentication methods for Istio Workloads

RequestAuthentication, on the other hand, provides fine-grained control over the authentication of inbound requests. It allows administrators to specify rules and requirements for validating and authenticating incoming requests based on factors like JWT (JSON Web Tokens) validation, API keys, or custom authentication methods. With RequestAuthentication, service owners can enforce specific authentication mechanisms for different endpoints or routes, ensuring that only authenticated clients can access protected resources. Here you can see a sample of a RequestAuthentication object:

apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-auth-policy
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: my-app
  jwtRules:
    - issuer: "issuer.example.com"
      jwksUri: "https://example.com/.well-known/jwks.json"

As commented, JWT validation is the most used approach, as JWT tokens are becoming the de-facto industry standard for validating incoming requests and are central to the OAuth 2.0 authorization protocol. Here you can define the rules the JWT needs to meet to be considered a valid request. But RequestAuthentication only describes the “authentication methods” supported by the workloads; it doesn’t enforce them or provide any details regarding authorization.

That means that if you define that a workload uses JWT authentication, sending a request with a token will get that token validated: Istio ensures it is not expired and meets all the rules you have specified in the object definition. But requests with no token at all are still allowed through, as you’re just defining what the workload supports, not enforcing it. To enforce it, we need to introduce the last object of this set, the AuthorizationPolicy object.
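As a sketch of that enforcement, pairing with the RequestAuthentication sample above (the my-app selector refers to the same hypothetical workload), an AuthorizationPolicy can require a valid request principal, which only exists when a valid JWT was presented:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: my-app
  action: ALLOW
  rules:
  - from:
    - source:
        # Any valid JWT produces a request principal; requests without one are rejected
        requestPrincipals: ["*"]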

AuthorizationPolicy: Fine-grained Authorization Policy Definition for Istio Policies

AuthorizationPolicy offers powerful access control capabilities to regulate traffic flow within the service mesh. It allows administrators to define rules and conditions based on attributes like source, destination, headers, and even request payload to determine whether a request should be allowed or denied. AuthorizationPolicy helps enforce fine-grained authorization rules, granting or denying access to specific resources or actions based on the defined policies. Only authorized clients with appropriate permissions can access specific endpoints or perform particular operations within the service mesh. Here you can see a sample of an Authorization Policy object:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: rbac-policy
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: my-app
  rules:
    - from:
        - source:
            principals: ["user:user@example.com"]
      to:
        - operation:
            methods: ["GET"]

Here you can go as detailed as you need: you can apply rules on the source of the request to ensure that only some requests can go through (for example, requests carrying a JWT token, to use in combination with the RequestAuthentication object), but also rules on the target, such as whether the request goes to a specific host, path, or method, or a combination of them. You can also define ALLOW rules or DENY rules (or even CUSTOM) and create a set of them, and all of them will be enforced as a whole. The evaluation follows these rules, as stated in the Istio official documentation:

  • If there are any CUSTOM policies that match the request, evaluate and deny the request if the evaluation result is denied.
  • If there are any DENY policies that match the request, deny the request.
  • If there are no ALLOW policies for the workload, allow the request.
  • If any of the ALLOW policies match the request, allow the request.
  • Deny the request.

Authorization Policy validation flow from: https://istio.io/latest/docs/concepts/security/

This will provide all the requirements you could need to be able to do a full definition of all the security policies needed.

Conclusion

In conclusion, Istio’s Security Policies provide robust mechanisms for enhancing the security of microservices within a service mesh environment. The PeerAuthentication, RequestAuthentication, and AuthorizationPolicy objects offer a comprehensive toolkit to enforce authentication and authorization controls, ensuring secure communication and access control within the service mesh. By leveraging these Istio Security Policies, organizations can strengthen the security posture of their microservices, safeguarding sensitive data and preventing unauthorized access or malicious activities within their service mesh environment.