Secure Your Services with Istio: A Step-by-Step Guide to Setting up Istio TLS Connections

Introduction

Istio TLS configuration is one of the essential features to set up when you enable a Service Mesh. Istio provides many features to define, in a centralized, policy-driven way, how transport security (among other characteristics) is handled across the different workloads you have deployed on your Kubernetes cluster.

One of the main advantages of this approach is that your applications can focus on the business logic they need to implement. Security aspects can be externalized and centralized without adding extra effort to each application you deploy. This is especially relevant if you are following a polyglot approach (as you should) across your Kubernetes cluster workloads.

So, this time we’re going to have our applications handle plain HTTP traffic, both internal and external, and depending on where the traffic is going, we will force that connection to be TLS without the workload needing to be aware of it. Let’s see how we can enable this Istio TLS configuration.

Scenario View

We will use the picture below to keep in mind the concepts and components that will interact as part of the different configurations we will apply.

  • We will use the ingress gateway to handle all incoming traffic to the Kubernetes cluster and the egress gateway to handle all outgoing traffic from the cluster.
  • We will have a sidecar container deployed in each application to handle the communication from the gateways or the pod-to-pod communication.

To keep the test applications simple, we will use the default sample applications Istio provides, which you can find here.

How to Expose TLS in Istio?

This is the easiest part: all the incoming communication you receive from the outside enters the cluster through the Istio Ingress Gateway, so this is the component that needs to handle the TLS connection and then use the usual security approach to talk to the pod exposing the logic.

By default, the Istio Ingress Gateway already exposes a TLS port, as you can see in the picture below:

So we will need to define a Gateway that receives all this traffic through HTTPS and redirects it to the pods, as you can see here:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway-https
  namespace: default
spec:
  selector:
    istio: ingressgateway
  servers:
    - hosts:
        - '*'
      port:
        name: https
        number: 443
        protocol: HTTPS
      tls:
        mode: SIMPLE # enables HTTPS on this port
        credentialName: httpbin-credential 

As we can see, it is a straightforward configuration: we just add the HTTPS port on 443 and provide the TLS configuration, referencing a Kubernetes secret (httpbin-credential) that holds the server certificate.
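
The credentialName references a Kubernetes TLS secret that must already exist in the same namespace as the ingress gateway (istio-system in a default installation). As a minimal sketch, assuming you already have a certificate and key pair on disk (the file names are placeholders):

# Create the TLS secret referenced by credentialName in the Gateway
kubectl create -n istio-system secret tls httpbin-credential \
  --key=example.com.key \
  --cert=example.com.crt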

Note that the Gateway only opens the HTTPS port; routing to the application pods is still defined by a VirtualService that references this Gateway, as in the sketch below. With both in place, we can already reach the same pages using SSL.
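
A minimal sketch of such a VirtualService, assuming the Bookinfo sample’s productpage service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo-https
  namespace: default
spec:
  hosts:
    - '*'
  gateways:
    - bookinfo-gateway-https   # bind to the HTTPS Gateway defined above
  http:
    - match:
        - uri:
            prefix: /productpage
      route:
        - destination:
            host: productpage
            port:
              number: 9080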

How To Consume SSL from Istio?

Now that we have handled incoming TLS requests without the application knowing anything about it, we will go one step further and tackle the most challenging configuration: setting up a TLS/SSL connection for outgoing communication that leaves the Kubernetes cluster, again without the application being aware of it.

To do so, we will use one of the Istio concepts we have already covered in a specific article: the Istio ServiceEntry, which allows us to define an external endpoint so it can be managed inside the mesh.

Here we can see the Wikipedia endpoint added to the Service Mesh registry:

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: se-app
  namespace: default
spec:
  hosts:
  - wikipedia.org
  ports:
  - name: https
    number: 443
    protocol: HTTPS
  resolution: DNS

Once we have configured the ServiceEntry, we can define a DestinationRule to force all connections to wikipedia.org to use the TLS configuration:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: tls-app
  namespace: default
spec:
  host: wikipedia.org
  trafficPolicy:
    tls:
      mode: SIMPLE
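
Note that the ServiceEntry above registers only the HTTPS port. If your application sends plain HTTP on port 80 and you want the sidecar to originate TLS, the pattern documented by Istio adds an http port (80) to the ServiceEntry and a VirtualService that redirects that traffic to port 443, where the DestinationRule upgrades it to TLS. A minimal sketch, assuming the ServiceEntry is extended with that extra port:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: vs-app
  namespace: default
spec:
  hosts:
  - wikipedia.org
  http:
  - match:
    - port: 80
    route:
    - destination:
        host: wikipedia.org
        port:
          number: 443 # the DestinationRule above upgrades this hop to TLS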

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Kiali Explained: Observability and Traffic Visualization for Istio Service Mesh

What Is Kiali?

Kiali is an open-source project that provides observability for your Istio service mesh. Developed by Red Hat, Kiali helps users understand the structure and behavior of their mesh and any issues that may arise.

Kiali provides a graphical representation of your mesh, showing the relationships between the various service mesh components, such as services, virtual services, destination rules, and more. It also displays vital metrics, such as request and error rates, to help you monitor the health of your mesh and identify potential issues.

What Are Kiali's Main Capabilities?

One of the critical features of Kiali is its ability to visualize service-to-service communication within a mesh. This lets users quickly see how services are connected and how requests are routed through the mesh. This is particularly useful for troubleshooting, as it can help you quickly identify problems with service communication, such as misconfigured routing rules or slow response times.

Kiali also provides several tools for monitoring the health of your mesh. For example, it can alert you to potential problems, such as a high error rate or a service not responding to requests. It also provides detailed tracing information, allowing you to see the exact path a request took through the mesh and where any issues may have occurred.

In addition to its observability features, Kiali provides several other tools for managing your service mesh. For example, it includes a traffic management module, which allows you to control the flow of traffic through your mesh easily, and a configuration management module, which helps you manage and maintain the various components of your mesh.

Overall, Kiali is an essential tool for anyone using an Istio service mesh. It provides valuable insights into the structure and behavior of your mesh, as well as powerful monitoring and management tools. Whether you are starting with Istio or an experienced user, Kiali can help ensure that your service mesh runs smoothly and efficiently.

What are the main benefits of using Kiali?

The main benefits of using Kiali are:

  • Improved observability of your Istio service mesh. Kiali provides a graphical representation of your mesh, showing the relationships between different service mesh components and displaying key metrics. This allows you to quickly understand the structure and behavior of your mesh and identify potential issues.
  • Easier troubleshooting. Kiali’s visualization of service-to-service communication and detailed tracing information make it easy to identify problems with service communication and pinpoint the source of any issues.
  • Enhanced traffic management. Kiali includes a traffic management module allowing you to control traffic flow through your mesh easily.
  • Improved configuration management. Kiali’s configuration management module helps you manage and maintain the various components of your mesh.

How To Install Kiali?

There are several ways to install Kiali as part of your Service Mesh deployment, the preferred option being the Operator model available here.

You can install this operator using Helm or OperatorHub. To install it using Helm Charts, you need to add the following repository using this command:

helm repo add kiali https://kiali.org/helm-charts

Remember that once you add a new repo, you need to run the following command to update the available charts:

helm repo update

Now, you can install it using the helm install command, as in the following sample:

helm install \
    --set cr.create=true \
    --set cr.namespace=istio-system \
    --namespace kiali-operator \
    --create-namespace \
    kiali-operator \
    kiali/kiali-operator

If you prefer going down the OperatorHub route, you can use the following URL. By clicking on the Install button, you will see the steps to get the component installed in your Kubernetes environment.

In case you want a simple installation of Kiali, you can also use the sample YAML available inside the Istio installation folder using the following command:

kubectl apply -f $ISTIO_HOME/samples/addons/kiali.yaml

How does Kiali work?

Kiali is just the graphical representation of the information available about how the service mesh works. It is not Kiali's responsibility to store those metrics, but to retrieve them and present them in a way that is relevant to the user of the tool.

Prometheus stores this data, so Kiali uses the Prometheus REST API to retrieve the information and draw it graphically, as you can see here:

  • The graph shows several relevant parts. It shows the selected namespaces and, inside them, the different apps (a workload is detected as an app when it has a label named app). Inside each app, the different services and pods are drawn with distinct icons (triangles for the services and squares for the pods).
  • It will also show how the traffic reaches the cluster through the different ingress gateways and how it goes out in case we have any egress gateway configured.
  • It will show the kind of traffic we're handling and the different error rates based on the protocol, such as TCP, HTTP, and so on, as you can see in the picture below. The protocol is detected based on a naming convention for the port name in the Kubernetes Service, with the expected format protocol-name, as shown in the sketch after this list.
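
As a minimal sketch (the service name is hypothetical), the port name prefix is what tells Istio, and therefore Kiali, which protocol flows through each port:

apiVersion: v1
kind: Service
metadata:
  name: web-app          # hypothetical service name
spec:
  selector:
    app: web-app         # the "app" label Kiali uses to group workloads
  ports:
  - name: http-web       # "http" prefix => traffic shown as HTTP in Kiali
    port: 8080
  - name: tcp-db         # "tcp" prefix => shown as plain TCP traffic
    port: 5432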

Can Kiali be used with any service mesh?

No, Kiali is specifically designed for use with Istio service meshes.

It provides observability, monitoring, and management tools for Istio service meshes but is incompatible with other service mesh technologies.

If you use a different service mesh, you will need to find an additional tool for managing and monitoring it.

Are there other alternatives to Kiali?

Even if there are no natural alternatives to Kiali for visualizing your workloads and traffic through the Istio Service Mesh, you can use other tools to grab the metrics that feed Kiali and build custom visualizations using more generic tools such as Grafana, among others.

Let’s talk about similar tools to Kiali for other Service Meshes, such as Linkerd, Consul Connect, or even Kuma. Most follow a different approach, where the visualization part is not a separate “project” but relies on a standard visualization tool. That gives you much more flexibility, but at the same time, it lacks most of the excellent traffic visualization that Kiali provides, such as graph views or the ability to modify the traffic directly from the graph view.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Helm Templates in Files Explained: Customize ConfigMaps and Secrets Content

Helm Templates in Files, such as ConfigMaps content or Secrets content, is one of the most common requirements when you are in the process of creating a new helm chart. As you already know, a Helm Chart is how we package our Kubernetes application resources and YAML in a single component that we can manage at once to ease the maintenance and operation process.



External template files are a powerful technique for managing complex configurations. Discover more advanced Helm templating strategies in our definitive Helm package management guide.

Helm Templates Overview

By default, the template process works with YAML files, allowing us to use some variables and some logic functions to customize and templatize our Kubernetes YAML resources to our needs.

So, in a nutshell, we can only have YAML files inside the templates folder of a chart. But sometimes we would like to apply the same templating process to ConfigMaps or Secrets, or, to be more concrete, to the content of those ConfigMaps: for example, properties files and so on.

Overview showing the files outside the templates folder that are usually required in a chart

As you can see, it is quite normal to have different files, such as JSON configuration files, properties files, or shell scripts, as part of your helm chart, and most of the time you would like to make their content dynamic. That is why using Helm Templates in Files is so important and is the main focus of this article.

Helm Helper Functions to Manage Files

By default, Helm provides us with a set of functions to manage files as part of the helm chart, simplifying the process of including them as the content of a ConfigMap or Secret. Some of these functions are the following:

  • .Files.Glob: This function allows us to find any internal files that match a given pattern, such as in the following example:
    {{ range $path, $_ := .Files.Glob "**.yaml" }}
  • .Files.Get: This is the simplest option to gather the content of a specific file when you know its full path inside your helm chart, such as in the following sample: {{ .Files.Get "config1.toml" | b64enc }}

You can even combine both functions, as in the following sample:

{{ range $path, $_ := .Files.Glob "**.yaml" }}
      {{ $.Files.Get $path }}
{{ end }}

Then, once you have the file you want to use, you can combine that with some helper functions to easily introduce it in a ConfigMap or a Secret, as explained below:

  • .AsConfig: Uses the file content to be introduced in a ConfigMap, following the pattern file-name: file-content
  • .AsSecrets: Similar to the previous one, but doing the base64 encoding for the data.

Here you can see a real example of using this approach in an actual helm chart situation:

apiVersion: v1
kind: Secret
metadata:
  name: zones-property
  namespace: {{ $.Release.Namespace }}
data: 
{{ ( $.Files.Glob "tml_zones_properties.json").AsSecrets | indent 2 }} 

You can find more information about that here. But this only allows us to grab the file as-is and include it in a ConfigMap or Secret. It does not allow us to apply any logic or substitution to the content as part of that process. So, if we want to modify the content, this is not a valid approach.

How To Use Helm Templates in Files Such as ConfigMaps or Secrets?

In case we want to make some modifications to the content, we need to use the following formula:

apiVersion: v1
kind: Secret
metadata:
  name: papi-property
  namespace: {{ $.Release.Namespace }}
data:
{{- range $path, $bytes := .Files.Glob "tml_papi_properties.json" }}
{{ base $path | indent 2 }}: {{ tpl ($.Files.Get $path) $ | b64enc }}
{{ end }}

So, what we are doing here is first iterating over the files that match the pattern using the .Files.Glob function we explained before, in case we have more than one. Then we manually create the structure following the pattern file-name: file-content.

To do that, we use the base function to get just the filename from a full path (and add the proper indentation), and then use .Files.Get to grab the file's content and do the base64 encoding using the b64enc function because, in this case, we're handling a Secret.

The trick here is the tpl function, which makes this file's content go through the template process; this is how the modifications we need and the variables referenced from the .Values object are adequately replaced, giving you all the power and flexibility of the Helm Chart in text files such as properties files, JSON files, and much more.
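
As a minimal end-to-end sketch (the file name and values are hypothetical): if the chart root contains a file like tml_papi_properties.json that references a value, the tpl call above resolves it at install time, before the result is base64-encoded into the Secret:

# tml_papi_properties.json (at the chart root; hypothetical content)
{
  "environment": "{{ .Values.environment }}",
  "logLevel": "{{ .Values.logLevel }}"
}

# values.yaml
environment: production
logLevel: DEBUG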

I hope this is as useful for you as it has been for me in creating new helm charts! And look here for other tricks using loops or dependencies.

Nomad vs Kubernetes: Key Differences, Use Cases, and When Nomad Makes Sense

Nomad is the HashiCorp alternative to the typical pattern of using a Kubernetes-based platform as the only way to orchestrate your workloads efficiently. Nomad is a project started in 2015, but it is getting much more relevant nowadays after 95 releases, and the current version at the time of this article is 1.4.1, as you can see in their GitHub profile.

Nomad approaches the traditional challenge of isolating the application lifecycle from the operation lifecycle of the infrastructure where that application resides. Still, instead of going all-in on container-based applications, it tries to provide the solution differently.

What are the main Nomad Features?

In its own definition, which you can read on their GitHub profile, they already highlight some of the points of difference from the de-facto industry standard:

Nomad is an easy-to-use, flexible, and performant workload orchestrator that can deploy a mix of microservice, batch, containerized, and non-containerized applications

Easy-to-use: This is the first statement they include in their definition because the Nomad approach is much simpler than alternatives such as Kubernetes. It works on a single-binary approach, where all the needed capabilities run as a node agent, based on its own “vocabulary” that you can read more about in their official documentation.

Flexibility: This is the other critical point. They provide an intermediate layer between the application and the underlying infrastructure that is not limited to container applications: it supports that kind of deployment, but it also allows deploying workloads as part of a traditional virtual machine approach. One of the primary use cases highlighted is running standard Windows applications, which is tricky when talking about Kubernetes deployments; even though Windows containers have been a thing for a long time, their adoption is not at the same level as in the Linux container world.

Hashicorp Integration: As part of the Hashicorp portfolio, it also includes seamless integration with other Hashicorp projects such as Hashicorp Vault, which we have covered in several articles, or Hashicorp Consul, which helps to provide additional capabilities in terms of security, configuration management, and communication between the different workloads.

Nomad vs Kubernetes: How Does Nomad Work?

As commented above, Nomad covers everything with a single-component approach: the nomad binary is an agent that can work in server mode or client mode, depending on the role of the machine executing it.

So Nomad is based on a Nomad cluster: a set of machines running the nomad agent in server mode. Those servers are split into the roles of leader and followers. The leader performs most of the cluster management, and the followers can create scheduling plans and submit them to the leader for approval and execution. This is represented in the picture below from the HashiCorp Nomad official page:

Nomad simple architecture from nomadproject.io

Once we have the cluster ready, we need to create our jobs. A job is a definition of the tasks we would like to execute on the Nomad cluster we have previously set up, and a task is the smallest unit of work in Nomad. Here is where Nomad's flexibility comes in, because each task is executed by a task driver, and different drivers can execute different kinds of workloads. Following the same approach, we can have a docker driver to run a container deployment or an exec driver to run a command directly on the underlying infrastructure. You can even create your own task drivers following a plugin mechanism that you can read more about here.

Jobs and tasks are defined using a text-based approach, but not following the usual YAML or JSON kind of files; they use a different format (HCL), as you can see in the picture below (click here to download the whole file from the GitHub Nomad Samples repo):
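
As a hedged sketch of what such a file looks like, here is a hypothetical minimal job using the docker task driver (this is not the sample file from the repo):

# example.nomad -- hypothetical minimal job definition
job "web" {
  datacenters = ["dc1"]

  group "frontend" {
    count = 1

    network {
      port "http" {
        to = 80
      }
    }

    task "nginx" {
      driver = "docker"        # the task driver decides how the task runs

      config {
        image = "nginx:1.23"
        ports = ["http"]
      }

      resources {
        cpu    = 100           # MHz
        memory = 128           # MB
      }
    }
  }
}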

Is Nomad a Replacement for Kubernetes?

It is a complex question to answer, and even HashiCorp has documented different strategies. You can undoubtedly use Nomad to run container-based deployments instead of running them on Kubernetes. But at the same time, they also position the solution alongside Kubernetes, running some workloads on one solution and some on the other.

In the end, both try to address the same challenges of traditional deployments in terms of scalability, infrastructure sharing and optimization, agility, flexibility, and security.

Kubernetes focuses on different kinds of workloads, but everything follows the same deployment model (container-based) and adopts recent paradigms (service-based, microservice patterns, API-led, and so on), with a robust architecture that allows excellent scalability, performance, and flexibility, and with adoption levels in the industry that have made it the current de-facto standard for modern workload orchestration platforms.

But, at the same time, it also requires effort in management and transforming existing applications to new paradigms.

On the other hand, Nomad tries to address it differently, minimizing the changes an existing application needs to take advantage of the platform's benefits and reducing the management overhead and complexity that a usual Kubernetes platform implies, depending on the situation.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

DevSecOps vs DevOps: Key Differences Explained by Answering 3 Core Questions

DevSecOps is a concept you have probably heard extensively in the last few months, usually alongside the traditional idea of DevOps. At some point, this probably makes you wonder about a DevSecOps vs DevOps comparison: what are the main differences between them, or are they the same concept? And with other ideas starting to appear, such as Platform Engineering or Site Reliability Engineering, some confusion is building up in the field that I would like to clarify today in this article.

What is DevSecOps?

DevSecOps is an extension of the DevOps concept and methodology. Now, it is not a joint effort between Development and Operation practices but a joint effort among Development, Operation, and Security.

Diagram by GeekFlare: A DevSecOps Introduction (https://geekflare.com/devsecops-introduction/)

It implies introducing security policies, practices, and tools to ensure that the DevOps cycle provides security along the process. We have already commented on including security components to provide a more secure deployment process, and we even have specific articles about these tools, such as scanners, docker registries, etc.

Why Is DevSecOps Important?

DevSecOps, or to be more explicit, including security practices as part of the DevOps process, is critical because we are moving to hybrid and cloud architectures where we incorporate new design, deployment, and development patterns such as containers, microservices, and so on.

This situation means we are moving from having hundreds of applications, in the most complex cases, to thousands of applications, and from dozens of servers to thousands of containers, each of them with different base images and third-party libraries that can be obsolete, have a security hole, or be hit by newly raised vulnerabilities, as we have seen in the past with the Spring Framework or the Log4j library, to cite some of the most recent substantial global security issues that companies have dealt with.

So, even the most extensive security team cannot keep pace checking all the new security challenges manually or with a set of scripts if we don't include these checks as part of the overall process of developing and deploying the components. This is where the concept of shift-left security usually comes in, and we already covered that in an article you can read here.

DevSecOps vs DevOps: Is DevSecOps just updated DevOps?

So, based on the above definition, you can think: “Ok, so when somebody talks about DevOps, they are not thinking about security.” This is not true.

In the same way, when we talk about DevOps, we do not explicitly mention all the detailed steps, such as software quality assurance, unit testing, etc. So, as happens with many extensions in this industry, the original, global, or generic concept includes the content of its extensions as well.

So, in the end, DevOps and DevSecOps are the same thing, especially today, when all companies and organizations are moving to cloud or hybrid environments where security is critical and non-negotiable. Hence, every task we do, from developing software to accessing any service, needs to be done with security in mind. But I use both concepts in different scenarios: I will use DevSecOps when I would like to explicitly highlight the security aspect because of the audience, the context, or the topic we are discussing.

Still, in any generic context, DevOps will include the security checks for sure, because if it does not, it is just useless.

Summary

So, in the end, when somebody speaks today about DevOps, it implicitly includes the security aspect, so there is no difference between both concepts. But you will see and also find it helpful to use the specific term DevSecOps when you want to highlight or differentiate this part of the process.

Scan Docker Images Locally with Trivy: Fast and Reliable Vulnerability Detection

Scanning Docker images, or, to be more accurate, scanning your container images, is becoming one of the everyday tasks of application development. The pace at which new vulnerabilities arise, the explosion of dependencies each container image has, and the number of deployments per company make it quite complex to keep up and ensure security issues can be mitigated.

We already covered this topic some time ago when the Docker Desktop tool introduced the scan option based on an integration with Snyk and, more recently, with the latest release of Lens, where image scanning is one of the options in the “corporate” version of the tool. And for some time now, the main cloud registries, such as ECR, have included scanning as one of the capabilities for any image deployed there.

But what happens if you are already moving from Docker Desktop to another option, such as podman or Rancher Desktop? How can you scan your docker images?

Several scanners can be used to scan your container images locally, and some of them are easier than others to set up. One of the best known is Clair, which is also used as part of the Red Hat Quay registry and has a lot of traction. It works in a client-server mode that is great for different teams that require a more “enterprise” deployment, usually closely related to a registry. Still, it doesn't play well when run locally, as it requires several components and relationships.

As an easy option to try locally, you have Trivy, an exciting tool developed by Aqua Security. You may remember the company, as it is the one behind other Kubernetes security developments, such as kube-bench, which we already covered in the past.

In its own words, “Trivy is a comprehensive security scanner. It is reliable, fast, and straightforward to use and works wherever you need it.”

How to Install Trivy?

The installation process is relatively easy and documented for every major platform here. In the end, it relies on the available binary packages, such as RPM, DEB, Homebrew, MacPorts, or even a Docker image.

How To Scan Docker Images With Trivy?

Once it is installed, you can just run a command such as this:

trivy image python:3.4-alpine

This will do the following tasks:

  • Update the repository DB with all the vulnerabilities
  • Pull the image in case this is not available locally
  • Detect the languages and components present in that image
  • Validate the images and generate an output

What Output Is Provided By Trivy?

As a sample, this is the output for the python:3.4-alpine image as of today:

Scan Docker Images With Trivy

You will get a table with one row per library or component with a detected vulnerability, showing the library name and the related CVE code. The CVE code is how vulnerabilities are usually referred to, as they are registered in a common repository with all their descriptions and details. In addition, the table shows the severity of the vulnerability based on the existing report, the current version detected in the image and, in case a later version fixes the vulnerability, the first version that solved it, and finally a title to provide a bit more context about the vulnerability.

If a library is related to more than one vulnerability, the row's cells are split so you can access the data for each vulnerability.
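
If you want to narrow down or post-process the output, Trivy also lets you filter by severity, change the report format, or fail a build on findings, as in this sketch (flag names as of recent Trivy releases):

# Show only the most critical findings
trivy image --severity HIGH,CRITICAL python:3.4-alpine

# Produce a machine-readable report, useful in CI pipelines
trivy image --format json --output results.json python:3.4-alpine

# Return a non-zero exit code if any vulnerability is found
trivy image --exit-code 1 python:3.4-alpine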

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Helm Dependencies Explained: How Chart Dependencies Work in Helm

Helm Dependencies are a critical part of understanding how Helm works, as they are the way to establish relationships between different helm packages. We have talked a lot here about what Helm is and some topics around it, and we have even provided some tricks for creating your own charts.

Understanding chart dependencies is crucial for building scalable Helm architectures. Explore more Helm patterns and best practices in our comprehensive Helm guide.

So, as commented, a Helm Chart is nothing more than a package that you put around the different Kubernetes objects that need to be deployed for your application to work. The usual comparison is with a software package: when you install an application that depends on several components, all of those components are packaged together, and here it is the same thing.

What is a Helm Dependency?

A Helm Dependency is nothing more than the way you define that your Chart needs another chart to work. For sure, you can create a Helm Chart with everything you need to deploy your application, but sometimes you would like to split that work into several charts, just because they are easier to maintain or, in the most common use case, because you want to leverage another Helm Chart that is already available.

One use case can be a web application that requires a database: you can create in your Helm Chart all the YAML files to deploy both your web application and your database in Kubernetes, or you can keep the YAML files for your web application (Deployment, Services, ConfigMaps, …) and then say: “I also need a database, and to provide it I'm going to use this other chart.”

This is similar to how software packages work in UNIX systems: you have a package A that does the job, but it requires library L. To ensure that when you install A, library L is already there (or gets installed if it is not), you declare that application A depends on library L. Here it is the same thing: you declare that your Chart depends on another Chart to work. And that leads us to the next point.

How do we declare a Helm Dependency?

This is the next point; now that we understand what a Helm Dependency is conceptually and we have a use case, how can we do that in our Helm Chart?

All the work is done in the Chart.yaml file. If you remember, the Chart.yaml file is where you declare all the metadata of your Helm Chart, such as the name, the version of the chart, the application version, location URL, icon, and much more. It usually has a structure like this one:

apiVersion: v2
name: MyChart
description: My Chart Description
type: application
version: 0.2.0
appVersion: "1.16.0"

So here we can add a dependencies section, and that section is where we define the charts we depend on, as you can see in the snippet below:

apiVersion: v2
name: MyChart
description: My Chart Description
type: application
version: 0.2.0
appVersion: "1.16.0"
dependencies:
- name: Dependency
  version: 1.0.0
  repository: "file:///location_of_my_chart"

Here we are declaring Dependency as our Helm Dependency. We specify the version that we would like to use (similar to the version we set in our own chart), which ensures that we will provision the same version that has been tested as part of the resolution of the dependency. We also specify the location: a URL, if it points to a Helm Chart available on the internet or outside your computer, or a file path, in case it points to a local resource on your machine.

That will do the job of defining the helm dependency, and this way, when you install your chart using the helm install command, it will also provision the dependency.
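
Before installing or packaging, the declared dependencies need to be fetched into the charts/ folder of your chart, which the helm CLI does for you:

# Download the declared dependencies into the charts/ folder
helm dependency update ./MyChart

# Or, if a Chart.lock file already exists, rebuild from it
helm dependency build ./MyChart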

How do I declare a Helm Conditional Dependency?

Until now, we have learned how to declare a dependency so that each time I provision my application, it also provisions the dependency. But usually, we would like a more fine-grained approach. Imagine the same scenario as above: we have our web application that depends on the database, and we have two options: we can provision the database as part of the installation of the web application, or we can point to an external database, in which case it makes no sense to provision the Helm Dependency. How can we do that?

Easy, because one of the optional parameters you can add to your dependency, condition, does exactly that. condition allows you to specify a flag in your values.yaml: when it is equal to true, the dependency is provisioned, and when it is equal to false, that part is skipped, as in the snippet shown below:

apiVersion: v2
name: MyChart
description: My Chart Description
type: application
version: 0.2.0
appVersion: "1.16.0"
dependencies:
- name: Dependency
  version: 1.0.0
  repository: "file:///location_of_my_chart"
  condition: database.enabled 

And with that, we just set the enabled parameter under database in our values.yaml to true if we would like to provision it.
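
The corresponding flag in values.yaml would look like this minimal sketch:

# values.yaml
database:
  enabled: true   # set to false to skip provisioning the database chart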

How do I declare a Helm Dependency With a Different version?

As shown in the snippets above, when we declare a Helm Dependency, we specify an exact version; that is a safe way to do it because it ensures that any change made to that helm chart will not affect your package. Still, at the same time, you miss security fixes or patches to the chart that you would like to leverage in your deployment.

To simplify that, you have the option to define the version in a more flexible way using the operator ~ in the definition of the version, as you can see in the snippet below:

apiVersion: v2
name: MyChart
description: My Chart Description
type: application
version: 0.2.0
appVersion: "1.16.0"
dependencies:
- name: Dependency
  version: ~1.0.0
  repository: "file:///location_of_my_chart"
  condition: database.enabled 

This means that any patch done to the chart will be accepted, which is equivalent to saying this chart will use the latest 1.0.x version. Still, it will not use the 1.1.0 version, which gives more flexibility while keeping things safe in case of a breaking change in the Chart you depend on. This is just one way to define it, but the flexibility is enormous, as Chart versions use “Semantic Versioning”; you can learn more about that here: https://github.com/Masterminds/semver.

Multi-Stage Dockerfiles Explained: Reduce Docker Image Size the Right Way

The Multi-Stage Dockerfile is the pattern you can use to ensure that your docker image has an optimized size. We have already covered the importance of keeping the size of your docker image at a minimum and the tools you can use, such as dive, to understand the size of each of your layers. But today, we are going to follow a different approach: a multi-stage build for our docker containers.

What is a Multi-Stage Dockerfile Pattern?

The Multi-Stage Dockerfile is based on the principle that the same Dockerfile can have several FROM statements, and each FROM statement starts a new stage of the build.

Why Does the Multi-Stage Build Pattern Help Reduce the Size of Container Images?

The main reason the multi-stage build pattern helps reduce the size of the containers is that you can copy any artifact or set of artifacts from one stage to another. Why is that so important? Because everything you do not copy is discarded, so you are not carrying unneeded components from layer to layer and inflating the size of the final Docker image.

How Do You Define a Multi-Stage Dockerfile?

First, you need to have a Dockerfile with more than one FROM. As commented, each FROM indicates the start of one stage of the multi-stage dockerfile. To differentiate or reference them, you can name each stage of the Dockerfile by using the AS clause alongside the FROM command, as shown below:

FROM eclipse-temurin:11-jre-alpine AS builder

As a best practice, you can also add a new label stage with the same name you provided before, but that is not required. So, in a nutshell, a Multi-Stage Dockerfile will be something like this:

# Stage 1: prepare the runtime artifacts
FROM eclipse-temurin:11-jre-alpine AS builder
LABEL stage=builder
COPY . /
# Remove the bundled JRE from the runtime zip to save space
RUN apk add --no-cache unzip zip && zip -qq -d /resources/bwce-runtime/bwce-runtime-2.7.2.zip "tibco.home/tibcojre64/*"
# Unpack the runtime and drop the original archive
RUN unzip -qq /resources/bwce-runtime/bwce*.zip -d /tmp && rm -rf /resources/bwce-runtime/bwce*.zip 2> /dev/null

# Stage 2: the final image, which only receives what we explicitly copy into it
FROM eclipse-temurin:11-jre-alpine
RUN addgroup -S bwcegroup && adduser -S bwce -G bwcegroup

How do you copy resources from one stage to another?

This is the other important part here. Once we have defined all the stages we need, and each is doing its part of the job, we need to move data from one stage to the next. So, how can we do that?

The answer is by using the COPY command. COPY is the same command you use to move data from your local storage to the container image, so you need a way to indicate that this time you are not copying from your local storage but from another stage, and that is where the --from argument comes in. Its value is the name of the stage, which we learned to declare in the previous section. So a complete COPY command will be something like the snippet shown below:

COPY --from=builder /resources/ /resources/
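
As a side note, naming stages also lets you build just one of them with the --target flag, which is handy for inspecting or debugging an intermediate stage (the image tags here are hypothetical):

# Build only the "builder" stage and tag it for inspection
docker build --target builder -t myapp:builder-stage .

# Build the full image (all stages; the final stage is the result)
docker build -t myapp:latest .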

What Is the Improvement You Can Get?

That is the essential question, and it will depend on how your Dockerfiles and images are created, but the primary factor to consider is the number of layers your current image has. The bigger the number of layers, the more you can probably save on the size of the final container image with a multi-stage dockerfile.

The main reason is that each layer duplicates part of the data, and you will surely not need all of a layer's data in the next one. Using the approach commented on in this article, you get a way to optimize that.

Where Can I Read More About This?

If you want to read more, you should know that the multi-stage dockerfile is documented as one of the best practices on the official Docker web page, and there is a great article about it by Alex Ellis that you can read here.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Inject Secrets into Kubernetes Pods Using HashiCorp Vault (Agent Injector Guide)

Introduction

This article will cover how to inject secrets in Pods using Hashicorp Vault. In previous articles, we covered how to install Hashicorp Vault in Kubernetes, configure and create secrets in it, and how tools such as TIBCO BW can retrieve them. Still, today, we are going to go one step further.

The reason injecting secrets in pods is so important is that it keeps the application inside the pod unaware of any communication with Hashicorp Vault. For the application, the secret is just a regular file located at a specific path inside the container. It doesn't need to worry about whether this file came from a Vault secret or a totally different source.

This injection approach facilitates the Kubernetes ecosystem's polyglot approach because it frees the underlying application from any responsibility. The same happens with other injector approaches, such as Istio and many more.

But, let’s explain how this approach to injecting secrets in pods using Hashicorp Vault works. As part of the installation, alongside the vault server we have installed (or several of them, if you have done a distributed installation), we have seen another pod under the name vault-agent-injector, as you can see in the picture below:

Inject Secrets In Pods: Vault Injector Pod

This agent is responsible for watching the new deployments you create and, based on the annotations each deployment has, launching a sidecar alongside your application with the configuration needed to connect to the vault, download the required secrets, and mount them as files inside your pod, as shown in the picture below:

To do that, we need to perform several configuration steps, which we cover in the upcoming sections of the article.

Enabling Kubernetes Authentication in Hashicorp

The first thing we need to do at this stage is to enable the Kubernetes authentication method in Hashicorp Vault. This method allows clients to authenticate with a Kubernetes Service Account Token. We do that with the following command:

vault auth enable kubernetes

Vault accepts a service token from any client in the Kubernetes cluster. During authentication, Vault verifies that the service account token is valid by querying a token review Kubernetes endpoint. Now, we need to configure this authentication method by providing the location of the Kubernetes API, and to do that, we need to run the following command:

vault write auth/kubernetes/config \
    kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443"

Defining a Kubernetes Service Account and a Policy

Now, we will create a Kubernetes Service Account that will run our pods, and this service account will be allowed to retrieve the secret we generated in the previous post.

To do that, we will start with the creation of the service account by running this command from outside the pod:

kubectl create sa internal-app

This will create a new service account under the name of internal-app, and now we are going to generate a policy inside the Hashicorp Vault server by using this command inside the vault server pod:

 vault policy write internal-app - <<EOF
path "internal/data/database/config" {
  capabilities = ["read"]
}
EOF

And now, we associate this policy with the service account by running this command, also inside the vault server pod:

vault write auth/kubernetes/role/internal-app \
    bound_service_account_names=internal-app \
    bound_service_account_namespaces=default \
    policies=internal-app \
    ttl=24h

And that’s pretty much all the configuration we need to do on the Vault side to be able to inject secrets in pods using Hashicorp Vault. Now, we need to configure our application accordingly with the following modifications:

  • Specify the ServiceAccountName to the deployment to be the one we created previously: internal-app
  • Specify the specific annotations to inject the vault secrets and the configuration of those secrets.

Let’s start with the first point. We need to add the serviceAccountName to our Kubernetes Manifest YAML file as shown below:

Inject Secrets In Pods: Service Account Name definition
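
Since the picture only shows a fragment, here is a minimal sketch of that part of the Deployment manifest (the application name and image are hypothetical):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                           # hypothetical application name
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      serviceAccountName: internal-app   # the service account created above
      containers:
      - name: my-app
        image: my-app:1.0.0              # hypothetical image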

And regarding the second point, we would solve it by adding several annotations to our deployment, as shown below:

Inject Secrets In Pods: Annotations

The annotations used to inject secrets in pods are the following ones:

  • vault.hashicorp.com/agent-inject: ‘true’: This tells the vault injector that we would like to inject the sidecar agent in this deployment and have the Vault configuration. This is required to do any further configuration
  • vault.hashicorp.com/role: internal-app: This is the vault role we are going to use when we’re asking for secrets and information to the vault to be sure that we only access the secrets we have allowed based on the policy we created in the previous section
  • vault.hashicorp.com/agent-inject-secret-secret-database-config.txt: internal/data/database/config: You will add one annotation like this per secret you plan to inject, and it is composed of three parts (see the combined sketch after this list):
    • vault.hashicorp.com/agent-inject-secret- this part is fixed
    • secret-database-config.txt this part will be the filename that is created under /vault/secrets inside our pod
    • internal/data/database/config This is the path inside the vault of our secret to being linked to that file.
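
Putting the annotations together, the pod template metadata of the deployment would look something like this minimal sketch:

template:
  metadata:
    annotations:
      vault.hashicorp.com/agent-inject: 'true'
      vault.hashicorp.com/role: 'internal-app'
      vault.hashicorp.com/agent-inject-secret-secret-database-config.txt: 'internal/data/database/config'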

And that’s it! If we now apply our deployment, we will see the following things:

  • Our deployment application has been launched with three containers instead of one because two of them are Hashicorp Vault related, as you can see in the picture below:
  • vault-agent-init is the init container that establishes the connection to the vault server before any other container starts, doing the first download and injection of secrets into the pod based on the configuration provided.
  • vault-agent is the container that keeps running as a watcher to detect any modification of the related secrets and update them.

And now, if we go into the main container, we can see in the /vault/secrets path that the secret has finally been injected as expected:

And this is how easily, and without any knowledge of the underlying app, we can inject secrets in pods using Hashicorp Vault.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

OpenLens vs Lens: Key Differences, Licensing Changes, and Which One to Use

Introduction

We have already talked about Lens several times in different articles, but today I am bringing OpenLens here because, after the release of Lens 6 in late July, a lot of questions have arisen, especially regarding the changes and the relationship with the OpenLens project. So I thought it could be very interesting to bring this data together in one place for anyone who is confused, and to try to explain and answer the main questions you may have at the moment.

What is OpenLens?

OpenLens is the open-source project behind the code that supports the main functionality of Lens, the software that helps you manage and run your Kubernetes clusters. It is available on GitHub (https://github.com/lensapp/lens), fully open-source and distributed under the MIT License. In its own words, this is the definition:

This repository ("OpenLens") is where Team Lens develops the Lens IDE product together with the community. It is backed by a number of Kubernetes and cloud-native ecosystem pioneers. This source code is available to everyone under the MIT license

OpenLens vs Lens?

So the main question you could have at the moment is: what is the difference between Lens and OpenLens? The main difference is that Lens is built on top of OpenLens, including some additional software and libraries with different licenses. It is developed by the Mirantis team (the same company that owns Docker Enterprise) and distributed under a traditional EULA.

Is Lens going to be private?

We need to start by saying that, since the beginning, Lens has been released under a traditional EULA, so on that front there is not much difference: we can say that OpenLens is open source, while Lens is freeware, or at least it was freeware at that point. But on July 28th we had the release of Lens 6, where the differences between the projects started to arise.

As commented in the Mirantis blog post, a lot of changes and new capabilities have been included, but on top of that, the vision has also been revealed. As the Mirantis team says, they don't stop at the current level Lens has today for managing Kubernetes clusters: they want to go beyond, also providing a web version of Lens to simplify access even more, extending its reach beyond Kubernetes, and so on.

So, you have to admit that this is a very compelling and, at the same time, very ambitious vision, and that is also why they are making some changes to the license and model, which we are going to talk about below.

Is Lens still free?

We already commented that Lens was always released under a traditional EULA, so it was not open source like other projects, such as its OpenLens core, but it was free to use. With the release on July 28th, this is changing a bit to support their new vision.

They are releasing a new subscription model depending on the usage you make of the tool, and the approach is very similar to the one Docker took at the time with Docker Desktop; if you remember, we covered that in an article too.

  • Lens Personal subscriptions are for personal use, education, and startups (less than $10 million in annual revenue or funding). They are free of charge.
  • Lens Pro subscriptions are required for professional use in larger businesses. The pricing is $19.90 per user/month or $199 per user/year.

The new license applies as of the release of Lens 6 on July 28th, but they have provided a grace period until January 2023 so you can adapt to this new model.

Should I stop using Lens now?

This is, as always, up to you, but things are going to stay the same until January 2023, and at that point, you need to formalize your situation with Lens and Mirantis. If you qualify for a Lens Personal license because you are working for a startup or on open source, you can continue without any problem. If that's not the case, it is up to your company to decide whether the additional features they are providing now, and the vision for the future, justify the investment in a Lens Pro license.

You will always have the option to switch from Lens to OpenLens. It will not be 100% the same, but the core functionality and approach will, at this moment, continue to be the same, and the project will surely stay very active. Also, as Mirantis already confirmed in the same blog post: “There are no changes to OpenLens licensing or any other upstream open source projects used by Lens Desktop.” So you should not expect the same situation to happen if you are switching to OpenLens or already using it.

How can I install OpenLens?

Installation of OpenLens is a little bit tricky because you need to generate your own build from the source. To ease that path, several awesome people are doing that in their GitHub repositories, such as Muhammed Kalkan, who provides a repo with the latest versions, built with only open-source components, for the major platforms (Windows, macOS (Intel and Apple Silicon), or Linux), available here:
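
As a hedged pointer, and assuming those community builds are still published, at the time of writing such a build could also be installed on macOS through Homebrew:

# Community-maintained cask built from the OpenLens source (availability may change)
brew install --cask openlens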

What Features Am I Losing if I Switch to OpenLens?

For sure, there are some features that you will lose if you switch from Lens to OpenLens: the ones provided by the licensed pieces of software. Here we include a non-exhaustive list based on our experience using both products:

  • Account Synchronization: The capability of having all your Kubernetes clusters under your Lens account and keeping them in sync is not available on OpenLens. You will rely on the content of the kubeconfig file.
  • Spaces: The option to share your configuration between different users that belong to the same team is not available on OpenLens.
  • Image Scanning: One of the new capabilities of Lens 6 is the option to scan the images of the containers deployed on the cluster, but this is not available on OpenLens.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.