Kubernetes Security Best Practices: A Shared Responsibility Between Developers and Operators

Kubernetes security is one of the most critical aspects of today's IT landscape. Kubernetes has become the backbone of modern infrastructure management, allowing organizations to scale and deploy containerized applications with ease. However, the power of Kubernetes also brings the responsibility of ensuring robust security measures are in place. This responsibility cannot rest solely on the shoulders of developers or operators. It demands a collaborative effort where both parties work together to mitigate potential risks and vulnerabilities.

Even though DevOps and Platform Engineering approaches are now fairly standard, there are still tasks that fall to different teams, even in organizations structured around platform and project teams.

Here are three straightforward ways to improve your Kubernetes security from both the dev and ops perspectives:

No Vulnerabilities in Container Images

Vulnerability scanning of container images is crucial in today's development because the number of components deployed on a system has grown exponentially, and so has their opacity. Scanning with tools such as Trivy, or with the options integrated into local Docker environments such as Docker Desktop or Rancher Desktop, is mandatory, but how can you use it to make your application more secure?

  • Developer’s responsibility:
    • Use only approved, well-known standard base images
    • Reduce to a minimum the number of components and packages installed alongside your application (better Alpine than Debian)
    • Use a multi-stage approach to include only what you need in your final image (see the sketch after this list).
    • Run a vulnerability scan locally before pushing
  • Operator’s responsibility:
    • Require all base images to be pulled from the corporate container registry
    • Enforce vulnerability scans on push, generating alerts and blocking deployment if the quality criteria are unmet.
    • Perform regular vulnerability scans for runtime images and generate incidents for the development teams based on the issues discovered.
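To illustrate the multi-stage point, here is a minimal sketch of a Dockerfile for a hypothetical Go service (names and versions are placeholders); only the compiled binary ends up in the final image, not the toolchain:

# Build stage: full toolchain, only used to compile
FROM golang:1.22-alpine AS build
WORKDIR /src
COPY . .
RUN go build -o /app .

# Final stage: minimal base image containing just the binary
FROM alpine:3.20
COPY --from=build /app /app
USER 1000
ENTRYPOINT ["/app"]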

No Additional Privileges in Container Images

Now that our application doesn’t include any vulnerabilities, we need to ensure the image is not allowed to do anything it shouldn’t, such as elevating privileges. See what you can do depending on your role:

  • Developer responsibility:
    • Never run your containers as the root user, and use security context options in your Kubernetes manifest files
    • Test your images with all possible capabilities dropped, unless some are needed for a specific reason
    • Make your filesystem read-only and use volumes for the folders your application needs to write to (see the manifest sketch after this list).
  • Operator’s responsibility:
    • Enforce these restrictions at the platform level, for example through Pod Security admission or a policy engine, so that workloads requesting extra privileges are rejected at deployment time.
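A minimal sketch of what these developer-side settings could look like in a pod spec (the container name, image, and paths are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
  containers:
    - name: my-app
      image: registry.example.com/my-app:1.0
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
      volumeMounts:
        - name: tmp            # writable scratch space for the application
          mountPath: /tmp
  volumes:
    - name: tmp
      emptyDir: {}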

Restrict visibility between components

When we design applications nowadays, it is expected that they need to connect to other applications and components, and the service discovery capabilities in Kubernetes make this interaction very easy. Still, this also allows other apps to connect to services they perhaps shouldn’t reach. See what you can do to help in that aspect depending on your role and responsibility:

  • Developer responsibility:
    • Ensure your application has proper authentication and authorization policies in place to avoid any unauthorized use of your application.
  • Operator’s responsibility:
    • Manage the network visibility of the components at the platform level: deny all traffic by default and allow only the connections required by design, using Network Policies (see the sketch after this list).
    • Use Service Mesh tools to have a central approach for authentication and authorization.
    • Use tools like Kiali to monitor the network traffic and detect unreasonable traffic patterns.
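A minimal sketch of this default-deny-plus-explicit-allow approach with Network Policies (namespace and labels are hypothetical):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-namespace
spec:
  podSelector: {}              # applies to every pod in the namespace
  policyTypes:
    - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: my-namespace
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend    # only the frontend pods may reach the backend
      ports:
        - protocol: TCP
          port: 8080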

Conclusion

In conclusion, the importance of Kubernetes security cannot be overstated. It requires collaboration and shared responsibility between developers and operators. By focusing on practices such as vulnerability scanning, restricting additional privileges, and restricting visibility between components, organizations can create a more secure Kubernetes environment. By working together, developers and operators can fortify the container ecosystem, safeguarding applications, data, and critical business assets from potential security breaches. With a collaborative approach to Kubernetes security, organizations can confidently leverage the full potential of this powerful orchestration platform while maintaining the highest standards of security.

Secure Your Services with Istio: A Step-by-Step Guide to Setting up Istio TLS Connections

Introduction

Istio TLS configuration is one of the essential features when we enable a Service Mesh. The Istio Service Mesh provides many features to define, in a centralized, policy-driven way, how transport security, among other characteristics, is handled across the different workloads you have deployed on your Kubernetes cluster.

One of the main advantages of this approach is that you can have your application focus on the business logic they need to implement. These security aspects can be externalized and centralized without necessarily including an additional effort in each application you have deployed. This is especially relevant if you are following a polyglot approach (as you should) across your Kubernetes cluster workloads.

So, this time we’re going to have our applications handle only plain HTTP traffic, both internal and external, and depending on where we are reaching, we will force that connection to be TLS without the workload needing to be aware of it. So, let’s see how we can enable this Istio TLS configuration.

Scenario View

We will use this picture you can see below to keep in mind the concepts and components that will interact as part of the different configurations we will apply to this.

  • We will use the ingress gateway to handle all incoming traffic to the Kubernetes cluster and the egress gateway to handle all outgoing traffic from the cluster.
  • We will have a sidecar container deployed in each application to handle the communication from the gateways or the pod-to-pod communication.

To simplify the testing applications, we will use the default sample applications Istio provides, which you can find here.

How to Expose TLS in Istio?

This is the easiest part, as all the incoming communication you receive from the outside will enter the cluster through the Istio Ingress Gateway, so this is the component that needs to handle the TLS connection and then use the usual security approach to talk to the pod exposing the logic.

By default, the Istio Ingress Gateway already exposes a TLS port, as you can see in the picture below:


So we will need to define a Gateway that receives all this traffic over HTTPS and redirects it to the pods, and we will do it as you can see here:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway-https
  namespace: default
spec:
  selector:
    istio: ingressgateway
  servers:
    - hosts:
        - '*'
      port:
        name: https
        number: 443
        protocol: HTTPS
      tls:
        mode: SIMPLE # enables HTTPS on this port
        credentialName: httpbin-credential 

As we can see, it is a straightforward configuration: we just add the HTTPS port on 443 and provide the TLS configuration.
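Note that the credential referenced by credentialName must exist as a TLS secret in the same namespace as the ingress gateway (istio-system by default). A minimal sketch, assuming you already have a certificate and key for your host:

kubectl create -n istio-system secret tls httpbin-credential --key=server.key --cert=server.crt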

And with that, we can already reach the same pages using SSL:


How To Consume SSL from Istio?

Now that we have handled incoming TLS requests without the application knowing anything about it, we will go one step further and tackle the most challenging configuration. We will set up a TLS/SSL connection for any outgoing communication leaving the Kubernetes cluster, again without the application knowing anything about it.

To do so, we will use one of the Istio concepts we have already covered in a specific article: the Istio ServiceEntry, which allows us to define an external endpoint so it can be managed inside the mesh.

Here we can see the Wikipedia endpoint added to the Service Mesh registry:

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: se-app
  namespace: default
spec:
  hosts:
  - wikipedia.org
  ports:
  - name: https
    number: 443
    protocol: HTTPS
  resolution: DNS

Once we have configured the ServiceEntry, we can define a DestinationRule to force all connections to wikipedia.org to use the TLS configuration:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: tls-app
  namespace: default
spec:
  host: wikipedia.org
  trafficPolicy:
    tls:
      mode: SIMPLE

DevSecOps vs DevOps: Key Differences Explained by Answering 3 Core Questions

DevSecOps is a concept you have probably heard extensively in the last few months. You will see it mentioned alongside the traditional idea of DevOps. This probably, at some point, makes you wonder about a DevSecOps vs DevOps comparison, trying to understand the main differences between them or whether they are the same concept. And with other ideas starting to appear, such as Platform Engineering or Site Reliability Engineering, some confusion is growing in the field that I would like to clarify today in this article.

What is DevSecOps?

DevSecOps is an extension of the DevOps concept and methodology. Now, it is not a joint effort between Development and Operation practices but a joint effort among Development, Operation, and Security.

Diagram by GeekFlare: A DevSecOps Introduction (https://geekflare.com/devsecops-introduction/)

It implies introducing security policies, practices, and tools to ensure that the DevOps cycles provide security along the process. We have already commented on including security components to provide a more secure deployment process, and we even have specific articles about these tools, such as scanners, Docker registries, etc.

Why is DevSecOps important?

DevSecOps, or to be more explicit, including security practices as part of the DevOps process, is critical because we are moving to hybrid and cloud architectures where we incorporate new design, deployment, and development patterns such as containers, microservices, and so on.

This means we are moving from having hundreds of applications in the most complex cases to thousands of applications, and from dozens of servers to thousands of containers, each of them with different base images and third-party libraries that can be obsolete, have a security hole, or be hit by newly disclosed vulnerabilities, as we have seen in the past with the Spring Framework or the Log4j library, to name some of the most recent substantial global security issues companies have dealt with.

So even the most extensive security team cannot keep pace by checking manually, or with a set of scripts, all the new security challenges if we don’t include them as part of the overall process of developing and deploying the components. This is where the concept of shift-left security usually comes in, and we already covered that in this article you can read here.

DevSecOps vs DevOps: Is DevSecOps just updated DevOps?

So based on the above definition, you may think: “OK, so when somebody talks about DevOps, they are not thinking about security.” This is not true.

In the same way, when we talk about DevOps, we do not explicitly list all the detailed steps, such as software quality assurance, unit testing, etc. So, as happens with many extensions in this industry, the original, global, or generic concept includes the content of its extensions as well.

So, in the end, DevOps and DevSecOps are the same thing, especially today when all companies and organizations are moving to cloud or hybrid environments where security is critical and non-negotiable. Hence, every task that we do, from developing software to accessing any service, needs to be done with security in mind. But I use both concepts in different scenarios: I will use DevSecOps when I would like to explicitly highlight the security aspect because of the audience, the context, or the topic we are discussing.

Still, in any generic context, DevOps will include the security checks, because if it does not, it is just incomplete.

Summary

So, in the end, when somebody speaks today about DevOps, it implicitly includes the security aspect, so there is no difference between both concepts. But you will see and also find it helpful to use the specific term DevSecOps when you want to highlight or differentiate this part of the process.

Scan Docker Images Locally with Trivy: Fast and Reliable Vulnerability Detection

Scanning Docker images or, to be more accurate, scanning your container images, is becoming one of the everyday tasks to be done as part of the development of your application. The pace at which new vulnerabilities arise, the explosion of dependencies inside each container image, and the number of deployments per company make it quite complex to keep up and ensure that security issues can be mitigated.

We already covered this topic some time ago when the Docker Desktop tool introduced a scan option based on an integration with Snyk and, more recently, with the latest release of Lens, which offers image scanning in the “corporate” version of the tool. For some time now, the central cloud registries, such as ECR, have also included scanning as one of the capabilities for any image pushed there.

But what happens if you are moving from Docker Desktop to another option, such as Podman or Rancher Desktop? How can you scan your Docker images then?

Several scanners can be used to scan your container images locally, and some are easier to set up than others. One of the best known is Clair, which is also used as part of the Red Hat Quay registry and has a lot of traction. It works in a client-server mode that is great for different teams requiring a more “enterprise” deployment, usually closely tied to a registry. Still, it doesn’t play well when run locally, as it requires several components and relationships between them.

As an easy option to try locally, you have Trivy. Trivy is an exciting tool developed by Aqua Security. You may remember the company, as it is the one behind other security-related developments for Kubernetes, such as kube-bench, which we already covered in the past.

In its own words, “Trivy is a comprehensive security scanner. It is reliable, fast, and straightforward to use and works wherever you need it.”

How to Install Trivy?

The installation process is relatively easy and documented for every significant platform here. In the end, it relies on the binary packages available, such as RPM, DEB, Brew, MacPorts, or even a Docker image.
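For example, on macOS (or Linux with Homebrew installed), the following should be enough; alternatively, the official aquasec/trivy container image can be used without installing anything locally:

brew install trivy

# or run it as a container, mounting the Docker socket so it can reach local images
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock aquasec/trivy:latest image python:3.4-alpine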

How To Scan Docker Images With Trivy?

Once it is installed, you can just run the commands such as this:

 trivy image python:3.4-alpine

This will do the following tasks:

  • Update the repository DB with all the vulnerabilities
  • Pull the image in case this is not available locally
  • Detect the languages and components present in that image
  • Check the detected components against the vulnerability database and generate an output

What Output Is Provided By Trivy?

As a sample, this is the output for the python:3.4-alpine image as of today:

Scan Docker Images With Trivy

You will get a table with one row per library or component where a vulnerability has been detected, showing the library name and the related exposure identified by its CVE code. The CVE code is how vulnerabilities are usually referred to, as they are registered in a common repository with their descriptions and details. In addition, the table shows the severity of the vulnerability based on the existing report, the version currently detected in the image, the first version that fixes the vulnerability (if a fixed version exists), and finally a title to provide a bit more context about the vulnerability:


If a library is related to more than one vulnerability, the row is split into several cells so you can see the data for each vulnerability.
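Trivy also fits naturally into automated checks; for example, the following invocation (using flags documented by the tool) returns a non-zero exit code when HIGH or CRITICAL vulnerabilities are found, which is handy for failing a CI job:

trivy image --severity HIGH,CRITICAL --exit-code 1 python:3.4-alpine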

Inject Secrets into Kubernetes Pods Using HashiCorp Vault (Agent Injector Guide)

Introduction

This article will cover how to inject secrets into Pods using HashiCorp Vault. In previous articles, we covered how to install HashiCorp Vault in Kubernetes, how to configure and create secrets in it, and how tools such as TIBCO BW can retrieve them. Today, we are going to go one step further.

The reason injecting secrets into pods is so important is that it lets the application inside the pod remain unaware of any communication with HashiCorp Vault. For the application, the secret is just a regular file located in a specific path inside the container. It doesn’t need to worry whether this file came from a HashiCorp secret or from a totally different source.

This injection approach supports the polyglot nature of the Kubernetes ecosystem because it frees the underlying application from any responsibility. The same happens with other injector approaches, such as the one Istio uses.

But let’s explain how this approach works to inject secrets into Pods using HashiCorp Vault. As part of the installation, alongside the vault server we have installed (or several of them if you have done a distributed installation), we have seen another pod under the name of vault-agent-injector, as you can see in the picture below:

Inject Secrets In Pods: Vault Injector Pod

This agent is responsible for listening to the new deployments you create and, based on the annotations each deployment has, launching a sidecar alongside your application and sending it the configuration needed to connect to the vault, download the required secrets, and mount them as files inside your pod, as shown in the picture below:

To do that, we need to perform several configuration steps, which we will cover in the upcoming sections of the article.

Enabling Kubernetes Authentication in Hashicorp

The first thing we need to do at this stage is to enable the Kubernetes Authentication in Hashicorp. This method allows clients to authenticate with a Kubernetes Service Account Token. We do that with the following command:

vault auth enable kubernetes

Vault accepts a service token from any client in the Kubernetes cluster. During authentication, Vault verifies that the service account token is valid by querying the Kubernetes token review endpoint. Now, we need to configure this authentication method, providing the location of the Kubernetes API, and to do that, we need to run the following command:

 vault write auth/kubernetes/config \
    kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443"

Defining a Kubernetes Service Account and a Policy

Now, we will create a Kubernetes Service Account that will run our pods, and this service account will be allowed to retrieve the secret we generated in the previous post.

To do that, we will start with the creation of the service account by running this command from outside the pod:

kubectl create sa internal-app

This will create a new service account under the name of internal-app, and now we are going to generate a policy inside the Hashicorp Vault server by using this command inside the vault server pod:

 vault policy write internal-app - <<EOF
path "internal/data/database/config" {
  capabilities = ["read"]
}
EOF

And now, we associate this policy with the service account by running this command, also inside the vault server pod:

  vault write auth/kubernetes/role/internal-app \
    bound_service_account_names=internal-app \
    bound_service_account_namespaces=default \
    policies=internal-app \
    ttl=24h

And that’s pretty much all the configuration we need to do at the Vault side to be able to inject secrets in pods using Hashicorp Vault. Now, we need to configure our application accordingly by doing the following modifications:

  • Specify the ServiceAccountName to the deployment to be the one we created previously: internal-app
  • Specify the specific annotations to inject the vault secrets and the configuration of those secrets.

Let’s start with the first point. We need to add the serviceAccountName to our Kubernetes Manifest YAML file as shown below:

Inject Secrets In Pods: Service Account Name definition
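Since the screenshot may not render here, a minimal sketch of the relevant part of the Deployment manifest (names other than the service account are hypothetical) could look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      serviceAccountName: internal-app   # the service account created above
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0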

And regarding the second point, we would solve it by adding several annotations to our deployment, as shown below:

Inject Secrets In Pods: Annotations
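As a sketch, the annotations block could look like the following (added under the pod template metadata of the Deployment); each annotation is explained right after:

  template:
    metadata:
      labels:
        app: my-app
      annotations:
        vault.hashicorp.com/agent-inject: 'true'
        vault.hashicorp.com/role: 'internal-app'
        vault.hashicorp.com/agent-inject-secret-secret-database-config.txt: 'internal/data/database/config'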

The annotations used to inject secrets in pods are the following ones:

  • vault.hashicorp.com/agent-inject: ‘true’: This tells the vault injector that we would like to inject the sidecar agent into this deployment and provide the Vault configuration. This is required for any further configuration.
  • vault.hashicorp.com/role: internal-app: This is the vault role we are going to use when asking the vault for secrets and information, to be sure that we only access the secrets allowed by the policy we created in the previous section.
  • vault.hashicorp.com/agent-inject-secret-secret-database-config.txt: internal/data/database/config: There will be one such annotation per secret we plan to add, and it is composed of three parts:
    • vault.hashicorp.com/agent-inject-secret-: this part is fixed
    • secret-database-config.txt: this part is the filename that will be created under /vault/secrets inside our pod
    • internal/data/database/config: this is the path inside the vault of the secret to be linked to that file.

And that’s it! If we now apply our deployment, we will see the following things:

  • Our deployment application has been launched with three containers instead of one because two of them are Hashicorp Vault related, as you can see in the picture below:
  • vault-agent-init is an init container that establishes the connection to the vault server before any other container starts, performing the first download and injection of secrets into the pod based on the configuration provided.
  • vault-agent will be the container running as a watcher to detect any modification on the related secrets and update them.

And now, if we go to the main container, we will see in the /vault/secrets path that the secret has finally been injected as expected:


And this is how, easily and without any knowledge of the underlying app, we can inject secrets into pods using HashiCorp Vault.

TIBCO BusinessWorks HashiCorp Vault Integration: Secure Secrets in 3 Steps

Introduction

This article aims to show the TIBCO BW HashiCorp Vault configuration to integrate your TIBCO BW application with the secrets stored in HashiCorp Vault, mainly for the externalization and management of passwords and credential resources.

This article is part of my comprehensive TIBCO Integration Platform Guide where you can find more patterns and best practices for TIBCO integration platforms.

As you probably know, in a TIBCO BW application the configuration is stored in properties at different levels (Module or Application properties). You can read more about them here. The primary purpose of these properties is to provide flexibility to the application configuration.

These properties can be of different types, such as String, Integer, Long, Double, Boolean, and DateTime, among other technical resources inside TIBCO BW, as shown in the picture below:

TIBCO BW Hashicorp Vault Configuration: BW Property Types

The TIBCO BW HashiCorp Vault integration will affect only those properties of the Password type (at least up to BW version 2.7.2/6.8.1). The reason is that those properties hold the kind of sensitive information that needs to be secured. Other values can be managed through standard Kubernetes components such as ConfigMaps.

BW Application Definition

We are going to start with a straightforward application, as you can see in the picture below:

TIBCO BW Hashicorp Vault Configuration: Property sample

Just a simple timer that will be executed once and insert the current time into the PostgreSQL database. We will use Hashicorp Vault to store the password of the database user to be able to connect to it. The username and the connection string will reside on a ConfigMap.

We will skip the part of the configuration regarding the deployment of the TIBCO BW application containers and how to link them to a ConfigMap (there is an article covering that in detail in case you need to follow it), and we will focus just on the TIBCO BW HashiCorp Vault integration.

So we will need to tell TIBCO BW that the password of the JDBC Shared Resource is linked to the HashiCorp Vault configuration, and to do that, the first thing is to tie the Password of the Shared Resource to a Module Property, as shown in the picture below:

TIBCO BW Hashicorp Vault Configuration: Password linked to Module Property

Now, we need to indicate that this Module Property is linked to HashiCorp Vault, and we will do that in the Application Property view, selecting that this property is linked to a Credential Management solution, as shown in the picture below:

TIBCO BW Hashicorp Vault Configuration: Credential Management Configuration for Property

And it is now that we establish the TIBCO BW HashiCorp Vault relationship. We need to click directly on the green plus sign, and we will get a modal window asking for the credential management technology that we’re going to use and the data needed for each of them, as you can see in the following picture:

TIBCO BW Hashicorp Vault Configuration: Credential Management Configuration for Property

We will select HashiCorp Vault as the provider. Then we will need to provide three attributes that we already commented on in the previous article when we started creating secrets in HashiCorp Vault:

  • Secret Name: this is the secret name path after the root path of the element.
  • Secret Key: This is the key inside the secret itself
  • Mount Path: This is the root path of the secret

To get more details about these three concepts, please look at our article about how to create secrets in Hashicorp Vault.

So with all this, we have pretty much everything we need to connect to HashiCorp Vault and grab the secret. From the TIBCO BusinessStudio side, everything is done; we can generate the EAR file and deploy it into Kubernetes, where the last part of our configuration takes place.

Kubernetes Deployment

Until this moment, we have the following information already provided:

  • BW process that has the logic to connect to the database and insert information
  • Link between the password property used to connect and the Hashicorp Secret definition

So, pretty much everything is there, but one concept is missing: how will the Kubernetes pod connect to HashiCorp Vault once the pod is deployed? Until this point, we didn’t provide the HashiCorp Vault server location or the authentication method to connect to it. This is the missing part of the TIBCO BW HashiCorp Vault integration and will be part of the Kubernetes Deployment YAML file.

We will do that using the following environment properties in this sample:

TIBCO BW Hashicorp Vault Configuration: Hashicorp Environment Variables
  • HASHICORP_VAULT_ADDR: This variable will point to where the Hashicorp Vault server is located
  • HASHICORP_VAULT_AUTH: This variable will indicate which authentication options will be used. In our case, we will use the token one as we used in the previous article
  • HASHICORP_VAULT_KV_VERSION: This variable indicates which version of the KV storage engine we are using, and it is 2 by default.
  • HASHICORP_VAULT_TOKEN: This will be the token value used to authenticate against the HashiCorp Vault server
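As a sketch, these environment variables could be added to the container spec of the Deployment as follows (the values are placeholders; check the TIBCO documentation linked below for the exact accepted values, and prefer keeping the token in a Kubernetes Secret rather than as a literal value):

        env:
          - name: HASHICORP_VAULT_ADDR
            value: "http://vault.default.svc.cluster.local:8200"
          - name: HASHICORP_VAULT_AUTH
            value: "token"
          - name: HASHICORP_VAULT_KV_VERSION
            value: "2"
          - name: HASHICORP_VAULT_TOKEN
            valueFrom:
              secretKeyRef:
                name: vault-token   # hypothetical Secret holding the Vault token
                key: token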

If you are using other authentication methods or just want to know more about those properties please take a look at this documentation from TIBCO.

With all that added to the environment properties of our TIBCO BW application, we can run it, and we will get an output similar to this one, which shows that the TIBCO BW HashiCorp Vault integration has been completed and the application was able to start without any issue:

TIBCO BW Hashicorp Vault Configuration: Running sample

Create Secrets in HashiCorp Vault: CLI and REST API Explained Step by Step

Introduction

Creating secrets in HashiCorp Vault is one of the most important and relevant things you can do once you have installed HashiCorp Vault in your environment, with the next step usually being to retrieve those secrets from the components that need them. But in today’s article, we will focus on the first part, so you can learn how easily you can create secrets in HashiCorp Vault.

In previous articles we commented on the importance of HashiCorp Vault and the installation process, as you can read here. Hence, at this point, we already have our vault fully initialized and unsealed, ready to start serving requests.

Create Secrets in Hashicorp Vault using Hashicorp Vault CLI Commands

All the commands we will run use a central component named the HashiCorp Vault CLI, and you will notice that because all of our commands will start with vault. To be honest, we already started with it in the previous article; if you remember, we already ran some of these commands to initialize or unseal the vault, but now this will be our main component to interact with.

The first thing we need to do is log into the vault, and to do that, we are going to use the root token that was provided to us when we initialized the vault; we are going to store this token in an environment variable so it will be easy to work with. All the commands we are going to run now are executed inside the vault server pod, as shown in the picture below:

Create Secrets in Hashicorp Vault: Detecting Vault Server Pod

Once we are inside of it, we are going to run the login command with the following syntax:

 vault login 

And we will get an output similar to this one:

Create Secrets in Hashicorp Vault: Login in Hashicorp Vault

If we do not provide the token in advance, the console will ask for the token to be typed afterward, and it will be automatically hidden, as you can see in the picture below:

Create Secrets in Hashicorp Vault: Login without Token provided

After this point, we are already logged into the vault, so we can start typing commands to create secrets in Hashicorp Vault. Let’s start with that process.

To start with our process for creating secrets in HashiCorp Vault, we first need to create, or, to be more accurate to the HashiCorp Vault syntax, enable, a secrets path, which you can think of as the root path to which all your secrets will be related. If we are talking about having secrets for different applications, each path can correspond to one of the applications, but the organization of secrets can be different depending on the context. We will cover that in much more detail in the following articles.

To enable the secret path to start the creation of secrets in Hashicorp Vault, we will type the following command:

 vault secrets enable -path=internal kv-v2

That will enable a secret store of the type kv-v2 (key-value store in its v2), and the path will be “internal,” so everything else that we create after that will be under this “internal” root path.

And now, we’re going to create the secret in HashiCorp Vault, and as we are using a key-value store, the syntax is also related to that: we are going to “put” a secret using the following command:

 vault kv put internal/database/config username="db-readonly-username" password="db-secret-password"

That will create inside the internal path a child path /database/config where it will store two keys:

  • username that will have the value db-readonly-username
  • password that will have the value db-secret-password

As you can see, it is quite easy to create new secrets on the Vault linked to the path, and if you want to retrieve its content, you can also do it using the Vault CLI, but this time using the get command as shown in the snippet below:

 vault kv get internal/database/config

And the output will be similar to the one shown below:

Create Secrets in Hashicorp Vault: Retrieving Secrets from the Vault

This will help you interact with your store’s content to retrieve, add or update what you already have there. Once you have everything ready there, we can move to the client side to configure it to gather all this data as part of its lifecycle workflow.

Create Secrets in Hashicorp Vault using REST API

The HashiCorp Vault CLI simplifies the interaction with the vault server, but all the interaction between the CLI and the server happens through a REST API that the server exposes and the CLI client consumes. The CLI provides a simplified syntax to the user and translates the parameters provided into REST requests to the server, but you can also send REST requests directly to the server. Please look at this article in the official documentation to get more details about the REST API.
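As a sketch, the same secret we created above could be written and read through the HTTP API with curl (assuming the default listener address and that $VAULT_TOKEN holds a valid token; note that the KV v2 engine wraps the payload under a "data" field):

# Write the secret
curl --header "X-Vault-Token: $VAULT_TOKEN" \
     --request POST \
     --data '{"data": {"username": "db-readonly-username", "password": "db-secret-password"}}' \
     http://127.0.0.1:8200/v1/internal/data/database/config

# Read it back
curl --header "X-Vault-Token: $VAULT_TOKEN" \
     http://127.0.0.1:8200/v1/internal/data/database/config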

HashiCorp Vault Installation on Kubernetes Using Helm (Quick Start Guide)

Introduction

In this article, we are going to cover the HashiCorp Vault installation on Kubernetes. HashiCorp Vault has become one of the industry standards when we talk about managing secrets and sensitive data in production environments, covering both cloud and non-cloud-native deployments. But especially in Kubernetes, this is a critical component. We have already commented that Kubernetes Secrets are not very secure by default, and HashiCorp Vault solves that problem.

Installation Methods

Hashicorp Vault provides many different installation methods that you can read about on their official page here; most still focus on a traditional environment. But in summary, these are the ones you have available:

  • Install from Package Manager
  • Install from pre-existing binary
  • Install it from the source
  • Helm for Kubernetes

As you can imagine, the path we will follow here is the Helm way. I guess you are already familiar with Helm, but if not, I have some articles focused on Helm Charts where you can find all the valuable information.

Helm Chart for Hashicorp Vault

For the sake of this article, we are going to perform what is called a standalone HashiCorp Vault installation, so we are not going to create in this post a production-ready, High-Availability (HA) architecture, but something that can help you start playing with the tool and see how it can be integrated with other tools that belong to the same cloud-native environment. To get more information about deploying HashiCorp Vault in a production-ready setup, please look at the following link.

We first need to install the Helm chart in our local environment, but we need to be very careful about the Helm version we have. When writing this article, the HashiCorp Vault installation requires Helm 3.7+, so you must first check the version you have installed.

In case you’re running on an older version, you will get the following error:

 Error: parse error at (vault/templates/_helpers.tpl:38): unclosed action

You can get more details on this GitHub issue.

At the time of writing this article, the latest version of Helm is 3.9, but this version generates an issue with AWS EKS with this error:

 Error: Kubernetes cluster unreachable: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1."
Failed installing **** with helm

You can get more details on this GitHub issue.

So, in that case, the best way to ensure there will not be a problem with the Hashicorp Vault Installation is to downgrade to 3.8, and you will be able to deploy the helm chart without any issue.

Hashicorp Vault Installation Process

To proceed with the Hashicorp Vault Installation, we need to run the following commands:

helm repo add hashicorp https://helm.releases.hashicorp.com
helm install vault hashicorp/vault

This will install two different components: a single vault server as part of a StatefulSet, and a vault-agent-injector to manage the injection of the vault configuration into the various components and deployments in the other namespaces.

To get the pods running, we need to initialize and unseal the vault before it is ready to use. To do that, we need to get a shell inside the vault-server pod and execute the commands below.
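Assuming the default release name vault, the server pod created by the StatefulSet should be called vault-0, so something like this gets us inside it:

kubectl exec -it vault-0 -- /bin/sh

Once inside the pod, we run: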

 vault operator init

This will generate several essential things:

  • It will generate the keys needed to unseal the vault so you can start using it. It will print a number of key shares, five in our sample, and you will need at least three of them to unseal the vault.
  • It will also generate a root token so you can log into the CLI and interact with the server to read and write secrets.

After that, we will need to run the following command at least three times, providing a different unseal key each time:

vault operator unseal

After that point, all components are Running and Ready, we can conclude our HashiCorp Vault installation, and we can start interacting with the vault to create secrets.

Hashicorp Vault Installation: All Components Ready

Grafana LDAP Integration: Secure Authentication in Less Than 5 Minutes

This article will cover how to quickly integrate Grafana with an LDAP server to increase the security of your installation.

Grafana is one of the most used dashboard tools, mainly for observability platforms in cloud-workload environments. But none of these tools is complete until you can configure them to meet your security standards. Security is becoming more and more important, and this is especially true when we are talking about any cloud-related component. So this topic should always be something that any cloud architect has in mind when defining or implementing any piece of software today.

And at that point, the LDAP integration is one of the main things you will always need to do. I don’t know any enterprise, big or small, that allows the usage of any tool with a Graphical User Interface if it is not connected to the corporate directory.

So, let’s see how we can implement this with a popular tool such as Grafana. In my case, I’m going to use a containerized version of Grafana, but the steps remain the same no matter the deployment model.
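As a sketch of such a containerized setup, the official grafana/grafana image can be run while mounting the two configuration files we will edit below (the local paths are placeholders for wherever you keep them):

docker run -d --name=grafana -p 3000:3000 \
  -v "$(pwd)/grafana.ini:/etc/grafana/grafana.ini" \
  -v "$(pwd)/ldap.toml:/etc/grafana/ldap.toml" \
  grafana/grafana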

The task that we are going to perform is based on three steps:

  • Enable the LDAP settings
  • Basic LDAP settings configuration
  • Group mapping configuration

The first thing we need to modify is grafana.ini, adding the following snippet:

    [auth.ldap]
    enabled = true
    config_file = /etc/grafana/ldap.toml

That means that we are enabling the authentication based on LDAP and setting the location of the LDAP configuration file. This ldap.toml has all the configuration needed for the LDAP:

    [[servers]]
    host = "localhost"
    port = 389
    use_ssl = false
    start_tls = false
    ssl_skip_verify = false
  
    bind_dn = "XXXXXX-XXXXXXXXX"
    bind_password = "XXXXXX"
    
    search_filter = "(&(sAMAccountName=%s)(memberOf=ADMIN_GROUP))"
    search_base_dns = ["DC=example,DC=org"]

  
    [servers.attributes]
    member_of = "member"
    email =  "mail"
    name = "givenName"
    surname = "sn"
    username = "sAMAccountName"

    [[servers.group_mappings]]
    group_dn = "*"
    org_role = "Admin"

The first part concerns the primary connection: we establish the host location and port, the bind (admin) user, and its password. After that, we need a second section that defines search_filter and search_base_dns to determine the users that can access the system.

Finally, we have another section to define the mapping between the LDAP attributes and the grafana attributes to be able to gather the data from the LDAP.

    [servers.attributes]
    member_of = "member"
    email =  "mail"
    name = "givenName"
    surname = "sn"
    username = "sAMAccountName"

Also, we can define the privileges that the different LDAP groups are granted by mapping them to the various org_roles in Grafana, as you can see in the snippet below:

    [[servers.group_mappings]]
    group_dn = "*"
    org_role = "Admin"

With all those changes in place, you need to restart the Grafana server to see all this configuration applied. After that point, you can log in using the LDAP credentials as you can see in the picture below and see all the data retrieved from the LDAP Server:


Using the default admin user, you can also use a new feature to test the LDAP configuration, through the LDAP option that you can see in the picture below:


And then you can search for a user and see how this user will be treated on the Grafana server:

  • Check the attributes that will be mapped from the LDAP server to the Grafana server
  • Also check the activity status and the role assigned to this user

You can see one sample in the picture below:

I hope that this article helps you level up the security settings of your Grafana installations by integrating with an LDAP server. If you want more information about this and similar topics, I encourage you to take a look at the links you will find below.

Improving Development Security with Open Source DevSecOps Tools (Syft & Grype)

Discover how Anchore can help you to keep your software safe and secure without losing agility.

Development Security is one of the big topics of today’s development practice. All the improvements that we got following the DevOps practices have generated many issues and concerns from the security perspective.

The explosion of components that the security teams need to deal with, container approaches, and polyglot environments gave us many benefits from the development and the operational perspective. Still, it made the security side of it more complex.

This is why there have been many movements regarding the “shift left” approach and including security as part of the DevOps process, creating the new term DevSecOps, which is becoming the new normal.

So, today I would like to bring you a set of tools I have just discovered that are created to make your life easier from the development security perspective, because developers also need to be part of this and not leave all the responsibility to a different team.

This set of tools is named Anchore Toolbox, and they are open source and free to use, as you can see on the official webpage (https://anchore.com/opensource/)

So, what can Anchore provide us? At the moment, we are talking about two different applications: Syft and Grype.

Syft

Syft is a CLI tool and Go library for generating a Software Bill of Materials (SBOM) from container images and filesystems. Installation is as easy as executing the following command:

curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin

And after doing that, we need to type syft to see all the options at our disposal:

Syft help menu with all the options available

So, in our case, I will use it to generate a bill of materials from an existing Docker image, bitnami/kafka, to show how this works. I need to type the following command:

syft bitnami/kafka

And after a few seconds for the image to be loaded and analyzed, I get as the output the list of each and every package this image has installed and its version, as shown in the output below. One great thing is that it shows not only the operating system packages, like what we have installed using apk or apt, but also other components like Java libraries, so we can have a complete bill of materials for this container image.

 ✔ Loaded image
 ✔ Parsed image
 ✔ Cataloged image [204 packages]
[0019] WARN unexpectedly empty matches for archive ‘/opt/bitnami/kafka/libs/kafka_2.12–2.7.0-javadoc.jar’
[0019] WARN unexpectedly empty matches for archive ‘/opt/bitnami/kafka/libs/kafka_2.12–2.7.0-javadoc.jar’
[0019] WARN unexpectedly empty matches for archive ‘/opt/bitnami/kafka/libs/scala-java8-compat_2.12–0.9.1.jar’
[0019] WARN unexpectedly empty matches for archive ‘/opt/bitnami/kafka/libs/scala-java8-compat_2.12–0.9.1.jar’
[0019] WARN unexpectedly empty matches for archive ‘/opt/bitnami/kafka/libs/kafka_2.12–2.7.0-test-sources.jar’
[0019] WARN unexpectedly empty matches for archive ‘/opt/bitnami/kafka/libs/kafka_2.12–2.7.0-test-sources.jar’
[0019] WARN unexpectedly empty matches for archive ‘/opt/bitnami/kafka/libs/jackson-module-scala_2.12–2.10.5.jar’
[0019] WARN unexpectedly empty matches for archive ‘/opt/bitnami/kafka/libs/jackson-module-scala_2.12–2.10.5.jar’
[0019] WARN unexpectedly empty matches for archive ‘/opt/bitnami/kafka/libs/kafka-streams-scala_2.12–2.7.0.jar’
[0019] WARN unexpectedly empty matches for archive ‘/opt/bitnami/kafka/libs/kafka-streams-scala_2.12–2.7.0.jar’
[0019] WARN unexpectedly empty matches for archive ‘/opt/bitnami/kafka/libs/kafka_2.12–2.7.0-test.jar’
[0019] WARN unexpectedly empty matches for archive ‘/opt/bitnami/kafka/libs/kafka_2.12–2.7.0-test.jar’
[0019] WARN unexpectedly empty matches for archive ‘/opt/bitnami/kafka/libs/scala-collection-compat_2.12–2.2.0.jar’
[0019] WARN unexpectedly empty matches for archive ‘/opt/bitnami/kafka/libs/scala-collection-compat_2.12–2.2.0.jar’
[0020] WARN unexpectedly empty matches for archive ‘/opt/bitnami/kafka/libs/kafka_2.12–2.7.0.jar’
[0020] WARN unexpectedly empty matches for archive ‘/opt/bitnami/kafka/libs/kafka_2.12–2.7.0.jar’
[0020] WARN unexpectedly empty matches for archive ‘/opt/bitnami/kafka/libs/kafka_2.12–2.7.0-sources.jar’
[0020] WARN unexpectedly empty matches for archive ‘/opt/bitnami/kafka/libs/kafka_2.12–2.7.0-sources.jar’
[0020] WARN unexpectedly empty matches for archive ‘/opt/bitnami/kafka/libs/scala-logging_2.12–3.9.2.jar’
[0020] WARN unexpectedly empty matches for archive ‘/opt/bitnami/kafka/libs/scala-logging_2.12–3.9.2.jar’
NAME VERSION TYPE
 java-archive
acl 2.2.53–4 deb
activation 1.1.1 java-archive
adduser 3.118 deb
aopalliance-repackaged 2.6.1 java-archive
apt 1.8.2.2 deb
argparse4j 0.7.0 java-archive
audience-annotations 0.5.0 java-archive
base-files 10.3+deb10u8 deb
base-passwd 3.5.46 deb
bash 5.0–4 deb
bsdutils 1:2.33.1–0.1 deb
ca-certificates 20200601~deb10u2 deb
com.fasterxml.jackson.module.jackson.module.scala java-archive
commons-cli 1.4 java-archive
commons-lang3 3.8.1 java-archive
...

Grype

Grype is a vulnerability scanner for container images and filesystems. It is the next step because it checks the image’s components and checks if there is any known vulnerability.

To install this component is again as easy as typing the following command:

curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin

After doing that, we need to type grype to have the help menu with all the options at our disposal:

Grype help menu with all the options available

Grype works in the following way. The first thing it does is load the vulnerability DB that the different packages will be checked against for any known vulnerability. After that, it follows the same pattern as Syft: it generates the bill of materials and checks each of the components against the vulnerability database. If there is a match, it provides the ID of the vulnerability, its severity, and, if it is fixed in a higher version, the version where the vulnerability has been fixed.

Here you can see the output for the same bitnami/kafka image with all the vulnerabilities detected:

grype bitnami/kafka
 ✔ Vulnerability DB [updated]
 ✔ Loaded image
 ✔ Parsed image
 ✔ Cataloged image [204 packages]
 ✔ Scanned image [149 vulnerabilities]
[0018] ERROR matcher failed for pkg=Pkg(type=java-archive, name=, version=): matcher failed to fetch by CPE pkg=’’: product name is required
[0018] ERROR matcher failed for pkg=Pkg(type=java-archive, name=, version=): matcher failed to fetch by CPE pkg=’’: product name is required
[0018] ERROR matcher failed for pkg=Pkg(type=java-archive, name=, version=): matcher failed to fetch by CPE pkg=’’: product name is required
[0018] ERROR matcher failed for pkg=Pkg(type=java-archive, name=, version=): matcher failed to fetch by CPE pkg=’’: product name is required
[0018] ERROR matcher failed for pkg=Pkg(type=java-archive, name=, version=): matcher failed to fetch by CPE pkg=’’: product name is required
[0018] ERROR matcher failed for pkg=Pkg(type=java-archive, name=, version=): matcher failed to fetch by CPE pkg=’’: product name is required
NAME INSTALLED FIXED-IN VULNERABILITY SEVERITY
apt 1.8.2.2 CVE-2011–3374 Negligible
bash 5.0–4 CVE-2019–18276 Negligible
commons-lang3 3.8.1 CVE-2013–1907 Medium
commons-lang3 3.8.1 CVE-2013–1908 Medium
coreutils 8.30–3 CVE-2016–2781 Low
coreutils 8.30–3 CVE-2017–18018 Negligible
curl 7.64.0–4+deb10u1 CVE-2020–8169 Medium
..

Summary

These simple CLI tools help us a lot in the needed journey to keep our software current and free of known vulnerabilities and to improve our development security. Also, as they are CLI apps that can also run in containers, it is effortless to include them as part of your CI/CD pipeline so vulnerabilities can be checked in an automated way.
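For example, building on the flags both tools document, a pipeline step could archive the SBOM and fail the build on severe findings:

# Generate an SBOM in JSON format that can be stored as a build artifact
syft bitnami/kafka -o json > kafka-sbom.json

# Return a non-zero exit code when vulnerabilities of high (or higher) severity are found
grype bitnami/kafka --fail-on high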

They also provide plugins for the most used CI/CD systems, such as Jenkins, CloudBees, CircleCI, GitHub Actions, Bitbucket, Azure DevOps, and so on.