Scale to Zero in Kubernetes: Bringing the Serverless Experience to Your Cluster

Bringing the Serverless Experience To Your Kubernetes Cluster

Serverless has always been considered the next step in the cloud journey. You know what I mean: you start with VMs on-premises, then you move to containers on a PaaS platform, and then you look for the next stop in the journey, which is serverless.

Technological evolution, defined from an infrastructure-abstraction perspective

Serverless is the idea of forgetting about infrastructure and focusing only on your apps. There is no need to worry about where they will run or about managing the underlying infrastructure. Serverless started as a synonym for the Function as a Service (FaaS) paradigm, popularized first by AWS Lambda and later by all the major cloud providers.

It started as an alternative to the containerized approach, which requires considerable technical skill to manage and run at production scale, but this is no longer the case.

Despite this starting point, we have seen the serverless approach reach every kind of platform. Following the same principles, there are platforms whose focus is to abstract away all the operational details and give you a place to run your logic. Pretty much every SaaS platform takes this approach, but I would like to highlight a couple of examples to clarify:

  • Netlify is a platform that allows you to deploy your web application without managing anything beyond the code needed to run it.
  • TIBCO Cloud Integration is an iPaaS solution that provides all the technical resources you could need so you can focus on deploying your integration services.

But going beyond that, pretty much every service provided by the major cloud platforms such as Azure, AWS, or GCP follows the same principle. Most of them (messaging, machine learning, storage, and so on) abstract away the underlying infrastructure so you can focus on the service itself.

Going back to the Kubernetes ecosystem, we have two layers of this approach. The first is the managed Kubernetes services that all the big platforms provide, where the management of Kubernetes itself (master nodes, internal Kubernetes components) is transparent to you and you focus entirely on the workers. The second level is what you get in the AWS world with an EKS + Fargate architecture, where not even the worker nodes exist from your perspective: your pods are deployed on a machine that belongs to your cluster, but you don't need to worry about it or manage anything related to it.

As we have seen, the serverless approach is reaching every area, but that is not the scope of this article. The idea here is to focus on serverless as a synonym for Function as a Service (FaaS) and on how we can bring the FaaS experience to our production Kubernetes ecosystem. But let's start with the initial question:

Why would we want to do that?

This is the most interesting thing to ask: what benefits does this approach provide? Function as a Service follows the scale-to-zero approach. That means the function is not loaded if it is not being executed, and this is important, especially when you are responsible for your infrastructure, or at least paying for it.

Imagine a normal microservice written in any technology: the amount of resources it uses depends on its load, but even with no load at all, it needs some resources just to keep running; mainly we are talking about memory that stays in use. The actual amount depends on the technology and the implementation itself, but it can range from a few MB to several hundred. If you add up all the microservices a sizable enterprise runs, the difference amounts to several GB you are paying for that provide no value.

But beyond infrastructure management, this approach also plays very well with another recent architectural trend, Event-Driven Architecture (EDA), because we can have services that sleep, just waiting for the right event to wake them up and start processing.

So, in a nutshell, the serverless approach helps you reach that optimized-infrastructure dream and enables different patterns in an efficient way. But what happens if you already own the infrastructure? The benefit is the same, because you can run more services on the same infrastructure, so you still get optimized use of what you already have.

What do we need to enable that?

The first thing to know is that not every technology or framework is suitable for this approach, because a service needs to meet some requirements for it to succeed, as shown below:

  • Quick Startup: If your logic is not loaded before a request hits the service, you need to make sure it can load quickly to avoid impacting the consumer of the service. That means you need a technology that can start in a very short time, usually in the millisecond range.
  • Stateless: As your logic is not going to stay loaded continuously, this approach is not suitable for stateful services.
  • Disposability: Similar to the previous point, the service should be ready for a robust, graceful shutdown.

How do we do that?

Several frameworks allow us to bring all those benefits into our Kubernetes ecosystem, such as the following:

  • Knative: This is the framework supported by the CNCF and included by default in many Kubernetes distributions, such as Red Hat OpenShift (see the sample manifest after this list).
  • OpenFaaS: This is a widely used framework created by Alex Ellis that supports the same idea.
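
To make this concrete, here is a minimal sketch of a Knative Service, based on the hello-world sample from the Knative documentation; the service name is a hypothetical choice. With Knative Serving's default autoscaler settings, the revision scales down to zero pods when it stops receiving traffic and is spun up again on the next request:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go   # sample image from the Knative docs
          env:
            - name: TARGET
              value: "World"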

It is true that there are other alternatives, such as Apache OpenWhisk, Kubeless, or Fission, but they are less used these days, and most adopters choose between OpenFaaS and Knative. If you want to read more about the other alternatives, there is a CNCF article covering them so you can take a look for yourself.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Solution for Transformation failed for XSLT input in BusinessWorks

Solving one of the most common developer issues using BusinessWorks

"Transformation failed for XSLT input" is one of the most common error messages you can see when developing with the TIBCO BusinessWorks tools. Understanding what the message says is essential to reach a quick solution.

This article is part of my comprehensive TIBCO Integration Platform Guide where you can find more patterns and best practices for TIBCO integration platforms.

I have seen developers spend hours and hours troubleshooting this kind of error when, most of the time, all the information you need is right in front of you; you just need to understand why the engine is complaining.

But let's provide a little context first. What is the error we're talking about? Something like what you can see in the log trace below:

...
com.tibco.pvm.dataexch.xml.util.exceptions.PmxException: PVM-XML-106027: Transformation failed for XSLT input '<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:tns1="http://www.tibco.com/pe/WriteToLogActivitySchema" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:tib="http://www.tibco.com/bw/xslt/custom-functions" version="2.0"><xsl:param name="Mapper2"/><xsl:template name="Log-input" match="/"><tns1:ActivityInput><message><xsl:value-of select="tib:add-to-dateTime($Mapper2/primitive,0,0,1,0,0,0)"/></message></tns1:ActivityInput></xsl:template></xsl:stylesheet>'
	at com.tibco.pvm.infra.dataexch.xml.genxdm.expr.IpmxGenxXsltExprImpl.eval(IpmxGenxXsltExprImpl.java:65)
	at com.tibco.bx.core.behaviors.BxExpressionHelper.evalAsSubject(BxExpressionHelper.java:107)
...

Sounds more familiar now? As I said, every TIBCO BusinessWorks developer has faced it at some point, and I have seen some of them struggle, or even redo the work repeatedly, without finding a proper solution. The idea here is to solve it efficiently and quickly.

I will start with an initial warning: avoid re-coding, redoing something that should work but is not working, because you don't win in any scenario:

  • If you redo it and it still doesn't work, you have spent twice the time creating something you already had, and you are still stuck.
  • If you redo it and it works, you don't know what was wrong, so you will face it again shortly.

So, how does this work? Let's use this process:

Collect → Analyze → Understand → Fix.

Scenario

So first, we need to collect data. Most of the time, as I said, the log trace we have is enough, but at some point you may also need the source code of your process to pinpoint the error and the solution.

So we will start from this source code, explained below:

We have a Mapper that defines a schema with an optional element, and we're not providing any value.

And then we're using that date to print a log message, adding one more day to it.

Solution

First, we need to understand the error message. An error titled "Transformation failed for XSLT input" says precisely this:

I tried to execute the XSLT that is related to one activity, and I failed because of a technical error

As you probably already know, each BusinessWorks activity internally executes an XSL transformation to do its input mapping; you can see it directly in Business Studio. So, this error is telling you that this internal XSLT is failing.

And the reason is a runtime issue that cannot be detected at design time. To make it clear: the values of some of your variables make the transformation fail. That is where you should focus first. Usually, all the information is in the log trace itself, so let's start analyzing it.

So, let’s see all the information that we have here:

  • First of all, we can detect which activity is failing. That is easy if you are debugging locally, but if it happens on the server it can be trickier; still, the whole XSLT is printed in the log trace, so you can easily locate the activity it belongs to.
  • You also have a Caused by section that tells you why this is failing:
    • You can have several Caused by entries, and they should be read in cascading mode: the lowest one is the root issue generating all the errors above it, so we should locate that one first.

In this case, the message is quite evident, as you can see in the trace below.

 com.tibco.xml.cxf.runtime.exceptions.FunctionException: XPath function {http://www.tibco.com/bw/xslt/custom-functions}add-to-dateTime exception: gregorian cannot be null.

So the add-to-dateTime function fails because one Gregorian argument (that is, a date) is null. And that's precisely what is happening in my case. If I provide a value to the parameter… voilà, it works!
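
If the element is genuinely optional, you can also make the mapping defensive. Here is a hand-written sketch of the idea (not generated by Business Studio, and the fallback branch is just an assumption) using a standard XSLT 2.0 guard around the same function call:

<message>
    <!-- only call add-to-dateTime when the optional element actually has a value -->
    <xsl:choose>
        <xsl:when test="exists($Mapper2/primitive)">
            <xsl:value-of select="tib:add-to-dateTime($Mapper2/primitive,0,0,1,0,0,0)"/>
        </xsl:when>
        <xsl:otherwise>
            <xsl:value-of select="'no date provided'"/>
        </xsl:otherwise>
    </xsl:choose>
</message>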

Summary

Similar situations can happen with different root causes, but the most common ones are:

  • Issues with optional and non-optional elements, where a null reaches a point it shouldn't.
  • Validation errors because the input parameter doesn’t match the field definition.
  • Extended XML Types that are not supported by the function used.

All these issues can be solved easily and quickly by following the reasoning explained in this post!

So, put it into practice the next time you see a colleague with this issue, and help them have a more efficient programming experience!

Multi-Container Pods in Kubernetes: When to Use Them (and When Not To)

A multi-container Pod should be the exception, not the default.

Let's Talk About the Most Dangerous Option From a Pod Design Perspective, So You Can Be Ready to Use It!

One of the usual conversations is about the composition and definition of components inside a Pod. This is normal for people moving from traditional deployments to a cloud-native environment, and the main question is: how many containers can I have inside a pod?

I'm sure most of you have heard or asked that question at some point on your cloud-native journey, or maybe you are wondering about it right now, and there is no doubt about the answer: one single container.

Wait, wait!! Don't leave the post yet! We know that is not technically true, but it is the easiest way to understand it at first: a pod should only do one thing.

So, if that's the case, why do multi-container pods exist? And most importantly, if this is the first time you have heard the concept, what is a multi-container pod?

Let's start with the definition: a multi-container pod has more than one container in its composition. And when we talk about multi-container, we are not talking about having some initContainers to manage dependencies; we are talking about more than one container running simultaneously and at the same level, as you can see in the picture below:

Multi Container Pod Definition

Does Kubernetes support this model? Yes, for sure. You can define as many containers as you need inside the containers section, so from a technical point of view there is no limit to the number of containers in the same pod. But the main question you should ask yourself is:

Is this what you want to do?

As a reminder, a pod is the smallest unit in Kubernetes. You deploy and undeploy pods, stop and start pods, restart pods, scale pods. So anything inside the same pod is highly coupled. It's like a bundle, and the containers also share resources, which makes it even more critical.

Imagine this situation: I'd like to buy a notebook, so I go to the shop and ask for one, but they don't sell single notebooks. Instead, they have an incredible bundle: a notebook, a pen, and a stapler for just $2 more than the price of a single notebook.

You think this is an excellent deal because you are getting a pen and a stapler for a fraction of what they would cost in isolation. But then you remember that you also need other notebooks for other purposes. In the end you need ten more notebooks, but to buy them you also have to accept ten pens and ten staplers that you don't need. OK, they are cheaper, but you are still paying a real price for something you don't need. So it is not efficient. And the same applies to the pod structure definition.

In the end, did you move from traditional monolith deployments to several containers inside a pod just to keep the same challenges and issues? What is the point of doing that?

None.

If there is no reason to have two containers tightly coupled, why is this allowed in the Kubernetes specification? Because it is useful for some specific use-cases and scenarios. Let's talk about some of them.

  • Helper Containers: This is the most common one: you have several containers inside the pod, but one is the main one, the one providing a business capability or feature, and the others just help it in some way.
  • Sidecar Pattern Implementation: Another common reason for this composition is implementing the sidecar pattern, which works by deploying another container to perform a specific capability. You have seen it, for example, in service meshes, log-aggregation architectures, or other components that follow that pattern.
  • Monitoring Exporters: Another usual thing to do is to use one of these containers as an exporter for the monitoring metrics of the main component. This is commonly seen in architectures such as Prometheus, where each piece has its own exporter to be scraped by the Prometheus server.

There are also interesting implications of sharing a pod between containers because, as mentioned, they also share resources such as the following (see the sketch after this list):

  • Volumes: You can, for example, define a shared folder for all the containers inside a pod, so one container can quickly and efficiently read information that another one produces.
  • Inter-Process Communication: Containers in the same pod can communicate through IPC mechanisms for greater efficiency.
  • Network: The containers inside a pod can also reach each other's ports just by connecting to localhost.
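
To make that concrete, here is a minimal sketch of such a pod: a main nginx container plus a hypothetical log-forwarding helper, sharing an emptyDir volume (the helper image, command, and paths are assumptions for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-helper
spec:
  volumes:
    - name: shared-logs
      emptyDir: {}                # scratch volume shared by both containers
  containers:
    - name: main-app
      image: nginx
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-helper
      image: busybox
      command: ["sh", "-c", "tail -F /logs/access.log"]   # reads what main-app writes
      volumeMounts:
        - name: shared-logs
          mountPath: /logs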

I hope this article has helped you understand why the capability of having many containers inside the same pod exists, which kinds of scenarios use this approach, and how to reason about whether a new use-case should follow it or not.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Troubleshoot Network Connections in Kubernetes Workloads (Live Traffic Debugging)

Discover Mizu, a traffic viewer for Kubernetes that eases this challenge and improves your daily work.

One of the most common things we have to do when testing and debugging our cloud-native workloads on Kubernetes is to check the network communication.

It could be checking the incoming traffic, so we can inspect the requests we are receiving and see what we are replying, and similar kinds of use-cases. I am sure this sounds familiar to most of you.

I usually solve that using tcpdump on the container, similar to what I would do in a traditional environment, but this is not always easy. Depending on the environment and configuration, you may not be able to do so, because you would need to include a new package in your container image, do a new deployment so it is available, and so on.

So, to solve that and other similar problems, I discovered a tool named Mizu, which I wish I had found a few months ago because it would have helped me a lot. Mizu is precisely that. In its own words:

Mizu is a simple-yet-powerful API traffic viewer for Kubernetes, enabling you to view all API communication between microservices across multiple protocols to help you debug and troubleshoot regressions.

Installation is pretty straightforward: you grab the binary and give it the correct permissions on your computer. There is a different binary for each architecture, and in my case (an Intel-based Mac), these are the commands I executed:

curl -Lo mizu https://github.com/up9inc/mizu/releases/latest/download/mizu_darwin_amd64 && chmod 755 mizu && mv mizu /usr/local/bin

And that's it: you now have a binary on your laptop that connects to your Kubernetes cluster through the Kubernetes API, so you need to have the proper context configured.

In my case, I have deployed a simple nginx server using the command:

 kubectl run simple-app --image=nginx --port 80

And once the component had been deployed (I verified it in Lens), I ran the command to launch mizu from my laptop:

mizu tap

And after a few seconds, I had a web page open in front of me, monitoring all the traffic happening in this pod.

I exposed the nginx port using the kubectl expose command:

 kubectl expose pod/simple-app

And after that, I deployed a temporary pod using the curl image to start sending some requests with the command shown below:

 kubectl run -it --rm --image=curlimages/curl curly -- sh

Now I started sending requests to my nginx pod using curl:

 curl -vvv http://simple-app:80 

And after a few calls, I could see a lot of information in front of me. First of all, I could see the requests I was sending, with all their details.

But even more important, I could see a service map diagram graphically showing the dependencies and the calls happening to the pod, with response times and protocol usage.

This certainly will not replace a complete observability solution on top of a service mesh. Still, it is a beneficial tool to add to your toolchain when you need to debug specific communication between components or similar kinds of scenarios. As mentioned, it is like a high-level tcpdump for pod communication.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Why You Shouldn’t Set Kubernetes ResourceQuotas Using Limit Values (and What to Do Instead)

If You Aren’t Careful You Can Block The Scalability Of Your Workloads

One of the great things about container-based development is defining isolation spaces where you have guaranteed resources such as CPU and memory. This is extended in Kubernetes-based environments to the namespace level, so you can have different virtual environments that cannot exceed a specified level of resource usage.

To define that, you have the concept of a ResourceQuota, which works at the namespace level. From its own definition (https://kubernetes.io/docs/concepts/policy/resource-quotas/):

A resource quota, defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per namespace. It can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that namespace.

You have several options when defining these Resource Quotas, but in this article we will focus on the main ones, as follows (see the sample manifest after this list):

  • limits.cpu: Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value.
  • limits.memory: Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value.
  • requests.cpu: Across all pods in a non-terminal state, the sum of CPU requests cannot exceed this value.
  • requests.memory: Across all pods in a non-terminal state, the sum of memory requests cannot exceed this value.
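
For reference, a ResourceQuota using these options could look like this minimal sketch (the name, namespace, and values are arbitrary assumptions):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota           # hypothetical name
  namespace: team-a          # applies to this namespace only
spec:
  hard:
    requests.cpu: "4"        # sum of CPU requests across all pods
    requests.memory: 8Gi     # sum of memory requests
    limits.cpu: "8"          # sum of CPU limits
    limits.memory: 16Gi      # sum of memory limits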

So you might think it is a great idea to define a limits.cpu and limits.memory quota, making sure that usage will never exceed that amount by any means. But you need to be careful about what this implies, and to illustrate that I will use a sample.

I have a single workload with a single pod and the following resource configuration:

  • requests.cpu: 500m
  • limits.cpu: 1
  • requests.memory: 500Mi
  • limits.memory: 1Gi

Your application is a Java-based application that exposes a REST service and has a Horizontal Pod Autoscaler rule configured to scale when CPU usage exceeds 50%.

So, we start from the initial situation: a single instance that needs 150m of vCPU and 200Mi of RAM to run, staying below the 50% threshold so the autoscaler does not trigger. But the Resource Quota counts the pod's limits (1 vCPU and 1Gi), so that is the amount we have blocked. Then load grows and we need to scale to two instances. To simplify the calculations, we will assume each instance uses the same amount of resources, and we will continue that way until we reach 8 instances. So let's see how the blocked limits evolve (the value that constrains how many objects I can create in my namespace) versus the actual amount of resources used:

(table: blocked limits versus actual resource usage as the workload scales from 1 to 8 instances)

So, for an actual usage of 1.6 vCPU, I have blocked 8 vCPU, and if that is my Resource Quota limit, I cannot create more instances: even though 6.4 vCPU of the capacity I blocked sits unused, this kind of limitation prevents me from deploying more.

Yes, I can guarantee the principle that I will never use more than 8 vCPU, but I have been blocked very early in that trend, affecting the behavior and scalability of my workloads.

Because of that, you need to be very careful when defining these kinds of limits and be sure of what you are trying to achieve, because in solving one problem you can be creating another. In most cases it is safer to base the quota on requests.cpu and requests.memory, which reflect what the scheduler actually reserves, rather than on limit values.

I hope this helps you prevent this issue from happening in your daily work, or at least keep it in mind when you face similar scenarios.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Kubernetes Metadata Explained: Access Pod Names, Labels, and Annotations at Runtime

Discover how to extract all the information available to inject it into your pods

Kubernetes metadata is how you access some of the information about your pods from your application at runtime. When you move from traditional development to cloud-native development, you often need access to information that is available out of the box in a conventional environment.

This happens especially with applications that, in the past, were deployed on platforms populated with information such as application name, version, domain, and so on. This looks tricky in a cloud-native approach. Or maybe not. But at least at some point you have probably wondered how to get access to the information you know about your cloud-native workload, so the application running inside the pod knows it as well.

Because when you define a cloud-native workload, you describe a lot of very relevant information. For example, when your pod starts, it knows its own pod name, because it is its hostname.

But when you define your workload, you have a deployment name; how can you get it from your pod? How do you know which namespace your pod has been deployed to? And what about all the metadata we define as labels and annotations?

The good thing is that there is a way to get every single piece of data we have mentioned, so don't worry: you will have all this information available to use when you need it.

The standard way to access this information is through environment variables, the traditional way to provide initial data to our pods. We have already seen that we can use ConfigMaps to populate environment variables, but that is not the only way to provide data to our pods. There is much more, so let's take a look.

Discovering the fieldRef option

When we discussed using ConfigMaps as environment variables, we had two ways to populate that information: providing the whole ConfigMap content, in which case we used the envFrom option, or using valueFrom and providing the ConfigMap name and the key whose value we want to get.

Following the same approach, there is an even more helpful option called fieldRef. fieldRef is, as the name suggests, a reference to a field, and we can use it inside the valueFrom directive. In a nutshell, we can provide a field reference as the value of an environment variable key.

So let’s take a look at the data that we can get from this object:

  • metadata.name: Provides the pod name as the value for the environment variable
  • metadata.namespace: Provides the namespace the pod is running in as the value
  • metadata.labels[LABELNAME]: Extracts the value of the given label as the value for the environment key
  • metadata.annotations[ANNOTATIONNAME]: Extracts the value of the given annotation as the value for the environment key

So here you can see a snippet that defines different environment variables using this metadata as values, so you can read them inside the pod as standard environment variables:

        env:
        - name: APP_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['app']
        - name: DOMAIN_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['domain']
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name 
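
Once the pod is running, you can verify that the values were injected, assuming the container image ships a printenv binary:

kubectl exec <pod-name> -- printenv POD_NAME POD_NAMESPACE APP_NAME DOMAIN_NAME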

Going Even Beyond

But this is not everything the fieldRef mechanism can provide; there is much more in the downward API.
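
One example, as a hedged sketch: the sibling resourceFieldRef option exposes the container's own resource requests and limits the same way (the container name here is an assumption):

        env:
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: app       # hypothetical container name
              resource: limits.cpu
        - name: MEMORY_REQUEST
          valueFrom:
            resourceFieldRef:
              containerName: app
              resource: requests.memory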

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Discovering The Truth Behind Kubernetes Secrets

We have been talking recently about ConfigMaps as one of the objects for storing configuration for Kubernetes-based workloads. But what happens with sensitive data?

This is an interesting question, and the Kubernetes platform's initial answer was to provide the Secret object. The official Kubernetes website defines Secrets like this:

A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or in a container image. Using a Secret means that you don’t need to include confidential data in your application code

So, by default, Secrets are what you should use to store your sensitive data. From a technical perspective, they behave very similarly to ConfigMaps, so you can link them to environment variables, mount them inside a pod, or even give them specific usages such as managing credentials for different kinds of accounts, such as Service Accounts. This leads to the different types of Secrets you can create, listed below (a sample manifest follows the list):

  • Opaque: This defines a generic secret that you can use for any purpose (mainly configuration data or configuration files).
  • Service-Account-Token: This defines the credentials for service accounts; the automatic creation of these token Secrets is deprecated since Kubernetes 1.22 in favor of the TokenRequest API.
  • Docker-Registry Credentials: This defines credentials to connect to a Docker registry to pull images as part of your deployment process.
  • Basic or SSH Auth: This defines specific secrets to handle authentication credentials.
  • TLS Secret: This stores a TLS certificate and its associated private key, typically consumed by Ingress resources.
  • Bootstrap Token Secrets: These hold tokens used during the node bootstrap process when new nodes join the cluster.
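
To illustrate the Opaque type, here is a minimal sketch of a Secret and of a container consuming one of its keys as an environment variable (all names and values are hypothetical):

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:                  # plain text here; the API server stores it base64-encoded
  username: admin
  password: changeme

And the consuming side, inside a container spec:

        env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password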

But is it safe to use Kubernetes Secrets to store sensitive data? The main answer to any tech-related question applies here too: it depends. But some controversy has arisen, and the topic is covered on the official Kubernetes page, which highlights the following aspects:

Kubernetes Secrets are, by default, stored unencrypted in the API server’s underlying data store (etcd). Anyone with API access can retrieve or modify a Secret, and so can anyone with access to etcd. Additionally, anyone who is authorized to create a Pod in a namespace can use that access to read any Secret in that namespace; this includes indirect access such as the ability to create a Deployment.

So the main thing is that, by default, this is a very insecure mechanism. It seems more like a categorization of the data than proper secure handling. Kubernetes also provides some tips to make this alternative more secure:

  • Enable Encryption at Rest for Secrets (see the sketch after this list).
  • Enable or configure RBAC rules that restrict reading data in Secrets (including indirect means).
  • Where appropriate, also use mechanisms such as RBAC to limit which principals are allowed to create new Secrets or replace existing ones.
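
For the first point, encryption at rest is enabled by pointing the API server at an EncryptionConfiguration file through its --encryption-provider-config flag. A minimal sketch, assuming a single locally managed AES-CBC key:

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
      - identity: {}           # fallback so Secrets written before encryption stay readable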

But that may not be enough, and this has created room for third parties and cloud providers to offer solutions that cover these needs while also offering additional features. Some of these options are shown below:

  • Cloud Key Management Systems: Pretty much all the big cloud providers offer some form of secret management that goes beyond these features and mitigates those risks: AWS has AWS Secrets Manager, Azure has Azure Key Vault, and Google has Google Secret Manager.
  • Sealed Secrets: A project that extends Secrets to provide more security, especially in a Configuration as Code approach, offering a safe way to store these objects in the same kind of repository as any other Kubernetes resource file. In its own words: "The SealedSecret can be decrypted only by the controller running in the target cluster, and nobody else (not even the original author) can obtain the original Secret from the SealedSecret."
  • Third-Party Secret Managers: Similar to the cloud providers' offerings but allowing a more platform-independent approach; there are several players here, such as HashiCorp Vault or CyberArk Secrets Manager.
  • Finally, Spring Cloud Config can also securely store data related to sensitive configuration, such as passwords, while covering the same need as ConfigMaps from a unified perspective.

I hope this article has helped you understand the purpose of Secrets in Kubernetes and, at the same time, the risks regarding their security, how we can mitigate them, and when to rely on other solutions that provide a more secure way to handle this critical piece of information.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Kubernetes ConfigMaps Explained: Best Practices to Manage Configuration Properly

ConfigMaps are one of the best-known and, at the same time, least-used objects in the Kubernetes ecosystem. It is one of the primary objects that has been there from the beginning, even as we tried so many other ways to implement a configuration management solution (such as Consul, Spring Cloud Config, and others).

In its own documentation's words:

A ConfigMap is an API object used to store non-confidential data in key-value pairs.

https://kubernetes.io/docs/concepts/configuration/configmap/

Its motivation was to provide a native solution for configuration management in cloud-native deployments: a way to manage and deploy configuration separately from code. I still remember the WAR files with the application.properties file inside.

A ConfigMap is a resource as simple as the one in the snippet below:

apiVersion: v1
kind: ConfigMap
metadata:
  name: game-demo
data:
  player_initial_lives: "3"
  ui_properties_file_name: "user-interface.properties"

ConfigMaps are namespaced objects. They have a strong relationship with Deployments and Pods, and they enable the option of having different logical environments, using namespaces, where the same application can be deployed with a different configuration; each environment will need its own ConfigMap to support that, even if it is based on the same resource YAML file.

From a technical perspective, the content of a ConfigMap is stored in the etcd database, as happens with all information related to the Kubernetes environment, and you should remember that etcd is not encrypted by default, so the data can be retrieved by anyone who has access to it.

Purposes of ConfigMaps

Configuration Parameters

The first and foremost purpose of a ConfigMap is to provide configuration parameters to your workload: an industrialized way to wire the environment-specific configuration into your application instead of hardcoding it.

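As a minimal sketch, a container could consume the game-demo ConfigMap above in either of the two usual styles (the container name and image are assumptions):

    containers:
    - name: app
      image: nginx
      envFrom:                          # import every key of the ConfigMap as an env variable
      - configMapRef:
          name: game-demo
      env:                              # or pick a single key explicitly
      - name: PLAYER_INITIAL_LIVES
        valueFrom:
          configMapKeyRef:
            name: game-demo
            key: player_initial_lives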

Providing Environment Dependent Files

Another significant usage is providing or replacing files inside your containers that hold critical configuration. One of the primary examples is providing the logging configuration for your app: if it uses the logback library, you need to provide a logback.xml file so it knows how to apply your logging configuration.

Other options can be properties files that need to be located in a specific path, or even public-key certificates to handle SSL connections with safelisted servers only.

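A hedged sketch of this usage: mounting a single logback.xml key from a ConfigMap into the container's filesystem (the ConfigMap name, image, and path are assumptions):

      volumes:
      - name: logging-config
        configMap:
          name: app-logging              # hypothetical ConfigMap holding a logback.xml key
      containers:
      - name: app
        image: my-java-app               # hypothetical image
        volumeMounts:
        - name: logging-config
          mountPath: /app/config/logback.xml
          subPath: logback.xml           # mount only this key, as a single file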

Read-Only Folders

Another option is to use the ConfigMap as a read-only folder to provide an immutable way to attach information to the container. One use-case is the Grafana dashboards you add to your Grafana pod (if you are not using the Grafana Operator).

Different ways to create a ConfigMap

You have several ways to create a ConfigMap using imperative commands, which simplify its creation. Here are the ones I use the most:

Create a ConfigMap to host key-value pairs for configuration purposes:

kubectl create configmap name --from-literal=key=value

Create a ConfigMap using a Java-style properties file to populate the ConfigMap with key-value pairs:

kubectl create configmap name --from-env-file=filepath

Create a ConfigMap using a file as the content of the ConfigMap:

kubectl create configmap name --from-file=filepath

ConfigMap Refresh Lifecycle

ConfigMaps are updated the same way as any other Kubernetes object, and you can even use the same commands, such as kubectl apply. But you need to be careful, because updating the ConfigMap itself is one thing, and having the resources that use it pick up the change is another.

In all the use-cases described here, the content of the ConfigMap is tied to the Pod's lifecycle: environment variables are read during the Pod's initialization, so to refresh that data inside the pod you need to restart it or bring up a new instance after modifying the ConfigMap object. (Keys mounted as volume files are eventually refreshed by the kubelet without a restart, unless they are mounted via subPath.)

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

CKAD Exam Preparation: Practical Tips to Pass the Kubernetes Developer Certification

Learn From My Own Experience To Clear Your Kubernetes Certification Exam

But I would also like to provide some practical advice based on my own experience, in case it helps anyone else going through the same process. I know there are a lot of similar articles, and most of them are worth reading because each provides a different perspective and approach. So here is mine:

  • Fast but Safe: You will have around 2 hours to complete between 15 and 20 practical questions, which gives you about 6 minutes per question on average. That is enough time, but you must go fast. So avoid reading the whole exam first or jumping across questions; it is better to start with the first one right away and, if you get blocked, move on to the next. At the same time, validate the output you are getting to make sure you are not missing anything: run a command to check that the objects have been created correctly, with the right attributes and configuration, before moving on. Time is precious. I had plenty of time at the end of the exam to review the questions, but it is also true that I spent 20 minutes because I had written ngnix instead of nginx and was unable to see it!!
  • Imperative commands are the way to go: You must learn the YAML structure of the main objects (Deployment, Pod, CronJob, Job, and so on), but you also need to master the imperative commands to generate the initial output quickly. Imperative commands such as kubectl run, kubectl create, or kubectl expose will not give you 100% of the answer, but the 80% they provide is the base you adjust to reach the solution quickly (see the sketch after this list).
  • kubectl explain to avoid digging through the documentation or overthinking: I have trouble remembering the exact name of a field or its location in the YAML file, so I used kubectl explain a lot, especially with the --recursive flag. It prints the YAML structure, so if you don't remember whether the key name is configMap or configMapRef, claimName or persistentVolumeClaim, it is an incredible help. If you also pipe it through grep -A 10 -B 5 to show your field with its context, you will master it. This doesn't replace knowing the YAML structure, but it helps you stay efficient when you don't remember an exact name or location.
kubectl explain pod --recursive
  • Don't forget about docker/podman and helm: With the changes to the certification in September 2021, the building process is also essential, so it is great if you have enough time in your preparation to play with tools such as docker/podman or helm, so you can master any related question you might find.
  • Use the simulator: The Linux Foundation provides two sessions of the exam simulator that, on one side, give you an authentic exam experience (you will face similar kinds of questions and the same interface, so the real exam will not feel like the first time) and, at the same time, let you get familiar with the environment. I recommend using both sessions (both have the same questions): one in the middle of your training and the second just one or two days before your exam.
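
As a quick illustration of that imperative workflow, here is a hedged sketch with hypothetical object names; every command uses standard kubectl flags:

# generate a Pod skeleton instead of writing the YAML from scratch
kubectl run web --image=nginx --restart=Never --dry-run=client -o yaml > pod.yaml
# same idea for a Deployment
kubectl create deployment web --image=nginx --replicas=3 --dry-run=client -o yaml > deploy.yaml
# expose it without touching YAML at all
kubectl expose deployment web --port=80 --target-port=8080
# find a field and its context without leaving the terminal
kubectl explain pod.spec.containers --recursive | grep -B 5 -A 10 volumeMounts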

So, here are my tips; I hope you like them. If they were helpful, please let me know on social networks, by mail, or through whatever contact method you prefer! All the best in your preparation, and I'm sure you will reach your goals!

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

My Take on the Kubernetes CKAD Certification: Real Exam Experience and Lessons Learned

My Experience and Feelings After Clearing the Certified Kubernetes Application Developer

Last week I cleared the Certified Kubernetes Application Developer (CKAD) certification with a 95/100 score, and it was more difficult than it sounds. Even though this is the easiest of the Kubernetes certifications, the way the exam is designed and the skills it evaluates make you unsure of your knowledge.

I have been using Kubernetes daily for more than three years now. My work requires me to deploy, define, and troubleshoot Kubernetes-based workloads on different platforms (OpenShift, EKS, AKS… anything), so you might think I shouldn't need to prepare for this kind of exam, and that could be your impression too. But it is far from reality.

I feel there is no certification you can clear without preparation, because a certification does not measure how skilled you are in anything other than the certification process itself. You can be a master of the technology, but if you go into a certification exam without specific preparation, you have a good chance of failing.

Even in this case, where we have shifted from the traditional multiple-choice test to a more practical one, it is no different. Yes, you don't need to memorize as much, and yes, it requires that you can actually do things, not just know about them, but everything else is the same.

You will be asked about things you will never use in real life, you will need to use commands you are only going to use in the exam, and you will need to do it in the specific way that is expected, because this is how certification works. Is it bad? Probably… Is there a better way to do it? We haven't found one yet.

I have to admit that I think this process is much fairer than the multiple-choice one, even though I prefer the multiple-choice format just as a matter of timing during the process.

So you are probably asking: if that is my view, why did I try to clear the certification in the first place? There are several reasons. First of all, I think certification is a great way to set a standard of knowledge. That doesn't mean people with the certification are more competent or better skilled than people without it. I don't consider myself more qualified today than a month ago, when I started preparing for the certification, but at least it establishes a baseline of what you can expect.

In addition, it is a challenge to yourself, to show that you can do it, and it is always great to push your limits a bit beyond what is mandatory for work. And finally, it is something that looks good on your CV, that is for sure.

Did I learn something new? Yes, for sure, a lot of things. I even improved the way I usually do some tasks, and that alone made it worth it. Even if I had failed, I think it would have been worth it, because it always gives you something more to add to your toolchain, and that is always good.

Also, this exam doesn't ensure that you are a good Kubernetes application developer. In my view, the exam approach is focused on showing that you are a fair Kubernetes implementer. Why am I saying that? Let me add some points:

  • You don't get any points for providing the best solution to a problem. The ask is so specific that it is a matter of translating what is written in plain English into Kubernetes actions and objects.
  • There are troubleshooting questions, yes, but they are quite basic and don't ensure that your thought process is efficient. Again, efficiency is not evaluated in the process.

So I am probably missing a Certified Kubernetes Architect exam, where you get the definition of a problem and need to provide a solution, and you are evaluated on that, perhaps with some way to justify your decisions and thought process. I don't think we will ever see it. Why? Because, and this is very important, any new certification exam needs to be specific enough that it can be evaluated automatically.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.