Kubernetes Distributions Explained: What They Are and the Top Platforms to Choose

Learn what Kubernetes distributions are, why they matter to you, and which are the top players available in the market today

Introduction

One of the biggest announcements from the AWS re:Invent 2020 sessions was the release of EKS-D from Amazon. EKS-D is their open-source Kubernetes distribution, now available for everyone to run on their cloud provider of choice or even on-premises.

It’s based on the lessons Amazon has learned running its managed Kubernetes platform, Amazon EKS.

These announcements have many people asking themselves: “OK, I know Kubernetes, but what’s a Kubernetes distribution? And why should I care?”

So I’ll try to answer that with the knowledge I have, using the approach I always fall back on: comparing Kubernetes to Linux.

Kubernetes is, as you know, an open-source project started by Google and now managed by the community and the Cloud Native Computing Foundation (CNCF), and you can find all the code here:

But let’s be honest: not many of us pull that repo and compile it to stand up a cluster. That’s not how we usually work. If you do follow that path — downloading the code, building it, and so on — the result is usually called vanilla Kubernetes.

Going back to the Linux comparison, it’s the same situation as with the Linux kernel: most Linux distributions ship it already compiled, packaged with a bunch of other tools that all work together out of the box.

So that’s what a Kubernetes distribution is. It builds Kubernetes and bundles other tools and components that add features or target a specific focus, such as security or DevOps. Another concept that usually comes up is the purity of a distribution, that is, whether a distribution is pure.

We call a distribution pure when it builds Kubernetes and nothing more, leaving everything else to developers or users to decide what they want to run on top of it.


What Are the Main Components Shipped in a Kubernetes Distribution?

The main components that can differ when we’re talking about a Kubernetes distribution are the following:

Container runtime and registries

We all know there’s more than one container runtime, and even if you weren’t aware of that, you’ve probably read the articles about the deprecation of Docker as a container runtime in Kubernetes v1.20, such as this awesome one from Edgar Rodriguez.

At this point, all runtimes are expected to implement the Container Runtime Interface (CRI), and runtimes like CRI-O, containerd, or Kata Containers seem to be the default options now.

Networking

Another topic that often differs when we’re talking about Kubernetes distributions is how they manage their network, and this is one of the most critical aspects of the whole platform.

As with the container runtime, a standard specification covers this topic: the Container Network Interface (CNI). Several projects implement it, like Flannel, Calico, Canal, and Weave Net. Some platforms also provide their own component, like the OpenShift SDN operator.

Storage

How storage is handled in Kubernetes is also very important, especially as we adopt this model for stateful workloads. Different platforms can support different storage options, such as different file systems and storage backends.


Who Are the Top Players?

The first thing we need to be aware of is there are a huge number of Kubernetes distributions out there.

We’ll count the ones with a CNCF certification, and you can take a look at all of them here. At the moment of writing this article, we’re talking about 72 certified distributions.

Logos of the various CNCF-certified distributions.
Image via the Cloud Native Computing Foundation

These are the ones that I’d like to highlight today:

Red Hat OpenShift

Red Hat OpenShift logo.

The Red Hat OpenShift platform is probably one of the most widely used platforms, especially in private-cloud deployments. It can include most of the Red Hat services for storage, like GlusterFS, and for networking, like the OpenShift SDN. OKD is the open-source project that backs and feeds into the OpenShift platform. Check this article to see how to set up OpenShift locally to test it.

Mirantis

Mirantis logo

The former Docker Enterprise platform, acquired by Mirantis, is another of the usual choices when we’re talking about supported platforms.

VMware Tanzu

VMware Tanzu logo

VMware Tanzu, which also grew out of VMware’s acquisition of Pivotal, is another Kubernetes platform offering.

Canonical

Canonical logo

Canonical’s Kubernetes (open source) comes from the company that develops and maintains Ubuntu. It’s another important choice here and provides a variety of options, covering not only the usual data-center deployments but also edge Kubernetes with projects like MicroK8s.

Rancher

Rancher logo

Rancher (open source) is another of the big players, focused on following and extending the CNCF standards, with a big push on edge deployments through K3s. It also offers automated upgrades.


Summary

So, as you can see, the number of options available out there is huge. They all differ, so it’s important to take your time when you’re deciding your target platform based on your criteria for your project or your company.

And that’s without covering the managed platforms out there, which are becoming the preferred option for many companies: they get all the flexibility of Kubernetes without having to handle the complexity of managing the platform themselves. But that’s a topic for another article — hopefully soon.

I hope this article has at least given you more clarity about what a Kubernetes distribution is, the main differences among them, and a quick look at some of the key players in this space. Enjoy your day, and enjoy your life.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Why Visual Diff Is Essential for Low-Code Development and Team Collaboration

Helping you excel with low code in distributed teams and parallel development

Most enterprises are exploring low-code/no-code development now that the most important thing is achieving agility across technology artifacts from every perspective (development, deployment, and operation).

This article is part of my comprehensive TIBCO Integration Platform Guide where you can find more patterns and best practices for TIBCO integration platforms.

The benefits of this way of working make this almost a no-brainer decision for most companies. We already covered them in a previous article. Take a look if you have not read it yet.

But we know that all new things come with their own challenges that we need to address and master in order to unleash the full benefits that these new paradigms or technologies are providing. Much like with cloud-native architecture, we need to be able to adapt.

Sometimes it is not the culture that we need to change. Sometimes the technology and the tools also need to evolve to address those challenges and help us on that journey. And this is how Visual Diff came to life.

When you develop using a low-code approach, the whole development process is easier. You combine different blocks that implement the logic you need, and everything is simpler than a bunch of lines of code.

Example of low-code development
Low-code development approach using TIBCO BusinessWorks.

But we also need to manage all these artifacts in a repository, and repositories are built around source code development. That means that when you work with those tools, in the end you are not working with a “low-code approach” but rather a source-code approach. Things like merging branches and browsing the version history to understand the changes become complex.

And they are complex because they are performed by the repository itself, which only sees file and source-code changes. But one of the great benefits of low-code development is that the developer doesn’t need to be aware of the source code generated behind the visual, faster activity. So, how can we solve that? What can we use to solve it?

Low-code technologies need to advance to take the lead here. For example, this is what TIBCO BusinessWorks has done with the release of their Visual Diff capability.

So, you still have your integration with your source code repository. You can do all the processes and activities you usually need to do in this kind of parallel distributed development. Still, you can also see all those activities from a “low-code” perspective.

That means that when I look at the version history, I can see the visual artifacts that have been modified. The activities added or deleted are shown there in a way that is meaningful for low-code development. That closes the loop on how low-code development can take all the advantages of modern source code repositories and their flows (GitFlow, GitHub Flow, OneFlow, etc.) as well as the advantages of the low-code perspective.

Let’s say there are two options with which you can see how an application has been changed. One is the traditional approach and the other uses the Visual Diff:

Visual Diff of your processes
Option A: Visual Diff of your processes
Same processes but with a Text Comparison approach
Option B: Same processes but with a Text Comparison approach

So, based on this evidence, which do you think is easier to understand? Even if you are a true coder like me, we cannot deny the ease and benefits of the low-code approach for large-scale, standard development in the enterprise world.


Summary

No matter how fast we develop with all the accelerators and frameworks that we have, a well-defined low-code application will be faster than any of us. It is the same battle we had in the past with graphical interfaces and mouse control versus the keyboard.

We accept that there is a personal preference to choose one or the other, but when we need to decide what is more effective and need to rely on the facts, we cannot be blind to what is in front of us.

I hope you have enjoyed this article. Have a nice day!

Lens Kubernetes IDE Explained: Improve Development and Cluster Management Productivity

Find the greatest way to manage your Kubernetes development cluster

I need to start this article by admitting that I am an advocate of Graphical User Interfaces and everything that provides a way to speed up the way we do things and be more productive.

So when we talk about managing a Kubernetes cluster, mainly for development purposes, you might imagine I am one of those people who tries every available tool to make that journey easier: the ones who started using Portainer to manage their local Docker engine or who are fans of the new dashboard in Docker Desktop for Windows/Mac. But that has been far from reality.

In terms of Kubernetes management, I got used to typing all the commands to check the pods, the logs, and the status of the cluster, to do port-forwards, and so on. Every task I did was in a terminal, and I felt that was the right thing to do. I did not even use the Kubernetes Dashboard to have a web page for my environment. All of that changed last week when a colleague showed me what Lens could do.

Lens is a totally different story. I am not praising it because I am being paid to do so. This is an open source project that you can find on GitHub. But the way that it does the job is just awesome!

Lens showing the status of a Kubernetes cluster
Lens showing the status of a Kubernetes cluster — Screenshot by the author.

The first thing I would like to mention about Lens is its multi-context support: you have all your different Kubernetes contexts available and can switch between them much like switching between workspaces in Slack. It simply reads your .kube/config file and makes all those contexts available so you can connect to the one you want.

Kubernetes context selection in Lens
Kubernetes context selection in Lens
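
For comparison, this is the context juggling you would otherwise do by hand with kubectl (the context name below is just a placeholder):

# List the contexts Lens reads from your ~/.kube/config
kubectl config get-contexts

# Switch the active context manually
kubectl config use-context my-dev-cluster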

Once we have connected to one of these clusters, we have different options to see the status of it, but the first one is to check the Workloads using the Overview option:

Workloads Overview in Lens
Workloads Overview in Lens

Then you can drill down into any pod or other Kubernetes object to check its status and, at the same time, perform the main actions you usually need when dealing with a pod, such as checking the logs, opening a terminal into one of its containers, or even editing the pod’s YAML.

Pod options inside Lens
Pod options inside Lens

But Lens goes beyond the usual Kubernetes tasks because it also has Helm integration, so you can check the releases you have deployed, their versions and status, and so on:

Helm integration option in Lens
Helm integration option in Lens

The experience of managing everything feels great, and you are more productive as well. Even those of us who love the CLI and terminals need to admit that, for routine tasks, the graphical approach and the mouse are faster than the keyboard — even for defenders of the mechanical keyboard like myself.

So, I encourage you to download Lens and start using it right now. To do so, go to their main web page and download it:
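
If you prefer a package manager, there is usually a package available as well; for example, on macOS (a hedged example, as the cask name may change over time):

brew install --cask lens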

Thanks for reading!

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Shell XML Processing: How To Update value in XML file in 5 Minutes

Learn how to use xmlstarlet to update values in XML files and handle complex XML payloads efficiently in your shell scripts

If you work in IT, or even if IT is one of your main hobbies, you have written a shell script at some point. If you also work on the operations side of the business, this can be a daily task: creating, maintaining, or upgrading existing processes.

Nowadays, it is increasingly common to interact with external systems that use XML payloads, or even configuration files written in this format.

Plain shell scripting does not provide an easy way to do that, nor the supporting libraries we have in modern programming languages like Python, Java, or Go. So you have probably found yourself writing code to parse this kind of payload. But this is not the only way, and we can (and should!) leverage existing utilities to do this job for us.

xmlstarlet

I could not find a better way to explain what xmlstarlet does than the definition its maintainers give in their source code repository:

XMLStarlet is a command line XML toolkit which can be used to transform,
query, validate, and edit XML documents and files using simple set of shell
commands in similar way it is done for plain text files using grep/sed/awk/
tr/diff/patch.

So, xmlstarlet gives you the power to do almost anything you can imagine with XML, in much the same way you would work with plain text files.

Installation

The installation process for this utility is quite easy and depends on the operating system you are using. I will assume that most shell scripts are written with a Unix target in mind, and most package repositories carry a version of this tool.

So, if you are using an apt-based system, you need to execute the command:

sudo apt-get install xmlstarlet

If you are using another platform, do not worry: versions are available for all the most-used operating systems and platforms, as you can see in the link below:
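
For reference, the package usually keeps the same name on other common systems. A hedged sketch, as names can vary by distribution and version:

# macOS (Homebrew)
brew install xmlstarlet

# RHEL/CentOS/Fedora family (may require the EPEL repository)
sudo yum install xmlstarlet

# Alpine
apk add xmlstarlet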

Usage

As soon as we have this software installed, the first thing we will do is launch it to see the options available.

XMLStarlet Toolkit: Command line utilities for XML
Usage: D:\Data\Downloads\xmlstarlet-1.6.1-win32\xmlstarlet-1.6.1\xmlstarlet.exe [<options>] <command> [<cmd-options>]
where <command> is one of:
  ed    (or edit)      - Edit/Update XML document(s)
  sel   (or select)    - Select data or query XML document(s) (XPATH, etc)
  tr    (or transform) - Transform XML document(s) using XSLT
  val   (or validate)  - Validate XML document(s) (well-formed/DTD/XSD/RelaxNG)
  fo    (or format)    - Format XML document(s)
  el    (or elements)  - Display element structure of XML document
  c14n  (or canonic)   - XML canonicalization
  ls    (or list)      - List directory as XML
  esc   (or escape)    - Escape special XML characters
  unesc (or unescape)  - Unescape special XML characters
  pyx   (or xmln)      - Convert XML into PYX format (based on ESIS - ISO 8879)
  p2x   (or depyx)     - Convert PYX into XML
<options> are:
  -q or --quiet        - no error output
  --doc-namespace      - extract namespace bindings from input doc (default)
  --no-doc-namespace   - don't extract namespace bindings from input doc
  --version            - show version
  --help               - show help
Wherever file name mentioned in command help it is assumed
that URL can be used instead as well.

Type: xmlstarlet <command> --help <ENTER> for command help

So the first thing we need to decide is which command to use, and these commands map one-to-one to the action we want to perform. I will focus on the main commands in this post, but as you can see, they cover everything from selecting values in an XML document to updating or even validating it.

Use case 1: Selecting a value.

We are going to start with the simplest use case: selecting a value from an XML file (file.xml) like this:

<root>
<object1 name="attribute_name">value in XML</object1>
</root>

So, this is the simplest command to do it:

./xmlstarlet  sel  -t -v "/root/object1"  ./file.xml

value in XML

This returns the value inside the object1 element, using -t to define a new template and -v to specify the value-of expression. If we now want to get the attribute value instead, the command is very similar to the previous one:

./xmlstarlet  sel  -t -v "/root/object1/@name"  ./file.xml

attribute_name

Use case 2: Updating a value.

Now we will take the other approach, based on the same file: updating the value of the object1 element to set the text “updated text”.

To do that, we execute the following command:

./xmlstarlet ed -u "/root/object1" -v "updated text" ./file.xml

<?xml version="1.0"?>
<root>
  <object1 name="attribute_name">updated text</object1>
</root>
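
Note that by default ed writes the modified document to standard output and leaves the original file untouched. If you want to persist the change, redirect the output, or use the in-place flag that most builds ship (a hedged sketch; check xmlstarlet ed --help on your system):

# Edit the file in place (-L / --inplace)
./xmlstarlet ed -L -u "/root/object1" -v "updated text" ./file.xml

# Or keep the original and write the result to a new file
./xmlstarlet ed -u "/root/object1" -v "updated text" ./file.xml > file.updated.xml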

Summary

xmlstarlet gives us all the options we need to manage something as complicated as XML and to do every task you can imagine, simply and without having to code all the parsing logic yourself. I hope you will be a happier developer from now on when you need to handle XML inside a shell script.

Optimize Prometheus Disk Usage: Reduce TSDB Size and Control Metrics Cardinality

Learn some tricks to analyze and optimize your usage of the TSDB and save money on your cloud deployment.

In previous posts, we discussed how the Prometheus storage layer works and how effective it is. But in these times of cloud computing, we know that every technical optimization is also a cost optimization, and that is why we need to be very diligent about every optimization option we have.

We know that when we monitor with Prometheus we have many exporters at our disposal, and each of them exposes a lot of very relevant metrics that let us track everything we need. But we should also be aware that there are metrics we don’t need at the moment or don’t plan to use. So, if we are not planning to use them, why waste disk space storing them?

So, let’s start by taking a look at one of the exporters we have in our system. In my case, I will use a BusinessWorks Container Edition application that exposes metrics about its utilization. If you check its metrics endpoint, you will see something like this:

# HELP jvm_info JVM version info
# TYPE jvm_info gauge
jvm_info{version="1.8.0_221-b27",vendor="Oracle Corporation",runtime="Java(TM) SE Runtime Environment",} 1.0
# HELP jvm_memory_bytes_used Used bytes of a given JVM memory area.
# TYPE jvm_memory_bytes_used gauge
jvm_memory_bytes_used{area="heap",} 1.0318492E8
jvm_memory_bytes_used{area="nonheap",} 1.52094712E8
# HELP jvm_memory_bytes_committed Committed (bytes) of a given JVM memory area.
# TYPE jvm_memory_bytes_committed gauge
jvm_memory_bytes_committed{area="heap",} 1.35266304E8
jvm_memory_bytes_committed{area="nonheap",} 1.71302912E8
# HELP jvm_memory_bytes_max Max (bytes) of a given JVM memory area.
# TYPE jvm_memory_bytes_max gauge
jvm_memory_bytes_max{area="heap",} 1.073741824E9
jvm_memory_bytes_max{area="nonheap",} -1.0
# HELP jvm_memory_bytes_init Initial bytes of a given JVM memory area.
# TYPE jvm_memory_bytes_init gauge
jvm_memory_bytes_init{area="heap",} 1.34217728E8
jvm_memory_bytes_init{area="nonheap",} 2555904.0
# HELP jvm_memory_pool_bytes_used Used bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_bytes_used gauge
jvm_memory_pool_bytes_used{pool="Code Cache",} 3.3337536E7
jvm_memory_pool_bytes_used{pool="Metaspace",} 1.04914136E8
jvm_memory_pool_bytes_used{pool="Compressed Class Space",} 1.384304E7
jvm_memory_pool_bytes_used{pool="G1 Eden Space",} 3.3554432E7
jvm_memory_pool_bytes_used{pool="G1 Survivor Space",} 1048576.0
jvm_memory_pool_bytes_used{pool="G1 Old Gen",} 6.8581912E7
# HELP jvm_memory_pool_bytes_committed Committed bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_bytes_committed gauge
jvm_memory_pool_bytes_committed{pool="Code Cache",} 3.3619968E7
jvm_memory_pool_bytes_committed{pool="Metaspace",} 1.19697408E8
jvm_memory_pool_bytes_committed{pool="Compressed Class Space",} 1.7985536E7
jvm_memory_pool_bytes_committed{pool="G1 Eden Space",} 4.6137344E7
jvm_memory_pool_bytes_committed{pool="G1 Survivor Space",} 1048576.0
jvm_memory_pool_bytes_committed{pool="G1 Old Gen",} 8.8080384E7
# HELP jvm_memory_pool_bytes_max Max bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_bytes_max gauge
jvm_memory_pool_bytes_max{pool="Code Cache",} 2.5165824E8
jvm_memory_pool_bytes_max{pool="Metaspace",} -1.0
jvm_memory_pool_bytes_max{pool="Compressed Class Space",} 1.073741824E9
jvm_memory_pool_bytes_max{pool="G1 Eden Space",} -1.0
jvm_memory_pool_bytes_max{pool="G1 Survivor Space",} -1.0
jvm_memory_pool_bytes_max{pool="G1 Old Gen",} 1.073741824E9
# HELP jvm_memory_pool_bytes_init Initial bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_bytes_init gauge
jvm_memory_pool_bytes_init{pool="Code Cache",} 2555904.0
jvm_memory_pool_bytes_init{pool="Metaspace",} 0.0
jvm_memory_pool_bytes_init{pool="Compressed Class Space",} 0.0
jvm_memory_pool_bytes_init{pool="G1 Eden Space",} 7340032.0
jvm_memory_pool_bytes_init{pool="G1 Survivor Space",} 0.0
jvm_memory_pool_bytes_init{pool="G1 Old Gen",} 1.26877696E8
# HELP jvm_buffer_pool_used_bytes Used bytes of a given JVM buffer pool.
# TYPE jvm_buffer_pool_used_bytes gauge
jvm_buffer_pool_used_bytes{pool="direct",} 148590.0
jvm_buffer_pool_used_bytes{pool="mapped",} 0.0
# HELP jvm_buffer_pool_capacity_bytes Bytes capacity of a given JVM buffer pool.
# TYPE jvm_buffer_pool_capacity_bytes gauge
jvm_buffer_pool_capacity_bytes{pool="direct",} 148590.0
jvm_buffer_pool_capacity_bytes{pool="mapped",} 0.0
# HELP jvm_buffer_pool_used_buffers Used buffers of a given JVM buffer pool.
# TYPE jvm_buffer_pool_used_buffers gauge
jvm_buffer_pool_used_buffers{pool="direct",} 19.0
jvm_buffer_pool_used_buffers{pool="mapped",} 0.0
# HELP jvm_classes_loaded The number of classes that are currently loaded in the JVM
# TYPE jvm_classes_loaded gauge
jvm_classes_loaded 16993.0
# HELP jvm_classes_loaded_total The total number of classes that have been loaded since the JVM has started execution
# TYPE jvm_classes_loaded_total counter
jvm_classes_loaded_total 17041.0
# HELP jvm_classes_unloaded_total The total number of classes that have been unloaded since the JVM has started execution
# TYPE jvm_classes_unloaded_total counter
jvm_classes_unloaded_total 48.0
# HELP bwce_activity_stats_list BWCE Activity Statictics list
# TYPE bwce_activity_stats_list gauge
# HELP bwce_activity_counter_list BWCE Activity related Counters list
# TYPE bwce_activity_counter_list gauge
# HELP all_activity_events_count BWCE All Activity Events count by State
# TYPE all_activity_events_count counter
all_activity_events_count{StateName="CANCELLED",} 0.0
all_activity_events_count{StateName="COMPLETED",} 0.0
all_activity_events_count{StateName="STARTED",} 0.0
all_activity_events_count{StateName="FAULTED",} 0.0
# HELP activity_events_count BWCE All Activity Events count by Process, Activity State
# TYPE activity_events_count counter
# HELP activity_total_evaltime_count BWCE Activity EvalTime by Process and Activity
# TYPE activity_total_evaltime_count counter
# HELP activity_total_duration_count BWCE Activity DurationTime by Process and Activity
# TYPE activity_total_duration_count counter
# HELP bwpartner_instance:total_request Total Request for the partner invocation which mapped from the activities
# TYPE bwpartner_instance:total_request counter
# HELP bwpartner_instance:total_duration_ms Total Duration for the partner invocation which mapped from the activities (execution or latency)
# TYPE bwpartner_instance:total_duration_ms counter
# HELP bwce_process_stats BWCE Process Statistics list
# TYPE bwce_process_stats gauge
# HELP bwce_process_counter_list BWCE Process related Counters list
# TYPE bwce_process_counter_list gauge
# HELP all_process_events_count BWCE All Process Events count by State
# TYPE all_process_events_count counter
all_process_events_count{StateName="CANCELLED",} 0.0
all_process_events_count{StateName="COMPLETED",} 0.0
all_process_events_count{StateName="STARTED",} 0.0
all_process_events_count{StateName="FAULTED",} 0.0
# HELP process_events_count BWCE Process Events count by Operation
# TYPE process_events_count counter
# HELP process_duration_seconds_total BWCE Process Events duration by Operation in seconds
# TYPE process_duration_seconds_total counter
# HELP process_duration_milliseconds_total BWCE Process Events duration by Operation in milliseconds
# TYPE process_duration_milliseconds_total counter
# HELP bwdefinitions:partner BWCE Process Events count by Operation
# TYPE bwdefinitions:partner counter
bwdefinitions:partner{ProcessName="t1.module.item.getTransactionData",ActivityName="FTLPublisher",ServiceName="GetCustomer360",OperationName="GetDataOperation",PartnerService="TransactionService",PartnerOperation="GetTransactionsOperation",Location="internal",PartnerMiddleware="MW",} 1.0
bwdefinitions:partner{ProcessName=" t1.module.item.auditProcess",ActivityName="KafkaSendMessage",ServiceName="GetCustomer360",OperationName="GetDataOperation",PartnerService="AuditService",PartnerOperation="AuditOperation",Location="internal",PartnerMiddleware="MW",} 1.0
bwdefinitions:partner{ProcessName="t1.module.item.getCustomerData",ActivityName="JMSRequestReply",ServiceName="GetCustomer360",OperationName="GetDataOperation",PartnerService="CustomerService",PartnerOperation="GetCustomerDetailsOperation",Location="internal",PartnerMiddleware="MW",} 1.0
# HELP bwdefinitions:binding BW Design Time Repository - binding/transport definition
# TYPE bwdefinitions:binding counter
bwdefinitions:binding{ServiceName="GetCustomer360",OperationName="GetDataOperation",ServiceInterface="GetCustomer360:GetDataOperation",Binding="/customer",Transport="HTTP",} 1.0
# HELP bwdefinitions:service BW Design Time Repository - Service definition
# TYPE bwdefinitions:service counter
bwdefinitions:service{ProcessName="t1.module.sub.item.getCustomerData",ServiceName="GetCustomer360",OperationName="GetDataOperation",ServiceInstance="GetCustomer360:GetDataOperation",} 1.0
bwdefinitions:service{ProcessName="t1.module.sub.item.auditProcess",ServiceName="GetCustomer360",OperationName="GetDataOperation",ServiceInstance="GetCustomer360:GetDataOperation",} 1.0
bwdefinitions:service{ProcessName="t1.module.sub.orchestratorSubFlow",ServiceName="GetCustomer360",OperationName="GetDataOperation",ServiceInstance="GetCustomer360:GetDataOperation",} 1.0
bwdefinitions:service{ProcessName="t1.module.Process",ServiceName="GetCustomer360",OperationName="GetDataOperation",ServiceInstance="GetCustomer360:GetDataOperation",} 1.0
# HELP bwdefinitions:gateway BW Design Time Repository - Gateway definition
# TYPE bwdefinitions:gateway counter
bwdefinitions:gateway{ServiceName="GetCustomer360",OperationName="GetDataOperation",ServiceInstance="GetCustomer360:GetDataOperation",Endpoint="bwce-demo-mon-orchestrator-bwce",InteractionType="ISTIO",} 1.0
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 1956.86
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.604712447107E9
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 763.0
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1048576.0
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 3.046207488E9
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 4.2151936E8
# HELP jvm_gc_collection_seconds Time spent in a given JVM garbage collector in seconds.
# TYPE jvm_gc_collection_seconds summary
jvm_gc_collection_seconds_count{gc="G1 Young Generation",} 540.0
jvm_gc_collection_seconds_sum{gc="G1 Young Generation",} 4.754
jvm_gc_collection_seconds_count{gc="G1 Old Generation",} 2.0
jvm_gc_collection_seconds_sum{gc="G1 Old Generation",} 0.563
# HELP jvm_threads_current Current thread count of a JVM
# TYPE jvm_threads_current gauge
jvm_threads_current 98.0
# HELP jvm_threads_daemon Daemon thread count of a JVM
# TYPE jvm_threads_daemon gauge
jvm_threads_daemon 43.0
# HELP jvm_threads_peak Peak thread count of a JVM
# TYPE jvm_threads_peak gauge
jvm_threads_peak 98.0
# HELP jvm_threads_started_total Started thread count of a JVM
# TYPE jvm_threads_started_total counter
jvm_threads_started_total 109.0
# HELP jvm_threads_deadlocked Cycles of JVM-threads that are in deadlock waiting to acquire object monitors or ownable synchronizers
# TYPE jvm_threads_deadlocked gauge
jvm_threads_deadlocked 0.0
# HELP jvm_threads_deadlocked_monitor Cycles of JVM-threads that are in deadlock waiting to acquire object monitors
# TYPE jvm_threads_deadlocked_monitor gauge
jvm_threads_deadlocked_monitor 0.0

As you can see, that’s a lot of metrics, but I have to be honest: I am not using most of them in my dashboards or to generate my alerts. I use the metrics about application performance for each BusinessWorks process and its activities, as well as JVM memory usage and thread counts, but things like how the JVM GC behaves for each generation (G1 Young Generation, G1 Old Generation) I’m not using at all.

So, if I show the same metrics endpoint highlighting the parts I am not using, it would look something like this:

# HELP jvm_info JVM version info
# TYPE jvm_info gauge
jvm_info{version="1.8.0_221-b27",vendor="Oracle Corporation",runtime="Java(TM) SE Runtime Environment",} 1.0

# HELP jvm_memory_bytes_used Used bytes of a given JVM memory area.
# TYPE jvm_memory_bytes_used gauge
jvm_memory_bytes_used{area="heap",} 1.0318492E8
jvm_memory_bytes_used{area="nonheap",} 1.52094712E8
# HELP jvm_memory_bytes_committed Committed (bytes) of a given JVM memory area.
# TYPE jvm_memory_bytes_committed gauge
jvm_memory_bytes_committed{area="heap",} 1.35266304E8
jvm_memory_bytes_committed{area="nonheap",} 1.71302912E8
# HELP jvm_memory_bytes_max Max (bytes) of a given JVM memory area.
# TYPE jvm_memory_bytes_max gauge
jvm_memory_bytes_max{area="heap",} 1.073741824E9
jvm_memory_bytes_max{area="nonheap",} -1.0
# HELP jvm_memory_bytes_init Initial bytes of a given JVM memory area.
# TYPE jvm_memory_bytes_init gauge
jvm_memory_bytes_init{area="heap",} 1.34217728E8
jvm_memory_bytes_init{area="nonheap",} 2555904.0

# HELP jvm_memory_pool_bytes_used Used bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_bytes_used gauge
jvm_memory_pool_bytes_used{pool="Code Cache",} 3.3337536E7
jvm_memory_pool_bytes_used{pool="Metaspace",} 1.04914136E8
jvm_memory_pool_bytes_used{pool="Compressed Class Space",} 1.384304E7
jvm_memory_pool_bytes_used{pool="G1 Eden Space",} 3.3554432E7
jvm_memory_pool_bytes_used{pool="G1 Survivor Space",} 1048576.0
jvm_memory_pool_bytes_used{pool="G1 Old Gen",} 6.8581912E7
# HELP jvm_memory_pool_bytes_committed Committed bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_bytes_committed gauge
jvm_memory_pool_bytes_committed{pool="Code Cache",} 3.3619968E7
jvm_memory_pool_bytes_committed{pool="Metaspace",} 1.19697408E8
jvm_memory_pool_bytes_committed{pool="Compressed Class Space",} 1.7985536E7
jvm_memory_pool_bytes_committed{pool="G1 Eden Space",} 4.6137344E7
jvm_memory_pool_bytes_committed{pool="G1 Survivor Space",} 1048576.0
jvm_memory_pool_bytes_committed{pool="G1 Old Gen",} 8.8080384E7
# HELP jvm_memory_pool_bytes_max Max bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_bytes_max gauge
jvm_memory_pool_bytes_max{pool="Code Cache",} 2.5165824E8
jvm_memory_pool_bytes_max{pool="Metaspace",} -1.0
jvm_memory_pool_bytes_max{pool="Compressed Class Space",} 1.073741824E9
jvm_memory_pool_bytes_max{pool="G1 Eden Space",} -1.0
jvm_memory_pool_bytes_max{pool="G1 Survivor Space",} -1.0
jvm_memory_pool_bytes_max{pool="G1 Old Gen",} 1.073741824E9
# HELP jvm_memory_pool_bytes_init Initial bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_bytes_init gauge
jvm_memory_pool_bytes_init{pool="Code Cache",} 2555904.0
jvm_memory_pool_bytes_init{pool="Metaspace",} 0.0
jvm_memory_pool_bytes_init{pool="Compressed Class Space",} 0.0
jvm_memory_pool_bytes_init{pool="G1 Eden Space",} 7340032.0
jvm_memory_pool_bytes_init{pool="G1 Survivor Space",} 0.0
jvm_memory_pool_bytes_init{pool="G1 Old Gen",} 1.26877696E8
# HELP jvm_buffer_pool_used_bytes Used bytes of a given JVM buffer pool.
# TYPE jvm_buffer_pool_used_bytes gauge
jvm_buffer_pool_used_bytes{pool="direct",} 148590.0
jvm_buffer_pool_used_bytes{pool="mapped",} 0.0
# HELP jvm_buffer_pool_capacity_bytes Bytes capacity of a given JVM buffer pool.
# TYPE jvm_buffer_pool_capacity_bytes gauge
jvm_buffer_pool_capacity_bytes{pool="direct",} 148590.0
jvm_buffer_pool_capacity_bytes{pool="mapped",} 0.0
# HELP jvm_buffer_pool_used_buffers Used buffers of a given JVM buffer pool.
# TYPE jvm_buffer_pool_used_buffers gauge
jvm_buffer_pool_used_buffers{pool="direct",} 19.0
jvm_buffer_pool_used_buffers{pool="mapped",} 0.0
# HELP jvm_classes_loaded The number of classes that are currently loaded in the JVM
# TYPE jvm_classes_loaded gauge
jvm_classes_loaded 16993.0
# HELP jvm_classes_loaded_total The total number of classes that have been loaded since the JVM has started execution
# TYPE jvm_classes_loaded_total counter
jvm_classes_loaded_total 17041.0
# HELP jvm_classes_unloaded_total The total number of classes that have been unloaded since the JVM has started execution
# TYPE jvm_classes_unloaded_total counter
jvm_classes_unloaded_total 48.0

# HELP bwce_activity_stats_list BWCE Activity Statictics list
# TYPE bwce_activity_stats_list gauge
# HELP bwce_activity_counter_list BWCE Activity related Counters list
# TYPE bwce_activity_counter_list gauge
# HELP all_activity_events_count BWCE All Activity Events count by State
# TYPE all_activity_events_count counter
all_activity_events_count{StateName="CANCELLED",} 0.0
all_activity_events_count{StateName="COMPLETED",} 0.0
all_activity_events_count{StateName="STARTED",} 0.0
all_activity_events_count{StateName="FAULTED",} 0.0
# HELP activity_events_count BWCE All Activity Events count by Process, Activity State
# TYPE activity_events_count counter
# HELP activity_total_evaltime_count BWCE Activity EvalTime by Process and Activity
# TYPE activity_total_evaltime_count counter
# HELP activity_total_duration_count BWCE Activity DurationTime by Process and Activity
# TYPE activity_total_duration_count counter
# HELP bwpartner_instance:total_request Total Request for the partner invocation which mapped from the activities
# TYPE bwpartner_instance:total_request counter
# HELP bwpartner_instance:total_duration_ms Total Duration for the partner invocation which mapped from the activities (execution or latency)
# TYPE bwpartner_instance:total_duration_ms counter
# HELP bwce_process_stats BWCE Process Statistics list
# TYPE bwce_process_stats gauge
# HELP bwce_process_counter_list BWCE Process related Counters list
# TYPE bwce_process_counter_list gauge
# HELP all_process_events_count BWCE All Process Events count by State
# TYPE all_process_events_count counter
all_process_events_count{StateName="CANCELLED",} 0.0
all_process_events_count{StateName="COMPLETED",} 0.0
all_process_events_count{StateName="STARTED",} 0.0
all_process_events_count{StateName="FAULTED",} 0.0
# HELP process_events_count BWCE Process Events count by Operation
# TYPE process_events_count counter
# HELP process_duration_seconds_total BWCE Process Events duration by Operation in seconds
# TYPE process_duration_seconds_total counter
# HELP process_duration_milliseconds_total BWCE Process Events duration by Operation in milliseconds
# TYPE process_duration_milliseconds_total counter
# HELP bwdefinitions:partner BWCE Process Events count by Operation
# TYPE bwdefinitions:partner counter
bwdefinitions:partner{ProcessName="t1.module.item.getTransactionData",ActivityName="FTLPublisher",ServiceName="GetCustomer360",OperationName="GetDataOperation",PartnerService="TransactionService",PartnerOperation="GetTransactionsOperation",Location="internal",PartnerMiddleware="MW",} 1.0
bwdefinitions:partner{ProcessName=" t1.module.item.auditProcess",ActivityName="KafkaSendMessage",ServiceName="GetCustomer360",OperationName="GetDataOperation",PartnerService="AuditService",PartnerOperation="AuditOperation",Location="internal",PartnerMiddleware="MW",} 1.0
bwdefinitions:partner{ProcessName="t1.module.item.getCustomerData",ActivityName="JMSRequestReply",ServiceName="GetCustomer360",OperationName="GetDataOperation",PartnerService="CustomerService",PartnerOperation="GetCustomerDetailsOperation",Location="internal",PartnerMiddleware="MW",} 1.0
# HELP bwdefinitions:binding BW Design Time Repository - binding/transport definition
# TYPE bwdefinitions:binding counter
bwdefinitions:binding{ServiceName="GetCustomer360",OperationName="GetDataOperation",ServiceInterface="GetCustomer360:GetDataOperation",Binding="/customer",Transport="HTTP",} 1.0
# HELP bwdefinitions:service BW Design Time Repository - Service definition
# TYPE bwdefinitions:service counter
bwdefinitions:service{ProcessName="t1.module.sub.item.getCustomerData",ServiceName="GetCustomer360",OperationName="GetDataOperation",ServiceInstance="GetCustomer360:GetDataOperation",} 1.0
bwdefinitions:service{ProcessName="t1.module.sub.item.auditProcess",ServiceName="GetCustomer360",OperationName="GetDataOperation",ServiceInstance="GetCustomer360:GetDataOperation",} 1.0
bwdefinitions:service{ProcessName="t1.module.sub.orchestratorSubFlow",ServiceName="GetCustomer360",OperationName="GetDataOperation",ServiceInstance="GetCustomer360:GetDataOperation",} 1.0
bwdefinitions:service{ProcessName="t1.module.Process",ServiceName="GetCustomer360",OperationName="GetDataOperation",ServiceInstance="GetCustomer360:GetDataOperation",} 1.0
# HELP bwdefinitions:gateway BW Design Time Repository - Gateway definition
# TYPE bwdefinitions:gateway counter
bwdefinitions:gateway{ServiceName="GetCustomer360",OperationName="GetDataOperation",ServiceInstance="GetCustomer360:GetDataOperation",Endpoint="bwce-demo-mon-orchestrator-bwce",InteractionType="ISTIO",} 1.0
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 1956.86
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.604712447107E9
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 763.0
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1048576.0
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 3.046207488E9
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 4.2151936E8
# HELP jvm_gc_collection_seconds Time spent in a given JVM garbage collector in seconds.
# TYPE jvm_gc_collection_seconds summary
jvm_gc_collection_seconds_count{gc="G1 Young Generation",} 540.0
jvm_gc_collection_seconds_sum{gc="G1 Young Generation",} 4.754
jvm_gc_collection_seconds_count{gc="G1 Old Generation",} 2.0
jvm_gc_collection_seconds_sum{gc="G1 Old Generation",} 0.563

# HELP jvm_threads_current Current thread count of a JVM
# TYPE jvm_threads_current gauge
jvm_threads_current 98.0
# HELP jvm_threads_daemon Daemon thread count of a JVM
# TYPE jvm_threads_daemon gauge
jvm_threads_daemon 43.0
# HELP jvm_threads_peak Peak thread count of a JVM
# TYPE jvm_threads_peak gauge
jvm_threads_peak 98.0
# HELP jvm_threads_started_total Started thread count of a JVM
# TYPE jvm_threads_started_total counter
jvm_threads_started_total 109.0
# HELP jvm_threads_deadlocked Cycles of JVM-threads that are in deadlock waiting to acquire object monitors or ownable synchronizers
# TYPE jvm_threads_deadlocked gauge
jvm_threads_deadlocked 0.0
# HELP jvm_threads_deadlocked_monitor Cycles of JVM-threads that are in deadlock waiting to acquire object monitors
# TYPE jvm_threads_deadlocked_monitor gauge
jvm_threads_deadlocked_monitor 0.0

So, the part I’m not using can be around 50% of the metrics endpoint response: why am I spending disk space that I am paying for to store it? And this is for a “critical exporter”, one where I try to use as much of the information as possible; think about how many exporters you have and how much of the information from each of them you actually use.

OK, so now the purpose and motivation of this post are clear, but what can we do about it?

Discovering the REST API

Prometheus has an awesome REST API that exposes all the information you could wish for. If you have ever used the Prometheus graphical interface (shown below), you have been using the REST API, because that is what sits behind it.

Target view of the Prometheus Graphical Interface

We have all the documentation regarding the REST API in the Prometheus official documentation:

https://prometheus.io/docs/prometheus/latest/querying/api/

But what does this API provide us regarding the time-series database (TSDB) that Prometheus uses?

TSDB Admin APIs

We have a specific API to manage the TSDB, but in order to use it, we need to enable the Admin API. That is done by providing the --web.enable-admin-api flag when launching the Prometheus server.
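
For a standalone deployment, that is just an extra flag on the command line (paths below are examples):

# Enable the TSDB Admin API when starting Prometheus
prometheus --config.file=/etc/prometheus/prometheus.yml --web.enable-admin-api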

If we are using the Prometheus Operator Helm chart to deploy Prometheus, we need to set the following item in our values.yaml:

## EnableAdminAPI enables the Prometheus administrative HTTP API, which includes
## functionality such as deleting time series. This is disabled by default.
## ref: https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-admin-apis
enableAdminAPI: true

Enabling this administrative API unlocks a lot of options, but today we are going to focus on a single REST operation: “stats”. This is, in fact, the only TSDB-related method that does not require the Admin API to be enabled. This operation, as we can read in the Prometheus documentation, returns the following items:

headStats: This provides the following data about the head block of the TSDB:

  • numSeries: The number of series.
  • chunkCount: The number of chunks.
  • minTime: The current minimum timestamp in milliseconds.
  • maxTime: The current maximum timestamp in milliseconds.

seriesCountByMetricName: This will provide a list of metric names and their series count.

labelValueCountByLabelName: This will provide a list of the label names and their value count.

memoryInBytesByLabelName: This will provide a list of the label names and the memory used in bytes. Memory usage is calculated by adding the length of all values for a given label name.

seriesCountByLabelValuePair: This will provide a list of label-value pairs and their series count.

To access that API, we need to hit the following endpoint:

GET /api/v1/status/tsdb
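
If Prometheus is reachable from your shell (here I assume localhost:9090; jq is only there for pretty-printing), a quick way to call it is:

curl -s http://localhost:9090/api/v1/status/tsdb | jq .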

So, when I do that against my Prometheus deployment, I get something similar to this:

{
"status":"success",
"data":{
"seriesCountByMetricName":[
{
"name":"apiserver_request_duration_seconds_bucket",
"value":34884
},
{
"name":"apiserver_request_latencies_bucket",
"value":7344
},
{
"name":"etcd_request_duration_seconds_bucket",
"value":6000
},
{
"name":"apiserver_response_sizes_bucket",
"value":3888
},
{
"name":"apiserver_request_latencies_summary",
"value":2754
},
{
"name":"etcd_request_latencies_summary",
"value":1500
},
{
"name":"apiserver_request_count",
"value":1216
},
{
"name":"apiserver_request_total",
"value":1216
},
{
"name":"container_tasks_state",
"value":1140
},
{
"name":"apiserver_request_latencies_count",
"value":918
}
],
"labelValueCountByLabelName":[
{
"name":"__name__",
"value":2374
},
{
"name":"id",
"value":210
},
{
"name":"mountpoint",
"value":208
},
{
"name":"le",
"value":195
},
{
"name":"type",
"value":185
},
{
"name":"name",
"value":181
},
{
"name":"resource",
"value":170
},
{
"name":"secret",
"value":168
},
{
"name":"image",
"value":107
},
{
"name":"container_id",
"value":97
}
],
"memoryInBytesByLabelName":[
{
"name":"__name__",
"value":97729
},
{
"name":"id",
"value":21450
},
{
"name":"mountpoint",
"value":18123
},
{
"name":"name",
"value":13831
},
{
"name":"image",
"value":8005
},
{
"name":"container_id",
"value":7081
},
{
"name":"image_id",
"value":6872
},
{
"name":"secret",
"value":5054
},
{
"name":"type",
"value":4613
},
{
"name":"resource",
"value":3459
}
],
"seriesCountByLabelValuePair":[
{
"name":"namespace=default",
"value":72064
},
{
"name":"service=kubernetes",
"value":70921
},
{
"name":"endpoint=https",
"value":70917
},
{
"name":"job=apiserver",
"value":70917
},
{
"name":"component=apiserver",
"value":57992
},
{
"name":"instance=192.168.185.199:443",
"value":40343
},
{
"name":"__name__=apiserver_request_duration_seconds_bucket",
"value":34884
},
{
"name":"version=v1",
"value":31152
},
{
"name":"instance=192.168.112.31:443",
"value":30574
},
{
"name":"scope=cluster",
"value":29713
}
]
}
}

We can also check the same information if we use the new and experimental React User Interface on the following endpoint:

/new/tsdb-status
Graphical Visualization of top 10 series count by metric name in the new Prometheus UI

So, with that, you get the top 10 series and labels inside your time-series database, and if some of them are not useful, you can simply get rid of them using the usual approaches to drop a series or a label. This is great, but what if all the ones shown here are relevant, what else can we do?

Hmm, maybe we can use PromQL itself to monitor this (a dogfooding approach). If we want to extract the same information using PromQL, we can do it with the following query:

topk(10, count by (__name__)({__name__=~".+"}))
Top 10 of metric series generated and stored in the time series database

And now we have all the power in our hands. For example, let’s look not at the 10 most relevant series but at the top 100, or apply any other filter we need. Let’s check the JVM metrics we discussed at the beginning, with the following PromQL query:

topk(100, count by (__name__)({__name__=~"jvm.+"}))
Top 100 metric series related to JVM metrics

So we can see that we have at least 150 series for metrics that I am not using at all. But let’s do even better and look at the same data grouped by job name:

topk(10, count by (job,__name__)({__name__=~".+"}))
Result of checking the top 10 metric series count with the job that is generating them
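
And once this analysis points at series you are storing but never using, there are two levers to pull: a drop rule in the scrape configuration (metric_relabel_configs with action: drop) so they stop being ingested, and, since we enabled the Admin API earlier, the TSDB endpoints to remove what is already on disk. A hedged sketch of the latter, assuming Prometheus listens on localhost:9090 and taking one of the JVM GC series as an example:

# Delete the already-stored samples for a series we never use (requires --web.enable-admin-api)
curl -s -X POST 'http://localhost:9090/api/v1/admin/tsdb/delete_series?match[]=jvm_gc_collection_seconds_sum'

# Ask the TSDB to actually reclaim the disk space freed by the deletion
curl -s -X POST 'http://localhost:9090/api/v1/admin/tsdb/clean_tombstones'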

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Prometheus Storage Explained: How the TSDB Works and Why It Matters

Learn the foundations that make Prometheus such a great solution for monitoring your workloads, and use them to your own benefit.

Prometheus is one of the key systems in today’s cloud architectures. It was the second project to graduate from the Cloud Native Computing Foundation (CNCF), after Kubernetes itself, and it is the monitoring solution par excellence for most workloads running on Kubernetes.

If you have already used Prometheus for some time, you know that it relies on a time-series database, so Prometheus storage is one of its key elements. In their own words from the official Prometheus page:

Every time series is uniquely identified by its metric name and optional key-value pairs called labels. A series is similar to a table in a relational model, and inside each series we have samples, which are similar to the tuples: each sample contains a float value and a millisecond-precision timestamp.

Default on-disk approach

By default, Prometheus uses a local-storage approach, storing all those samples on disk. The data is distributed across different files and folders that group different chunks of data.

So, we have folders that create those groups: by default, each one is a two-hour block and can contain one or more files depending on the amount of data ingested in that period, as each folder holds all the samples for that specific time window.

Additionally, each folder also contains metadata files that help locate the metrics inside the data files.

A block is only fully persisted to disk once it is complete; before that, the data is kept in memory and protected by a write-ahead log (WAL) that allows it to be recovered if the Prometheus server crashes.

So, at a high-level view, the directory structure of a Prometheus server’s data directory will look something like this:
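
A hedged, illustrative listing (block directory names are random identifiers, and the exact files vary slightly between Prometheus versions):

./data
├── 01BKGV7JBM69T2G1BGBGM6KB12
│   ├── chunks
│   │   └── 000001
│   ├── index
│   ├── meta.json
│   └── tombstones
├── 01BKGTZQ1SYQJTR4PB43C8PD98
│   ├── chunks
│   │   └── 000001
│   ├── index
│   ├── meta.json
│   └── tombstones
└── wal
    ├── 000000002
    └── 000000003

Each block folder holds the chunks with the samples, an index used to locate series inside those chunks, block metadata, and tombstones for deleted data; the wal folder is the write-ahead log mentioned above.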

Remote Storage Integration

The default on-disk storage is good, but it has some limitations in terms of scalability and durability, even considering the performance improvements in the latest versions of the TSDB. So, if we’d like to explore other options for storing this data, Prometheus provides a way to integrate with remote storage systems.

It provides an API that allows the samples being ingested to be written to a remote URL and, at the same time, allows sample data to be read back from that remote URL, as shown in the picture below:

As always with anything related to Prometheus, the number of adapters built on this pattern is huge, and you can see them in detail in the following link:

Summary

Knowing how Prometheus storage works is critical to understanding how we can optimize its usage, improve the performance of our monitoring solution, and provide a cost-efficient deployment.

In the following posts, we’re going to cover how to optimize the usage of this storage layer, making sure that only the metrics and samples we actually need are stored, and also how to analyze which metrics consume most of the time-series database so we can make good decisions about which metrics to drop and which to keep.

So, stay tuned for the next post regarding how we can have a better life with Prometheus and not die in the attempt.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Scan Docker Images Locally for Vulnerabilities Using Snyk (DevSecOps Guide)

Learn how you can leverage the use of Snyk inside your Docker engine installation

Security is one of the most relevant topics in modern architecture, and it needs to be handled from many different perspectives. Having a single team auditing the platforms and the software we build is not enough.

The rise of DevSecOps as the new normal, with security teams and policies becoming part of the development process so that security doesn’t block innovation and the artifacts we deploy are secure, has made this clear.

Docker image scanning is one of the most important practices around container images: it lets us know whether the components inside an image are free from known vulnerabilities. We usually rely on external systems to do so.

I wrote an article regarding the usage of one of the most relevant options (Harbor) from the open source world to do this job.

Cloud providers’ Docker registries, like Amazon ECR, have also started doing this as of this year. But why should we wait until we push the images to an external registry? Why can’t we do it in our local environment?

Now we can. Docker Desktop version 2.5.0.1 also bundles the Snyk components needed to scan Docker images directly from the command line:

https://www.docker.com/blog/combining-snyk-scans-in-docker-desktop-and-docker-hub-to-deploy-secure-containers/


Scanning Your Local Images

So, let’s start. Let’s open a new terminal and type the following command:

docker scan <image-name>

As soon as we run this, the command tells us that the scanning process uses Snyk and that we need to authorize access to that service before scanning.
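
For example, to scan a specific image (the image name below is a placeholder, and flag availability may vary with your Docker and Snyk versions):

# Scan a locally built image
docker scan myorg/myservice:1.0.0

# Point Snyk at the Dockerfile for base-image recommendations and only report high-severity issues
docker scan --file Dockerfile --severity high myorg/myservice:1.0.0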

After that, we get a list of all the vulnerabilities detected, as you can see in the picture below:

Vulnerability scanning
Vulnerability scanning using your local Docker client

For each of the vulnerabilities, you can see the following data:

Vulnerability info
Detailed information provided for each of the vulnerabilities detected

We get the library with the vulnerability, the severity level, and a short description of it. If you need more details, you can also check the provided URL that is linked to a description page for that vulnerability:

Vulnerabilities page
Vulnerability detail page from Snyk

Finally, it also points to the sources that introduce this library into your image, so the issue can be fixed quickly.

It provides a high-level view of the whole image too, as you can see here:

Overview of Docker images
Overview of your Docker images with all the vulnerabilities detected

So now you have no excuse not to make sure all your images are safe and secure before pushing them to your registry. Let’s do it!

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

How to Analyze and Reduce Docker Image Size (Layer Analysis and Best Practices)

Find out how you can reduce the size of your Docker images for a better experience and real savings inside your organization.

Containerization is the new normal. We are all aware of that. New releases of corporate software and open-source projects alike now include the option of running their software from a Docker image.

You have probably already been testing, or even running production workloads, with Docker images you built yourself. If that is the case, you know one of the big challenges of this kind of task: how do you optimize the size of the image you generate?

One of the main reasons a Docker image can be so big is that images are built following a layered model: each image is created as a stack of layers, one for each command in your Dockerfile.

Graphical explanation of how a Docker image is composed

Use dive to analyze the size of your images

dive is an open-source project that provides a detailed view of the composition of your Docker images. It works as a command-line application and gives a great view of the content of each layer, as you can see in the picture below:

dive run against a BusinessWorks Container Edition image

The tool uses an ncurses-style interface (if you are old enough to remember how tools looked before graphical user interfaces were a thing, it should feel familiar) and has these main features (a quick usage sketch follows the list):

  • It lists the layers in the top-left of the screen along with the size of each one.
  • It provides general stats about image efficiency (a percentage value), an estimate of the wasted space, and the image's total size.
  • For each selected layer, you get a view of the file system at that point, including the size of each folder.
  • It also highlights the biggest elements and how many times those objects are duplicated.
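
As a quick usage sketch (assuming dive is already installed; the image name below is just an example), you simply point it at a local image, and you can also run it non-interactively, which is handy for CI pipelines:

# Interactive exploration of an image's layers
dive my-app:1.0

# Non-interactive mode, useful in CI (evaluates the image against .dive-ci rules if present)
CI=true dive my-app:1.0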

Now you have a tool that helps you understand how your image is built and measure the effect of each tweak you make to reduce its size. So, let's start with the tricks.

1.- Clean your image!

This first one is quite obvious, but that doesn't mean it is not important. Usually, when you create a Docker image, you follow the same pattern:

  • You declare a base image to leverage on.
  • You add resources to do some work.
  • You do some work.

We usually forget an additional step: cleaning up the added resources once they are not needed anymore! So, it is important to make sure we remove every file we no longer need.

This also applies to other artifacts, like the apt cache left behind when we install a new package, or any temporary folder we use during an installation or a build step.
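
A minimal sketch of what that looks like in practice (the base image and package are just examples): performing the cleanup inside the same RUN instruction that created the files ensures they never end up in a layer of their own.

FROM ubuntu:20.04
# Install a package and remove the apt cache and temp files in the same layer
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/* /tmp/*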

2.- Be careful about how you create your Dockerfile

As we already mentioned, each command we declare in our Dockerfile generates a new layer. So, it is important to be very careful with the lines in the Dockerfile. Even if it is a tradeoff against readability, it is worth merging commands into the same RUN instruction so we don't create additional layers.

Sample Dockerfile with merged commands

You can also use a Docker linter like Hadolint, which will warn you about this and other anti-patterns to avoid when writing a Dockerfile.
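
If you want to give Hadolint a try, a common way to run it without installing anything (assuming Docker is available locally) is through its official image, feeding your Dockerfile through standard input:

docker run --rm -i hadolint/hadolint < Dockerfile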

3.- Go for docker build --squash

Recent versions of the Docker engine provide an option that squashes the intermediate layers created during the build into a single layer, minimizing the final image size.

It works by adding a new flag when you build your image. So, instead of doing this:

docker build -t <your-image-name>:<tag> <Dockerfile location>

You should use an additional flag:

docker build --squash -t <your-image-name>:<tag> <Dockerfile location>

To be able to use this option, you need to enable the experimental features of your Docker engine. You do that in your daemon.json file and then restart the engine. If you are using Docker Desktop for Windows or Mac, you can do it through the user interface, as shown below:

Enabling experimental features in the Docker Desktop settings
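
If you are running the Docker engine directly on Linux, a minimal sketch of the change (assuming the default /etc/docker/daemon.json location) looks like this:

{
  "experimental": true
}

After saving the file, restart the daemon, for example with sudo systemctl restart docker on systemd-based distributions.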

Summary

These tweaks will make your Docker images thinner, make pulling and pushing them much more pleasant, and even save some money on image storage in the registry of your choice. And not only for you, but for everyone else who builds on the work you are doing. So think about yourself, but also think about the community.


EKS Hybrid Series: How to Change Namespaces for AWS Fargate Serverless Deployments

In the previous post of this series, covering how to set up a hybrid EKS cluster using both traditional EC2 machines and the serverless option with Fargate, we created an EKS cluster with both deployment modes available. If you haven't read it yet, do it now!

https://medium.com/@alexandrev/hybrid-aws-kubernetes-cluster-using-eks-ec2-and-fargate-13198d864baa

At this point, we have an empty cluster ready for new workloads, but we still need to configure a few things before deploying. The first one is to decide which workloads will be deployed using the serverless option and which ones will use the traditional EC2 option.

By default, the Fargate profile covers all the workloads deployed to the default and kube-system namespaces, as you can see in the picture below from the AWS Console:

Default Fargate profile namespaces shown in the AWS Console

So that means that all workloads from the default namespace and the kube-system namespace will be deployed in a serverless fashion. If that's what you want, perfect. But sometimes you'd rather start with a limited set of namespaces that use the serverless option and rely on traditional deployment for everything else.

We can check that same information using eksctl by typing the following command:

eksctl get fargateprofile --cluster cluster-test-hybrid -o yaml

The output of that command should look similar to the information we saw in the AWS Console:

- name: fp-default
  podExecutionRoleARN: arn:aws:iam::938784100097:role/eksctl-cluster-test-hybrid-FargatePodExecutionRole-1S12LVS5S2L62
  selectors:
  - namespace: default
  - namespace: kube-system
  subnets:
  - subnet-022f9cc3fd1180bb8
  - subnet-0aaecd5250ebcb02e
  - subnet-01b0bae6fa66ecd31

NOTE: If you don’t remember the name of your cluster you just need to type the command eksctl get clusters

So, this is what we're going to do. The first step is to create a new namespace named “serverless” that will hold our serverless deployments, using a kubectl command as follows:

kubectl create namespace serverless

Now we just need to create a new Fargate profile that will replace the one we have at the moment, again using eksctl to handle that job:

eksctl create fargateprofile --cluster cluster-test-hybrid --name fp-serverless-profile --namespace serverless

NOTE: We can limit the scope of our serverless deployments not only by namespace but also by labels, so we can have, within the same namespace, some workloads deployed the traditional way and others deployed serverless. That gives us every possibility to design the cluster as we wish. To do that, we append the labels argument in a key=value fashion (see the example after the command output below).

And we will get an output similar to this:

[ℹ] creating Fargate profile “fp-serverless-profile” on EKS cluster “cluster-test-hybrid”
[ℹ] created Fargate profile “fp-serverless-profile” on EKS cluster “cluster-test-hybrid”
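
As a hypothetical example of the label-based scoping mentioned in the note above (the profile name, namespace, and label are made up, and you should check eksctl create fargateprofile --help for the exact flag supported by your eksctl version), the command would look something like this:

eksctl create fargateprofile --cluster cluster-test-hybrid --name fp-batch --namespace serverless --labels runtime=fargate

With a profile like this, only pods created in the serverless namespace that also carry the runtime=fargate label would be scheduled onto Fargate.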

If we now check the profiles available, we should see two profiles handling three namespaces: default and kube-system, managed by the default profile, and serverless, handled by the profile we just created.

We will just use the following command to delete the default profile:

eksctl delete fargateprofile --cluster cluster-test-hybrid fp-default

And the output of that command should be similar to this one:

[ℹ] deleted Fargate profile “fp-default” on EKS cluster “cluster-test-hybrid”

And after that, our cluster is ready, with a limited scope for serverless deployments. In the next post of the series, we will deploy workloads in both fashions to see the difference between them. So, don't miss the updates on this series by following my posts, and if you liked the article, or you have any doubts or comments, please leave your feedback in the comments below!


EKS Fargate Hybrid Kubernetes Cluster: Combine EC2 Nodes and Serverless Pods on AWS

EKS Fargate AWS Kubernetes Cluster: learn how to create a Kubernetes cluster that can also use all the power of serverless computing through AWS Fargate

Several movements and paradigms are pushing us hard to change our architectures, leveraging more managed services that take care of the operational side so we can focus on what really matters for our business: creating applications and delivering value through them.

AWS has been a critical partner in that journey, especially in the container world. With the release of EKS some time ago, Amazon provided a managed Kubernetes service that everyone can use, and with Fargate, its CaaS offering, it also gives us the power to run container workloads in a serverless fashion, without having to worry about anything else.

You may be wondering whether those services can work together. The short answer is yes. Even more importantly, they can work in a mixed mode:

So you can have an EKS cluster where some nodes are Fargate-backed and others are regular EC2 machines, for workloads that are stateful or simply fit better in a traditional EC2 approach. Everything plays by the same rules and is managed by the same EKS cluster.

That sounds amazing, but how can we do it? Let's see.

eksctl

To get to that point, there is a tool we need to introduce first: eksctl. It is a command-line utility that lets us perform any action against the EKS service, simplifies the work a lot, and allows us to automate most tasks without human intervention. So, the first thing we need to do is get eksctl ready on our machines. Let's see how.

Amazon provides detailed documentation on how to install eksctl on different platforms, whether you're using Windows, Linux, or macOS:

After doing that, we can check that eksctl is installed by running the command:

eksctl version

And we should get an output similar to this one:

Output of the eksctl version command

After that, we have access to all the power behind the EKS service just by typing these simple commands into our console window.

Creating the EKS Hybrid Cluster

Now, we’re going to create a mixed environment with some EC2 machines and enable the Fargate support for EKS. To do that, we will start with the following command:

eksctl create cluster --version=1.15 --name=cluster-test-hybrid --region=eu-west-1 --max-pods-per-node=1000 --fargate
[ℹ]  eksctl version 0.26.0
[ℹ]  using region eu-west-1
[ℹ]  setting availability zones to [eu-west-1c eu-west-1a eu-west-1b]
[ℹ]  subnets for eu-west-1c - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ]  subnets for eu-west-1a - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ]  subnets for eu-west-1b - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ]  using Kubernetes version 1.15
[ℹ]  creating EKS cluster "cluster-test-hybrid" in "eu-west-1" region with Fargate profile
[ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-west-1 --cluster=cluster-test-hybrid'
[ℹ]  CloudWatch logging will not be enabled for cluster "cluster-test-hybrid" in "eu-west-1"
[ℹ]  you can enable it with 'eksctl utils update-cluster-logging --region=eu-west-1 --cluster=cluster-test-hybrid'
[ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "cluster-test-hybrid" in "eu-west-1"
[ℹ]  2 sequential tasks: { create cluster control plane "cluster-test-hybrid", create fargate profiles }
[ℹ]  building cluster stack "eksctl-cluster-test-hybrid-cluster"
[ℹ]  deploying stack "eksctl-cluster-test-hybrid-cluster"
[ℹ]  creating Fargate profile "fp-default" on EKS cluster "cluster-test-hybrid"
[ℹ]  created Fargate profile "fp-default" on EKS cluster "cluster-test-hybrid"
[ℹ]  "coredns" is now schedulable onto Fargate
[ℹ]  "coredns" is now scheduled onto Fargate
[ℹ]  "coredns" pods are now scheduled onto Fargate
[ℹ]  waiting for the control plane availability...
[✔]  saved kubeconfig as "C:\\Users\\avazquez/.kube/config"
[ℹ]  no tasks
[✔]  all EKS cluster resources for "cluster-test-hybrid" have been created
[ℹ]  kubectl command should work with "C:\\Users\\avazquez/.kube/config", try 'kubectl get nodes'
[✔]  EKS cluster "cluster-test-hybrid" in "eu-west-1" region is ready

This command sets up the EKS cluster with Fargate support enabled.

NOTE: The first thing that we should notice is that the Fargate support for EKS is not yet available in all the AWS regions. So, depending on the region that you’re using you could get an error. At this moment this is just enabled in US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo) based on the information from AWS Announcements: https://aws.amazon.com/about-aws/whats-new/2020/04/eks-adds-fargate-support-in-frankfurt-oregon-singapore-and-sydney-aws-regions/

Now we should add a node group to that cluster. A node group is a set of EC2 instances that are managed as part of the cluster. To create one, we use the following command:

eksctl create nodegroup --cluster cluster-test-hybrid --managed
[ℹ]  eksctl version 0.26.0
[ℹ]  using region eu-west-1
[ℹ]  will use version 1.15 for new nodegroup(s) based on control plane version
[ℹ]  nodegroup "ng-1262d9c0" present in the given config, but missing in the cluster
[ℹ]  1 nodegroup (ng-1262d9c0) was included (based on the include/exclude rules)
[ℹ]  will create a CloudFormation stack for each of 1 managed nodegroups in cluster "cluster-test-hybrid"
[ℹ]  2 sequential tasks: { fix cluster compatibility, 1 task: { 1 task: { create managed nodegroup "ng-1262d9c0" } } }
[ℹ]  checking cluster stack for missing resources
[ℹ]  cluster stack has all required resources
[ℹ]  building managed nodegroup stack "eksctl-cluster-test-hybrid-nodegroup-ng-1262d9c0"
[ℹ]  deploying stack "eksctl-cluster-test-hybrid-nodegroup-ng-1262d9c0"
[ℹ]  no tasks
[✔]  created 0 nodegroup(s) in cluster "cluster-test-hybrid"
[ℹ]  nodegroup "ng-1262d9c0" has 2 node(s)
[ℹ]  node "ip-192-168-69-215.eu-west-1.compute.internal" is ready
[ℹ]  node "ip-192-168-9-111.eu-west-1.compute.internal" is ready
[ℹ]  waiting for at least 2 node(s) to become ready in "ng-1262d9c0"
[ℹ]  nodegroup "ng-1262d9c0" has 2 node(s)
[ℹ]  node "ip-192-168-69-215.eu-west-1.compute.internal" is ready
[ℹ]  node "ip-192-168-9-111.eu-west-1.compute.internal" is ready
[✔]  created 1 managed nodegroup(s) in cluster "cluster-test-hybrid"
[ℹ]  checking security group configuration for all nodegroups
[ℹ]  all nodegroups have up-to-date configuration
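
If you want more control than the defaults eksctl picks for you, the same command accepts explicit sizing flags. A quick sketch (the node group name and instance type are just examples):

eksctl create nodegroup --cluster cluster-test-hybrid --managed --name ng-ec2 --node-type t3.medium --nodes 2 --nodes-min 2 --nodes-max 4

This would create a managed node group of two t3.medium instances that can scale between two and four nodes.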

Now we should be able to use kubectl to manage this new cluster. If you don't have kubectl installed, or you haven't heard about it, this is the command-line tool that lets us manage a Kubernetes cluster, and you can install it following the documentation shown here:

Now let's take a look at the infrastructure we have. If we type the following command to see the nodes at our disposal:

kubectl get nodes

We see an output similar to this:

NAME                                                    STATUS   ROLES    AGE   VERSION
fargate-ip-192-168-102-22.eu-west-1.compute.internal    Ready    <none>   10m   v1.15.10-eks-094994
fargate-ip-192-168-112-125.eu-west-1.compute.internal   Ready    <none>   10m   v1.15.10-eks-094994
ip-192-168-69-215.eu-west-1.compute.internal            Ready    <none>   85s   v1.15.11-eks-bf8eea
ip-192-168-9-111.eu-west-1.compute.internal             Ready    <none>   87s   v1.15.11-eks-bf8eea

As you can see, we have four “nodes”: two whose names start with fargate, which are Fargate nodes, and two that simply start with ip-, which are the traditional EC2 instances. And that's it: our mixed environment is ready to use.

We can check the same cluster on the AWS EKS page to see that configuration in more detail. If we open the EKS page for this cluster, the Compute tab shows the following information:

Compute tab for the cluster in the AWS EKS console

Under Node Groups we see the data about the EC2 machines managed as part of this cluster; the Desired Capacity is 2, which is why we have two EC2 instances in our cluster. For the Fargate profile, the namespaces are set to default and kube-system, which means that all deployments to those namespaces will run as Fargate tasks.

Summary

In the following articles in this series, we will see how to take our hybrid cluster further: deploying workloads, scaling it based on the demand we get, enabling integration with other services like AWS CloudWatch, and so on. So stay tuned, and don't forget to follow my articles so you don't miss any updates as soon as they are available!
