BanzaiCloud Logging Operator on Kubernetes: Log Aggregation in 5 Minutes

In the previous article, we described the capabilities that the BanzaiCloud Logging Operator provides and its main features. So today we are going to see how we can implement it.

The first thing we need to do is install the operator itself. To do that, we have a Helm chart at our disposal, so the only thing we need to do is run the following commands:

helm repo add banzaicloud-stable https://kubernetes-charts.banzaicloud.com
helm upgrade --install --wait --create-namespace --namespace logging logging-operator banzaicloud-stable/logging-operator

That will create a logging namespace (in case you didn’t have it yet), and it will deploy the operator components themselves, as you can see in the picture below:

BanzaiCloud Logging Operator installed using the Helm chart
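
If you don’t have the screenshot at hand, a quick way to verify the deployment from the command line (assuming the release name and namespace used above) is to list the pods:

kubectl -n logging get pods

You should see the logging-operator pod in Running state before moving on.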

So, now we can start creating the resources we need using the CRDs that we covered in the previous article. To recap, these are the ones that we have at our disposal:

  • logging – The logging resource defines the logging infrastructure for your cluster that collects and transports your log messages. It also contains configurations for Fluentd and Fluent-bit.
  • output / clusteroutput – Defines an Output for a logging flow, where the log messages are sent. output is namespace-scoped, while clusteroutput is cluster-scoped.
  • flow / clusterflow – Defines a logging flow using filters and outputs. The flow routes the selected log messages to the specified outputs. flow is namespace-scoped, while clusterflow is cluster-scoped.

So, first of all, we are going to define our scenario. I don’t want to make something complex; I want all the logs that my workloads generate, no matter what namespace they are in, to be sent to a Grafana Loki instance that I have also installed on that Kubernetes cluster, reachable on a specific endpoint, using the Simple Scalable configuration for Grafana Loki.

So, let’s start with the components that we need. First, we need a Logging object to define our logging infrastructure, and I will create it with the following command:

kubectl -n logging apply -f - <<"EOF"
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging-simple
spec:
  fluentd: {}
  fluentbit: {}
  controlNamespace: logging
EOF

We will keep the default configuration for fluentd and fluent-bit just for the sake of the sample; later on, in upcoming articles, we can talk about a specific design, but that’s it.

Once the resource is processed, the components will appear in your logging namespace. In my case, as I’m using a 3-node cluster, I will see 3 instances of fluent-bit deployed as a DaemonSet and a single instance of fluentd, as you can see in the picture below:

BanzaiCloud Logging Operator configuration after applying the Logging CRD
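
You can also verify this from the command line; a minimal check (in the default setup fluent-bit runs as a DaemonSet and fluentd as a StatefulSet, although the exact names depend on your Logging resource) could be:

kubectl -n logging get daemonsets,statefulsets,pods

You should see one fluent-bit pod per node and a single fluentd pod, all in Running state.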

So, now we need to define the communication with Loki. As I would like to use this for any namespace I may have on my cluster, I will use the ClusterOutput option instead of the normal Output one, which is namespace-scoped. To do that, we will use the following command (please ensure that the endpoint is the right one; in our case this is loki-gateway.default, as Loki is running inside the Kubernetes cluster):

kubectl -n logging apply -f - <<"EOF"
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: loki-output
spec:
  loki:
    url: http://loki-gateway.default
    configure_kubernetes_labels: true
    buffer:
      timekey: 1m
      timekey_wait: 30s
      timekey_use_utc: true
EOF

And with that we have almost everything; we just need one flow to connect our logging configuration to the ClusterOutput we just created. Again, we will go with a ClusterFlow because we would like to define this at the cluster level and not in a per-namespace fashion. So we will use the following command:

kubectl -n logging apply -f - <<"EOF"
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: loki-flow
spec:
  filters:
    - tag_normaliser: {}
  match:
    - select: {}
  globalOutputRefs:
    - loki-output
EOF
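
Once both resources are applied, the operator validates them and regenerates the fluentd configuration. A quick sanity check (the exact columns shown may vary between operator versions) is:

kubectl -n logging get clusterflows,clusteroutputs

Both resources should be listed as active once the configuration validation passes.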

After some time for the configuration to reload (1-2 minutes or so), you will start to see something like this in the Loki traces:

Grafana showing the logs submitted by the BanzaiCloud Logging Operator

And that indicates that we are already receiving log pushes from the different components, mainly the fluentd element we configured in this case. But I think it is better to see it graphically with Grafana:

Grafana showing the logs submitted by the BanzaiCloud Logging Operator
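
In Grafana’s Explore view, with Loki selected as the data source, a simple LogQL query is enough to confirm that logs from any namespace are arriving (the namespace label below assumes configure_kubernetes_labels: true, as we set in our ClusterOutput):

{namespace="kube-system"}

Changing the label value to any other namespace lets you verify that all workloads are reaching Loki through the same ClusterFlow.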

And that’s it! Changing our logging configuration is as simple as changing the CRDs we defined, applying matches and filters, or sending the logs to a new place. In a very straightforward way, we have all of this completely managed.

Log Aggregation in Kubernetes Explained with BanzaiCloud Logging Operator

We have already talked about the importance of log aggregation in Kubernetes and why the change in the behavior of its components makes it a mandatory requirement for any new architecture we deploy today.

To solve that part, we have a lot of different stacks that you have probably heard about. For example, if we follow the traditional Elasticsearch path, we have the pure ELK stack: Elasticsearch, Logstash, and Kibana. This stack has since been extended with the different “Beats” (Filebeat, Metricbeat, …) that provide a lightweight log forwarder to add to the mix.

You can also swap Logstash for a CNCF component such as Fluentd, which you have probably heard about; in that case, we’re talking about an EFK stack following the same principle. And we also have the Grafana Labs view, using Promtail, Grafana Loki, and Grafana for dashboarding, following a different perspective.

Then you can swap any component for the one of your preference, but in the end, you will have three different kinds of components:

  • Forwarder: Component that listens to all the log inputs, mainly the stdout/stderr output from your containers, and pushes them to a central component.
  • Aggregator: Component that receives all the traces from the forwarders and applies rules to filter some of the events and to format and enrich the ones received, before sending them to central storage.
  • Storage: Component that receives the final traces to be stored and retrieved by the different clients.

To simplify the management of all that in Kubernetes, we have a great Kubernetes operator named the BanzaiCloud Logging Operator that follows this approach in a declarative, policy-driven manner. So let’s see how it works; to explain it better, I will use the central diagram from its website:

BanzaiCloud Logging Operator Architecture

This operator uses the same technologies we were talking about. It mainly covers the first two steps, forwarding and aggregation, plus the configuration needed to send the logs to a central storage of your choice. To do that, it works with the following technologies, all of them part of the CNCF Landscape:

  • Fluent-bit acts as a forwarder, deployed as a DaemonSet, to collect all the logs you have configured.
  • Fluentd acts as an aggregator, defining the flows and rules of your choice to adapt the stream of traces you are receiving and send it to the output of your choice.

And as this is a Kubernetes operator, it works in a declarative way: we define a set of objects that describe our logging policies. We have the following components (a namespaced example follows the list):

  • logging – The logging resource defines the logging infrastructure for your cluster that collects and transports your log messages. It also contains configurations for Fluentd and Fluent-bit.
  • output / clusteroutput – Defines an Output for a logging flow, where the log messages are sent. output is namespace-scoped, while clusteroutput is cluster-scoped.
  • flow / clusterflow – Defines a logging flow using filters and outputs. The flow routes the selected log messages to the specified outputs. flow is namespace-scoped, while clusterflow is cluster-scoped.
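
To make the namespaced variants concrete, here is a minimal sketch of an Output and a Flow living inside a single application namespace (the namespace, names, and label selector are illustrative assumptions):

kubectl -n my-app apply -f - <<"EOF"
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: my-app-output
spec:
  loki:
    url: http://loki-gateway.default
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: my-app-flow
spec:
  match:
    - select:
        labels:
          app: my-app
  localOutputRefs:
    - my-app-output
EOF

Unlike their cluster-scoped counterparts, these two objects only see and route the logs of pods running in their own namespace, which is the namespace isolation feature listed below.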

In the picture below, you can see how these objects interact to define your desired logging architecture:

BanzaiCloud Logging Operator CRD Relationship

And apart from the policy-driven approach, it also includes a lot of great features, such as:

  • Namespace isolation
  • Native Kubernetes label selectors
  • Secure communication (TLS)
  • Configuration validation
  • Multiple flow support (multiply logs for different transformations)
  • Multiple output support (store the same logs in multiple storages: S3, GCS, ES, Loki, and more…)
  • Multiple logging system support (multiple Fluentd, Fluent Bit deployment on the same cluster)

In upcoming articles, we will talk about how to implement this so you can see all the benefits that this CRD-based, policy-driven approach can provide to your architecture.

Kubernetes Operators: 5 Things You Truly Need to Know

(Photo by Dan Lohmar on Unsplash)

Kubernetes Operators have become the new normal for deploying big workloads on Kubernetes, but as some of their principles don’t align immediately with the main concepts of Kubernetes, they usually generate a bit of confusion and doubt about when you need to use them, or even create them.

What Are Kubernetes Operators?

Operators are the way to extend Kubernetes capabilities to manage big workloads whose different parts are related to each other. You can find them in components with a distributed architecture, such as monitoring systems, log aggregation systems, or even service meshes. In the words of the official Kubernetes documentation, operators are defined as below:

Operators are software extensions to Kubernetes that use custom resources to manage applications and their components. Operators follow Kubernetes principles, notably the control loop.

Their primary usage is for standard services rather than for simple application or user workloads, although they could be used even in that scenario.

How Does a Kubernetes Operator Work?

The central concept behind the Kubernetes operator is extension. It is based on the definition and management of custom Kubernetes objects, described through Custom Resource Definitions (CRDs), that allow you to express, in a Kubernetes way, the new concepts that you may need for your workloads.

Some samples of these CRDs are the ServiceMonitor and PodMonitor that we explained in previous posts, but there are many more. That means you now have a new YAML file to define your objects, and you can use the main Kubernetes primitives to create, edit, or delete them as needed.

So, for these components to do any work, you need to code specific controllers that translate the changes made to those YAML files into real primitives acting on the state of the cluster.
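
As a minimal sketch of what such an extension looks like, here is a hypothetical CRD (the group, kind, and fields are invented for illustration, not taken from any real operator):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # The name must match the <plural>.<group> pattern
  name: backups.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string
                target:
                  type: string

Once this CRD is registered, a Backup object can be created, edited, or deleted with kubectl like any native resource, and the operator’s controller is the piece that reconciles those objects into real actions on the cluster.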

Kubernetes Operator Pattern (https://github.com/cncf/tag-app-delivery/blob/eece8f7307f2970f46f100f51932db106db46968/operator-wg/whitepaper/Operator-WhitePaper_v1-0.md#foundation)

How To Manage Kubernetes Operators?

A Kubernetes operator can be installed like any other Kubernetes workload, so depending on the case it can be distributed as a YAML file or a Helm chart. You can even find a shared repository of operators at OperatorHub.
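
For example, installing an operator distributed as a Helm chart looks like any other chart installation; a sketch using the community chart for the Prometheus Operator (repository and chart names as published by the prometheus-community project) could be:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus-operator prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace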

OperatorHub: Central Repository for Kubernetes Operators

Kubernetes Operator vs. Helm Charts

As already discussed, operators are not the same kind of object as Helm charts: Helm charts work only at the deployment level, packaging releases and managing their rollout, while operators go a step beyond by managing and controlling the lifecycle at the runtime level. And as commented, Helm and operators are compatible; this is, for example, how the Prometheus Operator works, which has a Helm chart to deploy itself, as you can find here.

How To Build a Kubernetes Operator

If your goal after reading this is to create a Kubernetes operator, you need to know that there are already some frameworks that will make your life easier with that task.

Tools like Kopf, kubebuilder, metacontroller, or even the CNCF Operator Framework will provide you with the scaffolding and common machinery, so you can focus on what your operator needs to do while they handle the repetitive tasks for you.

More Resources To Learn About Kubernetes Operators

If you want to learn more about Kubernetes Operators or the operator pattern, I strongly recommend you take a look at the CNCF Operator Whitepaper that you can find here.

It covers all the topics discussed above in more technical detail and introduces other vital topics, such as security, lifecycle management, or even best practices.

Another interesting resource is the bibliography from the Whitepaper itself, which I am adding here in case you want to jump directly to the sources:

  • Dobies, J., & Wood, J. (2020). Kubernetes Operators. O’Reilly.
  • Ibryam, B. (2019). Kubernetes Patterns. O’Reilly.
  • Operator Framework. (n.d.). Operator Capabilities. Operator Framework. Retrieved 11 24, 2020, from https://operatorframework.io/operator-capabilities/
  • Philips, B. (2016, 03 16). Introducing Operators: Putting Operational Knowledge into Software. CoreOS Blog. Retrieved 11 24, 2020, from https://coreos.com/blog/introducing-operators.html
  • Hausenblas, M., & Schimanski, S. (2019). Programming Kubernetes. O’Reilly.

Why You Shouldn’t Set Kubernetes ResourceQuotas Using Limit Values (and What to Do Instead)

If You Aren’t Careful You Can Block The Scalability Of Your Workloads

Photo by Joshua Hoehne on Unsplash

One of the great things about container-based development is defining isolation spaces where you have guaranteed resources such as CPU and memory. This is also extended to Kubernetes-based environments at the namespace level, so you can have different virtual environments that cannot exceed a specified level of resource usage.

To define that, you have the concept of ResourceQuota, which works at the namespace level. Based on its own definition (https://kubernetes.io/docs/concepts/policy/resource-quotas/):

A resource quota, defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per namespace. It can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that namespace.

You have several options to define these resource quotas, but in this article we will focus on the main ones, as follows (a sample manifest is sketched right after the list):

  • limits.cpu: Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value.
  • limits.memory: Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value.
  • requests.cpu: Across all pods in a non-terminal state, the sum of CPU requests cannot exceed this value.
  • requests.memory: Across all pods in a non-terminal state, the sum of memory requests cannot exceed this value.
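
As a reference, a limits-based quota like the one we are about to discuss could look like this (the namespace and values are illustrative):

kubectl apply -f - <<"EOF"
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: my-team
spec:
  hard:
    limits.cpu: "8"
    limits.memory: 8Gi
EOF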

So you may think this is a great option: define a limits.cpu and limits.memory quota, and you make sure that usage will never exceed that amount by any means. But you need to be careful about what this really means, and to illustrate that, I will use a sample.

I have a single workload with a single pod with the following resource limitations (see the manifest sketch after the list):

  • requests.cpu: 500m
  • limits.cpu: 1
  • requests.memory: 500Mi
  • limits.memory: 1Gi
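
In manifest terms, that corresponds to a pod spec along these lines (the pod name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: sample-app
spec:
  containers:
    - name: app
      image: my-registry/my-java-app:1.0
      resources:
        requests:
          cpu: 500m
          memory: 500Mi
        limits:
          cpu: "1"
          memory: 1Gi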

Your application is a Java-based application that exposes a REST service and has a Horizontal Pod Autoscaler rule configured to scale when CPU usage exceeds 50%.

So, we start in the initial situation: a single instance that needs around 200m of vCPU and 200Mi of RAM to run, a little below 50% of the request so the autoscaler is not triggered. But the ResourceQuota accounts for the limits of the pod (1 vCPU and 1Gi), so that is the amount we have blocked. Then we get more requests and need to scale to two instances. To simplify the calculations, we will assume each instance uses the same amount of resources, and we will continue that way until we reach 8 instances. So let’s see how the blocked limits evolve (the value that constrains how many objects I can create in my namespace) versus the actual amount of resources I am using:

[Table: limits blocked by the quota vs. actual usage as the workload scales from 1 to 8 instances]

So, for an actual usage of 1.6 vCPU, I have blocked 8 vCPU, and if that was my quota limit, I cannot create more instances: even though I have 6.4 vCPU of allowed capacity sitting unused, this kind of limitation prevents me from deploying anything else.

Yes, I am able to guarantee that I will never use more than 8 vCPU, but I have been blocked very early on that trend, affecting the behavior and scalability of my workloads.

Because of that, you need to be very careful when defining these kinds of limits and be sure about what you are trying to achieve, because by solving one problem you can be creating another one.
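
A common alternative, in line with the title of this article, is to base the quota on requests instead of limits, so the quota tracks what the scheduler actually reserves rather than the worst-case ceiling (values are again illustrative):

kubectl apply -f - <<"EOF"
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: my-team
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 4Gi
EOF

With this quota, each instance of the sample workload consumes 500m of the budget instead of 1 vCPU, so the same 8 instances fit within a 4 vCPU quota.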

I hope this helps you prevent this issue from happening in your daily work, or at least keep it in mind when you are facing similar scenarios.

Promtail Explained: Turning Logs into Metrics for Prometheus and Loki

Promtail is the solution when you need to expose metrics that are only present in the log traces of the software you monitor, so you can keep a consistent monitoring platform.

It is a common understanding that three pillars in the observability world help us get a complete view of the status of our platforms and systems: logs, traces, and metrics.

To provide a summary of the differences between each of them:

  • Metrics are the counters about the state of the different components, from both a technical and a business view. Here we can see things like CPU consumption, the number of requests, or memory and disk usage.
  • Logs are the different messages that each piece of software in our platform produces, letting us understand its current behavior and detect unexpected situations.
  • Traces are the data regarding the end-to-end request flow across the platform, covering the services and systems that took part in that flow and the data related to that concrete request.

We have solutions that claim to address all of them, mainly in the enterprise space with Dynatrace, AppDynamics, and similar. On the other hand, we can go with a specific solution for each of them that we can easily integrate together, and we have discussed those options at length in previous articles.

But some software doesn’t work following this path, because we live in the most heterogeneous era and we all embrace, at some level, the polyglot approach on new platforms. In some cases, we can see software using log traces to provide data that really belongs to metrics or other concerns, and this is when we need to rely on pieces of software that help us “fix” that situation; Promtail does specifically that.

Promtail is mainly a log forwarder, similar to others like fluentd or fluent-bit from the CNCF, or Logstash from the ELK stack. In this case, it is the solution from Grafana Labs and, as you can imagine, part of the Grafana stack, with Loki as the “mastermind”; we covered Loki in a previous article that I recommend you take a look at if you haven’t read it yet.

Promtail has two main ways of behaving as part of this architecture. The first one is very similar to others in this space, as we commented before: it helps us ship the log traces from our containers to a central location, which will mainly be Loki but can be a different one, and it provides the usual options to play with and transform those traces, as we can do in other solutions. You can look at all the options in the documentation, but as you can imagine, this includes transformation, filtering, parsing, and so on.

But what makes Promtail so different is one specific action that you can perform: metrics. It provides a way to create, based on the data we are reading from the logs, Prometheus metrics that a Prometheus server can scrape. That means you can use the log traces you are processing, which can look something like this:

[2021-06-06 22:02.12] New request received for customer_id: 123
[2021-06-06 22:02.12] New request received for customer_id: 191
[2021-06-06 22:02.12] New request received for customer_id: 522

With this information, apart from shipping those traces to the central location, we can create a metric called, for example, `total_request_count`, which will be generated by the Promtail agent and also exposed by it. This lets us take a metrics approach even for systems or components that don’t provide a standard way to do that, like a formal metrics API.

And the way to do this is very well integrated with the configuration. It is done with an additional stage (this is how the actions we can perform in Promtail are called) named metrics.

The schema of this metrics stage is straightforward, and if you are familiar with Prometheus, you will see how directly a Prometheus metric definition maps to this snippet:

# A map where the key is the name of the metric and the value is a specific
# metric type.
metrics:
  [<string>: [ <metric_counter> | <metric_gauge> | <metric_histogram> ] ...]

So we start by defining the kind of metric we would like to create, and we have the usual ones: counter, gauge, or histogram. For each of them, we have a set of options to declare our metric, as you can see here for a counter metric:

# The metric type. Must be Counter.
type: Counter

# Describes the metric.
[description: <string>]

# Defines custom prefix name for the metric. If undefined, the default name "promtail_custom_" will be prefixed.
[prefix: <string>]

# Key from the extracted data map to use for the metric,
# defaulting to the metric's name if not present.
[source: <string>]

# Label values on metrics are dynamic which can cause exported metrics
# to go stale (for example when a stream stops receiving logs).
# To prevent unbounded growth of the /metrics endpoint any metrics which
# have not been updated within this time will be removed.
# Must be greater than or equal to '1s', if undefined default is '5m'
[max_idle_duration: <string>]

config:
  # If present and true all log lines will be counted without
  # attempting to match the source to the extract map.
  # It is an error to specify `match_all: true` and also specify a `value`
  [match_all: <bool>]

  # If present and true all log line bytes will be counted.
  # It is an error to specify `count_entry_bytes: true` without specifying `match_all: true`
  # It is an error to specify `count_entry_bytes: true` without specifying `action: add`
  [count_entry_bytes: <bool>]

  # Filters down source data and only changes the metric
  # if the targeted value exactly matches the provided string.
  # If not present, all data will match.
  [value: <string>]

  # Must be either "inc" or "add" (case insensitive). If
  # inc is chosen, the metric value will increase by 1 for each
  # log line received that passed the filter. If add is chosen,
  # the extracted value must be convertible to a positive float
  # and its value will be added to the metric.
  action: <string>
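
Putting it together with the sample log lines above, here is a sketch of the relevant excerpt of a Promtail scrape configuration (the job name, log path, and regular expression are illustrative assumptions):

scrape_configs:
  - job_name: my-app
    static_configs:
      - targets:
          - localhost
        labels:
          job: my-app
          __path__: /var/log/my-app/*.log
    pipeline_stages:
      # Extract customer_id from each matching log line into the extracted map
      - regex:
          expression: 'New request received for customer_id: (?P<customer_id>\d+)'
      # Increment the counter once per line that produced a customer_id
      - metrics:
          total_request_count:
            type: Counter
            description: "Total number of requests seen in the logs"
            source: customer_id
            config:
              action: inc

With the default prefix, Promtail will expose this on its own /metrics endpoint as promtail_custom_total_request_count, ready for a Prometheus server to scrape.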

And with that, you will have your metric created and exposed, just waiting for a Prometheus server to scrape it. If you would like to see all the options available, everything is covered in the Grafana Labs documentation.

I hope you find this interesting and a useful way to keep all your observability information correctly managed with the right solution, even for those pieces of software that don’t follow your paradigm.
