Your Weekly Summary of What I Found Most Relevant in the Cloud-Native Ecosystem.

Summary
In this issue, we have a mix of things, from a technical tutorial to analysis and historical context around important open-source decisions.
We will start with a look at extending the traditional scalability options in Kubernetes beyond the default Horizontal Pod Autoscaler (HPA) capabilities, using the KEDA project to scale based on Kafka consumer lag.
We will continue with the reasons behind the decision to remove Dockershim from the Kubernetes project, which is finally a reality with the release of version 1.24.
And we will end with an article that addresses a common concern, the explosion of observability data in large architectures, and offers some ways to manage it.
Come with me on this journey!
Stories

This article from Devtron Labs covers the configuration needed to scale based on Kafka consumer lag instead of resource consumption.
KEDA is one of the main projects to reach for when we need to scale the right way, especially when using metrics external to our containers (number of requests, pending messages, and so on). Read this article to add another item to your toolchain; a rough sketch of what such a configuration can look like follows below.
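For reference, here is a minimal sketch, not the article's exact configuration, of a KEDA ScaledObject with a Kafka trigger applied through the official Kubernetes Python client. The deployment name, namespace, brokers, topic, consumer group, and thresholds are hypothetical placeholders; check the article and the KEDA documentation for the fields your setup needs.

# Minimal sketch: apply a KEDA ScaledObject that scales a Deployment
# based on Kafka consumer lag. All names below are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a pod

scaled_object = {
    "apiVersion": "keda.sh/v1alpha1",
    "kind": "ScaledObject",
    "metadata": {"name": "kafka-consumer-scaler", "namespace": "default"},
    "spec": {
        # The workload KEDA will scale (a Deployment named "kafka-consumer" here).
        "scaleTargetRef": {"name": "kafka-consumer"},
        "minReplicaCount": 1,
        "maxReplicaCount": 10,
        "triggers": [
            {
                "type": "kafka",
                "metadata": {
                    "bootstrapServers": "my-kafka:9092",
                    "consumerGroup": "my-consumer-group",
                    "topic": "orders",
                    # Scale out when the lag per replica exceeds this threshold.
                    "lagThreshold": "50",
                },
            }
        ],
    },
}

# ScaledObject is a custom resource, so it is created via the CustomObjectsApi.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="keda.sh",
    version="v1alpha1",
    namespace="default",
    plural="scaledobjects",
    body=scaled_object,
)

The same manifest can of course be applied directly with kubectl; the point is simply that the scaling signal comes from the consumer group's lag on the topic rather than from CPU or memory usage.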

Kubernetes 1.24 has been released, and with it, Dockershim support has been removed from the project; in the words of Kat Cosgrove, this is good for the project. This article covers the historical context of the decision and why it is a positive change.
This topic was one of the pieces of news that generated a lot of noise last year, with many people panicking once it was announced. Now it is a reality. I still think this explanation exercise is essential so that everyone understands the decision, can manage their concerns, and sees, as Kat explains, why this is a good thing for the project.

The article covers how the move to a polyglot, holistic observability stack generates a significant amount of data that needs to be managed properly to avoid it becoming a big concern for your architecture. Martin covers the main points and provides some things to keep in mind.
Every customer I have seen move to a full observability stack becomes concerned at some point about the storage needed and how to manage it at production scale, so if you or your enterprise are on that journey, you cannot miss this article.
Quote
The future belongs to those who believe in the beauty of their dreams.
Eleanor Roosevelt