Your weekly summary of what I found most relevant in the cloud-native ecosystem.

Summary
In this edition, we have a mix of content, ranging from technical tutorials to analysis and historical context behind important open-source decisions.
We will start with a look at extending the traditional scaling options in Kubernetes beyond the default capabilities of the Horizontal Pod Autoscaler (HPA), using the KEDA project to scale based on Kafka consumer lag.
We will continue with the reasons behind the decision to remove Dockershim from the Kubernetes project, which has finally become a reality with the release of version 1.24.
And we will finish with an article that offers some solutions to the common concern of observability data explosion in large architectures and how we can address it.
Join me on this journey!
Stories

This article from Devtron Labs covers the configuration needed to scale based on Kafka consumer lag instead of resource consumption.
KEDA is one of the go-to projects when we need to scale the right way, especially when driving scaling from metrics external to our containers (number of requests, pending messages, and so on). Read this article to add another item to your toolchain.
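To give a flavor of the approach the article walks through, a KEDA ScaledObject with a Kafka trigger looks roughly like the sketch below. The deployment name, broker address, topic, consumer group, and threshold are illustrative placeholders, not values from the article:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: kafka-consumer-scaler        # hypothetical name
spec:
  scaleTargetRef:
    name: my-consumer-deployment     # the Deployment consuming from Kafka (placeholder)
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: kafka.default.svc:9092   # placeholder broker address
        consumerGroup: my-consumer-group            # placeholder consumer group
        topic: orders                               # placeholder topic
        lagThreshold: "50"   # target lag per replica; KEDA scales out when lag exceeds this
```

With this in place, KEDA feeds the consumer-group lag to the HPA as an external metric, so replicas track pending messages rather than CPU or memory.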

Kubernetes 1.24 has been released, and with it, Dockershim support has been removed from the project; in the words of Kat Cosgrove, this is good for the project. This article covers the historical context of the decision and why it is a positive change.
This was one of the stories that generated a lot of noise last year, with many people panicking when it was announced. Now it is a reality. I still think this explanation exercise is essential, so that everyone understands the decision, can manage their concerns, and can see, as Kat explains, why this is a good thing for the project.

The article covers how the move to a polyglot, holistic observability stack generates a significant amount of data that must be managed properly to keep it from becoming a major concern for your architecture. In his article, Martin covers the main points and offers some things you should keep in mind.
Every customer I have seen moving to a full observability stack becomes concerned, at some point, about the storage required and how to manage it at production scale. So if you or your enterprise is on that journey, you cannot miss this article.
Quote
The future belongs to those who believe in the beauty of their dreams.
Eleanor Roosevelt




