Autoscaling

Unlocking Performance and Adaptability: Exploring Kubernetes Vertical Autoscaling

Discover the power of Vertical Pod Autoscaling in Kubernetes and how it changes the way you scale workloads. By adjusting the resources assigned to existing pods, vertical scaling offers improved performance and flexibility. Learn how this feature complements horizontal scaling, and when to use it for the best results. From right-sizing CPU and memory allocations to accommodating changing component requirements, Vertical Pod Autoscaling lets you adapt and fine-tune your deployments. Explore the benefits of this capability and unlock new possibilities for maximizing performance in your Kubernetes environment.
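As a concrete illustration of the idea above, a VerticalPodAutoscaler object targets an existing workload and adjusts its CPU and memory requests. The following is a minimal sketch, assuming the VPA controller is installed in the cluster; the Deployment name `my-app` and the resource bounds are placeholders:

```yaml
# Illustrative VerticalPodAutoscaler manifest (assumes the VPA controller
# is installed and a Deployment named "my-app" exists).
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Auto"   # VPA evicts pods and recreates them with updated requests
  resourcePolicy:
    containerPolicies:
      - containerName: "*"
        minAllowed:
          cpu: 100m
          memory: 128Mi
        maxAllowed:
          cpu: "2"
          memory: 2Gi
```

Setting `updateMode` to `"Off"` instead makes the VPA recommendation-only, which is a common way to observe suggested allocations before letting it modify pods.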

Kubernetes Autoscaling 1.26: A Game-Changer for KEDA Users?

The latest release of Kubernetes, version 1.26, has introduced several new autoscaling capabilities that allow users to scale their workloads based on custom metrics, multiple metrics, and a range of APIs. These features offer increased flexibility and options for scaling in Kubernetes environments. However, the KEDA project still provides additional features, such as the ability to scale “from zero” and “to zero,” which can be useful for certain types of workloads. In this article, we will explore the new autoscaling capabilities in Kubernetes 1.26 and how they compare to the features offered by KEDA.
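To make the KEDA comparison concrete, the "scale to zero" behavior mentioned above is expressed through a ScaledObject. The sketch below assumes KEDA is installed and a Prometheus endpoint is reachable; the workload name, server address, query, and threshold are all illustrative placeholders:

```yaml
# Illustrative KEDA ScaledObject (assumes KEDA is installed; names, the
# Prometheus address, and the query are placeholders).
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-scaledobject
spec:
  scaleTargetRef:
    name: my-app       # Deployment to scale
  minReplicaCount: 0   # KEDA can scale the workload down to zero replicas
  maxReplicaCount: 10
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring:9090
        query: sum(rate(http_requests_total{app="my-app"}[2m]))
        threshold: "100"
```

A standard HorizontalPodAutoscaler cannot set `minReplicas` to 0 by default, which is why scale-from-zero remains a distinguishing KEDA feature even alongside the newer in-tree autoscaling options.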