
Level-Up Your Deployment Strategy with Canarying in Kubernetes

Published on 2021-05-04. Last Updated on 2022-05-07 by Alexandre Vazquez

Save time and money on your application platform by deploying applications differently in Kubernetes.

Photo by Jason Leung on Unsplash

We come from a time when deploying an application was a rigid, straight-line process. The traditional way looks pretty much like this:

  • Wait until a weekend or another low-load period when the business can tolerate some service unavailability.
  • Schedule the change and warn all the teams involved so they are ready to manage the impact at that time.
  • Deploy the new version, have all the teams run the functional tests they need to ensure it is working fine, and wait for the real load to arrive.
  • Monitor during the first hours to see if anything goes wrong and, if it does, trigger a rollback process.
  • If everything goes fine, wait until the next release in 3–4 months.

But this is not valid anymore. Business demands that IT be agile and change quickly, and cannot afford that kind of resource effort every week or, even worse, every day. Do you think it is possible to gather all the teams every night to deploy the latest changes? It is not feasible at all.

So technology has advanced to help us solve that problem, and this is where canarying comes in.

Introducing Canary Deployments

Canary deployments (or just canarying, as you prefer) are not something new, and a lot of people have been talking about them:

bliki: CanaryRelease
A canary release occurs when you roll out a new version of some software to a small subset of your user base to see if there are any problems before you make it available to everyone.
Google - Site Reliability Engineering
Release engineering is a term we use to describe all the processes and artifacts related to getting code from a repository into a running production system. Automating releases can help avoid many of the traditional pitfalls associated with release engineering: the toil of repetitive and manual task…

It has been around for some time, but it used to be neither easy nor practical to implement. The idea is simple: you deploy the new version into production while keeping all the traffic pointing to the old version, and then you start shifting a small portion of the traffic to the new version.

Canary Release In Kubernetes Environment Graphical Representation
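The starting point in Kubernetes can be sketched as two Deployments, one per version, fronted by a single Service. The names, labels, and image tags below are illustrative assumptions, not taken from a specific project:

```yaml
# Stable version: receives most of the traffic.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v1
spec:
  replicas: 3
  selector:
    matchLabels: {app: myapp, version: v1}
  template:
    metadata:
      labels: {app: myapp, version: v1}
    spec:
      containers:
        - name: myapp
          image: myrepo/myapp:1.0.0
---
# Canary version: a single replica running the new release.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v2
spec:
  replicas: 1
  selector:
    matchLabels: {app: myapp, version: v2}
  template:
    metadata:
      labels: {app: myapp, version: v2}
    spec:
      containers:
        - name: myapp
          image: myrepo/myapp:2.0.0
---
# The Service selects only on "app", so it fronts both versions;
# the service mesh decides how traffic is split between them.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
```

Note that the Service alone would balance traffic roughly in proportion to the replica counts; the fine-grained, percentage-based split is what the service mesh adds on top.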

Based on that small subset of requests, you monitor how the new version performs at different levels: functional, performance, and so on. Once you feel comfortable with the behavior it provides, you shift all the traffic to the new version and deprecate the old one.

Removal of the old version after all traffic has been shifted to the newly deployed version.

The benefits that come with this approach are huge:

  • You don’t need a big staging environment as before, because you can run some of the tests with real data in production without affecting your business or the availability of your services.
  • You can reduce time to market and increase the frequency of deployments, because each one requires less effort and fewer people.
  • Your deployment window is greatly extended because you no longer need to wait for a specific time slot, so you can ship new functionality more frequently.

Implementing Canary Deployment in Kubernetes

To implement canary deployments in Kubernetes, we need more flexibility in how traffic is routed among our internal components, which is one of the capabilities you gain by using a Service Mesh.

We already discussed the benefits of using a Service Mesh as part of your environment, but if you would like to take another look, please refer to this article:

Technology wars: API Management Solution vs Service Mesh
Service Mesh vs. API Management Solution: is it the same? Are they compatible? Are they rivals? Photo by Alvaro Reyes on Unsplash When we talk about communication in a distributed cloud-native world and especially when we are talking about container-based architectures based on Kubernetes platform like AKS, EKS, Openshift, and so on, two technologies generate a lot […]

Several technology components can provide these capabilities, and they are what allow you to create the traffic routes needed to implement this approach. To see how, take a look at the following article about one of the default options, Istio:

Integrating Istio with BWCE Applications
Introduction Services Mesh is one the “greatest new thing” in our PaaS environments. No matter if you’re working with K8S, Docker Swarm, pure-cloud with EKS or AWS, you’ve heard and probably tried to know how can be used this new thing that has so many advantages because it provides a lot of options in handling […]
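With Istio, the traffic split itself can be expressed declaratively. This is a minimal sketch, assuming a hypothetical `myapp` Service whose pods carry `version: v1` and `version: v2` labels:

```yaml
# DestinationRule: defines the two subsets by pod label.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: myapp
spec:
  host: myapp
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
---
# VirtualService: sends 90% of requests to the stable version
# and 10% to the canary.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
    - myapp
  http:
    - route:
        - destination:
            host: myapp
            subset: v1
          weight: 90
        - destination:
            host: myapp
            subset: v2
          weight: 10
```

Promoting the canary is then just a matter of updating the weights (for example 50/50, then 0/100) and finally removing the old subset and its Deployment.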

But being able to route the traffic is not enough to implement a complete canary deployment approach. We also need to monitor the new version and act on those metrics without manual intervention. To do this, we need to include additional tools that provide those capabilities:

Prometheus is the de facto option for monitoring workloads deployed on Kubernetes, and here you can get more info about how both projects play together.

Kubernetes Service Discovery for Prometheus
In previous posts, we described how to set up Prometheus to work with your TIBCO BusinessWorks Container Edition apps, and you can read more about it here. In that post, we described that there were several ways to update Prometheus about the services that ready to monitor. And we choose the most simple at that […]
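A typical automated check during the canary phase is comparing error rates between versions. As an illustrative sketch, assuming the Istio sidecars are exporting the standard `istio_requests_total` metric with `destination_workload` and `destination_version` labels (the workload and version names here are hypothetical), a PromQL query could look like this:

```promql
# Fraction of 5xx responses served by the canary over the last 5 minutes.
sum(rate(istio_requests_total{destination_workload="myapp",
    destination_version="v2", response_code=~"5.."}[5m]))
/
sum(rate(istio_requests_total{destination_workload="myapp",
    destination_version="v2"}[5m]))
```

If this ratio stays below an agreed threshold while the canary receives its share of traffic, the rollout can proceed; otherwise, the weights are shifted back to the stable version.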

And to manage the overall process, you can use a Continuous Deployment tool to put some governance around it, with options such as Spinnaker or one of the extensions for Continuous Integration tools like GitLab or GitHub:

Using Spinnaker for Automated Canary Analysis
Automated canary analysis lets you partially roll out a change then evaluate it against the current deployment to assess its performance.

Summary

In this article, we covered how to evolve a traditional deployment model to keep pace with the innovation that businesses require today, how canary deployment techniques can help us on that journey, and the technology components needed to set up this strategy in your own environment.

If you find this content interesting, please consider making a contribution using the button below to keep this content updated and growing!


Related articles:

  • Integrating Istio with BWCE Applications
  • Technology wars: API Management Solution vs Service Mesh
  • Observability in a Polyglot Microservice Ecosystem
  • Kubernetes Autoscaling: Learn How to Scale Your Kubernetes Deployments Dynamically
