Istio allows you to configure Sticky Sessions, among other network features, for your Kubernetes workloads. As we have discussed in several previous posts, Istio deploys a service mesh that provides a central control plane for all the network configuration of your Kubernetes workloads. This covers many different aspects of communication inside the container platform: security features such as transport encryption, authentication, and authorization, as well as network features such as routing and traffic distribution, which is the main topic of today’s article.
These routing capabilities are similar to what a traditional Layer 7 load balancer can provide. When we talk about Layer 7, we are referring to the layers of the OSI model, where Layer 7 is the Application Layer.
A Sticky Session (or Session Affinity) configuration is one of the most common features you may need to implement in this scenario. The use case is the following:
You have several instances of your workload, i.e., different pod replicas in Kubernetes, all of them behind the same Service. By default, unless you configure it differently, the Service distributes requests in a round-robin fashion among the pod replicas that are in a Ready state, meaning Kubernetes considers them ready to receive requests.
But in some cases, mainly when you are dealing with a web application or any stateful application that handles the concept of a session, you may want the replica that processes the first request to also handle the rest of the requests during the lifetime of that session.
Of course, you could achieve that simply by routing all traffic to a single replica, but then you would lose other features such as load balancing and high availability. So, this is usually implemented using Session Affinity (Sticky Session) policies, which provide the best of both worlds: the same replica handles all the requests from a given user, while traffic is still distributed across different users.
How Does a Sticky Session Work?
The behavior behind this is relatively easy. Let’s see how it works.
First, the important thing is that you need “something” in your network requests that identifies all the requests belonging to the same session, so that the routing component (in this case, Istio) can determine which replica needs to handle them.
This “something” can differ depending on your configuration, but it is usually a cookie or an HTTP header sent with each request, so the same replica handles all the requests that carry that value.
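As a rough illustration of the idea (not Istio’s actual code), the Python sketch below shows how a stable session key always maps to the same replica. The replica names are hypothetical, and the modulo-based selection is a simplification; Envoy actually uses ring or Maglev hashing, but the property is the same: equal inputs always land on the same replica.

```python
import hashlib

# Hypothetical replica set; in Istio these would be the pod endpoints.
replicas = ["pod-a", "pod-b", "pod-c"]

def pick_replica(session_id: str) -> str:
    """Map a session identifier (cookie or header value) to a replica.

    Simplified sketch of hash-based affinity: the same session_id
    always produces the same hash, hence the same replica.
    """
    digest = hashlib.sha256(session_id.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(replicas)
    return replicas[index]

# The same cookie value routes to the same pod on every request.
first = pick_replica("ISTIOID=abc123")
assert all(pick_replica("ISTIOID=abc123") == first for _ in range(100))
```

A request without the session identifier would simply hash a different (or empty) value, which is why the very first request of a session can land on any replica.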
How does Istio implement Sticky Session support?
When using Istio in this role, we implement it with a specific DestinationRule, which allows us, among other capabilities, to define a traffic policy describing how traffic should be split. To implement the sticky session, we use the consistentHash feature, which ensures that all requests that compute to the same hash are sent to the same replica.
When we define consistentHash, we can specify how the hash is created or, in other words, which part of the request is used to generate it. This can be one of the following options:
- httpHeaderName: uses an HTTP header to do the traffic distribution.
- httpCookie: uses an HTTP cookie to do the traffic distribution.
- httpQueryParameterName: uses a query string parameter to do the traffic distribution.
- maglev: uses Google’s Maglev consistent-hashing algorithm to pick the backend. You can read more about Maglev in the article from Google.
- ringHash: uses a ring-based hashing approach to load balance between the available pods.
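For instance, if you preferred header-based affinity over cookies, the trafficPolicy of a DestinationRule could look roughly like the sketch below. The rule name, the host, and the header name (x-user-id) are illustrative assumptions, not values from this article’s setup:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: header-sticky-dr     # hypothetical name
spec:
  host: my-service.default.svc.cluster.local   # hypothetical service
  trafficPolicy:
    loadBalancer:
      consistentHash:
        # Requests carrying the same x-user-id value are hashed
        # to the same pod; the client is responsible for sending it.
        httpHeaderName: x-user-id
```

Note that with a header or query parameter, the client must already send a stable value; with httpCookie, Istio can generate the cookie for you on the first response, as we will see below.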
So, as you can see, you have a lot of different options. Still, the first three are the most commonly used to implement a sticky session, and the HTTP cookie (httpCookie) option is usually the preferred one, as it relies on the standard HTTP mechanism for managing sessions between clients and servers.
Sticky Session Implementation Sample using TIBCO BW
We will define a very simple TIBCO BW workload implementing a REST service that serves a GET reply with a hardcoded value. To simplify the validation process, the application logs the hostname of the pod, so we can quickly see which replica is handling each request:
We deploy this in our Kubernetes cluster and expose it using a Kubernetes Service; in our case, the name of this Service will be test2-bwce-srv.
On top of that, we apply the Istio configuration, which requires three (3) Istio objects: a Gateway, a VirtualService, and a DestinationRule. As our focus is on the DestinationRule, we will keep the other two objects as simple as possible:
```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: default-gw
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP
```
```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: test-vs
spec:
  gateways:
  - default-gw
  hosts:
  - test.com
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: test2-bwce-srv
        port:
          number: 8080
```
And finally, the DestinationRule will use an httpCookie that we will name ISTIOID, as you can see in the snippet below:
```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: default-sticky-dr
  namespace: default
spec:
  host: test2-bwce-srv.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpCookie:
          name: ISTIOID
          ttl: 60s
```
Now that we have started our test, after launching the first request we get a new cookie generated by Istio itself, as shown in the Postman response window:
This request has been handled by one of the available replicas of the service, as you can see here:
All subsequent requests from Postman already include the cookie, and all of them are handled by the same pod:
Meanwhile, the other replica’s log is empty, as all the requests have been routed to the first pod.
In this article, we covered the reasons behind the need for sticky sessions in Kubernetes workloads and how to achieve them using the capabilities of the Istio service mesh. I hope this helps you implement this configuration on the workloads that need it, today or in the future.