Safeguarding Your Servers: Preventing Information Disclosure with Istio Service Mesh

In today’s digital landscape, where data breaches and cyber threats are becoming increasingly sophisticated, ensuring the security of your servers is paramount. One of the critical security concerns that organizations must address is “Server Information Disclosure.” Server Information Disclosure occurs when sensitive information about a server’s configuration, technology stack, or internal structure is inadvertently exposed to unauthorized parties. Hackers can exploit this vulnerability to gain insights into potential weak points and launch targeted attacks. Such breaches can lead to data theft, service disruption, and reputation damage.

Information Disclosure and Istio Service Mesh

One example is the Server HTTP header, included in most HTTP responses to identify the server software that produced the response. The value varies depending on the stack, but names such as Jetty or Tomcat are commonly seen. And if you are using a service mesh such as Istio, you will see the header with a value of istio-envoy, as you can see here:

Information Disclosure of Server Implementation using Istio Service mesh
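If you don't have a browser's network inspector at hand, you can reproduce the check with curl; example.com below is a placeholder for your mesh's public hostname:

curl -sI http://example.com/ | grep -i '^server'
# server: istio-envoy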

As mentioned, this matters at several levels of security:

  • Data Privacy: Server information leakage can expose confidential data, undermining user trust and violating data privacy regulations such as GDPR and HIPAA.
  • Reduced Attack Surface: By concealing server details, you minimize the attack surface available to potential attackers.
  • Security by Obscurity: While not a foolproof approach, limiting disclosure adds an extra layer of security, making it harder for hackers to gather intelligence.

How to mitigate that with Istio Service Mesh?

When using Istio, we can define rules to add and remove HTTP headers based on our needs, as described in the documentation here: https://discuss.istio.io/t/remove-header-operation/1692, using simple clauses in the definition of your VirtualService, as you can see here:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: k8snode-virtual-service
spec:
  hosts:
  - "example.com"
  gateways:
  - k8snode-gateway
  http:
  - headers:
      response:
        remove:
        - "x-my-fault-source"
    route:
    - destination:
        host: k8snode-service
        subset: version-1

Unfortunately, this is not possible for all HTTP headers, especially the “main” ones: not the custom headers added by your workloads, but the standard ones defined in the HTTP specification (https://www.w3.org/Protocols/).

So, the Server HTTP header is a little more complex to handle, and you need to use an EnvoyFilter, one of the most sophisticated objects in the Istio Service Mesh. In the words of the official Istio documentation, an EnvoyFilter provides a mechanism to customize the Envoy configuration generated by Istio Pilot. You can use an EnvoyFilter to modify values for certain fields, add specific filters, or even add entirely new listeners, clusters, and so on.

EnvoyFilter Implementation to Remove Header

Now that we know we need to create a custom EnvoyFilter, let's see which one does the job of removing the Server header and walk through how it works to learn more about this component. Here is the EnvoyFilter for that job:

---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: gateway-response-remove-headers
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: "envoy.filters.network.http_connection_manager"
    patch:
      operation: MERGE
      value:
        typed_config:
          "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
          server_header_transformation: PASS_THROUGH
  - applyTo: ROUTE_CONFIGURATION
    match:
      context: GATEWAY
    patch:
      operation: MERGE
      value:
        response_headers_to_remove:
        - "server"

Let's focus on the parts of the EnvoyFilter specification. On one side we have the usual workloadSelector, which determines where this component will be applied; in this case, the Istio ingress gateway. Then we enter the configPatches section, which holds the customizations we need to make, and in our case there are two of them:

Both act on the context: GATEWAY and apply to two different objects: NETWORK_FILTER and ROUTE_CONFIGURATION. (You can also apply filters on sidecars to affect their behavior.) The first patch merges configuration into the http_connection_manager network filter, which handles the HTTP context, including the HTTP headers that matter for our purpose: it sets server_header_transformation to PASS_THROUGH so Envoy does not overwrite the header. The second patch acts on the ROUTE_CONFIGURATION and removes the Server header, as we can see from the response_headers_to_remove option.

Conclusion

As you can see, this is not easy to implement. Still, it is evidence of the power and low-level capabilities you have when using a robust service mesh such as Istio: you can interact with and modify any tiny detail of the behavior for your benefit and, in this case, improve the security of the workloads deployed behind the Service Mesh scope.

In the ever-evolving landscape of cybersecurity threats, safeguarding your servers against information disclosure is crucial to protect sensitive data and maintain your organization’s integrity. Istio empowers you to fortify your server security by providing robust tools for traffic management, encryption, and access control.

Remember, the key to adequate server security is a proactive approach that addresses vulnerabilities before they can be exploited. Take the initiative to implement Istio and elevate your server protection.

Enhancing Service Mesh DNS Resolution with Istio’s Proxy DNS Capability: Benefits and Use-Cases

Istio is a popular open-source service mesh that provides a range of powerful features for managing and securing microservices-based architectures. We have talked a lot about its capabilities and components, but today we will talk about how we can use Istio to help with the DNS resolution mechanism.

As you already know, in a typical Istio deployment each service is accompanied by a sidecar proxy, Envoy, which intercepts and manages the traffic between services. The Proxy DNS capability of Istio leverages this proxy to handle DNS resolution requests more intelligently and efficiently.

Traditionally, when a service within a microservices architecture needs to communicate with another service, it relies on DNS resolution to discover the IP address of the target service. However, traditional DNS resolution can be challenging to manage in complex and dynamic environments, such as those found in Kubernetes clusters. This is where the Proxy DNS capability of Istio comes into play.

Istio Proxy DNS Capabilities

With Proxy DNS, Istio intercepts and controls DNS resolution requests from services and performs the resolution on their behalf. Instead of relying on external DNS servers, the sidecar proxies handle the DNS resolution within the service mesh. This enables Istio to provide several valuable benefits:

  • Service discovery and load balancing: Istio’s Proxy DNS allows for more advanced service discovery mechanisms. It can dynamically discover services and their corresponding IP addresses within the mesh and perform load balancing across instances of a particular service. This eliminates the need for individual services to manage DNS resolution and load balancing.
  • Security and observability: Istio gains visibility into the traffic between services by handling DNS resolution within the mesh. It can apply security policies, such as access control and traffic encryption, at the DNS level. Additionally, Istio can collect DNS-related telemetry data for monitoring and observability, providing insights into service-to-service communication patterns.
  • Traffic management and control: Proxy DNS enables Istio to implement advanced traffic management features, such as routing rules and fault injection, at the DNS resolution level. This allows for sophisticated traffic control mechanisms within the service mesh, enabling A/B testing, canary deployments, circuit breaking, and other traffic management strategies.

Istio Proxy DNS Use-Cases

There are times when you cannot, or don't want to, rely on normal DNS resolution. Why is that? To start with, DNS is a great protocol, but it lacks some capabilities, such as location-aware discovery. If you have the same DNS name assigned to three IPs, it will return them in a round-robin fashion, with no way to take location into account.

Or you have several IPs and want to block some of them for a specific service. These are the kinds of things you can do with Istio Proxy DNS.

Istio Proxy DNS Enablement

You need to know that Istio Proxy DNS capabilities are not enabled by default, so you must enable them explicitly if you want to use them. The good thing is that you can do so at different levels, from the full mesh down to a single pod, so you can choose what is best for each case.

For example, if we want to enable it at the pod level, we need to inject the following configuration in the Istio proxy:

proxy.istio.io/config: |
  proxyMetadata:
    # Enable basic DNS proxying
    ISTIO_META_DNS_CAPTURE: "true"
    # Enable automatic address allocation, optional
    ISTIO_META_DNS_AUTO_ALLOCATE: "true"
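In context, this annotation is set on the pod template of the workload whose sidecar should proxy DNS. Here is a minimal sketch; the Deployment name and image are hypothetical:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        # Per-pod Istio proxy configuration enabling DNS proxying
        proxy.istio.io/config: |
          proxyMetadata:
            ISTIO_META_DNS_CAPTURE: "true"
            ISTIO_META_DNS_AUTO_ALLOCATE: "true"
    spec:
      containers:
      - name: my-app
        image: my-app:1.0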

The same configuration can be applied at the mesh level as part of the operator installation, as documented on the official Istio page.
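For reference, a mesh-wide sketch using the IstioOperator resource would look like this (based on the Istio DNS proxying documentation; adjust it to your installation method):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        # Enable DNS proxying for every sidecar in the mesh
        ISTIO_META_DNS_CAPTURE: "true"
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"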

Conclusion

In summary, the Proxy DNS capability of Istio enhances the DNS resolution mechanism within the service mesh environment, providing advanced service discovery, load balancing, security, observability, and traffic management features. Istio centralizes and controls DNS resolution by leveraging the sidecar proxies, simplifying the management and optimization of service-to-service communication in complex microservices architectures.

Exploring Istio Security Policies for Enhanced Service Mesh Protection with 3 Objects

Istio Security Policies are crucial in securing microservices within a service mesh environment. We have discussed Istio and the capabilities it can introduce to your Kubernetes workloads. Still, today we're going to go into more detail regarding the different objects and resources that help make our workloads much more secure and enforce policies on the communication between them. These objects are PeerAuthentication, RequestAuthentication, and AuthorizationPolicy.

PeerAuthentication: Enforcing security on pod-to-pod communication

PeerAuthentication focuses on securing communication between services by enforcing mutual TLS (Transport Layer Security) authentication. It enables administrators to define authentication policies for workloads at different scopes, such as specific namespaces or workload selectors. Configuring PeerAuthentication ensures that only authenticated and authorized services can communicate, preventing unauthorized access and man-in-the-middle attacks. The behavior depends on the value of the mode field when defining this object: STRICT allows only mTLS communication, PERMISSIVE allows both mTLS and plaintext, DISABLE turns off mTLS and keeps the traffic insecure, and UNSET inherits the setting from the parent scope. This is a sample definition of the object:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: foo
spec:
  mtls:
    mode: PERMISSIVE

RequestAuthentication: Defining authentication methods for Istio Workloads

RequestAuthentication, on the other hand, provides fine-grained control over the authentication of inbound requests. It allows administrators to specify rules and requirements for validating and authenticating incoming requests based on factors like JWT (JSON Web Tokens) validation, API keys, or custom authentication methods. With RequestAuthentication, service owners can enforce specific authentication mechanisms for different endpoints or routes, ensuring that only authenticated clients can access protected resources. Here you can see a sample of a RequestAuthentication object:

apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-auth-policy
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: my-app
  jwtRules:
    - issuer: "issuer.example.com"
      jwksUri: "https://example.com/.well-known/jwks.json"

As mentioned, JWT validation is the most common approach, as JWT tokens have become the de-facto industry standard for validating incoming requests, together with the OAuth 2.0 authorization protocol. Here you can define the rules a JWT needs to meet for a request to be considered valid. But RequestAuthentication only describes the authentication methods supported by the workloads; it doesn't enforce them or provide any details regarding authorization.

That means that if you define a workload as using JWT authentication, a request that carries a token will have that token validated: Istio ensures it is not expired and that it meets all the rules you have specified in the object definition. But requests with no token at all will still be allowed through, as you're only declaring what the workloads support, not enforcing it. To enforce it, we need to introduce the last object of this set, the AuthorizationPolicy object.
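As a preview, this is roughly what that enforcement looks like: an AuthorizationPolicy that only allows requests carrying a valid request principal, which Istio derives from the JWT validated by the RequestAuthentication above (the issuer here is the one from that sample):

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: my-app
  action: ALLOW
  rules:
    - from:
        - source:
            # Only requests with a validated JWT from this issuer are allowed
            requestPrincipals: ["issuer.example.com/*"]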

AuthorizationPolicy: Fine-grained Authorization Policy Definition for Istio Policies

AuthorizationPolicy offers powerful access control capabilities to regulate traffic flow within the service mesh. It allows administrators to define rules and conditions based on attributes like source, destination, headers, and even request payload to determine whether a request should be allowed or denied. AuthorizationPolicy helps enforce fine-grained authorization rules, granting or denying access to specific resources or actions based on the defined policies. Only authorized clients with appropriate permissions can access specific endpoints or perform particular operations within the service mesh. Here you can see a sample of an Authorization Policy object:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: rbac-policy
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: my-app
  rules:
    - from:
        - source:
            principals: ["user:user@example.com"]
      to:
        - operation:
            methods: ["GET"]

Here you can go as detailed as you need: you can apply rules on the source of the request to ensure that only certain requests go through (for example, requests carrying a JWT token, to use in combination with the RequestAuthentication object), but also rules on the target, such as a specific host, path, or method, or a combination of them. You can define ALLOW rules, DENY rules (or even CUSTOM ones) and combine several of them, and all of them will be enforced as a whole. The evaluation is determined by the following rules, as stated in the official Istio documentation:

  • If there are any CUSTOM policies that match the request, evaluate and deny the request if the evaluation result is denied.
  • If there are any DENY policies that match the request, deny the request.
  • If there are no ALLOW policies for the workload, allow the request.
  • If any of the ALLOW policies match the request, allow the request.
  • Deny the request.
Authorization Policy validation flow from: https://istio.io/latest/docs/concepts/security/
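For instance, to deny a specific operation regardless of any ALLOW rules, a DENY policy like the following (a sketch for the same hypothetical workload) takes precedence in the evaluation flow above:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-delete
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: my-app
  action: DENY
  rules:
    - to:
        - operation:
            # DELETE calls are rejected even if an ALLOW policy matches
            methods: ["DELETE"]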

Together, these objects provide everything you need to fully define the security policies your mesh requires.

Conclusion

In conclusion, Istio’s Security Policies provide robust mechanisms for enhancing the security of microservices within a service mesh environment. The PeerAuthentication, RequestAuthentication, and AuthorizationPolicy objects offer a comprehensive toolkit to enforce authentication and authorization controls, ensuring secure communication and access control within the service mesh. By leveraging these Istio Security Policies, organizations can strengthen the security posture of their microservices, safeguarding sensitive data and preventing unauthorized access or malicious activities within their service mesh environment.

How To Enable Sticky Session on Your Kubernetes Workloads using Istio?

Istio allows you to configure Sticky Sessions, among other network features, for your Kubernetes workloads. As we have discussed in several posts, Istio deploys a service mesh that provides a central control plane holding all the configuration for the network aspects of your Kubernetes workloads. This covers many different aspects of communication inside the container platform, such as security (transport security, authentication, and authorization) and, at the same time, network features such as routing and traffic distribution, which is the main topic of today's article.

These routing capabilities are similar to what a traditional Layer 7 load balancer can provide. When we talk about Layer 7, we're referring to the layers that compose the OSI model, where layer 7 is the application layer.

A Sticky Session or Session Affinity configuration is one of the most common features you may need to implement in this scenario. The use case is the following:

How To Enable Sticky Session on Your Kubernetes Workloads using Istio?

You have several instances of your workload, so different pod replicas in Kubernetes terms, all of them behind the same service. By default, the service redirects requests in a round-robin fashion among the pod replicas in a Ready state (the ones Kubernetes considers ready to receive requests), unless you define it differently.

But in some cases, mainly when you are dealing with a web application or any stateful application that handles the concept of a session, you may want the replica that processes the first request to also handle the rest of the requests during the lifetime of the session.

Of course, you could achieve that simply by routing all traffic to one replica, but then we lose other features such as load balancing and high availability. So, this is usually implemented using Session Affinity or Sticky Session policies, which provide the best of both worlds: the same replica handles all the requests from one user, while traffic is still distributed across different users.

How Does Sticky Session Work?

The behavior behind this is relatively easy. Let’s see how it works.

First, you need “something” in your network requests that identifies all the requests belonging to the same session, so the routing component (a role played here by Istio) can determine which replica needs to handle them.

That “something” can differ depending on your configuration, but usually it is a cookie or an HTTP header sent with each request. With it, the mesh knows which replica should handle all requests of that session.

How does Istio implement Sticky Session support?

When using Istio in this role, we can implement it with a DestinationRule, which allows us, among other capabilities, to define the traffic policy describing how we want the traffic to be split. To implement the Sticky Session we need the consistentHash feature, which ensures that all requests computing to the same hash are sent to the same replica.

When we define the consistentHash feature, we specify how the hash will be created, in other words, which component of the request will be used to generate it. This can be one of the following options:

  • httpHeaderName: Uses an HTTP Header to do the traffic distribution
  • httpCookie: Uses an HTTP Cookie to do the traffic distribution
  • httpQueryParameterName: Uses a query string parameter to do the traffic distribution.
  • maglev: Uses Google’s Maglev Load Balancer to do the determination. You can read more about Maglev in the article from Google.
  • ringHash: Uses a ring-based hashed approach to load balancing between the available pods.

So, as you can see, there are a lot of options. Still, the first three are the most common way to implement a sticky session, and usually the HTTP cookie (httpCookie) option is the preferred one, as it relies on the standard HTTP mechanism for managing sessions between clients and servers.
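For comparison, a minimal header-based sketch would look like this; the host and header name are hypothetical:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: header-sticky-dr
spec:
  host: my-service
  trafficPolicy:
    loadBalancer:
      consistentHash:
        # All requests with the same x-user-id value go to the same replica
        httpHeaderName: x-user-id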

Sticky Session Implementation Sample using TIBCO BW

We will define a very simple TIBCO BW workload implementing a REST service that serves a GET reply with a hardcoded value. To simplify the validation process, the application will log the hostname of the pod, so we can quickly see which replica is handling each request:

How To Enable Sticky Session on Your Kubernetes Workloads using Istio?

We deploy this in our Kubernetes cluster and expose it using a Kubernetes service; in our case, the name of this service will be test2-bwce-srv

On top of that, we apply the istio configuration, which will require three (3) istio objects: gateway, virtual service, and the destination rule. As our focus is on the destination rule, we will try to keep it as simple as possible in the other two objects:

Gateway:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: default-gw
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP

Virtual Service:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: test-vs
spec:
  gateways:
  - default-gw
  hosts:
  - test.com
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: test2-bwce-srv
        port:
          number: 8080

And finally, the DestinationRule will use an httpCookie that we will name ISTIOID, as you can see in the snippet below:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: default-sticky-dr
  namespace: default
spec:
  host: test2-bwce-srv.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpCookie:
          name: ISTIOID
          ttl: 60s

Now we can start our test. After launching the first request, we get a new cookie generated by Istio itself, shown in the Postman response window:

How To Enable Sticky Session on Your Kubernetes Workloads using Istio?

This request has been handled by one of the available replicas of the service, as you can see here:

How To Enable Sticky Session on Your Kubernetes Workloads using Istio?

All subsequent requests from Postman already include the cookie, and all of them are handled by the same pod:

How To Enable Sticky Session on Your Kubernetes Workloads using Istio?

Meanwhile, the other replica's log is empty, as all the requests have been routed to that specific pod.

How To Enable Sticky Session on Your Kubernetes Workloads using Istio?
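If you prefer the command line over Postman, the same check can be scripted with curl; the ingress IP and path below are placeholders for your environment:

# First request: store the ISTIOID cookie issued by Istio
curl -c cookies.txt -H "Host: test.com" http://<ingress-ip>/resource

# Subsequent requests: send the cookie back, so they hash to the same pod
curl -b cookies.txt -H "Host: test.com" http://<ingress-ip>/resource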

Summary

In this article we covered the reasons behind the need for sticky sessions in Kubernetes workloads and how we can achieve them using the capabilities of the Istio Service Mesh. I hope this helps you implement this configuration on your workloads, whether you need it today or in the future.

Understanding Istio ServiceEntry: How to Extend Your Service Mesh to External Endpoints

What Is An Istio ServiceEntry?

An Istio ServiceEntry is the way to define an endpoint that doesn't belong to the Istio service registry. Once the ServiceEntry is part of the registry, you can define rules and enforce policies on that endpoint as if it belonged to the mesh.

Istio ServiceEntry answers a question you have probably asked several times when using a service mesh: how can I do the same magic with external endpoints that I can do when everything is under my service mesh scope? And Istio ServiceEntry objects provide precisely that:

A way to have an extended mesh managing another kind of workload or, even better, in Istio’s own words:

ServiceEntry enables adding additional entries into Istio’s internal service registry so that auto-discovered services in the mesh can access/route to these manually specified services.

These services could be external to the mesh (e.g., web APIs) or mesh-internal services that are not part of the platform’s service registry (e.g., a set of VMs talking to services in Kubernetes).

What are the main capabilities of Istio ServiceEntry?

Here you can see a sample of the YAML definition of a Service Entry:

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-svc-redirect
spec:
  hosts:
  - wikipedia.org
  - "*.wikipedia.org"
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: NONE

In this case, we have an external-svc-redirect ServiceEntry object that handles all calls going to wikipedia.org; we define the port and protocol to be used (TLS on 443) and classify this service as external to the mesh (MESH_EXTERNAL), as it is an external web page.

You can also specify more details in the ServiceEntry configuration. For example, you can define a hostname or IP and translate it to a different hostname and port, because you can also specify the resolution mode you want to use for this specific ServiceEntry. In the snippet above, the resolution field has the value NONE, which means no particular resolution will be made. The other valid values are the following:

  • NONE: Assume that incoming connections have already been resolved (to a specific destination IP address).
  • STATIC: Use the static IP addresses specified in endpoints as the backing instances associated with the service.
  • DNS: Attempt to resolve the IP address by querying the ambient DNS asynchronously.
  • DNS_ROUND_ROBIN: Attempt to resolve the IP address by querying the ambient DNS asynchronously. Unlike DNS, DNS_ROUND_ROBIN only uses the first IP address returned when a new connection needs to be initiated, without relying on the complete results of DNS resolution, and references made to hosts are retained even if DNS records change frequently, eliminating connection-pool draining and connection cycling.

To define the target of the ServiceEntry, you specify its endpoints using WorkloadEntry-style entries, providing the following data (a sketch follows the list):

  • address: Address associated with the network endpoint without the port.
  • ports: Set of ports associated with the endpoint
  • weight: The load balancing weight associated with the endpoint.
  • locality: The locality associated with the endpoint. A locality corresponds to a failure domain (e.g., country/region/zone).
  • network: Network enables Istio to group endpoints resident in the same L3 domain/network.
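Here is a minimal sketch of a ServiceEntry with static endpoints; the host, addresses, weights, and localities are hypothetical:

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-svc-static
spec:
  hosts:
  - internal-api.example.com
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: STATIC
  endpoints:
  # Two backing instances with an 80/20 load balancing split
  - address: 10.0.0.11
    ports:
      http: 8080
    weight: 80
    locality: eu-west-1/zone1
  - address: 10.0.0.12
    ports:
      http: 8080
    weight: 20
    locality: eu-west-1/zone2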

What Can You Do With Istio ServiceEntry?

The number of use cases is enormous. Once a ServiceEntry is defined, it is similar to having a VirtualService in place: you can apply any DestinationRule to it for load balancing, protocol switching, or any other logic the DestinationRule object supports. The same applies to the rest of the Istio CRDs, such as RequestAuthentication and PeerAuthentication, among others.

You can also have a graphical representation of the ServiceEntry inside Kiali, a visual representation for the Istio Service Mesh, as you can see in the picture below:

Understanding Istio ServiceEntry: How to Extend Your Service Mesh to External Endpoints

As you can see, an extended mesh with endpoints outside the Kubernetes cluster is becoming more common with the explosion of available clusters and hybrid environments, where you need to manage clusters of different topologies without losing the centralized, policy-based network management that the Istio Service Mesh provides to your platform.

Secure Your Services with Istio: A Step-by-Step Guide to Setting up Istio TLS Connections

Introduction

Istio TLS configuration is one of the essential features when we enable a service mesh. The Istio Service Mesh provides many features to define, in a centralized, policy-driven way, how transport security, among other characteristics, is handled across the different workloads deployed on your Kubernetes cluster.

One of the main advantages of this approach is that you can have your application focus on the business logic they need to implement. These security aspects can be externalized and centralized without necessarily including an additional effort in each application you have deployed. This is especially relevant if you are following a polyglot approach (as you should) across your Kubernetes cluster workloads.

So, this time our applications will handle only plain HTTP traffic, both internal and external, and depending on where the traffic is going, we will force that connection to be TLS without the workload needing to be aware of it. Let's see how we can enable this Istio TLS configuration.

Scenario View

We will use this picture you can see below to keep in mind the concepts and components that will interact as part of the different configurations we will apply to this.

  • We will use the ingress gateway to handle all incoming traffic to the Kubernetes cluster and the egress gateway to handle all outgoing traffic from the cluster.
  • We will have a sidecar container deployed in each application to handle the communication from the gateways or the pod-to-pod communication.

To simplify the testing applications, we will use the default sample applications Istio provides, which you can find here.

How to Expose TLS in Istio?

This is the easiest part: all incoming communication from the outside enters the cluster through the Istio ingress gateway, so it is this component that needs to handle the TLS connection and then use the usual security approach to talk to the pod exposing the logic.

By default, the Istio Ingress Gateway already exposes a TLS port, as you can see in the picture below:

Secure Your Services with Istio: A Step-by-Step Guide to Setting up Istio TLS Connections

So we need to define a Gateway that receives all this traffic over HTTPS and redirects it to the pods, and we do it as you can see here:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway-https
  namespace: default
spec:
  selector:
    istio: ingressgateway
  servers:
    - hosts:
        - '*'
      port:
        name: https
        number: 443
        protocol: HTTPS
      tls:
        mode: SIMPLE # enables HTTPS on this port
        credentialName: httpbin-credential 

As we can see, it is a straightforward configuration: just adding the HTTPS port on 443 and providing the TLS configuration.
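Note that credentialName references a TLS secret that must exist in the same namespace as the ingress gateway (istio-system by default). A sketch of creating it, assuming you already have a key and certificate for your host:

kubectl create -n istio-system secret tls httpbin-credential \
  --key=example.com.key \
  --cert=example.com.crt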

And with that, we can already reach the same pages using SSL:

Secure Your Services with Istio: A Step-by-Step Guide to Setting up Istio TLS Connections

How To Consume SSL from Istio?

Now that we have handled TLS incoming requests without the application knowing anything about it, we will go one step further and do the most challenging configuration: setting up a TLS/SSL connection for outgoing communication that leaves the Kubernetes cluster, again without the application knowing anything about it.

To do so, we will use one of the Istio concepts we have already covered in a specific article: the Istio ServiceEntry, which allows us to define an external endpoint and manage it inside the mesh.

Here we can see the Wikipedia endpoint added to the Service Mesh registry:

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: se-app
  namespace: default
spec:
  hosts:
  - wikipedia.org
  ports:
  - name: https
    number: 443
    protocol: HTTPS
  resolution: DNS

Once we have configured the ServiceEntry, we can define a DestinationRule to force all connections to wikipedia.org to use the TLS configuration:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: tls-app
  namespace: default
spec:
  host: wikipedia.org
  trafficPolicy:
    tls:
      mode: SIMPLE

Kiali 101: Understanding and Utilizing this Essential Istio Service Mesh Management Tool

What Is Kiali?

Kiali is an open-source project that provides observability for your Istio service mesh. Developed by Red Hat, Kiali helps users understand the structure and behavior of their mesh and any issues that may arise.

Kiali provides a graphical representation of your mesh, showing the relationships between the various service mesh components, such as services, virtual services, destination rules, and more. It also displays vital metrics, such as request and error rates, to help you monitor the health of your mesh and identify potential issues.

What are Kiali's Main Capabilities?

One of the critical features of Kiali is its ability to visualize service-to-service communication within a mesh. This lets users quickly see how services are connected and how requests are routed through the mesh. This is particularly useful for troubleshooting, as it can help you quickly identify problems with service communication, such as misconfigured routing rules or slow response times.

Kiali 101: Understanding and Utilizing this Essential Service Mesh Management Tool

Kiali also provides several tools for monitoring the health of your mesh. For example, it can alert you to potential problems, such as a high error rate or a service not responding to requests. It also provides detailed tracing information, allowing you to see the exact path a request took through the mesh and where any issues may have occurred.

In addition to its observability features, Kiali provides several other tools for managing your service mesh. For example, it includes a traffic management module, which allows you to control the flow of traffic through your mesh easily, and a configuration management module, which helps you manage and maintain the various components of your mesh.

Overall, Kiali is an essential tool for anyone using an Istio service mesh. It provides valuable insights into the structure and behavior of your mesh, as well as powerful monitoring and management tools. Whether you are starting with Istio or are an experienced user, Kiali can help ensure that your service mesh runs smoothly and efficiently.

What are the main benefits of using Kiali?

The main benefits of using Kiali are:

  • Improved observability of your Istio service mesh. Kiali provides a graphical representation of your mesh, showing the relationships between different service mesh components and displaying key metrics. This allows you to quickly understand the structure and behavior of your mesh and identify potential issues.
  • Easier troubleshooting. Kiali’s visualization of service-to-service communication and detailed tracing information make it easy to identify problems with service communication and pinpoint the source of any issues.
  • Enhanced traffic management. Kiali includes a traffic management module allowing you to control traffic flow through your mesh easily.
  • Improved configuration management. Kiali’s configuration management module helps you manage and maintain the various components of your mesh.

How To Install Kiali?

There are several ways to install Kiali as part of your service mesh deployment, the preferred option being the Operator model available here.

You can install this operator using Helm or OperatorHub. To install it using Helm Charts, you need to add the following repository using this command:

helm repo add kiali https://kiali.org/helm-charts

Remember that once you add a new repo, you need to run the following command to update the available charts:

helm repo update

Now, you can install it using the helm install command, as in the following sample:

helm install \
    --set cr.create=true \
    --set cr.namespace=istio-system \
    --namespace kiali-operator \
    --create-namespace \
    kiali-operator \
    kiali/kiali-operator

If you prefer going down the OperatorHub route, you can use the following URL. Then, by clicking on the Install button, you will see the steps to get the component installed in your Kubernetes environment.

Kiali 101: Understanding and Utilizing this Essential Service Mesh Management Tool

If you want a simple installation of Kiali, you can also use the sample YAML available inside the Istio installation folder with the following command:

kubectl apply -f $ISTIO_HOME/samples/addons/kiali.yaml

How does Kiali work?

Kiali is just the graphical representation of the information available regarding how the service mesh works. So it is not the responsibility of Kiali to store those metrics but to retrieve them and draw them in a relevant way for the user of the tool.

Prometheus stores this data, so Kiali uses the Prometheus REST API to retrieve the information and draw it graphically, as you can see here:

  • It shows several relevant parts of the graph: the selected namespace and, inside it, the different apps (an app is detected when the workload has a label named app). Inside each app, it shows the different services and pods with distinct icons (triangles for services and squares for pods).
  • It will also show how the traffic reaches the cluster through the different ingress gateways and how it goes out in case we have any egress gateway configured.
  • It shows the kind of traffic being handled and the error rates per protocol (TCP, HTTP, and so on), as you can see in the picture below. The protocol is inferred from a naming convention on the port name of the Kubernetes Service, with the expected format protocol-name; a sample Service follows the picture.
Kiali 101: Understanding and Utilizing this Essential Service Mesh Management Tool
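For reference, a minimal sketch of a Service following that convention; the names and ports are hypothetical:

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
  - name: http-api   # the "http" prefix tells Istio (and Kiali) to treat this port as HTTP
    port: 8080
    targetPort: 8080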

Can Kiali be used with any service mesh?

No, Kiali is specifically designed for use with Istio service meshes.

It provides observability, monitoring, and management tools for Istio service meshes but is incompatible with other service mesh technologies.

If you use a different service mesh, you will need to find an additional tool for managing and monitoring it.

Are there other alternatives to Kiali?

Even if there are no natural alternatives to Kiali for visualizing your workloads and traffic through the Istio Service Mesh, you can use other tools to grab the metrics that feed Kiali and build custom visualizations with more generic tools such as Grafana, among others.

Regarding similar tools to Kiali for other service meshes, such as Linkerd, Consul Connect, or even Kuma: most follow a different approach, where the visualization part is not a separate “project” but relies on a standard visualization tool. That gives you much more flexibility, but at the same time it lacks most of the excellent traffic visualization that Kiali provides, such as graph views or the ability to modify the traffic directly from the graph view.

Linkerd as the Solution to Solve your Communication Challenges in the Microservice Architecture

The CNCF-sponsored service mesh Linkerd provides many of the features needed in today's microservices architectures.

If you are reading this, you are probably already aware of the challenges that come with a microservices architecture, either because you have been reading about them or because you are facing them firsthand right now.

One of the most common challenges is network communication. With the explosion of components that need to communicate and the ephemeral nature of cloud-native deployments, many features that used to be nice-to-haves are now necessities.

Concepts like service registry and discovery, service authentication, dynamic routing policies, and circuit breaker patterns are no longer things that only the cool companies do, but basics for mastering a microservice architecture as part of a cloud-native platform. This is where the service mesh is growing in popularity, as a solution to most of these challenges that provides the features needed.

If you remember, a while ago I already covered that topic to introduce Istio as one of the options we have:

But this project created by Google and IBM is not the only option that you have to provide those capabilities. As part of the Cloud Native Computing Foundation (CNCF), the Linkerd project provides similar features.

How to install Linkerd

To start using Linkerd, the first thing we need to do is install the software. We need two installations: one on the Kubernetes cluster and another on the local host.

To install it on the host, go to the releases page, download the build for your OS, and install it.

I am using a Windows-based system in my sample, so I use Chocolatey to install the client. After doing so, I can see the version of the CLI by typing the following command:

linkerd version

And you will get an output that will say something similar to this:

PS C:\WINDOWS\system32> linkerd.exe version
Client version: stable-2.8.1
Server version: unavailable

Now we need to do the installation on the Kubernetes cluster, and to do so, we use the following command:

linkerd install | kubectl apply -f -

And you will get an output similar to this one:

PS C:\WINDOWS\system32> linkerd install | kubectl apply -f -
namespace/linkerd created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-identity created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-identity created
serviceaccount/linkerd-identity created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-controller created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-controller created
serviceaccount/linkerd-controller created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-destination created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-destination created
serviceaccount/linkerd-destination created
role.rbac.authorization.k8s.io/linkerd-heartbeat created
rolebinding.rbac.authorization.k8s.io/linkerd-heartbeat created
serviceaccount/linkerd-heartbeat created
role.rbac.authorization.k8s.io/linkerd-web created
rolebinding.rbac.authorization.k8s.io/linkerd-web created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-web-check created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-web-check created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-web-admin created
serviceaccount/linkerd-web created
customresourcedefinition.apiextensions.k8s.io/serviceprofiles.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/trafficsplits.split.smi-spec.io created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-prometheus created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-prometheus created
serviceaccount/linkerd-prometheus created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-proxy-injector created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-proxy-injector created
serviceaccount/linkerd-proxy-injector created
secret/linkerd-proxy-injector-tls created
mutatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-proxy-injector-webhook-config created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-sp-validator created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-sp-validator created
serviceaccount/linkerd-sp-validator created
secret/linkerd-sp-validator-tls created
validatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-sp-validator-webhook-config created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-tap created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-tap-admin created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-tap created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-tap-auth-delegator created
serviceaccount/linkerd-tap created
rolebinding.rbac.authorization.k8s.io/linkerd-linkerd-tap-auth-reader created
secret/linkerd-tap-tls created
apiservice.apiregistration.k8s.io/v1alpha1.tap.linkerd.io created
podsecuritypolicy.policy/linkerd-linkerd-control-plane created
role.rbac.authorization.k8s.io/linkerd-psp created
rolebinding.rbac.authorization.k8s.io/linkerd-psp created
configmap/linkerd-config created
secret/linkerd-identity-issuer created
service/linkerd-identity created
deployment.apps/linkerd-identity created
service/linkerd-controller-api created
deployment.apps/linkerd-controller created
service/linkerd-dst created
deployment.apps/linkerd-destination created
cronjob.batch/linkerd-heartbeat created
service/linkerd-web created
deployment.apps/linkerd-web created
configmap/linkerd-prometheus-config created
service/linkerd-prometheus created
deployment.apps/linkerd-prometheus created
deployment.apps/linkerd-proxy-injector created
service/linkerd-proxy-injector created
service/linkerd-sp-validator created
deployment.apps/linkerd-sp-validator created
service/linkerd-tap created
deployment.apps/linkerd-tap created
configmap/linkerd-config-addons created
serviceaccount/linkerd-grafana created
configmap/linkerd-grafana-config created
service/linkerd-grafana created
deployment.apps/linkerd-grafana created

Now we can check that the installation has been done properly using the command:

linkerd check

And if everything has been done properly, you will get an output like this one:

PS C:\WINDOWS\system32> linkerd check
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API
kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version
linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ controller pod is running
√ can initialize the client
√ can query the control plane API
linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
√ control plane PodSecurityPolicies exist
linkerd-identity
----------------
√ certificate config is valid
√ trust anchors are using supported crypto algorithm
√ trust anchors are within their validity period
√ trust anchors are valid for at least 60 days
√ issuer cert is using supported crypto algorithm
√ issuer cert is within its validity period
√ issuer cert is valid for at least 60 days
√ issuer cert is issued by the trust anchor

Then we can see the dashboard from Linkerd using the following command:

linkerd dashboard
Linkerd as the Solution to Solve your Communication Challenges in the Microservice Architecture
Dashboard initial web page after a clean Linkerd installation

Deployment of the apps

We will use the same apps we used some time ago to deploy Istio, so if you want to recall what they do, take another look at that article.

I have uploaded the code to my GitHub repository, and you can find it here: https://github.com/alexandrev/bwce-linkerd-scenario

To deploy, you need to have your Docker images pushed to a registry; I will use Amazon ECR as the Docker repository.

So I need to build and push those images with the following commands:

docker build -t provider:1.0 .
docker tag provider:1.0 938784100097.dkr.ecr.eu-west-2.amazonaws.com/provider-linkerd:1.0
docker push 938784100097.dkr.ecr.eu-west-2.amazonaws.com/provider-linkerd:1.0
docker build -t consumer:1.0 .
docker tag consumer:1.0 938784100097.dkr.ecr.eu-west-2.amazonaws.com/consumer-linkerd:1.0
docker push 938784100097.dkr.ecr.eu-west-2.amazonaws.com/consumer-linkerd:1.0

And after that, we are going to deploy the images on the Kubernetes cluster:

kubectl apply -f .\provider.yaml
kubectl apply -f .\consumer.yaml
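If your manifests are not already annotated for the mesh, you can also mesh them on the fly by piping them through linkerd inject, which adds the proxy annotation before applying:

linkerd inject .\provider.yaml | kubectl apply -f -
linkerd inject .\consumer.yaml | kubectl apply -f -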

And now we can see those apps in the Linkerd Dashboard on the default namespace:

Linkerd as the Solution to Solve your Communication Challenges in the Microservice Architecture
Image showing the provider and consumer app as linked applications

And now, we can reach the consumer endpoint using the following command:

kubectl port-forward pod/consumer-v1-6cd49d6487-jjm4q 6000:6000

And if we call the endpoint, we get the expected reply from the provider.

Linkerd as the Solution to Solve your Communication Challenges in the Microservice Architecture
Sample response provided by the provider

And in the dashboard, we can see the stats of the provider:

Linkerd as the Solution to Solve your Communication Challenges in the Microservice Architecture
Linkerd dashboard showing the stats of the flow

Also, Linkerd by default provides a Grafana dashboard where you can see more metrics; you can get there using the Grafana link in the Linkerd dashboard.

Linkerd as the Solution to Solve your Communication Challenges in the Microservice Architecture
Grafana link on the Linkerd Dashboard

When you open it, you will see something like the dashboard shown below:

Linkerd as the Solution to Solve your Communication Challenges in the Microservice Architecture
Grafana dashboard showing the linkerd statistics

Summary

Throughout this process, we have seen how easily we can deploy a Linkerd service mesh in our Kubernetes cluster and how applications can integrate and interact with it. In upcoming posts, we will dive into the more advanced features that help with the new challenges of microservices architectures.

Technology wars: API Management Solution vs Service Mesh

Service Mesh vs. API Management Solution: are they the same? Are they compatible? Are they rivals?

When we talk about communication in a distributed, cloud-native world, and especially about container-based architectures built on Kubernetes platforms like AKS, EKS, OpenShift, and so on, two technologies generate a lot of confusion because they seem to cover the same capabilities: Service Mesh and API Management solutions.

It has been a controversial topic where different bold statements have been made: some people think these technologies work together in a complementary way, others believe they are trying to solve the same problems in different ways, and some even think one is just the evolution of the other for the new cloud-native architecture.

API Management Solutions

API Management solutions have been part of our architectures for a long time. They are a crucial component of any architecture created following the principles of API-led architecture, and they are an evolution of the pre-existing API Gateways, themselves an evolution of the pure proxies of the late 90s and early 2000s.

An API Management solution is a critical component of your API strategy because it enables your company to work with an API-led approach, which is much more than its technical aspect. We usually reduce the API-led approach to the technical side: API-based development, the microservices we create, and the collaborative spirit we apply today to any piece of software deployed to production.

But it is much more than that. API-led architecture is about creating products from our APIs, providing all the artifacts (technical and non-technical) needed for that conversion. A quick, non-exhaustive list of those artifacts:

  • API Documentation Support
  • Package Plans Definition
  • Subscription capabilities
  • Monetization capabilities
  • Self-Service API Discovery
  • Versioning capabilities

Traditionally, the API Management solution also comes with API Gateway capabilities embedded to cover the technical aspects as well, providing some other capabilities at a more technical level:

  • Exposition
  • Routing
  • Security
  • Throttling

Service Mesh

Service Mesh is more of a buzzword these days, a technology that is now trending because it was created to solve some of the challenges inherent to the microservice and container approach and everything under the cloud-native label.

In this case, it comes from the technical side, so it is much more of a bottom-up approach: it exists to solve technical problems and to provide a better experience to developers and system administrators in this new, much more complicated world. And what are the challenges created in this transition? Let's take a look at them:

Service registry and discovery is one of the critical things we need to cover. With the elastic paradigm of the cloud-native world, services switch location from time to time: they are started on new machines when needed and removed when there is not enough load to require their presence. So it is essential to provide a way to easily manage this new reality, something we did not need in the past when our services were bound to a specific machine or set of machines.

Security is another important topic in any architecture we create today, and the polyglot approach we have incorporated makes it more challenging: we need a secure way for our services to communicate that works with any technology we use now or may use in the future. And we are not talking just about authentication but also authorization, because in service-to-service communication we also need to check whether the calling microservice is allowed to call another one, and do so in an agile way that does not cancel out the advantages your cloud-native architecture provides by design.

Routing requirements have also changed in these new architectures. In traditional architectures we typically aimed for a zero-downtime (when possible) but very standard procedure: deploy a new version, validate that it works, and open the traffic to everyone. Today, the requirements call for much more complex paradigms. Service mesh technologies support rollout strategies like A/B testing, weight-based routing, and canary deployments.

Rival or Companion?

So, after this quick look at the purpose of these technologies and the problems they try to solve, are they rivals or companions? Should we choose one or the other, or try to place both of them in our architecture?

As always, the answer to those questions is the same: “It depends!” It depends on what you are trying to do, what your company is trying to achieve, and what you are building.

  • An API Management solution is needed as long as you are implementing an API strategy in your organization. Service Mesh technology is not trying to fill that gap. It can provide the technical capabilities traditionally covered by the API Gateway component, but that is just one of the elements of an API Management solution. The other parts, which provide the management and governance capabilities, are not covered by any Service Mesh today.
  • A Service Mesh is needed if you have a cloud-native architecture based on a container platform that relies heavily on HTTP for synchronous communication. It provides so many technical capabilities that make your life more manageable that, as soon as you include it in your architecture, you cannot live without it.
  • A Service Mesh only provides its capabilities within a container platform approach. So, if you have a more heterogeneous landscape, as most enterprises do today (a container platform, but also SaaS applications, systems still on-premises, and traditional architectures, all providing capabilities you would like to leverage as part of your API products), you will need to include an API Management solution.

So, these technologies can play together in a complete architecture to cover different kinds of requirements, especially when we are talking about complex heterogeneous architectures that need to include an API-Led approach.

In upcoming articles, we will cover how to integrate both technologies from the technical point of view and how data flows among the different components of the architecture.

Integrating Istio with BWCE Applications


Introduction

Service Mesh is one of the “greatest new things” in our PaaS environments. Whether you are working with K8S, Docker Swarm, or pure cloud with EKS or AWS, you have heard of it and have probably tried to learn how to use this new thing with so many advantages, because it provides a lot of options for handling communication between components without impacting their logic. And if you have heard of Service Mesh, you have heard of Istio as well, because it is the “flagship option” at the moment: even though alternatives like Linkerd or AWS App Mesh are also great options, Istio is the most used Service Mesh today.

You have probably seen some examples of how to integrate Istio with your open-source-based developments, but what happens if you have a lot of BWCE or BusinessWorks applications? Can you use all this power, or are you banned from this new world?

Do not panic! This article is going to show you how easily you can use Istio with your BWCE applications inside a K8S cluster. So, let the match… BEGIN!

Scenario

The scenario we are going to test is quite simple: a consumer-provider approach. We are going to use a simple SOAP/HTTP Web Service exposed by a backend to show that this works not only with fancy REST APIs but with any HTTP traffic we can generate at the BWCE application level.

Integrating Istio with BWCE Applications

So, we are going to invoke a service that requests a response from its provider and returns it to us. That is pretty easy to set up using pure BWCE, without anything else.

All code related to this example is available for you in the following GitHub repo: Go get the code!

Steps

Step 1: Install Istio inside your Kubernetes Cluster

In my case, I am using the Kubernetes cluster inside my Docker Desktop installation, so you can do the same or use your real Kubernetes cluster; that is up to you. The first step, in any case, is to install Istio, and to do that, nothing better than following the steps of the istio-workshop that you can find here: https://polarsquad.github.io/istio-workshop/install-istio/ (UPDATE: No longer available)

Once you have finished, you should get the following scenario in your Kubernetes cluster, so please check that the result is the same with the following commands:

kubectl get pods -n istio-system

You should see that all pods are Running as you can see in the picture below:

Integrating Istio with BWCE Applications

kubectl -n istio-system get deployment -listio=sidecar-injector

You should see that there is one instance (CURRENT = 1) available.

Integrating Istio with BWCE Applications

kubectl get namespace -L istio-injection

You should see that ISTIO-INJECTION is enabled for the default namespace, as the image below shows:

Integrating Istio with BWCE Applications

Step 2: Build the BWCE Applications

Now that we have all the infrastructure needed at the Istio level, we can start building our applications, and to do that we do not have to do anything different in our BWCE applications. In the end, they are just two applications that talk to each other using HTTP as the protocol, so nothing specific.

This is an important point, because when we talk about Service Mesh and Istio with customers, the same questions always arise: Is Istio supported in BWCE? Can we use Istio as a protocol to communicate our BWCE applications? They expect that some palette or custom plugin must exist and be installed to support Istio. But none of that is needed at the application level. And that applies not only to BWCE but to any other technology, like Flogo or even open-source stacks, because in the end Istio (together with Envoy, the other part of this technology that we usually avoid mentioning when we talk about Istio) works in proxy mode using one of the most common container patterns: the “sidecar pattern”.

So, the technology that exposes, implements, or consumes the service knows nothing about all this “magic” being executed in the middle of the communication process.
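A quick way to see that sidecar with your own eyes (assuming a pod labeled app=provider, as in this example) is to list the containers of a running pod; next to the BWCE runtime container you should find the istio-proxy container that Istio injected:

kubectl get pods -l app=provider -o jsonpath='{.items[*].spec.containers[*].name}'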

We are going to define the following properties as environment variables, just as we would if we were not using Istio (a wiring sketch follows the lists):

Provider application:

  • PROVIDER_PORT → Port where the provider is going to listen for incoming requests.

Consumer application:

  • PROVIDER_PORT → Port the provider host will be listening on.
  • PROVIDER_HOST → Host or FQDN (aka K8S service name) where provider service will be exposed.
  • CONSUMER_PORT → Port where the consumer service is going to listen for incoming requests.
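For illustration, this is roughly how those variables could be wired into the consumer’s Deployment manifest; a minimal fragment of the container spec where the service name provider-service and the port values are assumptions:

        env:
        - name: PROVIDER_HOST
          value: "provider-service"
        - name: PROVIDER_PORT
          value: "8080"
        - name: CONSUMER_PORT
          value: "8080"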

So, as you can see if you check the code of the BWCE applications, we do not need to do anything special to support Istio.

NOTE: There is one important topic that is not related to the Istio integration but to how BWCE populates the property BW.CLOUD.HOST, which is never translated to the loopback interface or even 0.0.0.0. So it is better to change that variable for a custom one, or to use localhost or 0.0.0.0, so the application listens on the loopback interface, because that is where the Istio proxy is going to send the requests.

After that, we are going to create the Dockerfiles for these services, with nothing in particular, similar to what you can see here:

Integrating Istio with BWCE Applications

NOTE: As a prerequisite, we are using a BWCE base Docker image named bwce_base.2.4.3 that corresponds to version 2.4.3 of BusinessWorks Container Edition.
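Since the screenshot is not reproduced here, this is roughly what such a Dockerfile could look like; a sketch where the EAR file name, the tag notation, and the exposed port are assumptions:

# Start from the BWCE base image built previously
FROM bwce_base:2.4.3
# Add the BWCE application archive so the runtime picks it up at startup
ADD provider.ear /
# Port where the provider listens (PROVIDER_PORT)
EXPOSE 8080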

And now we build our Docker images in our repository, as you can see in the following picture:

Integrating Istio with BWCE Applications
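The exact commands are in the screenshot, but they would be along these lines; the image names and build contexts are assumptions:

docker build -t provider:v1 ./provider
docker build -t consumer:v1 ./consumer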

Step 3: Deploy the BWCE Applications

Now that all the images have been created, we need to generate the artifacts to deploy these applications in our cluster. Once again, nothing special in our YAML files: as you can see in the picture below, we are going to define a K8S Service and a K8S Deployment based on the images we created in the previous step:

Integrating Istio with BWCE Applications

A similar thing happens with the consumer deployment, as you can see in the image below:

Integrating Istio with BWCE Applications
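As a reference for what those screenshots contain, here is a minimal sketch of the provider manifest; names, labels, and ports are assumptions, and the consumer one is analogous. Note the version: v1 label, which the Istio subsets will rely on later:

apiVersion: v1
kind: Service
metadata:
  name: provider-service
spec:
  ports:
  - port: 8080
    name: http
  selector:
    app: provider
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: provider-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: provider
      version: v1
  template:
    metadata:
      labels:
        app: provider
        version: v1
    spec:
      containers:
      - name: provider
        image: provider:v1
        ports:
        - containerPort: 8080
        env:
        - name: PROVIDER_PORT
          value: "8080"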

And we can deploy them in our K8S cluster with the following commands:

kubectl apply -f kube/provider.yaml

kubectl apply -f kube/consumer.yaml

Now, you should see the following components deployed. To complete all the components needed in our structure, we are going to create an ingress to make it possible to execute requests from outside the cluster, and to do that we are going to apply the following YAML file:

kubectl apply -f kube/ingress.yaml
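The content of kube/ingress.yaml is not reproduced here, but with the v1alpha3 APIs it could look roughly like the following Gateway plus VirtualService pair; the port and service names are assumptions:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: consumer-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: consumer-vs
spec:
  hosts:
  - "*"
  gateways:
  - consumer-gateway
  http:
  - route:
    - destination:
        host: consumer-service
        port:
          number: 8080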

And now, after doing that, we can invoke the service from our SOAPUI project, and we should get the following response:

Integrating Istio with BWCE Applications

Step 4: Recap what we’ve just done

Ok, it is working, and you may think: “mmmm, I can get this working without Istio, and I don’t know if Istio is actually doing anything or not…”

Ok, you are right, this may not be as great as you expected, but trust me, we are just going step by step. Let’s see what is really happening: instead of a simple request from outside the cluster to the consumer service that is then forwarded to the backend, what happens is a little more complex. Let’s take a look at the image below:

Integrating Istio with BWCE Applications

The incoming request from the outside is handled by an Envoy ingress controller that executes all the rules defined to choose which service should handle the request; in our case, only the consumer-v1 component is going to do it. And the same thing happens in the communication between consumer and provider.

So, we get interceptors in the middle that COULD apply the logic that helps us route traffic between our components, deploying rules at runtime without changing the applications, and that is the MAGIC.

Step 5: Implement a Canary Release

Ok, now let’s apply some of this magic to our case. One of the most common patterns when rolling out an update to one of our services is the canary approach. As a quick explanation of what this is:

Canary release is a technique to reduce the risk of introducing a new software version in production by slowly rolling out the change to a small subset of users before rolling it out to the entire infrastructure and making it available to everybody.

Integrating Istio with BWCE Applications

If you want to read more about this, you can take a look at the full article on Martin Fowler’s blog.

So now we are going to make a small change in our provider application, changing the response to make sure we are targeting version two, as you can see in the image below:

Integrating Istio with BWCE Applications

Now, we are going to build this application and generate the new image called provider:v2.

But before deploying it using the YAML file called provider-v2.yaml, we are going to set a rule in our Istio Service Mesh stating that all traffic should be targeted to v1 even when other versions are deployed. To do that, we are going to apply the file called default.yaml, which has the following content:

Integrating Istio with BWCE Applications
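The screenshot’s content would be, in v1alpha3 terms, roughly a DestinationRule defining the version subsets plus a VirtualService sending everything to v1; a sketch where the host and subset names are assumptions:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: provider-destination
spec:
  host: provider-service
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: provider-route
spec:
  hosts:
  - provider-service
  http:
  - route:
    - destination:
        host: provider-service
        subset: v1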

So, in this case, what we are telling Istio is that even if there are other components registered to the service, it should always reply with v1. That way we can deploy v2 without any issue, because it is not going to receive any traffic until we decide so. Now we can deploy v2 with the following command:

kubectl apply -f provider-v2.yaml

And even when we execute the SOAPUI request, we still get a v1 reply, even though we can check in the K8S Service configuration that v2 is also bound to that service.

Ok, now we are going to start the release, sending 10% of the requests to the new version and 90% to the old one. To do that, we are going to deploy the rule canary.yaml using the following command:

kubectl apply -f canary.yaml

Where canary.yaml has the content shown below:

Integrating Istio with BWCE Applications
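Again, the actual file is in the screenshot; in v1alpha3 terms, a 90/10 split would look roughly like this, reusing the subsets assumed above:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: provider-route
spec:
  hosts:
  - provider-service
  http:
  - route:
    - destination:
        host: provider-service
        subset: v1
      weight: 90
    - destination:
        host: provider-service
        subset: v2
      weight: 10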

And now, if we try enough times, we will see that most of the requests (approximately 90%) reply with v1 and only around 10% reply from the new version:

Integrating Istio with BWCE Applications

Now we can monitor how v2 performs without affecting all customers, and if everything goes as expected, we can keep increasing that percentage until all customers are served by v2.