Exposing TCP Ports with Istio Ingress Gateway in Kubernetes (Step-by-Step Guide)

Istio has become an essential tool for managing HTTP traffic within Kubernetes clusters, offering advanced features such as Canary Deployments, mTLS, and end-to-end visibility. However, some tasks, like exposing a TCP port using the Istio IngressGateway, can be challenging if you’ve never done it before. This article will guide you through the process of exposing TCP ports with Istio Ingress Gateway, complete with real-world examples and practical use cases.

Understanding the Context

Istio is often used to manage HTTP traffic in Kubernetes, providing powerful capabilities such as traffic management, security, and observability. The Istio IngressGateway serves as the entry point for external traffic into the Kubernetes cluster, typically handling HTTP and HTTPS traffic. However, Istio also supports TCP traffic, which is necessary for use cases like exposing databases or other non-HTTP services running in the cluster to external consumers.

Exposing a TCP port through Istio involves configuring the IngressGateway to handle TCP traffic and route it to the appropriate service. This setup is particularly useful in scenarios where you need to expose services like TIBCO EMS or Kubernetes-based databases to other internal or external applications.

Steps to Expose a TCP Port with Istio IngressGateway

1.- Modify the Istio IngressGateway Service:

Before configuring the Gateway, you must ensure that the Istio IngressGateway service is configured to listen on the new TCP port. This step is crucial if you’re using a NodePort service, as this port needs to be opened on the Load Balancer.

apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  ports:
  - name: http2
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  - name: tcp
    port: 31400
    targetPort: 31400
    protocol: TCP

The snippet above updates the Istio IngressGateway Service to include the new port 31400 for TCP traffic.

2.- Configure the Istio IngressGateway:

After modifying the Service, create a Gateway resource so that the IngressGateway listens on the desired TCP port.

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: tcp-ingress-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*"

In this example, the IngressGateway is configured to listen on port 31400 for TCP traffic.

3.- Create a Service and VirtualService:

After configuring the gateway, you need to create a Service that represents the backend application and a VirtualService to route the TCP traffic.

apiVersion: v1
kind: Service
metadata:
  name: tcp-service
  namespace: default
spec:
  ports:
  - port: 31400
    targetPort: 8080
    protocol: TCP
  selector:
    app: tcp-app

The Service above exposes port 31400 and forwards it to port 8080 on the backend application pods.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: tcp-virtual-service
  namespace: default
spec:
  hosts:
  - "*"
  gateways:
  - istio-system/tcp-ingress-gateway
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: tcp-service
        port:
          number: 31400

The VirtualService routes TCP traffic arriving on port 31400 of the gateway to tcp-service, which in turn forwards it to port 8080 on the backend pods.

4.- Apply the Configuration:

Apply the above configurations using kubectl to create the necessary Kubernetes resources.

kubectl apply -f istio-ingressgateway-service.yaml
kubectl apply -f tcp-ingress-gateway.yaml
kubectl apply -f tcp-service.yaml
kubectl apply -f tcp-virtual-service.yaml

After applying these configurations, the Istio IngressGateway will expose the TCP port to external traffic.

Practical Use Cases

  • Exposing TIBCO EMS Server: One common scenario is exposing a TIBCO EMS (Enterprise Message Service) server running within a Kubernetes cluster to other internal applications or external consumers. By configuring the Istio IngressGateway to handle TCP traffic, you can securely expose EMS’s TCP port, allowing it to communicate with services outside the Kubernetes environment.
  • Exposing Databases: Another use case is exposing a database running within Kubernetes to external services or different clusters. By exposing the database’s TCP port through the Istio IngressGateway, you enable other applications to interact with it, regardless of their location.
  • Exposing a Custom TCP-Based Service: Suppose you have a custom application running within Kubernetes that communicates over TCP, such as a game server or a custom TCP-based API service. You can use the Istio IngressGateway to expose this service to external users, making it accessible from outside the cluster.
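
As a concrete sketch of the database use case, the same pattern can expose a PostgreSQL instance on its standard port. The names, the port, and the destination Service (`postgres`) here are illustrative assumptions, not part of the original setup; remember that the port must also be added to the istio-ingressgateway Service, as in step 1:

```yaml
# Hypothetical Gateway and VirtualService exposing PostgreSQL (port 5432)
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: postgres-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 5432
      name: tcp-postgres
      protocol: TCP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: postgres-vs
  namespace: default
spec:
  hosts:
  - "*"
  gateways:
  - istio-system/postgres-gateway
  tcp:
  - match:
    - port: 5432
    route:
    - destination:
        host: postgres    # assumed name of the database Service
        port:
          number: 5432
```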

Conclusion

Exposing TCP ports using the Istio IngressGateway can be a powerful technique for managing non-HTTP traffic in your Kubernetes cluster. With the steps outlined in this article, you can confidently expose services like TIBCO EMS, databases, or custom TCP-based applications to external consumers, enhancing the flexibility and connectivity of your applications.

Prevent Server Information Disclosure in Kubernetes with Istio Service Mesh

In today’s digital landscape, where data breaches and cyber threats are becoming increasingly sophisticated, ensuring the security of your servers is paramount. One of the critical security concerns that organizations must address is “Server Information Disclosure.” Server Information Disclosure occurs when sensitive information about a server’s configuration, technology stack, or internal structure is inadvertently exposed to unauthorized parties. Hackers can exploit this vulnerability to gain insights into potential weak points and launch targeted attacks. Such breaches can lead to data theft, service disruption, and reputation damage.

Information Disclosure and Istio Service Mesh

One example is the Server HTTP header, included in most HTTP responses to identify the server software that produced the response. The values vary depending on the stack, but values such as Jetty or Tomcat are common. Also, if you are using a Service Mesh such as Istio, you will see the header with the value istio-envoy, as you can see here:

Information Disclosure of Server Implementation using Istio Service mesh

As mentioned, concealing this information matters at several levels of security:

  • Data Privacy: Server information leakage can expose confidential data, undermining user trust and violating data privacy regulations such as GDPR and HIPAA.
  • Reduced Attack Surface: By concealing server details, you minimize the attack surface available to potential attackers.
  • Security by Obscurity: While not a foolproof approach, limiting disclosure adds an extra layer of security, making it harder for hackers to gather intelligence.

How to mitigate that with Istio Service Mesh?

When using Istio, we can define rules to add and remove HTTP headers based on our needs, as described in the following documentation: https://discuss.istio.io/t/remove-header-operation/1692. This is done with simple clauses in the definition of your VirtualService, as you can see here:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: k8snode-virtual-service
spec:
  hosts:
  - "example.com"
  gateways:
  - k8snode-gateway
  http:
  - headers:
      response:
        remove:
        - "x-my-fault-source"
    route:
    - destination:
        host: k8snode-service
        subset: version-1

Unfortunately, this is not possible for all HTTP headers, especially the “main” ones: not the custom headers added by your workloads, but the standard headers defined by the HTTP specification (https://www.w3.org/Protocols/).

So, in the case of the Server HTTP header, it is a little more complex, and you need to use an EnvoyFilter, one of the most sophisticated objects in the Istio Service Mesh. In the words of the official Istio documentation, an EnvoyFilter provides a mechanism to customize the Envoy configuration generated by Istio Pilot. You can use an EnvoyFilter to modify values for certain fields, add specific filters, or even add entirely new listeners, clusters, and so on.

EnvoyFilter Implementation to Remove Header

Now that we know we need a custom EnvoyFilter, let's see which one removes the Server header and how it works, to learn more about this component. Here is the EnvoyFilter for that job:

---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: gateway-response-remove-headers
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: "envoy.filters.network.http_connection_manager"
    patch:
      operation: MERGE
      value:
        typed_config:
          "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
          server_header_transformation: PASS_THROUGH
  - applyTo: ROUTE_CONFIGURATION
    match:
      context: GATEWAY
    patch:
      operation: MERGE
      value:
        response_headers_to_remove:
        - "server"

Let's focus on the parts of the EnvoyFilter specification. First, there is the usual workloadSelector, which determines where this component will be applied; in this case, the Istio IngressGateway. Then we enter the configPatches section, which contains the customizations we need; in our case, there are two of them:

Both act on the context: GATEWAY and apply to two different objects: NETWORK_FILTER and ROUTE_CONFIGURATION. (You can also apply filters to sidecars to affect their behavior.) The first patch configures the envoy.filters.network.http_connection_manager filter, which allows manipulation of the HTTP context, including, for our purpose, the HTTP headers. The second patch acts on the ROUTE_CONFIGURATION, removing the Server header via the response_headers_to_remove option.
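
Conceptually, the combined effect of the two patches can be modeled in a few lines (an illustrative sketch, not Envoy code): the Server header is passed through unchanged by the connection manager and then stripped at the route level before the response leaves the gateway.

```python
def strip_server_header(response_headers):
    # Models response_headers_to_remove: ["server"] -- drop the Server
    # header (header names are case-insensitive) from the outgoing response.
    return {k: v for k, v in response_headers.items()
            if k.lower() != "server"}
```
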

Conclusion

As you can see, this is not easy to implement. Still, at the same time, it is evidence of the power and low-level capabilities that you have when using a robust service mesh such as Istio to interact and modify the behavior of any tiny detail that you want for your benefit and, in this case, also to improve and increase the security of your workloads deployed behind the Service Mesh scope.

In the ever-evolving landscape of cybersecurity threats, safeguarding your servers against information disclosure is crucial to protect sensitive data and maintain your organization’s integrity. Istio empowers you to fortify your server security by providing robust tools for traffic management, encryption, and access control.

Remember, the key to adequate server security is a proactive approach that addresses vulnerabilities before they can be exploited. Take the initiative to implement Istio and elevate your server protection.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Istio Proxy DNS Explained: How DNS Capture Improves Service Mesh Traffic Control

Istio is a popular open-source service mesh that provides a range of powerful features for managing and securing microservices-based architectures. We have talked a lot about its capabilities and components, but today we will talk about how we can use Istio to help with the DNS resolution mechanism.

As you already know, In a typical Istio deployment, each service is accompanied by a sidecar proxy, Envoy, which intercepts and manages the traffic between services. The Proxy DNS capability of Istio leverages this proxy to handle DNS resolution requests more intelligently and efficiently.

Traditionally, when a service within a microservices architecture needs to communicate with another service, it relies on DNS resolution to discover the IP address of the target service. However, traditional DNS resolution can be challenging to manage in complex and dynamic environments, such as those found in Kubernetes clusters. This is where the Proxy DNS capability of Istio comes into play.

Istio Proxy DNS Capabilities

With Proxy DNS, Istio intercepts and controls DNS resolution requests from services and performs the resolution on their behalf. Instead of relying on external DNS servers, the sidecar proxies handle the DNS resolution within the service mesh. This enables Istio to provide several valuable benefits:

  • Service discovery and load balancing: Istio’s Proxy DNS allows for more advanced service discovery mechanisms. It can dynamically discover services and their corresponding IP addresses within the mesh and perform load balancing across instances of a particular service. This eliminates the need for individual services to manage DNS resolution and load balancing.
  • Security and observability: Istio gains visibility into the traffic between services by handling DNS resolution within the mesh. It can apply security policies, such as access control and traffic encryption, at the DNS level. Additionally, Istio can collect DNS-related telemetry data for monitoring and observability, providing insights into service-to-service communication patterns.
  • Traffic management and control: Proxy DNS enables Istio to implement advanced traffic management features, such as routing rules and fault injection, at the DNS resolution level. This allows for sophisticated traffic control mechanisms within the service mesh, enabling A/B testing, canary deployments, circuit breaking, and other traffic management strategies.

Istio Proxy DNS Use-Cases

There are situations where you cannot, or don't want to, rely on normal DNS resolution. Why? To start with, DNS is a great protocol, but it lacks some capabilities, such as location-aware discovery. If the same DNS name is assigned to three IPs, it will return each of them in a round-robin fashion, with no awareness of location.

Or you may have several IPs and want to block some of them for a specific service; these are the kinds of things you can do with Istio Proxy DNS.

Istio Proxy DNS Enablement

You need to know that Istio Proxy DNS capabilities are not enabled by default, so you must enable them explicitly if you want to use them. The good thing is that you can do this at different levels, from the full mesh down to a single pod, so you can choose what is best for each case.

For example, if we want to enable it at the pod level, we need to inject the following configuration into the Istio proxy via a pod annotation:

proxy.istio.io/config: |
  proxyMetadata:
    # Enable basic DNS proxying
    ISTIO_META_DNS_CAPTURE: "true"
    # Enable automatic address allocation, optional
    ISTIO_META_DNS_AUTO_ALLOCATE: "true"

The same configuration can be applied at the mesh level as part of the operator installation, as documented on the official Istio page.
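
As a sketch, the mesh-level equivalent via the IstioOperator configuration looks like this (verify the exact fields against the Istio documentation for your version):

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        # Enable basic DNS proxying for every proxy in the mesh
        ISTIO_META_DNS_CAPTURE: "true"
        # Enable automatic address allocation, optional
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"
```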

Conclusion

In summary, the Proxy DNS capability of Istio enhances the DNS resolution mechanism within the service mesh environment, providing advanced service discovery, load balancing, security, observability, and traffic management features. Istio centralizes and controls DNS resolution by leveraging the sidecar proxies, simplifying the management and optimization of service-to-service communication in complex microservices architectures.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Istio Security Policies Explained: PeerAuthentication, RequestAuthentication, and AuthorizationPolicy

Istio Security Policies are crucial in securing microservices within a service mesh environment. We have discussed Istio and the capabilities it can introduce to your Kubernetes workloads. Today, though, we will go into more detail about the different objects and resources that help make our workloads much more secure and enforce the communication between them: the PeerAuthentication, RequestAuthentication, and AuthorizationPolicy objects.

PeerAuthentication: Enforcing security on pod-to-pod communication

PeerAuthentication focuses on securing communication between services by enforcing mutual TLS (Transport Layer Security) authentication and authorization. It enables administrators to define authentication policies for workloads based on the source of the requests, such as specific namespaces or service accounts. Configuring PeerAuthentication ensures that only authenticated and authorized services can communicate, preventing unauthorized access and man-in-the-middle attacks. The behavior depends on the value of the mode field: STRICT allows only mTLS communication; PERMISSIVE allows both mTLS and plain-text traffic; DISABLE turns mTLS off and keeps the traffic insecure; and UNSET inherits the setting from the parent (mesh- or namespace-level) policy. This is a sample definition of the object:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: foo
spec:
  mtls:
    mode: PERMISSIVE
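
For example, to enforce mTLS for every workload in the mesh, a STRICT policy can be applied in the root namespace (istio-system in a default installation). A minimal sketch:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # root namespace -> applies mesh-wide
spec:
  mtls:
    mode: STRICT
```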

RequestAuthentication: Defining authentication methods for Istio Workloads

RequestAuthentication, on the other hand, provides fine-grained control over the authentication of inbound requests. It allows administrators to specify rules and requirements for validating and authenticating incoming requests based on factors like JWT (JSON Web Tokens) validation, API keys, or custom authentication methods. With RequestAuthentication, service owners can enforce specific authentication mechanisms for different endpoints or routes, ensuring that only authenticated clients can access protected resources. Here you can see a sample of a RequestAuthentication object:

apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-auth-policy
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: my-app
  jwtRules:
    - issuer: "issuer.example.com"
      jwksUri: "https://example.com/.well-known/jwks.json"

As mentioned, JWT validation is the most common approach, as JWT tokens have become the de-facto industry standard for validating incoming requests, together with the OAuth 2.0 authorization protocol. Here you define the rules a JWT must meet for the request to be considered valid. However, RequestAuthentication only describes the authentication methods supported by the workloads; it does not enforce them or provide any details regarding authorization.

That means that if you define a workload as using JWT authentication, a request sent with a token will have that token validated: Istio will ensure it is not expired and that it meets all the rules you have specified in the object definition. But requests with no token at all will still be allowed through, because you are only declaring what the workloads support, not enforcing it. To enforce it, we need to introduce the last object of this set: AuthorizationPolicy.
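
To actually enforce the presence of a valid token, a common pattern (sketched here reusing the names from the earlier sample) is an AuthorizationPolicy that only allows requests carrying a request principal, which is populated only when a valid JWT was presented:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: my-app
  action: ALLOW
  rules:
  - from:
    - source:
        # Set only when a valid JWT was presented, so requests
        # without a token are denied.
        requestPrincipals: ["*"]
```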

AuthorizationPolicy: Fine-grained Authorization Policy Definition for Istio Policies

AuthorizationPolicy offers powerful access control capabilities to regulate traffic flow within the service mesh. It allows administrators to define rules and conditions based on attributes like source, destination, headers, and even request payload to determine whether a request should be allowed or denied. AuthorizationPolicy helps enforce fine-grained authorization rules, granting or denying access to specific resources or actions based on the defined policies. Only authorized clients with appropriate permissions can access specific endpoints or perform particular operations within the service mesh. Here you can see a sample of an Authorization Policy object:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: rbac-policy
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: my-app
  rules:
    - from:
        - source:
            principals: ["user:user@example.com"]
      to:
        - operation:
            methods: ["GET"]

Here you can go into as much detail as you need: you can apply rules on the source of the request to ensure that only certain requests go through (for example, requests carrying a JWT token, to use in combination with the RequestAuthentication object), but also rules on the target, matching a specific host, path, or method, or a combination of them. You can define ALLOW rules, DENY rules (or even CUSTOM ones) as a set, and all of them will be enforced as a whole. The evaluation order is determined by the following rules, as stated in the official Istio documentation:

  • If there are any CUSTOM policies that match the request, evaluate and deny the request if the evaluation result is denied.
  • If there are any DENY policies that match the request, deny the request.
  • If there are no ALLOW policies for the workload, allow the request.
  • If any of the ALLOW policies match the request, allow the request.
  • Deny the request.
Authorization Policy validation flow from: https://istio.io/latest/docs/concepts/security/
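
The evaluation order above can be sketched as a small function (a conceptual model only, not Istio's actual implementation; policies are represented as simple matcher callables):

```python
def authorize(request, custom, deny, allow):
    """Sketch of the AuthorizationPolicy evaluation order.

    custom: list of (matches, evaluate) callable pairs for CUSTOM policies;
    deny/allow: lists of matcher callables. All take the request, return bool.
    """
    # 1. A matching CUSTOM policy whose external evaluation denies -> deny.
    for matches, evaluate in custom:
        if matches(request) and not evaluate(request):
            return False
    # 2. Any matching DENY policy -> deny.
    if any(matches(request) for matches in deny):
        return False
    # 3. No ALLOW policies defined for the workload -> allow.
    if not allow:
        return True
    # 4. A matching ALLOW policy -> allow; otherwise deny.
    return any(matches(request) for matches in allow)
```
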

Together, these objects provide everything you need to fully define the security policies your workloads require.

Conclusion

In conclusion, Istio’s Security Policies provide robust mechanisms for enhancing the security of microservices within a service mesh environment. The PeerAuthentication, RequestAuthentication, and AuthorizationPolicy objects offer a comprehensive toolkit to enforce authentication and authorization controls, ensuring secure communication and access control within the service mesh. By leveraging these Istio Security Policies, organizations can strengthen the security posture of their microservices, safeguarding sensitive data and preventing unauthorized access or malicious activities within their service mesh environment.

Enable Sticky Sessions in Kubernetes Using Istio (Session Affinity Explained)

Istio allows you to configure Sticky Sessions, among other network features, for your Kubernetes workloads. As we have mentioned in several posts about Istio, it deploys a service mesh with a central control plane holding all the configuration for the network aspects of your Kubernetes workloads. This covers many different aspects of communication inside the container platform: security (transport security, authentication, and authorization) and, at the same time, network features such as routing and traffic distribution, which is the main topic of today's article.

These routing capabilities are similar to what a traditional Layer 7 load balancer provides. When we talk about Layer 7, we are referring to the layers of the OSI model, where Layer 7 is the Application layer.

A Sticky Session or Session Affinity configuration is one of the most common features you may need to implement in this scenario. The use case is the following:

How To Enable Sticky Session on Your Kubernetes Workloads using Istio?

You have several instances of your workload, i.e., different pod replicas in Kubernetes, all of them behind the same Service. By default, requests are redirected in a round-robin fashion among the pod replicas in a Ready state (those Kubernetes understands are ready to receive requests), unless you define it differently.

But in some cases, mainly when you are dealing with a web application or any stateful application that handles the concept of a session, you may want the replica that processes the first request to also handle the rest of the requests during the lifetime of the session.

Of course, you could do that simply by routing all traffic to one replica, but then we would lose other features such as load balancing and high availability. So this is usually implemented using Session Affinity or Sticky Session policies, which provide the best of both worlds: the same replica handles all the requests from a given user, while traffic is still distributed across different users.

How Does a Sticky Session Work?

The behavior behind this is relatively easy. Let’s see how it works.

First, you need “something” in your network requests that identifies all the requests belonging to the same session, so the routing component (a role played here by Istio) can determine which replica should handle them.

This “something” can differ depending on your configuration, but it is usually a Cookie or an HTTP header sent with each request. That way, we know which replica should handle all the requests of that specific session.

How does Istio implement Sticky Session support?

When using Istio for this role, we implement it with a specific DestinationRule, which allows us, among other capabilities, to define a traffic policy describing how we want the traffic to be split. To implement the Sticky Session we use the consistentHash feature, which ensures that all requests computing to the same hash are sent to the same replica.

When we define the consistentHash feature, we can specify how this hash will be created; in other words, which components will be used to generate it. This can be one of the following options:

  • httpHeaderName: Uses an HTTP Header to do the traffic distribution
  • httpCookie: Uses an HTTP Cookie to do the traffic distribution
  • httpQueryParameterName: Uses a Query String to do the traffic Distribution.
  • maglev: Uses Google’s Maglev consistent hashing algorithm to determine the destination. You can read more about Maglev in the article from Google.
  • ringHash: Uses a ring-based hashed approach to load balancing between the available pods.

So, as you can see, there are a lot of options. Still, the first three are the most used to implement a sticky session, and usually the HTTP Cookie (httpCookie) option is preferred, as it relies on the standard HTTP mechanism for managing sessions between clients and servers.
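
The idea behind consistentHash can be illustrated with a few lines of Python (a simplified model, not Istio's actual Envoy-side implementation): hashing the cookie value deterministically means the same session always lands on the same replica.

```python
import hashlib

def pick_replica(cookie_value, replicas):
    # Deterministic hash of the session cookie: the same cookie value
    # always maps to the same replica, while different sessions spread
    # across the replica list.
    digest = hashlib.sha256(cookie_value.encode()).hexdigest()
    return replicas[int(digest, 16) % len(replicas)]
```

Note that real consistent-hashing algorithms such as ring hash or Maglev also minimize remapping when replicas are added or removed, which a plain modulo does not.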

Sticky Session Implementation Sample using TIBCO BW

We will define a very simple TIBCO BW workload implementing a REST service that serves a GET reply with a hardcoded value. To simplify the validation process, the application logs the hostname of the pod, so we can quickly see which replica handles each request:

We deploy this in our Kubernetes cluster and expose it using a Kubernetes Service; in our case, the name of this Service will be test2-bwce-srv.

On top of that, we apply the Istio configuration, which requires three (3) Istio objects: a Gateway, a VirtualService, and a DestinationRule. As our focus is on the DestinationRule, we will keep the other two objects as simple as possible.

Gateway:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: default-gw
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP

Virtual Service:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: test-vs
spec:
  gateways:
  - default-gw
  hosts:
  - test.com
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: test2-bwce-srv
        port:
          number: 8080

And finally, the DestinationRule will use an httpCookie that we will name ISTIOID, as you can see in the snippet below:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: default-sticky-dr
  namespace: default
spec:
  host: test2-bwce-srv.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpCookie:
          name: ISTIOID
          ttl: 60s

Now we can start our test. After launching the first request, we get a new cookie, generated by Istio itself, shown in the Postman response window:

This request has been handled by one of the available replicas of the service, as you can see here:

All subsequent requests from Postman already include the cookie, and all of them are handled by the same pod:

Meanwhile, the other replica's log is empty, as all the requests have been routed to the first pod.

Summary

In this article we covered the reasons behind the need for sticky sessions in Kubernetes workloads and how to achieve them using the capabilities of the Istio Service Mesh. I hope this helps you implement this configuration on your workloads, whether you need it today or in the future.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Mastering Istio ServiceEntry: Connect Your Service Mesh to APIs

What Is An Istio ServiceEntry?

An Istio ServiceEntry is the way to define an endpoint that doesn't belong to the Istio service registry. Once the ServiceEntry is part of the registry, you can define rules and enforce policies for it as if it belonged to the mesh.

Istio ServiceEntry answers a question you have probably asked several times when using a service mesh: how can I do the same magic with external endpoints that I can do when everything is under my service mesh scope? Istio ServiceEntry objects provide precisely that:

A way to have an extended mesh managing another kind of workload or, even better, in Istio’s own words:

ServiceEntry enables adding additional entries into Istio’s internal service registry so that auto-discovered services in the mesh can access/route to these manually specified services.

These services could be external to the mesh (e.g., web APIs) or mesh-internal services that are not part of the platform’s service registry (e.g., a set of VMs talking to services in Kubernetes).

What are the main capabilities of Istio ServiceEntry?

Here you can see a sample of the YAML definition of a Service Entry:

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-svc-redirect
spec:
  hosts:
  - wikipedia.org
  - "*.wikipedia.org"
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: NONE

In this case, we have an external-svc-redirect ServiceEntry object that handles all calls going to wikipedia.org. We define the port and protocol to be used (TLS on 443) and classify this service as external to the mesh (MESH_EXTERNAL), as it is an external web page.

You can also specify more details inside the ServiceEntry configuration. For example, you can define a hostname or IP and translate it to a different hostname and port, because you can also specify the resolution mode for this specific ServiceEntry. In the snippet above, the resolution field has the value NONE, meaning no particular resolution will be made. The other valid values are the following:

  • NONE: Assume that incoming connections have already been resolved (to a specific destination IP address).
  • STATIC: Use the static IP addresses specified in endpoints as the backing instances associated with the service.
  • DNS: Attempt to resolve the IP address by querying the ambient DNS asynchronously.
  • DNS_ROUND_ROBIN: Attempt to resolve the IP address by querying the ambient DNS asynchronously. Unlike DNS, DNS_ROUND_ROBIN only uses the first IP address returned when a new connection needs to be initiated, without relying on the complete results of DNS resolution, and references made to hosts will be retained even if DNS records change frequently. This eliminates draining connection pools and connection cycling.

To define the target of the ServiceEntry, you need to specify its endpoints by using a WorkloadEntry object. To do that, you need to provide the following data:

  • address: Address associated with the network endpoint without the port.
  • ports: Set of ports associated with the endpoint
  • weight: The load balancing weight associated with the endpoint.
  • locality: The locality associated with the endpoint. A locality corresponds to a failure domain (e.g., country/region/zone).
  • network: Network enables Istio to group endpoints resident in the same L3 domain/network.
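The fields above can be combined in a ServiceEntry that uses STATIC resolution with inline endpoints. The following is a minimal sketch: the hostname, IP addresses, ports, and weights are hypothetical illustrations, not values from the original article.

```yaml
# Sketch only: host, addresses, ports, and weights are hypothetical.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-db
spec:
  hosts:
  - db.example.internal        # hypothetical external hostname
  location: MESH_EXTERNAL
  ports:
  - number: 5432
    name: tcp-db
    protocol: TCP
  resolution: STATIC           # use the IPs listed under endpoints
  endpoints:                   # WorkloadEntry-style endpoint definitions
  - address: 10.0.0.11
    ports:
      tcp-db: 5432
    weight: 80                 # relative load-balancing weight
    locality: eu-west/zone-a   # failure domain of this endpoint
  - address: 10.0.0.12
    ports:
      tcp-db: 5432
    weight: 20
```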

What Can You Do With Istio ServiceEntry?

The number of use cases is enormous. Because a ServiceEntry behaves much like a service with a VirtualService defined, you can apply any DestinationRule to it to configure load balancing, a protocol switch, or any other logic that can be done with the DestinationRule object. The same applies to the rest of the Istio CRDs, such as RequestAuthentication and PeerAuthentication, among others.

You can also see a graphical representation of the ServiceEntry inside Kiali, a visualization tool for the Istio Service Mesh, as shown in the picture below:

Understanding Istio ServiceEntry: How to Extend Your Service Mesh to External Endpoints

As you can see, an extended mesh with endpoints outside the Kubernetes cluster is becoming more common with the explosion of available clusters and the rise of hybrid environments, where you need to manage clusters of different topologies without losing the centralized, policy-based network management that the Istio Service Mesh provides to your platform.

Secure Your Services with Istio: A Step-by-Step Guide to Setting up Istio TLS Connections

Introduction

Istio TLS configuration is one of the essential features we get when we enable a Service Mesh. Istio Service Mesh provides many features to define, in a centralized and policy-driven way, how transport security, among other characteristics, is handled across the different workloads you have deployed on your Kubernetes cluster.

One of the main advantages of this approach is that your applications can focus on the business logic they need to implement. These security aspects can be externalized and centralized without requiring additional effort in each application you have deployed. This is especially relevant if you are following a polyglot approach (as you should) across your Kubernetes cluster workloads.

So, this time we’re going to have our applications handle plain HTTP traffic for both internal and external communication and, depending on where the traffic is going, we will force that connection to be TLS without the workload needing to be aware of it. Let’s see how we can enable this Istio TLS configuration.

Scenario View

We will use this picture you can see below to keep in mind the concepts and components that will interact as part of the different configurations we will apply to this.

  • We will use the ingress gateway to handle all incoming traffic to the Kubernetes cluster and the egress gateway to handle all outcoming traffic from the cluster.
  • We will have a sidecar container deployed in each application to handle the communication from the gateways or the pod-to-pod communication.

To simplify the testing applications, we will use the default sample applications Istio provides, which you can find here.

How to Expose TLS in Istio?

This is the easiest part: all the incoming communication you receive from the outside enters the cluster through the Istio Ingress Gateway, so it is this component that needs to handle the TLS connection and then use the usual security approach to talk to the pod exposing the logic.

By default, the Istio Ingress Gateway already exposes a TLS port, as you can see in the picture below:

Secure Your Services with Istio: A Step-by-Step Guide to Setting up Istio TLS Connections

So we will need to define a Gateway that receives all this traffic over HTTPS and redirects it to the pods, as you can see here:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway-https
  namespace: default
spec:
  selector:
    istio: ingressgateway
  servers:
    - hosts:
        - '*'
      port:
        name: https
        number: 443
        protocol: HTTPS
      tls:
        mode: SIMPLE # enables HTTPS on this port
        credentialName: httpbin-credential 

As we can see, it is a straightforward configuration: we just add the HTTPS port on 443 and provide the TLS configuration, referencing a Kubernetes secret (httpbin-credential) that holds the server certificate.
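For the SIMPLE mode to work, the secret referenced by credentialName must exist in the same namespace as the ingress gateway (istio-system by default). The following is a sketch of what that TLS secret could look like; the certificate and key values are placeholders you would generate yourself (for example, with openssl):

```yaml
# Sketch: the TLS secret referenced by credentialName in the Gateway.
# It must live in the namespace of the ingress gateway (istio-system).
# The data values below are placeholders, not real material.
apiVersion: v1
kind: Secret
metadata:
  name: httpbin-credential
  namespace: istio-system
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
```

You can also create the same secret directly from certificate files with `kubectl create secret tls httpbin-credential --cert=tls.crt --key=tls.key -n istio-system`.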

And with that, we can already reach the same pages using SSL:

Secure Your Services with Istio: A Step-by-Step Guide to Setting up Istio TLS Connections

How To Consume SSL from Istio?

Now that we have handled a TLS incoming request without the application knowing anything, we will go one step further and tackle the most challenging configuration. We will set up a TLS/SSL connection for any outgoing communication that leaves the Kubernetes cluster, again without the application knowing anything about it.

To do so, we will use one of the Istio concepts we have already covered in a specific article: the Istio ServiceEntry, which allows us to define an external endpoint so it can be managed inside the mesh.

Here we can see the Wikipedia endpoint added to the Service Mesh registry:

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: se-app
  namespace: default
spec:
  hosts:
  - wikipedia.org
  ports:
  - name: https
    number: 443
    protocol: HTTPS
  resolution: DNS

Once we have configured the ServiceEntry, we can define a DestinationRule to force all connections to wikipedia.org to use TLS:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: tls-app
  namespace: default
spec:
  host: wikipedia.org
  trafficPolicy:
    tls:
      mode: SIMPLE

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Kiali Explained: Observability and Traffic Visualization for Istio Service Mesh

What Is Kiali?

Kiali is an open-source project that provides observability for your Istio service mesh. Developed by Red Hat, Kiali helps users understand the structure and behavior of their mesh and any issues that may arise.

Kiali provides a graphical representation of your mesh, showing the relationships between the various service mesh components, such as services, virtual services, destination rules, and more. It also displays vital metrics, such as request and error rates, to help you monitor the health of your mesh and identify potential issues.

What Are Kiali’s Main Capabilities?

One of the critical features of Kiali is its ability to visualize service-to-service communication within a mesh. This lets users quickly see how services are connected and how requests are routed through the mesh. This is particularly useful for troubleshooting, as it can help you quickly identify problems with service communication, such as misconfigured routing rules or slow response times.

Kiali 101: Understanding and Utilizing this Essential Service Mesh Management Tool

Kiali also provides several tools for monitoring the health of your mesh. For example, it can alert you to potential problems, such as a high error rate or a service not responding to requests. It also provides detailed tracing information, allowing you to see the exact path a request took through the mesh and where any issues may have occurred.

In addition to its observability features, Kiali provides several other tools for managing your service mesh. For example, it includes a traffic management module, which allows you to control the flow of traffic through your mesh easily, and a configuration management module, which helps you manage and maintain the various components of your mesh.

Overall, Kiali is an essential tool for anyone using an Istio service mesh. It provides valuable insights into the structure and behavior of your mesh, as well as powerful monitoring and management tools. Whether you are just starting with Istio or are an experienced user, Kiali can help ensure that your service mesh runs smoothly and efficiently.

What are the main benefits of using Kiali?

The main benefits of using Kiali are:

  • Improved observability of your Istio service mesh. Kiali provides a graphical representation of your mesh, showing the relationships between different service mesh components and displaying key metrics. This allows you to quickly understand the structure and behavior of your mesh and identify potential issues.
  • Easier troubleshooting. Kiali’s visualization of service-to-service communication and detailed tracing information make it easy to identify problems with service communication and pinpoint the source of any issues.
  • Enhanced traffic management. Kiali includes a traffic management module allowing you to control traffic flow through your mesh easily.
  • Improved configuration management. Kiali’s configuration management module helps you manage and maintain the various components of your mesh.

How To Install Kiali?

There are several ways to install Kiali as part of your Service Mesh deployment, the preferred option being the Operator model available here.

You can install this operator using Helm or OperatorHub. To install it using Helm Charts, you need to add the following repository using this command:

helm repo add kiali https://kiali.org/helm-charts

Remember that once you add a new repo, you need to run the following command to update the available charts:

helm repo update

Now, you can install it using the helm install command, as in the following sample:

helm install \
    --set cr.create=true \
    --set cr.namespace=istio-system \
    --namespace kiali-operator \
    --create-namespace \
    kiali-operator \
    kiali/kiali-operator

If you prefer going down the OperatorHub route, you can use the following URL. By clicking on the Install button, you will see the steps to get the component installed in your Kubernetes environment.

Kiali 101: Understanding and Utilizing this Essential Service Mesh Management Tool

In case you want a simple installation of Kiali, you can also use the sample YAML available inside the Istio installation folder using the following command:

kubectl apply -f $ISTIO_HOME/samples/addons/kiali.yaml

How does Kiali work?

Kiali is just the graphical representation of the information available about how the service mesh works. It is not Kiali’s responsibility to store those metrics, but to retrieve them and draw them in a way that is relevant to the user of the tool.

Prometheus stores this data, so Kiali uses the Prometheus REST API to retrieve the information and draw it graphically, as you can see here:

  • The graph shows several relevant parts: the selected namespace and, inside it, the different apps (an app is detected when the workload has a label named app). Inside each app, it shows the different services and pods with distinct icons (triangles for the services and squares for the pods).
  • It also shows how traffic reaches the cluster through the different ingress gateways, and how it leaves the cluster if any egress gateway is configured.
  • It shows the kind of traffic being handled and the error rates per protocol (TCP, HTTP, and so on), as you can see in the picture below. The protocol is inferred from a naming convention on the service’s port name, with the expected format: protocol-suffix.
Kiali 101: Understanding and Utilizing this Essential Service Mesh Management Tool
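That port-naming convention can be sketched in a plain Kubernetes Service; the service name, labels, and port numbers below are illustrative:

```yaml
# Sketch: Istio infers the protocol from the prefix of each port name.
apiVersion: v1
kind: Service
metadata:
  name: sample-app          # hypothetical service
spec:
  selector:
    app: sample-app
  ports:
  - name: http-web          # protocol "http", suffix "web"
    port: 80
  - name: tcp-ems           # protocol "tcp", suffix "ems"
    port: 7222
```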

Can Kiali be used with any service mesh?

No, Kiali is specifically designed for use with Istio service meshes.

It provides observability, monitoring, and management tools for Istio service meshes but is incompatible with other service mesh technologies.

If you use a different service mesh, you will need to find an additional tool for managing and monitoring it.

Are there other alternatives to Kiali?

Even though there are no natural alternatives to Kiali for visualizing your workloads and traffic through the Istio Service Mesh, you can use other tools to grab the metrics that feed Kiali and build custom visualizations using more generic tools such as Grafana, among others.

Let’s talk about tools similar to Kiali for other Service Meshes, such as Linkerd, Consul Connect, or even Kuma. Most follow a different approach where the visualization part is not a separate “project” but relies on a standard visualization tool. That gives you much more flexibility but, at the same time, it lacks most of the excellent traffic visualization that Kiali provides, such as graph views or the ability to modify the traffic directly from the graph view.


Integrating Istio with TIBCO BWCE Applications (Service Mesh and Canary Releases)

Introduction

Service Mesh is one of the “greatest new things” in our PaaS environments. Whether you’re working with K8S, Docker Swarm, or a pure cloud setup with EKS on AWS, you’ve heard of it and probably tried to understand how to use this new thing that has so many advantages, because it provides a lot of options for handling communication between components without impacting the logic of those components. And if you’ve heard of Service Mesh, you’ve heard of Istio as well, because it is the “flagship option” at the moment: even when other options like Linkerd or AWS App Mesh are also great choices, Istio is the most used Service Mesh today.

You have probably seen some examples of how to integrate Istio with your open-source-based developments, but what happens if you have a lot of BWCE or BusinessWorks applications… can you use all this power? Or are you going to be banned from this new world?

Do not panic! This article is going to show you how easy you can use Istio with your BWCE application inside a K8S cluster. So, let the match…. BEGIN!

Scenario

The scenario that we’re going to test is quite simple: a consumer–provider approach. We’re going to use a simple SOAP/HTTP Web Service exposed by a backend to show that this works not only with fancy REST APIs but with any HTTP traffic that we could generate at the BWCE application level.

Integrating Istio with TIBCO BWCE Applications (Service Mesh and Canary Releases)

So, we are going to invoke a service that will request a response from its provider and give us that response back. That’s pretty easy to set up using pure BWCE without anything else.

All code related to this example is available for you in the following GitHub repo: Go get the code!

Steps

Step 1 Install Istio inside your Kubernetes Cluster

In my case, I’m using the Kubernetes cluster inside my Docker Desktop installation, so you can do the same or use your real Kubernetes cluster; that’s up to you. The first step, anyway, is to install Istio. To do that, nothing better than following the steps given in the istio-workshop that you can find here: https://polarsquad.github.io/istio-workshop/install-istio/ (UPDATE: No longer available)

Once you’ve finished, you should see the following scenario in your Kubernetes cluster, so please check that the result is the same using the following commands:

kubectl get pods -n istio-system

You should see that all pods are Running as you can see in the picture below:

Integrating Istio with TIBCO BWCE Applications (Service Mesh and Canary Releases)

kubectl -n istio-system get deployment -listio=sidecar-injector

You should see that there is one instance (CURRENT = 1) available.

Integrating Istio with TIBCO BWCE Applications (Service Mesh and Canary Releases)

kubectl get namespace -L istio-injection

You should see that ISTIO-INJECTION is enabled for the default namespace as the image shown below:

Integrating Istio with TIBCO BWCE Applications (Service Mesh and Canary Releases)

Step 2 Build BWCE Applications

Now that we have all the needed infrastructure at the Istio level, we can start building our applications, and to do that we don’t have to do anything different in our BWCE applications. In the end, they’re just two applications that talk using HTTP as the protocol, so nothing specific.

This is important because, when we talk about Service Mesh and Istio with customers, the same questions always arise: Is Istio supported in BWCE? Can we use Istio as a protocol to communicate between our BWCE applications? They expect that some palette or custom plugin should exist to support Istio. But none of that is needed at the application level. And that applies not only to BWCE but also to any other technology like Flogo, or even open-source technologies, because in the end Istio (together with Envoy, the other part of this technology that we usually avoid mentioning when we talk about Istio) works in proxy mode, using one of the most common container patterns: the “sidecar pattern”.

So, the technology that exposes, implements, or consumes the service knows nothing about all this “magic” being executed in the middle of the communication process.

We’re going to define the following properties as environment variables, just as we would if we were not using Istio:

Provider application:

  • PROVIDER_PORT → Port where the provider is going to listen for incoming requests.

Consumer application:

  • PROVIDER_PORT → Port where the provider host will be listening.
  • PROVIDER_HOST → Host or FQDN (aka K8S service name) where provider service will be exposed.
  • CONSUMER_PORT → Port where the consumer service is going to listen for incoming requests.

So, as you can see if you check the code of the BWCE applications, we don’t need to do anything special to support Istio in them.

NOTE: There is one important topic, not related to the Istio integration itself but to how BWCE populates the property BW.CLOUD.HOST, which is never translated to the loopback interface or 0.0.0.0. It’s better to change that variable for a custom one, or to use localhost or 0.0.0.0 to listen on the loopback interface, because that is where the Istio proxy is going to send the requests.

After that, we’re going to create our Dockerfiles for these services, without anything in particular, similar to what you can see here:

Integrating Istio with TIBCO BWCE Applications (Service Mesh and Canary Releases)

NOTE: As a prerequisite, we’re using a BWCE base Docker image named bwce_base.2.4.3, which corresponds to version 2.4.3 of BusinessWorks Container Edition.

And now we build our docker images in our repository as you can see in the following picture:

Integrating Istio with TIBCO BWCE Applications (Service Mesh and Canary Releases)

Step 3: Deploy the BWCE Applications

Now that all the images have been created, we need to generate the artifacts to deploy these applications in our cluster. Once again, nothing special is needed in our YAML files: as you can see in the picture below, we’re going to define a K8S Service and a K8S Deployment based on the images we’ve created in the previous step:

Integrating Istio with TIBCO BWCE Applications (Service Mesh and Canary Releases)
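Since the original screenshot is not reproduced here, the following is a hedged sketch of what such a manifest could look like; the names, image tags, and ports are illustrative, not taken from the repository:

```yaml
# Sketch: Service plus Deployment for the provider application.
# Names, image, and ports are hypothetical examples.
apiVersion: v1
kind: Service
metadata:
  name: provider
spec:
  selector:
    app: provider
  ports:
  - name: http
    port: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: provider-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: provider
      version: v1
  template:
    metadata:
      labels:
        app: provider
        version: v1          # version label used later for routing rules
    spec:
      containers:
      - name: provider
        image: provider:v1
        env:
        - name: PROVIDER_PORT   # same environment variable described above
          value: "8080"
        ports:
        - containerPort: 8080
```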

A similar thing happens with consumer deployment as well as you can see in the image below:

Integrating Istio with TIBCO BWCE Applications (Service Mesh and Canary Releases)

And we can deploy them in our K8S cluster with the following commands:

kubectl apply -f kube/provider.yaml

kubectl apply -f kube/consumer.yaml

Now, you should see the following components deployed. To complete all the components needed in our structure, we’re going to create an ingress to make it possible to execute requests from outside the cluster to those components, and to do that we’re going to apply the following YAML file:

kubectl apply -f kube/ingress.yaml
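The content of ingress.yaml is not shown in the text; a hedged sketch using an Istio Gateway plus a VirtualService (resource names, hosts, and ports are illustrative) could look like this:

```yaml
# Sketch: expose the consumer service through the Istio ingress gateway.
# Names, hosts, and ports are hypothetical examples.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: consumer-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - "*"
    port:
      name: http
      number: 80
      protocol: HTTP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: consumer
spec:
  hosts:
  - "*"
  gateways:
  - consumer-gateway
  http:
  - route:
    - destination:
        host: consumer      # the consumer K8S service
        port:
          number: 8080
```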

And now, after doing that, we’re going to invoke the service inside our SOAPUI project and we should get the following response:

Integrating Istio with TIBCO BWCE Applications (Service Mesh and Canary Releases)

Step 4 — Recap what we’ve just done

Ok, it’s working, and you may think… mmmm, I can get this working without Istio, and I don’t know if Istio is actually doing anything or not…

Ok, you’re right; this may not be as great as you expected but, trust me, we’re just going step by step. Let’s see what’s really happening: instead of a simple request from outside the cluster to the consumer service being forwarded to the backend, what’s happening is a little more complex. Let’s take a look at the image below:

Integrating Istio with TIBCO BWCE Applications (Service Mesh and Canary Releases)

The incoming request from the outside is handled by an Envoy-based ingress controller that executes all the rules defined to choose which service should handle the request; in our case, only the consumer-v1 component is going to do it. The same thing happens in the communication between consumer and provider.

So, we’re getting interceptors in the middle that COULD apply logic to route traffic between our components by deploying rules at runtime, without changing the application. And that is the MAGIC.

Step 5 — Implement Canary Release

Ok, now let’s apply some of this magic to our case. One of the most common patterns we apply when rolling out an update to one of our services is the canary approach. As a quick explanation of what this is:

Canary release is a technique to reduce the risk of introducing a new software version in production by slowly rolling out the change to a small subset of users before rolling it out to the entire infrastructure and making it available to everybody.

Integrating Istio with TIBCO BWCE Applications (Service Mesh and Canary Releases)

If you want to read more about this you can take a look at the full article in Martin Fowler’s blog.

So, now we’re going to make a small change in our provider application that changes the response, to be sure that we’re targeting version two, as you can see in the image below:

Integrating Istio with TIBCO BWCE Applications (Service Mesh and Canary Releases)

Now, we are going to build this application and generate the new image called provider:v2.

But before deploying it using the YAML file called provider-v2.yaml, we’re going to set a rule in our Istio Service Mesh stating that all traffic should be targeted to v1 even when other versions are deployed. To do that, we’re going to deploy the file called default.yaml, which has the following content:

Integrating Istio with TIBCO BWCE Applications (Service Mesh and Canary Releases)
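The default.yaml content is only shown as an image; a hedged sketch of a “route everything to v1” rule, using a DestinationRule with version subsets plus a VirtualService (names and labels are illustrative), could be:

```yaml
# Sketch: send 100% of traffic to the v1 subset, even if v2 pods exist.
# Names and labels are hypothetical examples.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: provider
spec:
  host: provider
  subsets:
  - name: v1
    labels:
      version: v1      # matches the version label on the v1 pods
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: provider
spec:
  hosts:
  - provider
  http:
  - route:
    - destination:
        host: provider
        subset: v1
      weight: 100
```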

So, in this case, what we’re telling Istio is that even if there are other components registered to the service, it should always reply with v1. We can now deploy v2 without any issue, because it is not going to receive any traffic until we decide so. So, now we can deploy v2 with the following command:

kubectl apply -f provider-v2.yaml

And even when we execute the SOAPUI request, we still get a v1 reply, even though we can check in the K8S service configuration that v2 is also bound to that service.

Ok, now we’re going to start the release, sending 10% of the requests to the new version and 90% to the old one. To do that, we’re going to deploy the rule canary.yaml using the following command:

kubectl apply -f canary.yaml

Where canary.yaml has the content shown below:

Integrating Istio with TIBCO BWCE Applications (Service Mesh and Canary Releases)
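The canary.yaml content is also only shown as an image; a hedged sketch of the 90/10 weighted rule (it assumes a DestinationRule already defines v1/v2 subsets matching the deployments’ version labels, and the names are illustrative) could be:

```yaml
# Sketch: weighted canary routing, 90% to v1 and 10% to v2.
# Assumes v1/v2 subsets exist in a DestinationRule for this host.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: provider
spec:
  hosts:
  - provider
  http:
  - route:
    - destination:
        host: provider
        subset: v1
      weight: 90
    - destination:
        host: provider
        subset: v2
      weight: 10
```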

And now, when we try enough times, most of the requests (approximately 90%) get a reply from v1 and only about 10% get a reply from the new version:

Integrating Istio with TIBCO BWCE Applications (Service Mesh and Canary Releases)

Now, we can monitor how v2 is performing without affecting all customers and, if everything goes as expected, keep increasing that percentage until all customers are served by v2.