Why Visual Diff Is Essential for Low-Code Development and Team Collaboration

Helping you excel using low code in distributed teams and parallel development

Most enterprises are exploring low-code/no-code development now that achieving agility across their technology artifacts, from every perspective (development, deployment, and operation), has become the top priority.

This article is part of my comprehensive TIBCO Integration Platform Guide where you can find more patterns and best practices for TIBCO integration platforms.

The benefits of this way of working make this almost a no-brainer decision for most companies. We already covered them in a previous article. Take a look if you have not read it yet.

But we know that all new things come with their own challenges that we need to address and master in order to unleash the full benefits these new paradigms or technologies provide. Much like with cloud-native architecture, we need to be able to adapt.

Sometimes it is not the culture that we need to change; sometimes the technology and the tools also need to evolve to address those challenges and help us on that journey. And this is how Visual Diff came to life.

When you develop using a low-code approach, the whole development process is easier. You combine different blocks that implement the logic you need, and everything is simpler than a bunch of lines of code.

Low-code development approach using TIBCO BusinessWorks.

But we also need to manage all these artifacts in a repository, and repositories are built around source-code development. That means that when you work with those tools, in the end you are not following a "low-code approach" but a source-code approach. Things like merging branches or reviewing the version history to understand the changes become complex.

And they are complex because they are performed by the repository itself, which focuses on file changes and the source code that changes. But one of the great benefits of low-code development is that the developer doesn't need to be aware of the source code generated as part of the visual, faster activity. So, how can we solve that? What can we use to solve it?

Low-code technologies need to step up and take the lead here. For example, this is what TIBCO BusinessWorks has done with the release of its Visual Diff capability.

So, you still have your integration with your source-code repository, and you can do all the processes and activities you usually need in this kind of parallel, distributed development. But you can also see all those activities from a "low-code" perspective.

That means that when I look at the version history, I can see which visual artifacts have been modified. The activities that were added or deleted are shown in a way that is meaningful for low-code development. That closes the loop on how low-code development can enjoy all the advantages of modern source-code repositories and their flows (GitFlow, GitHub Flow, OneFlow, etc.) as well as the advantages of the low-code perspective.

Let's say there are two ways to see how an application has been changed: one is the traditional approach and the other uses the Visual Diff.

Option A: Visual Diff of your processes
Option B: The same processes with a Text Comparison approach

So, based on this evidence, which do you think is easier to understand? Even if you are a true coder, as I am, we cannot deny the ease and benefits of the low-code approach for massive, standardized development in the enterprise world.


Summary

No matter how fast we develop with all the accelerators and frameworks we have, a well-defined low-code application will be faster than any of us. It is the same battle we had in the past with graphical interfaces and mouse control versus the keyboard.

We accept that choosing one or the other is partly a matter of personal preference, but when we need to decide what is more effective and rely on the facts, we cannot be blind to what is in front of us.

I hope you have enjoyed this article. Have a nice day!

Kubernetes Batch Processing with TIBCO BusinessWorks: Jobs, Patterns, and Use Cases

We all know that with the rise of cloud-native development and architectures, Kubernetes-based platforms have become the new standard, all focused on new developments that follow the new paradigms and best practices: microservices, event-driven architectures, shiny new protocols like GraphQL or gRPC, and so on and so forth.

This article is part of my comprehensive TIBCO Integration Platform Guide where you can find more patterns and best practices for TIBCO integration platforms.


Detect Performance Bottlenecks in TIBCO BusinessWorks Container Edition Using Statistics

Usually, when you're developing or running your container application, you will reach a moment when something goes wrong, and not in a way you can solve with your logging system or with testing.

A moment when there is some bottleneck, something not performing as well as you want, and you'd like to take a look inside. And that's what we're going to do: we're going to look inside.

BusinessWorks Container Edition provides such great features for this that you need to use them in your favor; you're going to thank me for the rest of your life. So, I don't want to spend one more minute on this. I'd like to start telling you right now.

The first thing we need to do is get into the OSGi console of the container. To do that, we expose port 8090 from the deployment, as sketched below.

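The original screenshot is not reproduced here, but as a minimal sketch (assuming the phenix-test-project deployment used below; the image tag is hypothetical), the relevant fragment of the Deployment just adds the port to the container spec:

containers:
  - name: phenix-test-project
    image: phenix-test-project:v1   # hypothetical image tag
    ports:
      - containerPort: 8090         # BWCE framework endpoint used by the OSGi commands below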

Now we can forward that port to our host using the port-forward command:

kubectl port-forward deploy/phenix-test-project-v1 8090:8090

Then we can execute an HTTP request to run any OSGi command, like this:

curl -v http://localhost:8090/bw/framework.json/osgi?command=<command>

First, we're going to activate the process statistics:

curl -v http://localhost:8090/bw/framework.json/osgi?command=startpsc

And as you can see, it says that statistics have been enabled for the echo application. So, using that application name, we're going to gather the statistics at the process level:

curl -v http://localhost:8090/bw/framework.json/osgi?command=lpis%20echo

And you can see the statistics at the process level, which include the following metrics:

  • Process metadata (name, parent process, and version)
  • Total instances by status (created, suspended, failed, and executed)
  • Execution time (total, average, min, max, most recent)
  • Elapsed time (total, average, min, max, most recent)

And we can also get the statistics at the activity level.


And with that, you can detect any bottleneck you're facing in your application and be sure which activity or which process is responsible for it, so you can solve it quickly.

Have fun and use the tools at your disposal!

Prometheus TIBCO Monitoring for Containers: Quick and Simple in 5 Minutes!

Prometheus is becoming the new standard for Kubernetes monitoring, and today we are going to cover how to do Prometheus TIBCO monitoring in Kubernetes.

This article is part of my comprehensive TIBCO Integration Platform Guide where you can find more patterns and best practices for TIBCO integration platforms.

We're living in a world of constant change, and this is even more true in the enterprise application world. I won't spend much time talking about things you already know; I'll just say that the microservices architecture approach and PaaS solutions have been a game-changer for all enterprise integration technologies.

This time I'd like to talk about monitoring and the integration capabilities we have for using Prometheus to monitor our microservices developed with TIBCO technology. I don't want to spend too much time talking about what Prometheus is either, as you probably already know, but in summary: it is an open-source distributed monitoring platform, the second project released by the Cloud Native Computing Foundation (after Kubernetes itself), and it has been established as a de-facto industry standard for monitoring K8S clusters (alongside other options in the market like InfluxDB and so on).

Prometheus has a lot of great features, but one of them is that it has connectors for almost everything, and that's very important today because it is so complicated/unwanted/unusual to define a platform with a single product for the PaaS layer. So today, I want to show you how to monitor your TIBCO BusinessWorks Container Edition applications using Prometheus.

Most of the info I'm going to share is available in the bw-tooling GitHub repo, so you can go there if you need to validate any specific statement.

Ok, are we ready? Let’s start!!

I'm going to assume that we already have a Kubernetes cluster in place and Prometheus installed as well. So, the first step is to enhance the BusinessWorks Container Edition base image to include the Prometheus integration capabilities. To do that, we go to the GitHub repo page and follow these instructions:

  • Download & unzip the prometheus-integration.zip folder.
  • Open TIBCO BusinessWorks Studio and point it to a new workspace.
  • Right-click in Project Explorer → Import… → select Plug-ins and Fragments → select the Import from the directory radio button.
  • Browse to the prometheus-integration folder (unzipped in step 1).
  • Now click Next → select the Prometheus plugin → click the Add button → click Finish. This will import the plugin into Studio.
  • Now we need to create a JAR of this plugin. First, make sure to update com.tibco.bw.prometheus.monitor with ‘.’ (dot) in the Bundle-Classpath field of the META-INF/MANIFEST.MF file.
  • Right-click on Plugin → Export → Export…
  • Select the type as JAR file and click Next.
  • Now click Next → Next → select the radio button to use the existing MANIFEST.MF file and browse to the manifest file.
  • Click Finish. This will generate prometheus-integration.jar

Now, with the JAR created, we need to include it in our own base image. To do that we place the JAR file in the <TIBCO_HOME>/bwce/2.4/docker/resources/addons/jar folder.


And we launch the image build command again from the <TIBCO_HOME>/bwce/2.4/docker folder to update the image (use the version you're running at the moment):

docker build -t bwce_base:2.4.4 .

So, now we have an image with Prometheus support! Great! We're close to the finish; we just need to create an image for our container application. In my case, this is going to be a very simple echo service that you can see here.

And we only need to keep these things in mind when we deploy to our Kubernetes cluster (see the sketch after the list):

  • We should set the BW_PROMETHEUS_ENABLE environment variable to “TRUE”.
  • We should expose port 9095 from the container, to be used by Prometheus for scraping.
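As a minimal sketch, assuming the phenix-test-project names used elsewhere in this article (the image tag is hypothetical), the relevant part of the Deployment could look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: phenix-test-project-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: phenix-test-project
  template:
    metadata:
      labels:
        app: phenix-test-project
    spec:
      containers:
        - name: phenix-test-project
          image: phenix-test-project:v1    # hypothetical image tag
          env:
            - name: BW_PROMETHEUS_ENABLE   # enables the Prometheus exporter
              value: "TRUE"
          ports:
            - containerPort: 9095          # metrics endpoint scraped by Prometheus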

Now, we only need to provide this endpoint to the Prometheus scraper. There are several ways to do that, but we're going to focus on the simplest one.

We need to change prometheus.yml to add the following job:

- job_name: 'bwdockermonitoring'
  honor_labels: true
  static_configs:
    - targets: ['phenix-test-project-svc.default.svc.cluster.local:9095']
      labels:
        group: 'prod'
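The target above points at a Kubernetes Service name; a minimal sketch of such a Service, assuming the same labels as the Deployment fragment shown earlier, could be:

apiVersion: v1
kind: Service
metadata:
  name: phenix-test-project-svc
spec:
  selector:
    app: phenix-test-project
  ports:
    - name: metrics
      port: 9095         # exposed to the Prometheus scraper
      targetPort: 9095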

And after restarting Prometheus, we have all the data indexed in the Prometheus database, ready to be used by any dashboard system.


In this case, I'm going to use Grafana to build a quick dashboard.


Each of these graph components is configured based on the metrics scraped from the Prometheus TIBCO exporter.


OpenTracing in TIBCO BusinessWorks Container Edition: Tracing with Jaeger Explained

Last month, during KubeCon Europe 2019 in Barcelona, OpenTracing announced its merge with the OpenCensus project to create a new standard named OpenTelemetry, which is going to go live in September 2019.

So, I think it would be awesome to take a look at the OpenTracing capabilities we have available in TIBCO BusinessWorks Container Edition.

Today's world is very complex in terms of how our architectures are defined and managed. Concepts from the last few years like containers, microservices, and service mesh give us the option to reach a new level of flexibility, performance, and productivity, but they also come with a management cost we need to deal with.

Years ago architectures were simpler; "service" was a concept that was just starting out, but even then a few issues began to arise regarding monitoring, tracing, logging, and so on. In those days everything was solved with a development framework that all our services included: all of them were developed by the same team with the same technology, so in that framework we could make sure things were handled properly.

Now we rely on standards to do this kind of thing; for tracing, we rely on OpenTracing. I don't want to spend time explaining what OpenTracing is when they have a full Medium account that explains it much better than I ever could, so please take some minutes to read about it.

The only statement I want to make here is the following one:

Tracing is not Logging, and please be sure you understand that.

Tracing is about sampling: it shows how your flows are performing and whether everything is working, but it is not about whether a specific request for a specific customer ID completed correctly. That's logging, not tracing.

So OpenTracing and its different implementations, like Jaeger or Zipkin, are the way we can implement tracing today in a really easy way. And this is not something you can only do in your code-based development language; you can do it with our zero-code tools for developing cloud-native applications, like TIBCO BusinessWorks Container Edition, and that's what I'd like to show you today. So, let the match begin…

The first thing I'd like to do is show you the scenario we're going to implement.


We are going to have two REST services that call each other, and we're going to export all the traces to an external Jaeger component; later we can use its UI to analyze the flow in a graphical and easy way.

So, the first thing we need to do is develop the services, which are going to be quite simple because they are not the main purpose of our scenario.

Once we have our Docker images based on those applications we can start, but before launching our applications we need to launch our Jaeger system. You can read all the info about how to do that in the Jaeger documentation, but in the end we only need to run the following command:

docker run -d --name jaeger -e COLLECTOR_ZIPKIN_HTTP_PORT=9411  -p 5775:5775/udp  -p 6831:6831/udp  -p 6832:6832/udp  -p 5778:5778  -p 16686:16686  -p 14268:14268  -p 9411:9411  jaegertracing/all-in-one:1.8

Now we're ready to launch our applications. As you saw, we didn't do anything strange in our development; it was quite straightforward. The only thing we need to do is add the following environment variables when we launch our containers:

BW_JAVA_OPTS="-Dbw.engine.opentracing.enable=true" -e JAEGER_AGENT_HOST=jaeger -e JAEGER_AGENT_PORT=6831 -e JAEGER_SAMPLER_MANAGER_HOST_PORT=jaeger:5778

And… that's it. We launch our containers with the following commands and wait until the applications are up and running:

docker run -ti -p 5000:5000 --name provider -e BW_PROFILE=Docker -e PROVIDER_PORT=5000 -e BW_LOGLEVEL=ERROR --link jaeger -e BW_JAVA_OPTS="-Dbw.engine.opentracing.enable=true" -e JAEGER_AGENT_HOST=jaeger -e JAEGER_AGENT_PORT=6831 -e JAEGER_SAMPLER_MANAGER_HOST_PORT=jaeger:5778 provider:1.0
docker run --name consumer -ti -p 6000:6000 -e BW_PROFILE=Docker --link jaeger --link provider -e BW_JAVA_OPTS="-Dbw.engine.opentracing.enable=true" -e JAEGER_AGENT_HOST=jaeger -e JAEGER_AGENT_PORT=6831 -e JAEGER_SAMPLER_MANAGER_HOST_PORT=jaeger:5778 -e CONSUMER_PORT=6000 -e PROVIDER_HOST=provider consumer:1.0
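If you prefer to keep all those flags in one place, a minimal docker-compose sketch equivalent to the commands above (assuming the same image tags; containers reach the Jaeger agent over the compose network, so only the UI port is published) might be:

version: "3"
services:
  jaeger:
    image: jaegertracing/all-in-one:1.8
    ports:
      - "16686:16686"   # Jaeger UI
  provider:
    image: provider:1.0
    environment:
      BW_PROFILE: Docker
      PROVIDER_PORT: "5000"
      BW_LOGLEVEL: ERROR
      BW_JAVA_OPTS: "-Dbw.engine.opentracing.enable=true"
      JAEGER_AGENT_HOST: jaeger
      JAEGER_AGENT_PORT: "6831"
      JAEGER_SAMPLER_MANAGER_HOST_PORT: jaeger:5778
    ports:
      - "5000:5000"
  consumer:
    image: consumer:1.0
    environment:
      BW_PROFILE: Docker
      CONSUMER_PORT: "6000"
      PROVIDER_HOST: provider
      BW_JAVA_OPTS: "-Dbw.engine.opentracing.enable=true"
      JAEGER_AGENT_HOST: jaeger
      JAEGER_AGENT_PORT: "6831"
      JAEGER_SAMPLER_MANAGER_HOST_PORT: jaeger:5778
    ports:
      - "6000:6000"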

Once they're running, let's generate some requests! To do that, I'm going to use a SOAPUI project to generate a stable load for 60 seconds.


Now we can go to the Jaeger UI (exposed on port 16686) and, as soon as we click the Search button, we can see our traces.


And then we can zoom into a specific trace to see its detail.


That's pretty amazing, but that's not all: if you inspect the data of these traces in the UI, you can see technical data from your BusinessWorks Container Edition flows.


But… what if you want to add your own custom tags to those traces? You can do that as well! Let me explain how.

Since BusinessWorks Container Edition 2.4.4, you will find a new tab named “Tags” in all your activities, where you can add the custom tags you want the activity to include; for example, a custom ID that is going to be propagated through the whole process.


And if you take a look at the data we have in the system, you can see that all of these traces include that custom tag.


You can take a look at the code in the following GitHub repository:

Kubernetes Liveness and Readiness Probes for TIBCO BusinessWorks (BWCE)

Introduction

Probes are how we tell Kubernetes that everything inside the pod is working as expected. Kubernetes has no way to know what's happening inside at a fine-grained level, and no way to know whether each container is healthy; that's why it needs help from the container itself.

Imagine that you're the Kubernetes controller and you have eight different pods: one with a Java batch application, another with a Redis instance, another with a Node.js application, another with a Flogo microservice (Note: haven't you heard about Flogo yet? Take some minutes to learn about one of the next new things you can use now to build your cloud-native applications), another with an Oracle database, another with a Jetty web server, and finally another with a BusinessWorks Container Edition application. How can you tell that every single component is working fine?

First, you might think you can do it with the ENTRYPOINT of your Dockerfile: you only specify one command to run inside each container, so if that process is running, everything is healthy, right? Ok… fair enough…

But is this always true? Does a running process at the OS/container level mean that everything is working fine? Let's think about the Oracle database for a minute. Imagine that you have an issue with the shared memory and it stays in an initializing status forever. K8S is going to check the command, find that it is running, and say to the whole cluster: Ok! Don't worry! The database is working perfectly, go ahead and send your queries to it!!


This could happen with similar components like a web server, or even with an application itself, but it is especially common when you have servers that can host deployments on them, like BusinessWorks Container Edition itself. And that's why this is very important for us as developers and even more important for us as administrators. So, let's start!

The first thing we're going to do is build a BusinessWorks Container Edition application. As this is not the main purpose of this article, we're going to use the same ones I created for the BusinessWorks Container Edition — Istio integration, which you can find here.

So, this is quite a simple application that exposes a SOAP Web Service. All applications in BusinessWorks Container Edition (as well as in BusinessWorks Enterprise Edition) have their own status, so you can ask them whether they're Running or not; that's something the BusinessWorks Container internal “engine” knows (NOTE: we're going to use the word engine to simplify when talking about the internals of BWCE. In detail, the component that knows the status of the application is the internal AppNode the container starts, but let's keep it simple for now).

Kubernetes Probes

Kubernetes has the "probe" concept to perform health checks on your containers. This is done by configuring liveness probes or readiness probes.

  • Liveness probe: Kubernetes uses liveness probes to know when to restart a Container. For example, liveness probes could catch a deadlock, where an application is running, but unable to make progress.
  • Readiness probe: Kubernetes uses readiness probes to know when a Container is ready to start accepting traffic. A Pod is considered ready when all of its Containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancing.

Even though there are two types of probes, for BusinessWorks Container Edition both are handled the same way. The idea is the following: as long as the application is Running you can start sending traffic, and when it is not running we need to restart the container. That makes it simpler for us.

Implementing Probes

Each BusinessWorks Container Edition application that is started has an out-of-the-box way to report whether it is healthy or not. This is done through a special endpoint published by the engine itself:

http://localhost:7777/_ping/

So, if we have a normal BusinessWorks Container Edition application deployed on our Kubernetes cluster, as we had for the Istio integration, we get logs similar to these:

Starting traces of a BusinessWorks Container Edition application

As you can see, the logs say that the application is started. Since we can't launch a curl request from inside the container (we haven't exposed port 7777 to the outside yet, and curl is not installed in the base image), the first thing we're going to do is expose it to the rest of the cluster.

To do that, we change the Deployment.yml file we have been using to this one:

Deployment.yml file with the 7777 port exposed
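The file itself appears only as a screenshot; the relevant change is just adding the port to the container spec, roughly like this (the container name and image are taken from the Istio article's example as an assumption):

containers:
  - name: provider
    image: provider:1.0        # image from the Istio example, assumed here
    ports:
      - containerPort: 7777    # BWCE engine _ping endpoint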

Now, we can go to any container in the cluster that has curl installed (or use any other way to launch a request) and call the endpoint; we get an HTTP 200 code and the message “Application is running”.

Successful execution of the _ping endpoint

NOTE: If you forget the trailing slash and invoke _ping instead of _ping/, you're going to get an HTTP 302 Found code with the final location:

HTTP 302 response when pointing to _ping instead of _ping/

Ok, let's see what happens if we now stop the application. To do that we're going to go inside the container and use the OSGi console.

To do that once you’re inside the container you execute the following command:

ssh -p 1122 equinox@localhost

It is going to ask for credentials; use the default password ‘equinox’. After that it gives you the chance to create a new user, and you can use whatever credentials work for you. In my example, I'm going to use admin / adminadmin (NOTE: the minimum length for a password is eight (8) characters).


And now, we're in. This gives us the option to execute several commands; as this is not the main topic for today, I'm going to skip the explanation, but you can take a look at this link with all the info about this console.

If we execute frwk:la, it shows the applications deployed; in our case only one, as it should be in a BusinessWorks Container Edition application:


To stop it, we execute the following command to list all the OSGi bundles currently running in the system:

frwk:lb

Now, we find the bundles that belong to our application: at least two bundles (one per BW module and another for the application).

Bundles inside the BusinessWorks Container application

And now we can stop them using felix:stop <ID>, so in my case I need to execute the following commands:

stop “603”

stop “604”

Commands to stop the bundles that belong to the application

And now the application is stopped:

OSGi console showing the application as Stopped

So, if we now launch the same curl command we executed before, we get the following output:

Failed execution of the _ping endpoint when the application is stopped

As you can see, we get an HTTP 500 error, which means something is not fine. If we now start the application again using the start bundle command (the equivalent of the stop bundle command we used before) for both bundles of the application, you'll see that the application reports it is Running again:


And the curl command returns the HTTP 200 output it should, with the message “Application is running”.


So, now that we know how the _ping/ endpoint works, we only need to add it to our deployment.yml file in Kubernetes. We modify our deployment file again to be something like this:

Deployment.yml file with liveness and readiness probes configured
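The screenshot aside, here is a minimal sketch of the probe configuration inside the container spec, using the _ping/ endpoint and port described above (the timing values are illustrative assumptions):

livenessProbe:
  httpGet:
    path: /_ping/            # keep the trailing slash to avoid the 302 redirect
    port: 7777
  initialDelaySeconds: 30    # give the AppNode time to start before probing
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /_ping/
    port: 7777
  initialDelaySeconds: 30
  periodSeconds: 10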

NOTE: The initialDelaySeconds parameter is quite important, to make sure the application has the chance to start before the probe begins executing. If you don't set this value, you can get a reboot loop in your container.

NOTE: The example shows port 7777 as an exposed port, but this was only needed for the steps we've done before; it will not be needed in a real production environment.

So now we deploy the YML file again, and once the application is running we try the same approach. But this time, as we have the probes defined, the container is going to be restarted as soon as I stop the application. Let's see!


After the application is stopped, the container is restarted, and because of that we get expelled from inside the container.

So, that's all. I hope this helps you set up your probes; if you need more details, please take a look at the Kubernetes documentation about httpGet probes to see all the configuration options you can apply to them.

Integrating Istio with TIBCO BWCE Applications (Service Mesh and Canary Releases)

Introduction

Service Mesh is one of the "greatest new things" in our PaaS environments. No matter whether you're working with K8S, Docker Swarm, or pure cloud with EKS on AWS, you've heard about, and probably tried to figure out how to use, this new thing with so many advantages: it provides a lot of options for handling communication between components without impacting their logic. And if you've heard of Service Mesh, you've heard of Istio as well, because it is the "flagship option" at the moment; even though alternatives like Linkerd or AWS App Mesh are also great options, Istio is the most used Service Mesh right now.

You have probably seen some examples of how to integrate Istio with your open-source-based developments, but what happens if you have a lot of BWCE or BusinessWorks applications? Can you use all this power? Or are you going to be banned from this new world?

Do not panic! This article is going to show you how easily you can use Istio with your BWCE applications inside a K8S cluster. So, let the match… BEGIN!

Scenario

The scenario we're going to test is quite simple: a plain consumer-provider approach. We're going to use a simple SOAP/HTTP Web Service exposed by a backend to show that this works not only with fancy REST APIs but with any HTTP traffic we could generate at the BWCE application level.


So, we are going to invoke a service that requests a response from its provider and gives us that response back. That's pretty easy to set up using pure BWCE without anything else.

All code related to this example is available for you in the following GitHub repo: Go get the code!

Steps

Step 1: Install Istio inside your Kubernetes Cluster

In my case I'm using the Kubernetes cluster inside my Docker Desktop installation, so you can do the same or use your real Kubernetes cluster; that's up to you. The first step, anyway, is to install Istio. And to do that, nothing better than following the steps given in the istio-workshop that you can find here: https://polarsquad.github.io/istio-workshop/install-istio/ (UPDATE: no longer available)

Once you've finished, we should have the following setup in our Kubernetes cluster, so please check that the result is the same using the following commands:

kubectl get pods -n istio-system

You should see that all pods are Running:


kubectl -n istio-system get deployment -l istio=sidecar-injector

You should see that there is one instance (CURRENT = 1) available.


kubectl get namespace -L istio-injection

You should see that ISTIO-INJECTION is enabled for the default namespace:


Step 2: Build the BWCE Applications

Now that we have all the infrastructure needed at the Istio level, we can start building our applications, and we don't have to do anything different in our BWCE applications. In the end, they're going to be two applications that talk to each other using HTTP, nothing specific.

This is something important, because when we talk about Service Mesh and Istio with customers, the same questions always arise: Is Istio supported in BWCE? Can we use Istio as a protocol to communicate between our BWCE applications? They expect that some palette or custom plugin should exist to support Istio. But none of that is needed at the application level. And that applies not only to BWCE but also to any other technology like Flogo, or even open-source technologies, because in the end Istio (and Envoy, the other part of this technology that we usually forget to mention when we talk about Istio) works in proxy mode using one of the most common container patterns: the "sidecar pattern".

So, the technology exposing, implementing, or consuming the service knows nothing about all this "magic" being executed in the middle of the communication process.

We're going to define the following properties as environment variables, just as we would if we were not using Istio (see the sketch after the list):

Provider application:

  • PROVIDER_PORT → Port where the provider is going to listen for incoming requests.

Consumer application:

  • PROVIDER_PORT → Port where the provider host will be listening.
  • PROVIDER_HOST → Host or FQDN (aka K8S service name) where provider service will be exposed.
  • CONSUMER_PORT → Port where the consumer service is going to listen for incoming requests.
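As a sketch, these properties end up as plain container env entries in the consumer's Deployment; the values and names below are hypothetical:

env:
  - name: CONSUMER_PORT
    value: "6000"            # port the consumer listens on
  - name: PROVIDER_HOST
    value: provider-svc      # K8S service name of the provider (hypothetical)
  - name: PROVIDER_PORT
    value: "5000"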

So, as you can see if you check the code of the BWCE applications, we don't need to do anything special to support Istio in them.

NOTE: There is only one important topic, not related to the Istio integration itself but to how BWCE populates the property BW.CLOUD.HOST, which is never translated to the loopback interface or even 0.0.0.0. So it's better to change that variable for a custom one, or to use localhost or 0.0.0.0, so the application listens on the loopback interface, because that is where the Istio proxy is going to send the requests.

After that, we create the Dockerfiles for these services, with nothing in particular in them.


NOTE: As a prerequisite, we're using the BWCE base Docker image named bwce_base:2.4.3, which corresponds to version 2.4.3 of BusinessWorks Container Edition.

And now we build our Docker images into our repository.


Step 3: Deploy the BWCE Applications

Now, with all the images created, we need to generate the artifacts to deploy these applications in our cluster. Once again, nothing special in our YAML files: we're going to define a K8S Service and a K8S Deployment based on the images we created in the previous step, as sketched below.

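The YAML appears in the original only as a screenshot, so here is a minimal sketch of such a Service plus Deployment pair; all names, ports, and the version label (which Istio uses later to define subsets) are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: provider-svc
spec:
  selector:
    app: provider
  ports:
    - port: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: provider-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: provider
      version: v1
  template:
    metadata:
      labels:
        app: provider
        version: v1          # label Istio subsets can route on
    spec:
      containers:
        - name: provider
          image: provider:v1
          env:
            - name: PROVIDER_PORT
              value: "5000"
          ports:
            - containerPort: 5000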

A similar thing happens with the consumer deployment.


And we can deploy them in our K8S cluster with the following commands:

kubectl apply -f kube/provider.yaml

kubectl apply -f kube/consumer.yaml

Now, you should see these components deployed. To complete all the components needed in our structure, we're going to create an Ingress to make it possible to execute requests from outside the cluster, and to do that we apply the following YAML file:

kubectl apply -f kube/ingress.yaml

And now, after doing that, we can invoke the service from our SOAPUI project and we should get a successful response.


Step 4: Recap what we've just done

Ok, it's working, and you may think: mmmm, I can get this working without Istio, and I don't know if Istio is actually doing anything or not…

Ok, you're right, this may not be as great as you expected but, trust me, we're just going step by step. Let's see what's really happening: instead of a simple request from outside the cluster to the consumer service that is then forwarded to the backend, what's happening is a little more complex.


The incoming request from the outside is handled by an Envoy ingress controller that executes all the rules defined to choose which service should handle the request; in our case, only the consumer-v1 component is going to do it, and the same thing happens in the communication between consumer and provider.

So, we have interceptors in the middle that COULD apply logic to route traffic between our components, deploying rules at runtime without changing the applications, and that is the MAGIC.

Step 5: Implement a Canary Release

Ok, now let's apply some of this magic to our case. One of the most common patterns when rolling out an update to one of our services is the canary approach. A quick explanation of what this is:

Canary release is a technique to reduce the risk of introducing a new software version in production by slowly rolling out the change to a small subset of users before rolling it out to the entire infrastructure and making it available to everybody.


If you want to read more about this you can take a look at the full article in Martin Fowler’s blog.

So, now we're going to make a small change in our provider application that changes the response, to be sure that we're targeting version two.


Now, we are going to build this application and generate the new image called provider:v2.

But before we deploy it using the YAML file called provider-v2.yaml, we're going to set a rule in our Istio Service Mesh so that all traffic keeps being targeted to v1 even when other versions are deployed. To do that we deploy the file called default.yaml:

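The file content is shown in the original only as a screenshot; a minimal sketch, assuming the networking.istio.io/v1alpha3 API and the hypothetical names from the earlier sketch, might be a DestinationRule defining the subsets plus a VirtualService pinning all traffic to v1:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: provider
spec:
  host: provider-svc
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: provider
spec:
  hosts:
    - provider-svc
  http:
    - route:
        - destination:
            host: provider-svc
            subset: v1       # all traffic pinned to v1, even if v2 is deployed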

So, in this case, what we're telling Istio is that even if there are other components registered to the service, it should always reply with v1. That way we can deploy v2 without any issue, because it is not going to receive any traffic until we decide so. Now we can deploy v2 with the following command:

kubectl apply -f provider-v2.yaml

And even when we execute the SOAPUI request, we still get a v1 reply, even though we can check in the K8S service configuration that v2 is also bound to that service.

Ok, now we're going to start the release, sending 10% of the requests to the new version and 90% to the old one. To do that we deploy the rule canary.yaml using the following command:

kubectl apply -f canary.yaml

Where canary.yaml has the following content:

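Again, the screenshot aside, a minimal sketch of the weighted route, under the same assumptions as above, could be:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: provider
spec:
  hosts:
    - provider-svc
  http:
    - route:
        - destination:
            host: provider-svc
            subset: v1
          weight: 90         # 90% of requests stay on v1
        - destination:
            host: provider-svc
            subset: v2
          weight: 10         # 10% go to the new version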

And now, when we try enough times, we see that most of the requests (approximately 90%) reply with v1 and only around 10% reply from the new version.


Now we can monitor how v2 is performing without affecting all customers, and if everything goes as expected we can keep increasing that percentage until all customers are being served by v2.