#TIBFAQS Enabling Remote Debugging for TIBCO BusinessWorks Application on Kubernetes

Provide more agility to your troubleshooting efforts by debugging exactly where the error is happening using Remote Debugging techniques

Photo by Markus Winkler on Unsplash

The container revolution has provided a lot of benefits, as we have discussed in depth in other articles, and at the same time it has introduced some new challenges that we need to tackle.

All the agility that we now have in the hands of our developers needs to be extended to maintenance and fixes as well; we need to be agile there too. We know the main complaints in this area: “It works in my environment”, “With the data set that I have, I couldn’t see the issue”, or “I couldn’t reproduce the error” are sentences that we hear over and over, and they delay the resolution of errors and improvements. Even when the solution is simple, we struggle to get a real scenario to test.

And here is where Remote Debugging comes in. Remote Debugging is, just as its name clearly states, the ability to debug something that is not local but remote. Since its conception it has been focused on mobile development, because no matter how good the simulator is, you will always need to test on a real device to make sure everything is working properly.

So this is the same concept, but applied to a container: we have a TIBCO BusinessWorks application running on Kubernetes, and we want to debug it as if it were running locally, as shown in the image above. To be able to do that, we need to follow these steps:

Enabling Remote Debugging in the pod

The first step is to enable the remote debug option in the application. To do that, we need to use the internal API that BusinessWorks provides and execute the following from inside the container:

curl -XPOST "http://localhost:8090/bw/bwengine.json/debug/?interface=0.0.0.0&port=5554&engineName=Main"

In case we do not have a tool like curl or wget inside the container to hit a URL, we can always use the port-forward strategy to make port 8090 of the pod accessible, using a command similar to the one below:

kubectl port-forward hello-world-test-78b6f9b4b-25hss 8090:8090

And then we can hit it from our local machine to enable remote debugging:
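Putting both steps together, the sequence could look like the sketch below (the pod name, port, and engine name are the ones used in this example; adjust them to your own deployment):

```shell
# Forward the admin port in one terminal...
kubectl port-forward hello-world-test-78b6f9b4b-25hss 8090:8090

# ...and enable the debug endpoint from another terminal. Note the quotes,
# so the shell does not interpret the '&' characters in the query string:
curl -XPOST "http://localhost:8090/bw/bwengine.json/debug/?interface=0.0.0.0&port=5554&engineName=Main"
```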

Make the Debug Port accessible to the Studio

To do the remote debugging, we need to be able to connect our local TIBCO Business Studio to the specific pod that is executing the load, and to do that, we need to have access to the debug port. We have mainly two options, shown in the subsections below: exposing the port at the pod level, and port-forwarding.

Expose the port at the Pod Level

We need to have the debug port open in our pod. To do that, we define an additional port that is not in use by the application and is not the default administration port (8090). For example, in my case I will use 5554 as the debug port, so I define another port to be exposed.

Definition of the debug port as a Service
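As a sketch, a Service exposing the debug port could look like the manifest below. All names and labels here are illustrative; the selector must match the labels of your own deployment:

```yaml
# Hypothetical Service exposing the BW debug port (5554) of the pod
apiVersion: v1
kind: Service
metadata:
  name: hello-world-test-debug
spec:
  type: ClusterIP
  selector:
    app: hello-world-test   # must match your deployment's pod labels
  ports:
    - name: debug
      port: 5554
      targetPort: 5554
```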

Port-Forwarding Option

If we do not want to expose the debug port all the time (it is not going to be used unless we are executing a remote debug session), we have another option: do a port-forward to the debug port on our local machine.

kubectl port-forward hello-world-test-78b6f9b4b-cctgh 5554:5554

Connecting to TIBCO Business Studio

Now that we have everything ready, we need to connect our local TIBCO Business Studio to the pod, and to do that, we need to follow these steps:

Go to Run → Debug Configurations and select the Remote BusinessWorks Application option:

Selection of the Remote BusinessWorks application option in the Debug Configuration

And now we need to provide the connection details. In this case, we will use localhost and port 5554 and click the Debug button.

Setting the connection properties for the Remote Debugging

From that moment on, we will have a connection established between both environments: the pod running on our Kubernetes cluster and our local TIBCO Business Studio. And as soon as we hit the container, we can see the execution in our local environment:

Remote Debugging execution from our TIBCO Business Studio instance

Summary

I hope you find this interesting, and if you are facing this issue now, you have the information you need not to be stopped by it. If you would like to submit your questions, feel free to use one of the following options:

  • Twitter: You can send me a mention at @alexandrev on Twitter, send me a DM, or even just use the hashtag #TIBFAQS, which I will monitor.
  • Email: You can send me an email to alexandre.vazquez at gmail.com with your question.
  • Instagram: You can send me a DM on Instagram at @alexandrev

#TIBFAQS: TIBCO BW Configuration at Runtime


Discover how the OSGi lcfg command can help you be sure of the configuration at runtime.

Photo by Ferenc Almasi on Unsplash

Knowing the TIBCO BW configuration at runtime has become critical, as you always need to know if the latest changes have been applied, or you may just want to check the specific value of a Module Property as part of your development.

When we are talking about applications deployed on the cloud, one of the key things is Configuration Management. Especially if we add things like Kubernetes, containers, or external Configuration Management Systems into the mix, things get tricky.

The usual setup for configuration management in a Kubernetes environment is the use of ConfigMaps or Spring Cloud Config.

When you can upload the configuration in a step separate from deploying the application, you can get into a situation where you are not sure about the running configuration that a BusinessWorks application has.

To check the TIBCO BW configuration, there is an easy way to know exactly the current values:

  • We just need to get inside the container to be able to access the internal OSGi console, which allows us to execute administrative commands.
  • We have spoken about that API before, but in case you would like to take a deeper look, you just need to check this link:
  • And one of those commands is lcfg, which lets us know which configuration is being used by the running application:
curl localhost:8090/bw/framework.json/osgi?command=lcfg

With an output similar to this:

Sample output for the lcfg command of a Running BusinessWorks Container Application

Summary

I hope you find this interesting, and if you are facing this issue now, you have the information you need not to be stopped by it. If you would like to submit your questions, feel free to use one of the following options:

  • Twitter: You can send me a mention at @alexandrev on Twitter, send me a DM, or even just use the hashtag #TIBFAQS, which I will monitor.
  • Email: You can send me an email to alexandre.vazquez at gmail.com with your question.
  • Instagram: You can send me a DM on Instagram at @alexandrev

#TIBFAQS: TIBCO BW Impaired Status: How to solve it?


Learn the main reasons behind an Impaired status and how you can perform troubleshooting to identify and solve the error.

Photo by Charles Deluvio on Unsplash

This is another post in the #TIBFAQS series. To remind you what this is all about: you can submit your questions regarding TIBCO development issues or doubts, and I try to provide an answer here to help the community of TIBCO developers out there.

So, today I am going to start with one of the most common issues when we work with BusinessWorks: when I go to deploy my application, or test it locally, I get this log trace and nothing after it: TIBCO BW Impaired Status.

Impaired status error message

This is one of the usual situations for a junior BusinessWorks developer and one of the reasons you spend so much time troubleshooting. Let’s get some tricks today, so this message will never again stop you on your journey to production.

What is the cause of this error?

This error means that the BusinessWorks runtime is not able to meet all the dependencies among the components required to start. As you probably know, in BusinessWorks each of the application’s components is managed independently, and they reference each other.

For example, the Application depends on the Application Module and the Shared Module. The Application module can have a dependency on a JMS Connection and so on.

Situations that can raise this error

Let’s take a look now at the situations that can raise this error and how to solve them.

1.- Missing module or incompatible versions

One usual situation that can lead to this problem is a missing module or incompatible versions of the modules. In that case, the referencing component will wait for a module, or a specific version of a module, to be started, but this module is missing or a different version of it is starting.

2.- Invalid shared connections

Another situation is when some of the components need to establish a connection with other technologies, such as JDBC connections, JMS connections, Kafka connections, or any other of the more than 200 connectors available, and that connection cannot be established.

3.- Missing Starter component in the Module Descriptors

The last of the usual suspects here is when you have a Starter component in the Module Descriptors, but this process is not available inside the EAR file that you are deploying. That dependency is never satisfied, and that leads to an unlimited Impaired status.

How to detect which component is missing?

To help you detect which situation you are in, you have an incredible tool at your disposal: the la command from the OSGi Console Interface.

This command lists the applications deployed in the specific AppNode or container and gives us their details, including the reason for an Impaired situation.


How to run the OSGi console depends on your deployment model, but you can read all the information about it in the link below:
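As a sketch, assuming the same framework.json administrative endpoint used for other OSGi commands is available in your runtime (the exact URL may differ depending on your version and deployment model), the la command could be invoked from inside the container like this:

```shell
# List deployed applications and their state, including the reason
# behind an Impaired status (run from inside the container/AppNode):
curl "localhost:8090/bw/framework.json/osgi?command=la"
```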

Summary

I hope you find this interesting for solving the TIBCO BW Impaired status in your apps, and if you are facing this issue now, you have the information you need not to be stopped by it. If you would like to submit your questions, feel free to use one of the following options:

  • Twitter: You can send me a mention at @alexandrev on Twitter or a DM or even just use the hashtag #TIBFAQs that I will monitor.
  • Email: You can send me an email to alexandre.vazquez at gmail.com with your question.
  • Instagram: You can send me a DM on Instagram at @alexandrev

#TIBFAQS: Failed to Read Profile from [/tmp/tmp/pcf.substvar]

Photo by Shahadat Rahman on Unsplash

This is another post in the #TIBFAQS series. Just to remind you what this is all about: you can submit your questions regarding TIBCO development issues or doubts, and I try to provide an answer here to help the community of TIBCO developers out there.

So, today I am going to cover what I think is one of the most common issues when we work with BusinessWorks Container Edition and deploy into our target environment: a trace similar to this one:

Error trace showing that the profile cannot be read

What is the cause of this error?

This error means that the BusinessWorks runtime is not able to read and process the properties file required to start the application. That means the error is in the configuration of the application and not in the application itself. So, the good news here: your code seems to be fine at this point.

As you probably know, all TIBCO BW applications have used, for a very long time, an XML file to hold the configuration values they start with. In the case of BusinessWorks Container Edition, this file is stored at /tmp/tmp/pcf.substvar, and it is populated from several sources depending on how you manage your configuration.

As you know, you have several options to manage your configuration in cloud-based environments: environment variables, ConfigMaps, or Configuration Management Systems such as Spring Cloud Config or Consul. So it is important that you have a clear understanding of what you are using.

So the error is that the file has something in its content that is not valid, either because it is wrong or because the runtime is not able to understand it.

Situations that can raise this error

Let’s take a look now at the situations that can raise this error and how we can solve them.

1.- Incompatible BW-runtime vs EAR versions

Usually, EAR files are compatible with different BusinessWorks runtimes, but only when the runtime is more recent than the EAR. That is, if I generate my application with BWCE 2.5.0, I can run it with runtime 2.5.0, 2.5.3, or 2.6.0 without any issue, but if I try to run it with an older version like 2.4.2, I can get this error because the EAR file has some “new things” that the runtime is not able to understand.

So it is important to validate that the runtime version you are using is the expected one, and to update it if that is not the case.

2.- XML special characters that need to be escaped

This situation only applies to versions before 2.5.0, but in case you are running an older version, you can also get this error because a property value contains an XML character that needs to be escaped. Characters like ‘<’ or ‘&’ are the ones that most often generate this error. If you are using a more recent version, you don’t need to escape them because they are escaped automatically.

So, depending on the version that you are using, update your property values accordingly.
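For the older runtimes, the escaping can be done by hand before the value reaches the .substvar file. A minimal shell sketch (the escape_xml helper is hypothetical, not part of any TIBCO tooling):

```shell
# Hypothetical helper: escape the XML special characters '&' and '<'
# in a property value. Note: '&' must be replaced first, otherwise the
# '&' introduced by the other replacements would be escaped again.
escape_xml() {
  printf '%s' "$1" | sed -e 's/&/\&amp;/g' -e 's/</\&lt;/g'
}

# Example: a JDBC URL containing '&'
escape_xml 'jdbc:mysql://db:3306/shop?useSSL=false&autoReconnect=true'
# prints: jdbc:mysql://db:3306/shop?useSSL=false&amp;autoReconnect=true
```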

Summary

I hope you find this interesting, and if you are facing this issue now, you have the information you need not to be stopped by it. If you would like to submit your questions, feel free to use one of the following options:

  • Twitter: You can send me a mention at @alexandrev on Twitter, send me a DM, or even just use the hashtag #TIBFAQS, which I will monitor.
  • Email: You can send me an email to alexandre.vazquez at gmail.com with your question.
  • Instagram: You can send me a DM on Instagram at @alexandrev

#TIBFAQS 2022! The Solution to Your TIBCO Development-Based Questions


Improve your knowledge about TIBCO technology and also solve the issues that you are facing in your daily development tasks

Introducing TIBFAQs by Alex Vazquez

TIBFAQS is here! This new year I would like to start several initiatives, and I hope you will walk with me on this journey. As you may know, I am a TIBCO Architect, so in my daily activities I get a lot of questions and inquiries about how to do different things with TIBCO technology, from TIBCO BusinessWorks to TIBCO EMS or TIBCO Spotfire.

I have noticed that some of these questions are similar from one customer to another, so I would like to use this platform to share all this knowledge, so we can all benefit in our daily activities and use the technology most efficiently.

1.- How is this going to work?

I will take some of the common topics that I am aware of in terms of TIBCO development questions and create a periodic article covering each one in detail, with a sample application showing the problem and the solution. All the code will be available in my GitHub repository for your own reference.

https://github.com/alexandrev

2.- Can I send you my questions?

Yes, sure!! That would be amazing! As I said, I would like to create an engaged community around these posts so we can all benefit. I would love to see your questions, and you can send them to me in the following ways:

  • Twitter: You can send me a mention at @alexandrev on Twitter, send me a DM, or even just use the hashtag #TIBFAQS, which I will monitor.
  • Email: You can send me an email to alexandre.vazquez at gmail.com with your question.
  • Instagram: You can send me a DM on Instagram at @alexandrev

3.- When is this going to start?

This will start in late January. The idea is to publish at least one article bi-weekly, but that will depend a lot on the engagement with this initiative. The more you share and talk about this initiative with your peers, and the more questions you send me, the more articles I will create.

4.- What is next?

Starting today, you can send your questions and share your feedback about this initiative, and you can follow this blog to wait for the articles to come! Let’s do it together!

Photo by Brett Jordan on Unsplash

Visual Diff Could Be the Missing Piece That You Need in Low-Code Development


Helping you excel using low code in distributed teams and parallel programming

Escalator
Photo by Tomasz Frankowski on Unsplash.

Most enterprises are exploring low-code/no-code development now that the most important thing is to achieve agility in their technology artifacts from different perspectives (development, deployment, and operation).

The benefits of this way of working make this almost a no-brainer decision for most companies. We already covered them in a previous article. Take a look if you have not read it yet.

But we know that all new things come with their own challenges that we need to address and master in order to unleash the full benefits that these new paradigms or technologies are providing. Much like with cloud-native architecture, we need to be able to adapt.

Sometimes it is not the culture that we need to change. Sometimes the technology and the tools also need to evolve to address those challenges and help us on that journey. And this is how Visual Diff came into life.

When you develop using a low-code approach, the whole development process is easier. You combine different blocks that implement the logic you need, and everything is simpler than a bunch of lines of code.

Example of low-code development
Low-code development approach using TIBCO BusinessWorks.

But we also need to manage all these artifacts in a repository, and those repositories are all focused on source code development. That means that when you are working with those tools, in the end you are not working with a “low-code approach” but rather a source code approach. Things like merging different branches and looking at the version history to know the changes are complex.

And they are complex because they are performed by the repository itself, which is focused on the file changes and the source code that changes. But one of the great benefits of low-code development is that the developer doesn’t need to be aware of the source code generated as part of the visual, faster activity. So, how can we solve that? What can we use to solve it?

Low-code technologies need to advance to take the lead here. For example, this is what TIBCO BusinessWorks has done with the release of their Visual Diff capability.

So, you still have your integration with your source code repository. You can do all the processes and activities you usually need to do in this kind of parallel distributed development. Still, you can also see all those activities from a “low-code” perspective.

That means that when I look at the version history, I can see the visual artifacts that have been modified. The activities added or deleted are shown there in a way that is meaningful for low-code development. That closes the loop on how low-code developments can take all the advantages of modern source code repositories and their flows (GitFlow, GitHub Flow, One Flow, etc.) as well as the advantages of the low-code perspective.

Let’s say there are two options with which you can see how an application has been changed. One is the traditional approach and the other uses the Visual Diff:

Option A: Visual Diff of your processes
Option B: Same processes but with a Text Comparison approach

So, based on this evidence, what do you think is easier to understand? Even if you are a true coder as I am, we cannot deny the ease and benefits of the low-code approach for massive and standard development in the enterprise world.


Summary

No matter how fast we are developing with all the accelerators and frameworks that we have, a well-defined low-code application will be faster than any of us. It is the same battle that we had in the past with graphical interfaces, or mouse control versus the keyboard.

We accept that there is a personal preference to choose one or the other, but when we need to decide what is more effective and need to rely on the facts, we cannot be blind to what is in front of us.

I hope you have enjoyed this article. Have a nice day!

Increased agility through modern digital connectivity


Find out how TIBCO Cloud Integration can help you increase business agility by connecting all your apps, devices, and data no matter where they are hosted

We live in a world where the number of digital assets that need to be integrated, the types of assets, and where they are hosted are all exploding. We’ve transitioned away from a simple enterprise landscape where all of our systems were hosted in a single datacenter, and the number of systems was small. If you still remember those days, you probably could name all the systems that you maintained. Could you imagine doing that today?

This has changed completely. Businesses today are operating more and more on apps and data rather than on manual, documented processes, and that has increased the demands to have them connected together to support the operations of the business. How does a traditional IT team keep up with all connectivity requests coming from all areas of the business to ensure these assets are fully integrated and working seamlessly?

Additionally, the business environment has changed completely. Today everything is hyper-accelerated. You can no longer wait six months to get your new marketing promotions online, or to introduce new digital services.

This is because markets change constantly over time. At times they grow, and at other times they contract. This forces enterprises to change how they do business rapidly.

So, if we need to summarize everything that we need from an application architecture to make sure that it can help us meet our business requirements, that word is “agility”. And architectural agility creates business agility.

Different IT paradigms have been adopted to help increase architectural agility from different perspectives that provide a quick way to adapt, connect, and offer new capabilities to customers:

  • Infrastructure Agility: Based on cloud adoption, cloud providers offer an agile way to immediately tap into the infrastructure capacity required, allowing for rapid innovation by quickly creating new environments and deploying new services on-demand.
  • Operation & Management Agility: SaaS-based applications allow you to adopt best-of-breed business suites without having to procure and manage the underlying infrastructure, as you do in your on-premises approach. This allows you to streamline and accelerate the operations of your business.
  • Development Agility: Based on the application technologies that create small, highly scalable components of software that can be evolved, deployed, and managed in an autonomous way. This approach embeds integration capabilities directly within deployed applications, making integration no longer a separate layer but something that is built-in inside each component. Microservices, API-led development, and event-driven architecture concepts play an essential role and expand the people involved in the development process.

So, all of these forms of agility help build an application architecture that is highly agile: able to respond quickly to changes in the environment within which it operates. And you can achieve all of them with TIBCO® Cloud Integration (TCI).

TCI is an Integration Platform-as-a-Service (iPaaS), a cloud-based integration solution that makes it extremely easy for you to connect all your assets together no matter where they’re hosted. It is a SaaS offering that runs on both AWS and Microsoft Azure, so you don’t have to manage the underlying infrastructure to make sure the integration assets that are critical to your business are always available and scale to any level of demand.

From the development perspective, TCI provides you all the tools needed for your business to develop and connect all your digital assets — including your apps, data sources, devices, business suites, processes, and SaaS solutions — using the most modern standards within an immersive experience.

Easily access all of your applications within an immersive user experience.

It addresses integration patterns from traditional approaches, such as data replication, to modern approaches including API-led and event-driven architectures. It also supports the latest connectivity standards such as REST, GraphQL, AsyncAPI, and gRPC. And to reduce the time-to-market of your integrations, it also includes a significant number of pre-packaged connectors that simplify connectivity to legacy and modern business suites, data sources, and more, no matter if they reside in your data center or in the cloud. These connectors are easily accessible within a connector marketplace embedded directly within the user experience to be used across the whole platform.

TCI improves team-based development. With TIBCO® Cloud Mesh, accessible via TCI, your integrators can easily share, discover, and reuse digital assets created across the enterprise within TIBCO Cloud — such as APIs and apps — and utilize them very quickly within integrations in a secure way without the need to worry about technical aspects.

This capability promotes the reuse of existing assets and better collaboration among teams. Combined with pre-packaged connectors, which are directly accessible within TCI, the development time to introduce new integrations is significantly reduced.

Easily access pre-packaged connectors within an embedded connector marketplace

TCI also expands the number of people in your business who can create integrations, with multiple development experiences tailored for different roles, each bringing their own experience and skills. Now not only can integration specialists participate in the integration process, but developers, API product owners, and citizen integrators can as well.

This dramatically increases business agility because your various business units can create integrations in a self-service manner, collaborate to provide solutions even if they span across business units, and reduce their dependencies on overburdened IT teams. This frees up your integration specialists to focus on providing integration best practices for your enterprise and architecting a responsive application architecture.

TCI addresses a number of integration use cases including:

  1. Connecting apps, data, and devices together that reside anywhere (e.g., on-premises, SaaS, private/public cloud)
  2. Designing, orchestrating, and managing APIs & microservices
  3. Rearchitecting inflexible monolith apps into highly scalable cloud-native apps
  4. Building event-driven apps that process streams of data (e.g., from IoT devices or Apache Kafka)

TCI also provides detailed insights on the performance and execution status of your integrations so you can optimize them as needed or easily detect and solve any potential issues with them. This ensures that business processes that depend on your integrations are minimally disrupted.

Get at-a-glance views of application execution and performance details.
Drill down for expanded insights on application execution histories and performance trends.

By bringing more people into your integration process, empowering them with an immersive view that helps them seamlessly work together on your integrations, and providing capabilities such as TIBCO Cloud Mesh and pre-packaged connectors within a unified connector marketplace that accelerates integration development, your digital business can be connected and reconnected very quickly to respond to changing markets, which greatly increases your business agility.

To experience how easily you can connect all of your digital assets together to boost your business agility, sign up for a free 30-day trial of TIBCO Cloud Integration today.

Sign up for the free trial at https://www.tibco.com/products/cloud-integration

TIBCO Cloud Integration is a service provided within the TIBCO Connected Intelligence Platform, which provides a complete set of capabilities to connect your business.

Kubernetes Batch Processing using TIBCO BW

Photo by Lukas Blazek on Unsplash

We all know that with the rise of cloud-native development and architectures, we’ve seen Kubernetes-based platforms become the new standard, all focusing on new developments following the new paradigms and best practices: microservices, event-driven architectures, shiny new protocols like GraphQL or gRPC, and so on.

But we should be aware that most of the enterprises adopting these technologies today are not green-field opportunities. They have a lot of systems already running that all new developments in the new cloud-native architectures need to interact with, so the rules become more complex and there are a lot of shades of gray when we’re creating our Kubernetes applications.

And based on my experience, when we talk with customers about the new cloud architecture, most of them bring up the batch pattern. They know this is not a best practice anymore, and that they cannot continue to build their systems in a batch fashion, trying to do by night what should be done in real time. But most of the time they need to co-exist with the systems they already have, and that can require this kind of procedure. So we need to find a way to provide Kubernetes Batch Processing.

Also, for customers that already have existing applications using that pattern and would like to move to the new world, it is better if they can do it without needing to re-architect their existing applications. That is something they will probably end up doing anyway, but at their own pace.

Batch processing is a widely used pattern in TIBCO development, and we have customers all over the world that have run this kind of development in production for many years. You know the kind of process that is executed at a specific moment in time: weekly on Monday, each day at 3 PM, or just every hour to do some regular job.

It's a straightforward pattern: simply add a timer that you configure to determine when it will be launched, then add your logic. That's it. As simple as that:

Simple Batch Pattern using TIBCO BusinessWorks

But how can this be translated into a Kubernetes approach? How can we create applications that work with this pattern and are still managed by our platform? Fortunately, this is something that can be done, and there are different ways of doing it depending on what you want to achieve.

Today we are going to describe two ways to do that and explain the main differences between them, so you know which one to use depending on your use case. I'm going to call these two methods: Batch Managed by TIBCO, and the Kubernetes CronJob API approach.


Batch Managed by TIBCO

This is the simplest one. It is exactly the same approach you have in your existing on-premises BusinessWorks application just deployed into the Kubernetes cluster.

So you don't need to change anything in your logic. You will simply have an application that is started by a Timer, with its scheduling configuration inside the scope of the BusinessWorks application, and that's it.

You only need to provide the artifacts required to deploy your application into Kubernetes, like any other TIBCO BusinessWorks app. That means you create a Deployment to launch this component, and you will always have a running pod evaluating when the condition is true to launch the process, just as in your on-premises approach.
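As a sketch, a minimal Deployment for this approach could look like the following (the image name and labels are illustrative, not real artifacts):

```yaml
# Hypothetical Deployment for the TIBCO-managed approach.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bwce-batch-managed
spec:
  replicas: 1                    # keep a single instance to avoid duplicate executions
  selector:
    matchLabels:
      app: bwce-batch-managed
  template:
    metadata:
      labels:
        app: bwce-batch-managed
    spec:
      containers:
      - name: bwce-batch-managed
        image: myregistry/bwce-batch-app:1.0   # the timer/scheduling logic lives inside the BW app
```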

Kubernetes CronJob API

The batch processing pattern is something that is already covered out of the box by the Kubernetes API, and that's why we have the CronJob concept. You probably remember the cron jobs we have on our Linux machines. If you're a developer or a system administrator, I'm sure you've played with cron jobs to schedule tasks or commands to be executed at a certain time. If you're a Windows person, this is the Linux equivalent of your Windows Task Scheduler jobs. Same approach.

crontab in a Linux System

And this is very simple: you only need to create a new artifact using this CronJob API that mainly specifies the job to execute and its schedule, similar to what we've done in the past with the crontab on our Linux machines:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure

The jobs we use here must comply with a single rule that also applies to Unix cron jobs: the command should end when the job is done.

And this is critical: our container should exit when the job is done in order to be used in this approach. That means we cannot use a "server approach" as in the previous method, because in that case the pod never ends. Does that mean I cannot use a TIBCO BusinessWorks application as part of a Kubernetes CronJob? Absolutely not! Let's see how you can do it.

So we should focus on two things: the business logic should run as soon as the process starts, and the container should end as soon as the job is done. Let's start with the first one.

The first one is easy. Let's use our previous sample: a simple batch process that writes a log trace each minute. We need to make it start as soon as the process starts, and this is pretty much out of the box (OOTB) with TIBCO BusinessWorks: we only need to configure the timer to run only once, and it will start as soon as the application is running:


So we already have the first requirement covered; let's look at the other one.

We should be able to end the container completely as the process ends, and that's challenging because BusinessWorks applications don't behave that way: they're supposed to be always running. But this can also be sorted.

The only thing we need is to run a command at the end of the flow — like an exit command in a shell script or Java code — to end the process and the container. To do that, we add an "Execute External Command" activity and simply configure it to send a signal to the BusinessWorks process running inside the container.

The signal we're going to send is SIGINT, the same one sent when we press CTRL+C in a terminal to ask a process to stop. We're going to do the same. For that we'll use the kill command, which ships with all Unix machines and most Docker base images as well. The kill command requires two arguments:

kill -s <SIGNAL_NAME> <PID>
  • SIGNAL_NAME: We already covered this part; we're going to use the signal named SIGINT.
  • PID: The PID of the BusinessWorks process running inside the container.

Finding the PID of the BusinessWorks process inside the container may sound difficult, but if you look at the running processes inside a BusinessWorks container, you'll see it's not so complicated:


If we enter a running container of a BusinessWorks application, we can check that this PID is always number 1. So we already have everything ready to configure the activity as shown below:
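As an aside, you can reproduce the effect of sending SIGINT to a long-running process with a small Python sketch. The child process here is a stand-in for the BW engine; inside the actual container the target PID is 1:

```python
import signal
import subprocess
import sys
import time

# Launch a stand-in long-running process (plays the role of the BW engine).
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
time.sleep(1)                      # give the child a moment to start up

# Equivalent of running `kill -s SIGINT <pid>` from the shell.
child.send_signal(signal.SIGINT)

exit_code = child.wait()           # the child stops instead of sleeping 60s
print("child stopped, exit code:", exit_code)
```

The same idea applies inside the container: once the batch flow finishes, sending SIGINT to the BusinessWorks process makes it shut down cleanly and the container exits.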


And that's it: with these two changes we are able to deploy this using the CronJob API and use it as part of your toolset of application patterns.
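Putting it together, a hypothetical CronJob manifest for the BWCE image (the image name and schedule are illustrative) could look like this:

```yaml
apiVersion: batch/v1beta1          # batch/v1 on Kubernetes 1.21+
kind: CronJob
metadata:
  name: bwce-batch-job
spec:
  schedule: "0 3 * * *"            # every day at 03:00
  concurrencyPolicy: Forbid        # let Kubernetes prevent overlapping runs
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: bwce-batch
            image: myregistry/bwce-batch-app:1.0   # app with run-once timer + SIGINT exit
          restartPolicy: OnFailure
```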

Pros and Cons of each approach

As you can imagine, there is no single answer about when to use one or the other, because it depends on your use case and your requirements. So I'll enumerate the main differences between both approaches so you can choose better when it's in your hands:

  • The TIBCO-managed approach is faster because the pod is already running, and as soon as the condition is met the logic starts. The CronJob API requires a warm-up period because the pod starts when the condition is met, so some delay can apply here.
  • The TIBCO-managed approach requires the pod to be running all the time, so it uses more resources while the condition is not met. If you're running serverless containers on platforms like AWS Fargate, the CronJob API is a better fit.
  • The CronJob API is a standard Kubernetes API, which means it integrates completely with the ecosystem, whereas the TIBCO-managed approach is controlled by TIBCO using application configuration settings.
  • The TIBCO-managed approach is not aware of other instances of the same application running, so you must take care to keep a single instance running to avoid executing the same logic many times. With the CronJob API, this is managed by the Kubernetes platform itself.

Prometheus Monitoring in TIBCO Cloud Integration


In previous posts, I've explained how to integrate TIBCO BusinessWorks 6.x / BusinessWorks Container Edition (BWCE) applications with Prometheus, one of the most widely used systems for monitoring microservices inside a Kubernetes cluster. In this post, I will explain the steps to leverage Prometheus with applications running on TIBCO Cloud Integration (TCI).

TCI is TIBCO's iPaaS and primarily hides the application management complexity of an app from users. To deploy the application, you only need your packaged application (a.k.a. EAR) and the manifest.json, both generated by the product.

Isn’t it magical? Yes, it is! As explained in my previous post related to Prometheus integration with BWCE, which allows you to customize your base images, TCI allows integration with Prometheus in a slightly different manner. Let’s walk through the steps.

TCI has its own embedded monitoring tools (shown below) to provide insights into Memory and CPU utilization, plus network throughput, which is very useful.


While the monitoring metrics provided out-of-the-box by TCI are sufficient for most scenarios, there are hybrid connectivity use-cases (application running on-prem and microservices running on your own cluster that could be on a private or public cloud) that might require a unified single-pane view of monitoring.

Step one is to import the Prometheus plugin from the current GitHub location into your BusinessStudio workspace. To do that, you just need to clone the GitHub Repository available here: https://github.com/TIBCOSoftware/bw-tooling OR https://github.com/alexandrev/bw-tooling

Import the Prometheus plugin by choosing Import → Plug-ins and Fragments option and specifying the directory downloaded from the above mentioned GitHub location. (shown below)


Step two involves adding the Prometheus module previously imported to the specific application as shown below:


Step three is just to build the EAR file along with manifest.json.

NOTE: If the EAR doesn't get generated once you add the Prometheus plugin, please follow the steps below:

  • Export the project with the Prometheus module to a zip file.
  • Remove the Prometheus project from the workspace.
  • Import the project from the zip file generated before.

Before you deploy the BW application on TCI, we need to enable an additional port on TCI to scrape the Prometheus metrics.

Step four is updating the manifest.json file.

By default, a TCI app using the manifest.json file only exposes one port to be consumed from outside (related to functional services) and another used internally for health checks.


For Prometheus integration with TCI, we need an additional port listening on 9095, so the Prometheus server can access the metrics endpoint to scrape the required metrics for our TCI application.

Note: This document does not cover the details of setting up the Prometheus server (it is NOT needed for this PoC), but you can find the relevant information at https://prometheus.io/docs/prometheus/latest/installation/

We need to slightly modify the generated manifest.json file (of the BW app) to expose an additional port, 9095.


Also, to tell TCI that we want to enable the Prometheus endpoint, we need to set a property in the manifest.json file: TCI_BW_CONFIG_OVERRIDES, with the value BW_PROMETHEUS_ENABLE=true.


We also need to add an additional line (propertyPrefix) to the manifest.json file.
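As an illustrative sketch only — the authoritative manifest.json is the one generated by the product, and the exact field names and shapes here are assumptions — the relevant additions look something like:

```json
{
  "propertyPrefix": "",
  "properties": [
    {
      "name": "TCI_BW_CONFIG_OVERRIDES",
      "datatype": "string",
      "default": "BW_PROMETHEUS_ENABLE=true"
    }
  ],
  "endpoints": [
    {
      "protocol": "http",
      "port": "9095"
    }
  ]
}
```

Always start from the generated file and add these entries rather than replacing it wholesale.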


Now we are ready to deploy the BW app on TCI, and once it is deployed we can see there are two endpoints.


If we expand the Endpoints options on the right, you can see that one of them is named "prometheus" — that's our Prometheus metrics endpoint.

Just copy the prometheus URL and append /metrics to it — this will display the Prometheus metrics for the specific BW app deployed on TCI.

Note: appending /metrics is not compulsory; the as-is URL of the Prometheus endpoint will also work.


In the list you will find the following kinds of metrics, which you can use to build rich dashboards and analyses:

  • JVM metrics around memory usage, GC performance, and thread pool counts
  • CPU usage of the application
  • Process and activity execution counts by status (Started, Completed, Failed, Scheduled…)
  • Duration by activity and process
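To sanity-check what the endpoint returns, here is a small sketch that parses Prometheus exposition-format text. The sample metric names below are illustrative, not the exact names the BW plugin emits:

```python
import re

# Sample payload in Prometheus exposition format, similar in shape to what
# a /metrics endpoint returns (metric names here are illustrative).
sample = """\
# HELP jvm_memory_used_bytes Used JVM memory
# TYPE jvm_memory_used_bytes gauge
jvm_memory_used_bytes{area="heap"} 1.234e+08
process_cpu_usage 0.05
bw_process_duration_seconds{process="MainProcess"} 0.42
"""

def parse_metrics(text):
    """Return {(metric_name, label_string): float_value}, skipping comments."""
    metrics = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        match = re.match(r'([A-Za-z_:][\w:]*)(\{[^}]*\})?\s+(\S+)', line)
        if match:
            name, labels, value = match.groups()
            metrics[(name, labels or "")] = float(value)
    return metrics

parsed = parse_metrics(sample)
print(len(parsed), "metrics parsed")
```

In practice you would fetch the text from the prometheus endpoint of your TCI app, or simply let the Prometheus server scrape it for you.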

With all this information available, you can create dashboards similar to the one shown below, in this case using Spotfire as the dashboard tool:


But you can also integrate those metrics with Grafana or any other tool that can read data from the Prometheus time-series database.
