Grafana Alerting vs Prometheus Alertmanager: Key Differences and When to Use Each


Introduction

Grafana Alerting capabilities continue to improve with each new release from the Grafana Labs team. Especially after the changes introduced in Grafana 8 and Grafana 9, many questions have been raised about its usage, the capabilities it supports, and how it compares with other alternatives.

Let's start by setting some context about Grafana Alerting based on the stack we usually deploy to improve the observability of our workloads. Grafana can be used for any workload, but it is the most popular choice when we talk about Kubernetes workloads.

In this kind of deployment, the stack we usually roll out is Grafana as the visualization tool and Prometheus as the core that gathers all the metrics, so responsibilities are clearly separated: Grafana renders the information using its excellent dashboarding capabilities, pulling the data from Prometheus.


When we plan to start adding alerts, because we cannot accept having a dedicated team simply watching dashboards to detect when something goes wrong, we need a way to push notifications.

Alerting capabilities have been present in Grafana since the beginning, but in the early stages they were limited to graphical alerts tied to the dashboards. Prometheus, on the other hand, acting as the brain of the stack, relies on a companion component called Alertmanager that handles the grouping, routing, and notification of alerts generated from the information stored in Prometheus.

Working together with the alerting rules defined in Prometheus, the main capabilities Alertmanager provides are grouping of alerts, silencing and inhibition rules to mute some notifications, and routing to deliver each alert to any target system through its built-in receivers and a generic webhook receiver that can be extended to any available component.
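To make those capabilities concrete, here is a minimal alertmanager.yml sketch showing grouping, routing, an inhibition rule, and webhook receivers. The receiver names and URLs are placeholders, not part of any real setup:

route:
  receiver: default-webhook
  group_by: ['alertname', 'namespace']   # group related alerts into a single notification
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 4h
  routes:
    - matchers:
        - severity="critical"
      receiver: oncall-webhook            # critical alerts go to a dedicated receiver
receivers:
  - name: default-webhook
    webhook_configs:
      - url: http://alert-gateway.example.internal:8080/notify   # placeholder endpoint
  - name: oncall-webhook
    webhook_configs:
      - url: http://oncall-bridge.example.internal:8080/notify   # placeholder endpoint
inhibit_rules:
  - source_matchers: ['severity="critical"']   # mute warnings already covered by a critical alert
    target_matchers: ['severity="warning"']
    equal: ['alertname', 'namespace']

Grafana's notification policies and contact points map quite directly onto this route/receiver model, which is part of why the two tools feel so similar.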


That was the initial picture, but it has changed with the latest Grafana releases over the last months, and now the boundary between both components is much blurrier, as we are going to see.

What are the main capabilities Grafana Alerting provides today?

Grafana Alerting allows you to define alert rules that specify the criteria under which an alert should fire: the queries and conditions to evaluate, the evaluation frequency, and the duration for which the condition must be met before the alert fires.

These alerts can be generated from any data source supported in Grafana, which is a very relevant point because it is not limited to Prometheus data. With the emergence of new products in the Grafana Labs stack, such as Grafana Loki and Grafana Mimir, among others, this is especially relevant.

Once an alert fires, you can define a notification policy to decide where, when, and how it is routed. A notification policy also has an associated contact point, which can have one or more notifiers.

Additionally, you can silence alerts to stop receiving notifications for a specific alert instance, and define mute timings, periods during which notifications will not be generated or sent.

All of that comes combined with the full power of Grafana's dashboarding features.


Grafana Alerting vs Prometheus Alertmanager

After reading the previous section, you are probably confused, because most of the new features added are very similar to the ones already available in Prometheus Alertmanager.

So, in that case, which tool should we use? Should we replace Prometheus Alertmanager and start using Grafana Alerting? Should we use both? As you can imagine, this is one of those questions that doesn't have a clear answer, as it depends a lot on the context and your specific scenario, but let me give you some pointers.

  • Grafana Alerting can be very powerful if you are already inside the Grafana stack. If you are already using Grafana Loki (and need to generate alerts from it), Grafana Mimir, or Grafana Cloud directly, Grafana Alerting will probably be a better fit for your ecosystem.
  • If you require complex alerts built from elaborate queries and calculations, Prometheus and its Alertmanager provide a richer, more mature ecosystem for generating them (see the example rule after this list).
  • If you are looking for a SaaS approach, Grafana Alerting is also provided as part of Grafana Cloud, so it can be used without installing anything in your own environment.
  • If you are using Grafana Alerting, you need to consider that the same component serving the dashboards is also evaluating and generating the alerts, which may require additional HA capabilities. There is an unavoidable coupling between both features (dashboards and alerts). If that doesn't sit well with you, because the criticality of your dashboards is not the same as that of your alerts, or because you think dashboard usage could affect alerting performance, then Prometheus Alertmanager provides a better approach, as it runs in its own pod in isolation.
  • At this moment, Grafana Alerting uses a SQL database to manage deduplication, among other things, so depending on the number of alerts you need to handle, it may fall short in terms of performance, and using the Prometheus time-series database can be a better fit.
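As an illustration of the second point, this is what a typical Prometheus alerting rule looks like; the http_requests_total metric is just a conventional example and the threshold is arbitrary:

groups:
  - name: example-alerts
    rules:
      - alert: HighErrorRate
        # fire when more than 5% of requests fail over the last 5 minutes, sustained for 10 minutes
        expr: sum(rate(http_requests_total{status=~"5.."}[5m])) / sum(rate(http_requests_total[5m])) > 0.05
        for: 10m
        labels:
          severity: critical
        annotations:
          summary: More than 5% of requests are failing

Prometheus evaluates the expression and hands the firing alert to Alertmanager, which then applies the grouping and routing shown earlier.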

Summary

Grafana Alerting is great progress on the Grafana Labs journey toward an end-to-end observability stack: it fits nicely with the rest of their ecosystem, offers the option to run in SaaS mode, and focuses on ease of use. But depending on your needs, other options may still be a better fit.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Prometheus Monitoring in TIBCO Cloud Integration


In previous posts, I’ve explained how to integrate TIBCO BusinessWorks 6.x / BusinessWorks Container Edition (BWCE) applications with Prometheus, one of the most popular monitoring systems for cloud platforms and one of the most widely used solutions to monitor your microservices inside a Kubernetes cluster. In this post, I will explain the steps to leverage Prometheus for applications running on TIBCO Cloud Integration (TCI).

TCI is TIBCO’s iPaaS, and it primarily hides application management complexity from users. To deploy an application, you only need your packaged application (a.k.a. the EAR) and the manifest.json, both generated by the product.

Isn’t it magical? Yes, it is! Whereas BWCE, as explained in my previous post on its Prometheus integration, lets you customize your base images, TCI enables Prometheus integration in a slightly different manner. Let’s walk through the steps.

TCI has its own embedded monitoring tools (shown below) to provide insights into Memory and CPU utilization, plus network throughput, which is very useful.

While the monitoring metrics provided out-of-the-box by TCI are sufficient for most scenarios, there are hybrid connectivity use cases (applications running on-premises and microservices running on your own cluster, which could be on a private or public cloud) that might require a unified, single-pane view of monitoring.

Step one is to import the Prometheus plugin from its current GitHub location into your Business Studio workspace. To do that, you just need to clone the GitHub repository available here: https://github.com/TIBCOSoftware/bw-tooling OR https://github.com/alexandrev/bw-tooling

Import the Prometheus plugin by choosing the Import → Plug-ins and Fragments option and specifying the directory downloaded from the above-mentioned GitHub location (shown below).


Step two involves adding the previously imported Prometheus module to the specific application, as shown below:


Step three is just to build the EAR file along with manifest.json.

NOTE: If the EAR doesn’t get generated once you add the Prometheus plugin, please follow the below steps:

  • Export the project with the Prometheus module to a zip file.
  • Remove the Prometheus project from the workspace.
  • Import the project from the zip file generated before.

Before deploying the BW application on TCI, we need to enable an additional port on TCI so the Prometheus metrics can be scraped.

Step four is updating the manifest.json file.

By default, a TCI app using the manifest.json file only exposes one port to be consumed from outside (for the functional services) and another to be used internally for health checks.


For Prometheus integration with TCI, we need an additional port listening on 9095, so the Prometheus server can access the metrics endpoint to scrape the required metrics from our TCI application.

Note: This document does not cover the details of setting up the Prometheus server (it is NOT needed for this PoC), but you can find the relevant information at https://prometheus.io/docs/prometheus/latest/installation/

We need to slightly modify the generated manifest.json file (of the BW app) to expose an additional port, 9095 (shown below).


Also, to tell TCI that we want to enable the Prometheus endpoint, we need to set a property in the manifest.json file: TCI_BW_CONFIG_OVERRIDES, with the value BW_PROMETHEUS_ENABLE=true, as shown below:


We also need to add an additional line (propertyPrefix) in the manifest.json file as shown below.


Now we are ready to deploy the BW app on TCI, and once it is deployed, we can see there are two endpoints:


If we expand the Endpoints option on the right (shown above), you can see that one of them is named “prometheus”, and that’s our Prometheus metrics endpoint:

Just copy the prometheus URL and append /metrics to it (see the URL in the snapshot below); this will display the Prometheus metrics for the specific BW app deployed on TCI.

Note: appending /metrics is not compulsory; the as-is URL for the Prometheus endpoint will also work.
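If at some point you do want a Prometheus server to scrape this endpoint, the scrape job would look roughly like the sketch below. This is an assumption-heavy illustration: the placeholder host must be replaced with the hostname from the prometheus endpoint URL you copied, and the scheme and metrics path should match that URL:

- job_name: 'tci-bw-app'                            # illustrative job name
  scheme: https                                      # assumption: the TCI endpoint is served over HTTPS
  metrics_path: /metrics                             # adjust if your endpoint URL carries a base path
  static_configs:
    - targets: ['<tci-prometheus-endpoint-host>']    # placeholder: host taken from the copied endpoint URL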


In the list, you will find the following kinds of metrics, which you can use to build rich dashboards and analysis on top of this information:

  • JVM metrics around memory usage, GC performance, and thread pool counts
  • CPU usage by the application
  • Process and Activity execution counts by status (Started, Completed, Failed, Scheduled…)
  • Duration by Activity and Process

With all this information available, you can create dashboards similar to the one shown below, in this case using Spotfire as the dashboarding tool:


But you can also integrate those metrics with Grafana or any other tool that can read data from the Prometheus time-series database.


Kubernetes Service Discovery for Prometheus: Dynamic Scraping the Right Way


In previous posts, we described how to set up Prometheus to work with your TIBCO BusinessWorks Container Edition apps, and you can read more about it here.

In that post, we described that there are several ways to tell Prometheus about the services that are ready to be monitored, and we chose the simplest one at that moment, the static_config configuration, which basically means:

Don’t worry Prometheus, I’ll let you know the IP you need to monitor and you don’t need to worry about anything else.

And this is useful for a quick test in a local environment, when you want to quickly validate your Prometheus setup or work on the Grafana side to design the best possible dashboard for your needs.

But this is not very useful for a real production environment, even more so when we’re talking about a Kubernetes cluster, where services go up and down continuously over time. To solve this, Prometheus allows us to define different ways to perform this “service discovery”. In the official Prometheus documentation we can read a lot about the different service discovery techniques, but at a high level these are the main ones available:

  • azure_sd_configs: Azure Service Discovery
  • consul_sd_configs: Consul Service Discovery
  • dns_sd_configs: DNS Service Discovery
  • ec2_sd_configs: EC2 Service Discovery
  • openstack_sd_configs: OpenStack Service Discovery
  • file_sd_configs: File Service Discovery
  • gce_sd_configs: GCE Service Discovery
  • kubernetes_sd_configs: Kubernetes Service Discovery
  • marathon_sd_configs: Marathon Service Discovery
  • nerve_sd_configs: AirBnB’s Nerve Service Discovery
  • serverset_sd_configs: Zookeeper Serverset Service Discovery
  • triton_sd_configs: Triton Service Discovery
  • static_config: Static IP/DNS for the configuration. No Service Discovery.

And if all these options are not enough for you and you need something more specific, there is an API available to extend Prometheus’ capabilities and create your own service discovery mechanism. You can find more info about it here:
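In practice, the usual way to plug in a custom mechanism is to have your own program write the discovered targets to JSON or YAML files and let Prometheus watch them through file_sd_configs. A minimal sketch, with an illustrative path:

- job_name: 'custom-sd'
  file_sd_configs:
    - files:
        - /etc/prometheus/targets/*.json   # your discovery program keeps these files up to date
      refresh_interval: 30s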

But this is not our case; for us, Kubernetes Service Discovery is the right choice. So, we’re going to change the static configuration we had in the previous post:

- job_name: 'bwdockermonitoring'
  honor_labels: true
  static_configs:
    - targets: ['phenix-test-project-svc.default.svc.cluster.local:9095']
      labels:
        group: 'prod'

to this Kubernetes-based configuration:

- job_name: 'bwce-metrics'
  scrape_interval: 5s
  metrics_path: /metrics/
  scheme: http
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - default
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_label_app]
    separator: ;
    regex: (.*)
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_endpoint_port_name]
    separator: ;
    regex: prom
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: job
    replacement: $1
    action: replace
  - separator: ;
    regex: (.*)
    target_label: endpoint
    replacement: $1
    action: replace

As you can see, this is quite a bit more complex than the previous configuration, but it is not as complex as it might seem at first glance, so let’s review it part by part.

- role: endpoints
    namespaces:
      names:
      - default

This says that we are going to discover endpoints (role: endpoints) created under the default namespace, and then we specify the relabeling changes we need in order to find the metrics endpoints for Prometheus.

scrape_interval: 5s
 metrics_path: /metrics/
 scheme: http

This says that we are going to execute the scrape process at a 5-second interval, using HTTP on the path /metrics/.

And then, we have a relabel_config section:

- source_labels: [__meta_kubernetes_service_label_app]
    separator: ;
    regex: (.*)
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_endpoint_port_name]
    separator: ;
    regex: prom
    replacement: $1
    action: keep

These two rules act as filters: the first keeps targets based on the Service’s app label, and the second keeps only endpoints whose port is named prom, so only the Prometheus ports of our applications end up as scrape targets (a minimal Service sketch matching the prom port rule is shown after the walkthrough). The remaining rules copy Kubernetes metadata into the final target labels:

- source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: job
    replacement: $1
    action: replace
  - separator: ;
    regex: (.*)
    target_label: endpoint
    replacement: $1
    action: replace

These rules perform a replace of the label values, and with them we can do several things:

  • Rename the label, using target_label to set the name of the final label that we are going to create based on the source_labels.
  • Replace the value, using the regex parameter to define the regular expression matched against the original value and the replacement parameter to express the changes we want to apply to that value.
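For the keep rule on __meta_kubernetes_endpoint_port_name to match, the Kubernetes Service in front of your BWCE pods needs a port explicitly named prom. A minimal sketch of such a Service follows; the labels are illustrative assumptions rather than values taken from the real project:

apiVersion: v1
kind: Service
metadata:
  name: phenix-test-project-svc
  labels:
    app: phenix-test-project        # matched by the __meta_kubernetes_service_label_app rule
spec:
  selector:
    app: phenix-test-project
  ports:
    - name: prom                    # the port name the relabel keep rule looks for
      port: 9095
      targetPort: 9095
    - name: http
      port: 8080
      targetPort: 8080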

So, after applying this configuration, when we deploy a new application in our Kubernetes cluster, like the project that we can see here:

we will automatically see an additional target in our “bwce-metrics” job configuration.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Prometheus TIBCO Monitoring for Containers: Quick and Simple in 5 Minutes!


Prometheus is becoming the new standard for Kubernetes monitoring, and today we are going to cover how we can do Prometheus TIBCO monitoring in Kubernetes.

This article is part of my comprehensive TIBCO Integration Platform Guide where you can find more patterns and best practices for TIBCO integration platforms.

We’re living in a world of constant change, and that is even more true in the enterprise application world. I won’t spend much time talking about things you already know; suffice it to say that the microservices architecture approach and PaaS solutions have been a game-changer for all enterprise integration technologies.

This time I’d like to talk about monitoring and the integration capabilities we have for using Prometheus to monitor our microservices developed with TIBCO technology. I don’t want to spend too much time on what Prometheus is either, as you probably already know it, but in summary: it is an open-source monitoring platform, the second project hosted by the Cloud Native Computing Foundation (after Kubernetes itself), and it has established itself as a de-facto industry standard for monitoring Kubernetes clusters (alongside other options in the market such as InfluxDB).

Prometheus has a lot of great features, but one of them is that it has integrations for almost everything, and that’s very important today because it is complicated (and usually undesirable) to build a platform around a single product for the PaaS layer. So today, I want to show you how to monitor your TIBCO BusinessWorks Container Edition applications using Prometheus.

Most of the info I’m going to share is available in the bw-tooling GitHub repo, so you can go there if you need to validate any specific statement.

Ok, are we ready? Let’s start!!

I’m going to assume that we already have a Kubernetes cluster in place, with Prometheus installed as well. So, the first step is to enhance the BusinessWorks Container Edition base image to include the Prometheus integration capabilities. To do that, we need to go to the GitHub repo page and follow these instructions:

  • Download & unzip the prometheus-integration.zip folder.
  • Open TIBCO BusinessWorks Studio and point it to a new workspace.
  • Right-click in Project Explorer → Import… → select Plug-ins and Fragments → select Import from the directory radio button
  • Browse to the prometheus-integration folder (unzipped in step 1)
  • Now click Next → Select Prometheus plugin → click Add button → click Finish. This will import the plugin in the studio.
  • Now, to create the JAR of this plugin, we first need to make sure that com.tibco.bw.prometheus.monitor has ‘.’ (dot) in the Bundle-Classpath field of its META-INF/MANIFEST.MF file, as shown below.
  • Right-click on Plugin → Export → Export…
  • Select the export type as JAR file and click Next
  • Now click Next → Next → select the radio button to use the existing MANIFEST.MF file and browse to the manifest file
  • Click Finish. This will generate prometheus-integration.jar

Now, with the JAR created, we need to include it in our own base image. To do that, we place the JAR file in <TIBCO_HOME>/bwce/2.4/docker/resources/addons/jar


And we launch the image build command again from the <TIBCO_HOME>/bwce/2.4/docker folder to update the image, using the following command (use the version you’re running at the moment):

docker build -t bwce_base:2.4.4 .

So, now we have an image with Prometheus support! Great! We’re close to the finish line; we just need to create an image for our container application. In my case, this is going to be a very simple echo service that you can see here.

And we only need to keep these things in mind when we deploy to our Kubernetes cluster:

  • We should set the BW_PROMETHEUS_ENABLE environment variable to “TRUE”.
  • We should expose port 9095 from the container to be used by Prometheus for the integration (see the sketch below).
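To make those two points concrete, here is the relevant fragment of a Kubernetes Deployment as a minimal sketch; the image and resource names are illustrative and not taken from the real project:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: phenix-test-project
spec:
  replicas: 1
  selector:
    matchLabels:
      app: phenix-test-project
  template:
    metadata:
      labels:
        app: phenix-test-project
    spec:
      containers:
        - name: phenix-test-project
          image: phenix-test-project:1.0      # illustrative image name built from the enhanced base image
          env:
            - name: BW_PROMETHEUS_ENABLE
              value: "TRUE"                    # enables the Prometheus endpoint
          ports:
            - containerPort: 8080              # functional service port (adjust to your app)
            - containerPort: 9095              # Prometheus metrics port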

Now, we only need to provide this endpoint to the Prometheus scraper. There are several ways to do that, but we’re going to focus on the simplest one.

We need to change the prometheus.yml to add the following job data:

- job_name: 'bwdockermonitoring'
  honor_labels: true
  static_configs:
    - targets: ['phenix-test-project-svc.default.svc.cluster.local:9095']
      labels:
        group: 'prod'

And after restarting Prometheus, we have all the data indexed in the Prometheus database, ready to be used by any dashboarding system.


In this case, I’m going to use Grafana to build a quick dashboard.


Each of these graph panels is configured based on the metrics scraped from the Prometheus TIBCO exporter.


OpenTracing in TIBCO BusinessWorks Container Edition: Tracing with Jaeger Explained


Last month, during KubeCon Europe 2019 in Barcelona, OpenTracing announced its merge with the OpenCensus project to create a new standard named OpenTelemetry, which is going to go live in September 2019.

So, I think this is a great moment to take a look at the OpenTracing capabilities we have available in TIBCO BusinessWorks Container Edition.

Today’s world is very complex in terms of how our architectures are defined and managed. Concepts from recent years such as containers, microservices, and service mesh give us the option to reach a new level of flexibility, performance, and productivity, but they also come with a management cost we need to deal with.

Years ago, architectures were simpler and the concept of a service was just starting out, but even then a few issues began to arise regarding monitoring, tracing, logging, and so on. In those days everything was solved with a development framework that all our services would include, because all of our services were developed by the same team with the same technology, and in that framework we could make sure things were handled properly.

Now we rely on standards to do this kind of thing and, for example, for tracing we rely on OpenTracing. I don’t want to spend time explaining what OpenTracing is, since they have a full Medium account where they explain it much better than I ever could, so please take a few minutes to read about it.

The only statement I want to make here is the following one:

Tracing is not Logging, and please be sure you understand that.

Tracing is about sampling; it shows how your flows are performing and whether everything is working, but it is not about whether a specific request for a given customer ID was handled correctly. That’s logging, not tracing.

So OpenTracing, and its different implementations like Jaeger or Zipkin, is the way we can implement tracing today in a really easy way. And this is not something you can only do in code-based development languages: you can also do it with our zero-code tools for developing cloud-native applications, like TIBCO BusinessWorks Container Edition, and that’s what I’d like to show you today. So, let the match begin…

The first thing I’d like to do is show you the scenario we’re going to implement, which is the one shown in the image below:


We are going to have two REST services that call each other, and we’re going to export all the traces to an external Jaeger component; later we can use its UI to analyze the flow in a graphical and easy way.

So, the first thing we need to do is develop the services which, as you can see in the pictures below, are going to be quite simple, because this is not the main purpose of our scenario.

Once we have our Docker images based on those applications we can start, but before we launch our applications, we need to launch our Jaeger system; you can read all the info about how to do it in the link below:

But in the end, we only need to run the following command:

docker run -d --name jaeger -e COLLECTOR_ZIPKIN_HTTP_PORT=9411  -p 5775:5775/udp  -p 6831:6831/udp  -p 6832:6832/udp  -p 5778:5778  -p 16686:16686  -p 14268:14268  -p 9411:9411  jaegertracing/all-in-one:1.8

And now we’re ready to launch our applications. Since, as you could see, we didn’t do anything unusual in our development and it was quite straightforward, the only thing we need to do is add the following environment variables when we launch our containers:

BW_JAVA_OPTS="-Dbw.engine.opentracing.enable=true" -e JAEGER_AGENT_HOST=jaeger -e JAEGER_AGENT_PORT=6831 -e JAEGER_SAMPLER_MANAGER_HOST_PORT=jaeger:5778

And… that’s it. We launch our containers with the following commands and wait until the applications are up and running:

docker run -ti -p 5000:5000 --name provider -e BW_PROFILE=Docker -e PROVIDER_PORT=5000 -e BW_LOGLEVEL=ERROR --link jaeger -e BW_JAVA_OPTS="-Dbw.engine.opentracing.enable=true" -e JAEGER_AGENT_HOST=jaeger -e JAEGER_AGENT_PORT=6831 -e JAEGER_SAMPLER_MANAGER_HOST_PORT=jaeger:5778 provider:1.0
docker run --name consumer -ti -p 6000:6000 -e BW_PROFILE=Docker --link jaeger --link provider -e BW_JAVA_OPTS="-Dbw.engine.opentracing.enable=true" -e JAEGER_AGENT_HOST=jaeger -e JAEGER_AGENT_PORT=6831 -e JAEGER_SAMPLER_MANAGER_HOST_PORT=jaeger:5778 -e CONSUMER_PORT=6000 -e PROVIDER_HOST=provider consumer:1.0
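If you prefer to keep everything in one file, the same setup can be expressed as a docker-compose sketch. This is an equivalent, assumption-based rewrite of the commands above (only the most relevant Jaeger ports are mapped), not part of the original setup:

version: "3"
services:
  jaeger:
    image: jaegertracing/all-in-one:1.8
    environment:
      COLLECTOR_ZIPKIN_HTTP_PORT: "9411"
    ports:
      - "16686:16686"      # Jaeger UI
      - "6831:6831/udp"    # agent port used by the applications
      - "5778:5778"        # sampling strategies
  provider:
    image: provider:1.0
    depends_on:
      - jaeger
    environment:
      BW_PROFILE: Docker
      PROVIDER_PORT: "5000"
      BW_LOGLEVEL: ERROR
      BW_JAVA_OPTS: "-Dbw.engine.opentracing.enable=true"
      JAEGER_AGENT_HOST: jaeger
      JAEGER_AGENT_PORT: "6831"
      JAEGER_SAMPLER_MANAGER_HOST_PORT: "jaeger:5778"
    ports:
      - "5000:5000"
  consumer:
    image: consumer:1.0
    depends_on:
      - jaeger
      - provider
    environment:
      BW_PROFILE: Docker
      CONSUMER_PORT: "6000"
      PROVIDER_HOST: provider
      BW_JAVA_OPTS: "-Dbw.engine.opentracing.enable=true"
      JAEGER_AGENT_HOST: jaeger
      JAEGER_AGENT_PORT: "6831"
      JAEGER_SAMPLER_MANAGER_HOST_PORT: "jaeger:5778"
    ports:
      - "6000:6000"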

Once they’re running, let’s generate some requests! To do that, I’m going to use a SoapUI project to generate a stable load for 60 seconds, as you can see in the image below:


And now we can open the Jaeger UI (exposed on port 16686 in the command above), and as soon as we click the Search button we can see the following:


And then, if we zoom in on a specific trace:


That’s pretty amazing, but that’s not all: if you dig into the data of these traces in the UI, you can see technical data from your BusinessWorks Container Edition flows, as shown in the picture below:


But… what if you want to add your own custom tags to those traces? You can do that as well! Let me explain how.

Since BusinessWorks Container Edition 2.4.4, you will find a new tab in all your activities named “Tags”, where you can add the custom tags that you want the activity to include. For example, a custom ID that is going to be propagated through the whole process can be defined as you can see here.


And if you take a look at the data we have in the system, you can see that all of these traces include this data:


You can take a look at the code in the following GitHub repository: