Discover the differences between two of the most used CRDs from Prometheus Operator and how to use each of them.
ServiceMonitor and PodMonitor are terms that you will start to see more often when talking about Prometheus. We have covered a lot about Prometheus in past articles. It is one of the primary references when we talk about monitoring in a cloud-native environment and is especially focused on the Kubernetes ecosystem.
In recent times, Prometheus has gained a new deployment model under the Kubernetes Operator framework. That has generated several changes in terms of resources and how we configure several aspects of the monitoring of our workloads. Some of these concepts are now managed as Custom Resource Definitions (CRDs) that are included to simplify the system's configuration and be more aligned with the capabilities of the Kubernetes platform itself. This is great but, at the same time, it changes how we need to use this excellent monitoring tool for cloud-native workloads.
Today, we will cover two of the most relevant of these new CRDs: ServiceMonitor and PodMonitor. These are the new objects that specify which resources fall under the platform's monitoring scope, and each of them covers a different type of object, as you can imagine: Services and Pods.
Each of them has its own definition file with its particular fields and metadata, and to highlight them, I will present a sample for each of them below:
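As a minimal sketch (the names, labels, port, and interval here are hypothetical placeholders), a ServiceMonitor and a PodMonitor definition look roughly like this:
# ServiceMonitor sample
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  labels:
    release: prometheus
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
    - port: web          # name of the Service port to scrape
      path: /metrics
      interval: 30s
# PodMonitor sample
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: example-app
spec:
  selector:
    matchLabels:
      app: example-app
  podMetricsEndpoints:
    - port: web          # name of the Pod container port to scrape
      path: /metrics
      interval: 30s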
As you can see, the definitions of the two components are very similar and very intuitive, focusing on the selector to detect which Pods or Services we should monitor, plus some data regarding the specific target of the monitoring, so Prometheus knows how to scrape them.
If you want to look in more detail at every option you can configure on these CRDs, I recommend this URL, which includes field-by-field documentation of the most common CRDs:
PodMonitor.monitoring.coreos.com/v1
Automatic documentation for your CustomResourceDefinitions.
These components belong to the definition of your workloads, which means that the creation and maintenance of these objects will fall to the application's developers.
That is great for several reasons:
It includes the monitoring aspect in the component itself, so you will never forget to add the configuration for a specific component. That means it can be included in the component's YAML files, Helm chart, or Kustomize resources as just another required resource.
It decentralizes the monitoring configuration, making it more agile, and it will evolve as the software components themselves do.
It reduces the impact on other monitored components, as there is no need to touch any shared file or resource, so all other workloads will continue to work as expected.
Both objects are very similar in their purpose, as both of them scrape all the endpoints that match the selector we defined. So, in which cases should you use one or the other?
The answer is straightforward. By default, you will go with a ServiceMonitor because it will provide the metrics from the Service itself and from each of the endpoints the Service has, so each of the Pods implementing the Service will be discovered and scraped as part of this action.
So, in which cases should you use a PodMonitor? When the workload you are trying to monitor doesn't sit behind a Service; as there is no Service defined, you cannot use a ServiceMonitor. Do you want some examples? Let's list a few:
Workloads that interact using protocols that are not HTTP-based, such as Kafka, SQS/SNS, JMS, or similar ones.
Components such as CronJobs or DaemonSets that do not expose any incoming connection.
I hope this article helps you understand the main difference between these objects and go a little deeper into how the new Prometheus Operator resources work. We will continue covering other aspects in upcoming posts.
Discover how the reclaim policy can affect how your data is managed inside a Kubernetes cluster.
As you know, everything is fine on Kubernetes until you face a stateful workload and need to manage its data. All the powerful capabilities that Kubernetes brings to the game face many challenges when we talk about stateful services that handle a lot of data.
Most of these challenges have a solution today. That is why running stateful workloads such as databases and other backend systems also requires a lot of knowledge about how to define several things. One of them is the reclaim policy of the persistent volume.
First, let’s define what the Reclaim Policy is, and to do that, I will use the official definition from the documentation:
When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. The reclaim policy for a PersistentVolume tells the cluster what to do with the volume after it has been released of its claim. Currently, volumes can either be Retained, Recycled, or Deleted.
So, as you can see, we have three options: Retain, Recycle or Delete. Let’s see what the behavior for each of them is.
Retain
Retain means the data will still be there even after the claim has been deleted. Note that all these policies apply when the original PVC is removed; before that, the data always remains, no matter which policy we use.
So, Retain means that even if we delete the PVC, the data will still be there, and the volume is kept so that no other PVC can claim that data. Only an administrator can reclaim it manually, with the following flow (a sample PV definition with this policy follows the list):
Delete the PersistentVolume.
Manually clean up the data on the associated storage asset accordingly.
Manually delete the associated storage asset.
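As a minimal sketch, this is how the reclaim policy is set on a PersistentVolume; the name, size, and hostPath backend are hypothetical placeholders used only for illustration:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # Retain, Delete or Recycle
  hostPath:
    path: /data/example-pv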
Delete
Delete means that as soon as we remove the PVC, the PV and its data are released and deleted as well.
This simplifies the cleanup and housekeeping of your volumes, but at the same time it increases the possibility of data loss because of unexpected behavior. As always, this is a trade-off you need to make.
We always need to remind you that if you try to delete a PVC that is in active use by a Pod, the PVC is not removed immediately. PVC removal is postponed until the PVC is no longer actively used by any Pod, to ensure that no data is lost while some component is still bound to it. Something similar happens with the PV under the same policy: if an admin deletes a PV attached to a PVC, the PV is not removed immediately. PV removal is postponed until the PV is no longer bound to a PVC.
Recycle
Recycle is something in the middle. It works similarly to the Delete policy explained above, but it doesn't delete the volume itself; instead, it removes the content of the PV, so in practice the result is similar. In the end, it performs a basic scrub (rm -rf) on the storage asset itself.
Just so you are aware, this policy is deprecated nowadays and you should not use it in your new workloads, but it is still supported, so you may find some workloads that are still using it.
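If you find an existing PV using a policy you want to move away from, the reclaim policy can be changed in place; a minimal sketch, assuming a PV named example-pv:
kubectl patch pv example-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'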
Prometheus has included a new capability in the 2.32.0 release to optimize the single pane of glass approach
With the new upcoming release of Prometheus, v2.32.0, we will have an important new feature at our disposal: Agent Mode. There is a fantastic blog post announcing this feature from one of the rockstars of the Prometheus team, Bartlomiej Plotka, which I recommend reading; I will add a reference section at the end of the article. I will try to summarise some of the most relevant points here.
This is another post about Prometheus, the most important monitoring system in today's cloud-native architectures, which has its inception in the Borgmon monitoring system created by Google in ancient times (around the 2010–2014 period).
Based on this importance, its usage has been growing incredibly, making its relationship with the Kubernetes ecosystem ever stronger. We have reached a point where Prometheus is the default option for monitoring in pretty much any scenario that involves a Kubernetes workload; some examples are shown below:
Prometheus is the default option in the OpenShift monitoring system.
Prometheus has an Amazon Managed Service at your disposal to be used for your workloads.
Prometheus is included in the Reference Architecture for Cloud-Native Azure Deployments.
Because of this popularity and growth, many different use cases have surfaced improvements that can be made. Some of them are related to specific scenarios such as edge deployments or providing a global view, a single pane of glass.
Until now, if you had several Prometheus deployments, each monitoring a specific subset of your workloads because they reside on different networks or in different clusters, you could rely on the remote-write capability to aggregate all of that into a global view.
Remote write is a capability that has existed in Prometheus for a long time. The metrics that Prometheus scrapes can be sent automatically to a different system using these integrations. This can be configured for all the metrics or just a subset. But even with all of this, the team is jumping ahead on this capability, which is why they are introducing the Agent mode.
Agent mode optimizes the remote-write use case by configuring the Prometheus instance specifically for that job (a sketch of how to enable it follows this list). This mode implies the following configuration:
Disable querying and alerting.
Replace the local storage with a customized TSDB WAL (write-ahead log).
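A minimal sketch of launching Prometheus in agent mode via the experimental feature flag; the configuration file name is a placeholder, and in this mode the configuration is essentially limited to scraping and remote_write:
prometheus --enable-feature=agent --config.file=prometheus-agent.yml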
The remarkable thing is that everything else stays the same, so we will still use the same API, discovery capabilities, and related configuration. And what does all of this provide? Let's take a look at the benefits you get from doing so:
Efficiency: the customized TSDB WAL keeps only the data that could not yet be sent to the target location; as soon as the send succeeds, it removes that piece of data.
Scalability: it improves scalability, enabling easier horizontal scaling for ingestion. This is because agent mode disables some of the things that make auto-scaling complex in normal server-mode Prometheus. A stateful workload makes scaling complex, especially in scale-down scenarios, so this mode leads to a "more stateless" workload that simplifies this scenario and gets closer to the dream of an auto-scaling metric-ingestion system.
This feature is available behind an experimental flag in the new release, but it has already been tested through Grafana Labs' work, especially on the performance side.
Check out the properties that let you make optimized use of your disk storage and save money when storing your monitoring data.
Prometheus has become a standard component in our cloud architectures, and Prometheus storage is becoming a critical aspect of it. So I am going to guess that if you are reading this, you already know what Prometheus is. If that is not the case, please take your time to look at other articles I have written:
Prometheus Monitoring for Microservices using TIBCO
Kubernetes Service Discovery for Prometheus
We know that when we monitor using Prometheus we have many exporters at our disposal, and each of them exposes a lot of very relevant metrics so we can track everything we need. That leads to very intensive usage of the available storage if we do not manage it accordingly.
There are two factors that affect this. The first one is to optimize the number of metrics that we are storing, and we have already provided tips to do that in other articles, such as the ones shown below:
How to optimize the disk usage in the Prometheus database?
The other one is how long we store the metrics, known as the retention period in Prometheus. This property has gone through a lot of changes across versions. If you would like to see the whole history, please take a look at this article from Robust Perception:
How can you control how much history Prometheus keeps?
The main properties that you can configure are the following ones:
storage.tsdb.retention.time: how long to keep the metrics, 15d by default. This property replaces the deprecated storage.tsdb.retention.
storage.tsdb.retention.size: the limit on the size of storage to be used. This is not a hard limit but a minimum, so please leave some margin here. Units supported: B, KB, MB, GB, TB, PB, EB. Ex: "512MB". This property is still experimental, as you can see in the official documentation.
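Both properties are passed as command-line flags when launching the server; a minimal sketch with example values:
prometheus \
  --config.file=prometheus.yml \
  --storage.tsdb.retention.time=15d \
  --storage.tsdb.retention.size=512MB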
What about setting this configuration in the operator for Kubernetes? In that case, you have similar options available in the values.yaml configuration file for the chart, as you can see below:
values.yml for the Prometheus Operator Helm Chart
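A sketch of what that section of the values file typically looks like, assuming the Prometheus Operator / kube-prometheus-stack chart layout; the values are examples:
prometheus:
  prometheusSpec:
    retention: 15d          # how long to keep samples
    retentionSize: "512MB"  # maximum size used by the TSDB blocks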
This should help you get an optimized deployment of Prometheus that keeps all the features Prometheus offers while making optimal use of the resources at your disposal.
In addition to that, you should also check the managed service options that some providers offer for Prometheus, such as Amazon Managed Service for Prometheus, as you can see in the link below:
Amazon Prometheus Service to Provide More Availability to Your Monitoring Solution
Learn what Amazon Managed Service for Prometheus provides and how you can benefit from it.
Monitoring is one of the hot topics when we talk about cloud-native architectures. Prometheus is a graduated Cloud Native Computing Foundation (CNCF) open-source project and one of the industry-standard solutions when it comes to monitoring your cloud-native deployment, especially when Kubernetes is involved.
Following its own philosophy of providing managed services for some of the most used open-source projects, fully integrated with the AWS ecosystem, AWS has released (in preview at the time of writing this article) Amazon Managed Service for Prometheus (AMP).
The first thing is to define what Amazon Managed Service for Prometheus is and what features it provides. This is Amazon's definition of the service:
A fully managed Prometheus-compatible monitoring service that makes it easy to monitor containerized applications securely and at scale.
And I would like to spend some time on some parts of this sentence.
Fully managed service: this will be hosted and handled by Amazon, and we just interact with it using an API, as we do with other Amazon services like EKS, RDS, MSK, SQS/SNS, and so on.
Prometheus-compatible: that means that even if this is not a pure Prometheus installation, the API is compatible. So Prometheus clients such as Grafana and others that read data from Prometheus will work without changing their interfaces.
Service at scale: Amazon, as part of the managed service, will take care of the solution's scalability. You don't need to define an instance type or how much RAM or CPU you need; this is handled by AWS.
So, that sounds perfect. You might think you can delete your Prometheus server and start using this service right away. Maybe you are even typing something like helm delete prom… WAIT, WAIT!!
At this point, this is not going to replace your local Prometheus server; rather, it integrates with it. That means your Prometheus server will act as a scraper for the whole scalable monitoring solution that AMP provides, something like what you can see in the picture below:
Reference Architecture for Amazon Prometheus Service
So, you will still need a Prometheus server, that is right, but all the complexity is offloaded to the managed service: storage configuration, high availability, API optimization, and so on are provided to you out of the box.
Ingesting information into Amazon Managed Service for Prometheus
At this moment, there are two ways to ingest data into the Amazon Prometheus Service (a minimal remote-write sketch follows the list below):
From an existing Prometheus server using the remote_write capability and configuration, which means that each series scraped by the local Prometheus is sent to the Amazon Prometheus Service.
Using AWS Distro for OpenTelemetry, with the Prometheus Receiver and the AWS Prometheus Remote Write Exporter components, to integrate with this service.
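For the first option, the remote_write section of the local Prometheus configuration would look roughly like this; the workspace URL is a placeholder, and the sigv4 block assumes a Prometheus version that supports it for AWS request signing:
remote_write:
  - url: https://aps-workspaces.us-east-1.amazonaws.com/workspaces/ws-EXAMPLE/api/v1/write
    sigv4:
      region: us-east-1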
Summary
So this is a way to get an enterprise-grade installation, leveraging all the knowledge AWS has in hosting and managing this solution at scale and in an optimized way in terms of performance. You can focus on the components you need to get the metrics ingested into the service.
I am sure this will not be AWS's last move in observability and metrics management. They will continue to put more tools into developers' and architects' hands to define optimized solutions as easily as possible.
Learn some tricks to analyze and optimize the usage that you are doing of the TSDB and save money on your cloud deployment.
In previous posts, we discussed how the storage layer works for Prometheus and how effective it is. But in the current times of cloud computing, we know that each technical optimization is also a cost optimization, and that is why we need to be very diligent about any optimization option available to us.
We know that when we monitor using Prometheus we have many exporters at our disposal, and each of them exposes a lot of very relevant metrics so we can track everything we need. But we should also be aware that there are metrics we don't need at this moment or don't plan to use. So, if we are not planning to use them, why waste disk space storing them?
So, let's start by taking a look at one of the exporters in our system. In my case, I will use a BusinessWorks Container Application that exposes metrics about its utilization. If you check its metrics endpoint, you will see something like this:
# HELP jvm_info JVM version info
# TYPE jvm_info gauge
jvm_info{version="1.8.0_221-b27",vendor="Oracle Corporation",runtime="Java(TM) SE Runtime Environment",} 1.0
# HELP jvm_memory_bytes_used Used bytes of a given JVM memory area.
# TYPE jvm_memory_bytes_used gauge
jvm_memory_bytes_used{area="heap",} 1.0318492E8
jvm_memory_bytes_used{area="nonheap",} 1.52094712E8
# HELP jvm_memory_bytes_committed Committed (bytes) of a given JVM memory area.
# TYPE jvm_memory_bytes_committed gauge
jvm_memory_bytes_committed{area="heap",} 1.35266304E8
jvm_memory_bytes_committed{area="nonheap",} 1.71302912E8
# HELP jvm_memory_bytes_max Max (bytes) of a given JVM memory area.
# TYPE jvm_memory_bytes_max gauge
jvm_memory_bytes_max{area="heap",} 1.073741824E9
jvm_memory_bytes_max{area="nonheap",} -1.0
# HELP jvm_memory_bytes_init Initial bytes of a given JVM memory area.
# TYPE jvm_memory_bytes_init gauge
jvm_memory_bytes_init{area="heap",} 1.34217728E8
jvm_memory_bytes_init{area="nonheap",} 2555904.0
# HELP jvm_memory_pool_bytes_used Used bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_bytes_used gauge
jvm_memory_pool_bytes_used{pool="Code Cache",} 3.3337536E7
jvm_memory_pool_bytes_used{pool="Metaspace",} 1.04914136E8
jvm_memory_pool_bytes_used{pool="Compressed Class Space",} 1.384304E7
jvm_memory_pool_bytes_used{pool="G1 Eden Space",} 3.3554432E7
jvm_memory_pool_bytes_used{pool="G1 Survivor Space",} 1048576.0
jvm_memory_pool_bytes_used{pool="G1 Old Gen",} 6.8581912E7
# HELP jvm_memory_pool_bytes_committed Committed bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_bytes_committed gauge
jvm_memory_pool_bytes_committed{pool="Code Cache",} 3.3619968E7
jvm_memory_pool_bytes_committed{pool="Metaspace",} 1.19697408E8
jvm_memory_pool_bytes_committed{pool="Compressed Class Space",} 1.7985536E7
jvm_memory_pool_bytes_committed{pool="G1 Eden Space",} 4.6137344E7
jvm_memory_pool_bytes_committed{pool="G1 Survivor Space",} 1048576.0
jvm_memory_pool_bytes_committed{pool="G1 Old Gen",} 8.8080384E7
# HELP jvm_memory_pool_bytes_max Max bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_bytes_max gauge
jvm_memory_pool_bytes_max{pool="Code Cache",} 2.5165824E8
jvm_memory_pool_bytes_max{pool="Metaspace",} -1.0
jvm_memory_pool_bytes_max{pool="Compressed Class Space",} 1.073741824E9
jvm_memory_pool_bytes_max{pool="G1 Eden Space",} -1.0
jvm_memory_pool_bytes_max{pool="G1 Survivor Space",} -1.0
jvm_memory_pool_bytes_max{pool="G1 Old Gen",} 1.073741824E9
# HELP jvm_memory_pool_bytes_init Initial bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_bytes_init gauge
jvm_memory_pool_bytes_init{pool="Code Cache",} 2555904.0
jvm_memory_pool_bytes_init{pool="Metaspace",} 0.0
jvm_memory_pool_bytes_init{pool="Compressed Class Space",} 0.0
jvm_memory_pool_bytes_init{pool="G1 Eden Space",} 7340032.0
jvm_memory_pool_bytes_init{pool="G1 Survivor Space",} 0.0
jvm_memory_pool_bytes_init{pool="G1 Old Gen",} 1.26877696E8
# HELP jvm_buffer_pool_used_bytes Used bytes of a given JVM buffer pool.
# TYPE jvm_buffer_pool_used_bytes gauge
jvm_buffer_pool_used_bytes{pool="direct",} 148590.0
jvm_buffer_pool_used_bytes{pool="mapped",} 0.0
# HELP jvm_buffer_pool_capacity_bytes Bytes capacity of a given JVM buffer pool.
# TYPE jvm_buffer_pool_capacity_bytes gauge
jvm_buffer_pool_capacity_bytes{pool="direct",} 148590.0
jvm_buffer_pool_capacity_bytes{pool="mapped",} 0.0
# HELP jvm_buffer_pool_used_buffers Used buffers of a given JVM buffer pool.
# TYPE jvm_buffer_pool_used_buffers gauge
jvm_buffer_pool_used_buffers{pool="direct",} 19.0
jvm_buffer_pool_used_buffers{pool="mapped",} 0.0
# HELP jvm_classes_loaded The number of classes that are currently loaded in the JVM
# TYPE jvm_classes_loaded gauge
jvm_classes_loaded 16993.0
# HELP jvm_classes_loaded_total The total number of classes that have been loaded since the JVM has started execution
# TYPE jvm_classes_loaded_total counter
jvm_classes_loaded_total 17041.0
# HELP jvm_classes_unloaded_total The total number of classes that have been unloaded since the JVM has started execution
# TYPE jvm_classes_unloaded_total counter
jvm_classes_unloaded_total 48.0
# HELP bwce_activity_stats_list BWCE Activity Statictics list
# TYPE bwce_activity_stats_list gauge
# HELP bwce_activity_counter_list BWCE Activity related Counters list
# TYPE bwce_activity_counter_list gauge
# HELP all_activity_events_count BWCE All Activity Events count by State
# TYPE all_activity_events_count counter
all_activity_events_count{StateName="CANCELLED",} 0.0
all_activity_events_count{StateName="COMPLETED",} 0.0
all_activity_events_count{StateName="STARTED",} 0.0
all_activity_events_count{StateName="FAULTED",} 0.0
# HELP activity_events_count BWCE All Activity Events count by Process, Activity State
# TYPE activity_events_count counter
# HELP activity_total_evaltime_count BWCE Activity EvalTime by Process and Activity
# TYPE activity_total_evaltime_count counter
# HELP activity_total_duration_count BWCE Activity DurationTime by Process and Activity
# TYPE activity_total_duration_count counter
# HELP bwpartner_instance:total_request Total Request for the partner invocation which mapped from the activities
# TYPE bwpartner_instance:total_request counter
# HELP bwpartner_instance:total_duration_ms Total Duration for the partner invocation which mapped from the activities (execution or latency)
# TYPE bwpartner_instance:total_duration_ms counter
# HELP bwce_process_stats BWCE Process Statistics list
# TYPE bwce_process_stats gauge
# HELP bwce_process_counter_list BWCE Process related Counters list
# TYPE bwce_process_counter_list gauge
# HELP all_process_events_count BWCE All Process Events count by State
# TYPE all_process_events_count counter
all_process_events_count{StateName="CANCELLED",} 0.0
all_process_events_count{StateName="COMPLETED",} 0.0
all_process_events_count{StateName="STARTED",} 0.0
all_process_events_count{StateName="FAULTED",} 0.0
# HELP process_events_count BWCE Process Events count by Operation
# TYPE process_events_count counter
# HELP process_duration_seconds_total BWCE Process Events duration by Operation in seconds
# TYPE process_duration_seconds_total counter
# HELP process_duration_milliseconds_total BWCE Process Events duration by Operation in milliseconds
# TYPE process_duration_milliseconds_total counter
# HELP bwdefinitions:partner BWCE Process Events count by Operation
# TYPE bwdefinitions:partner counter
bwdefinitions:partner{ProcessName="t1.module.item.getTransactionData",ActivityName="FTLPublisher",ServiceName="GetCustomer360",OperationName="GetDataOperation",PartnerService="TransactionService",PartnerOperation="GetTransactionsOperation",Location="internal",PartnerMiddleware="MW",} 1.0
bwdefinitions:partner{ProcessName=" t1.module.item.auditProcess",ActivityName="KafkaSendMessage",ServiceName="GetCustomer360",OperationName="GetDataOperation",PartnerService="AuditService",PartnerOperation="AuditOperation",Location="internal",PartnerMiddleware="MW",} 1.0
bwdefinitions:partner{ProcessName="t1.module.item.getCustomerData",ActivityName="JMSRequestReply",ServiceName="GetCustomer360",OperationName="GetDataOperation",PartnerService="CustomerService",PartnerOperation="GetCustomerDetailsOperation",Location="internal",PartnerMiddleware="MW",} 1.0
# HELP bwdefinitions:binding BW Design Time Repository - binding/transport definition
# TYPE bwdefinitions:binding counter
bwdefinitions:binding{ServiceName="GetCustomer360",OperationName="GetDataOperation",ServiceInterface="GetCustomer360:GetDataOperation",Binding="/customer",Transport="HTTP",} 1.0
# HELP bwdefinitions:service BW Design Time Repository - Service definition
# TYPE bwdefinitions:service counter
bwdefinitions:service{ProcessName="t1.module.sub.item.getCustomerData",ServiceName="GetCustomer360",OperationName="GetDataOperation",ServiceInstance="GetCustomer360:GetDataOperation",} 1.0
bwdefinitions:service{ProcessName="t1.module.sub.item.auditProcess",ServiceName="GetCustomer360",OperationName="GetDataOperation",ServiceInstance="GetCustomer360:GetDataOperation",} 1.0
bwdefinitions:service{ProcessName="t1.module.sub.orchestratorSubFlow",ServiceName="GetCustomer360",OperationName="GetDataOperation",ServiceInstance="GetCustomer360:GetDataOperation",} 1.0
bwdefinitions:service{ProcessName="t1.module.Process",ServiceName="GetCustomer360",OperationName="GetDataOperation",ServiceInstance="GetCustomer360:GetDataOperation",} 1.0
# HELP bwdefinitions:gateway BW Design Time Repository - Gateway definition
# TYPE bwdefinitions:gateway counter
bwdefinitions:gateway{ServiceName="GetCustomer360",OperationName="GetDataOperation",ServiceInstance="GetCustomer360:GetDataOperation",Endpoint="bwce-demo-mon-orchestrator-bwce",InteractionType="ISTIO",} 1.0
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 1956.86
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.604712447107E9
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 763.0
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1048576.0
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 3.046207488E9
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 4.2151936E8
# HELP jvm_gc_collection_seconds Time spent in a given JVM garbage collector in seconds.
# TYPE jvm_gc_collection_seconds summary
jvm_gc_collection_seconds_count{gc="G1 Young Generation",} 540.0
jvm_gc_collection_seconds_sum{gc="G1 Young Generation",} 4.754
jvm_gc_collection_seconds_count{gc="G1 Old Generation",} 2.0
jvm_gc_collection_seconds_sum{gc="G1 Old Generation",} 0.563
# HELP jvm_threads_current Current thread count of a JVM
# TYPE jvm_threads_current gauge
jvm_threads_current 98.0
# HELP jvm_threads_daemon Daemon thread count of a JVM
# TYPE jvm_threads_daemon gauge
jvm_threads_daemon 43.0
# HELP jvm_threads_peak Peak thread count of a JVM
# TYPE jvm_threads_peak gauge
jvm_threads_peak 98.0
# HELP jvm_threads_started_total Started thread count of a JVM
# TYPE jvm_threads_started_total counter
jvm_threads_started_total 109.0
# HELP jvm_threads_deadlocked Cycles of JVM-threads that are in deadlock waiting to acquire object monitors or ownable synchronizers
# TYPE jvm_threads_deadlocked gauge
jvm_threads_deadlocked 0.0
# HELP jvm_threads_deadlocked_monitor Cycles of JVM-threads that are in deadlock waiting to acquire object monitors
# TYPE jvm_threads_deadlocked_monitor gauge
jvm_threads_deadlocked_monitor 0.0
As you can see, there are a lot of metrics, but to be honest I am not using most of them in my dashboards or to generate my alerts. I use the metrics regarding the application performance for each of the BusinessWorks processes and their activities, as well as the JVM memory performance and the number of threads, but things like how the JVM GC behaves for each generation (G1 Young Generation, G1 Old Generation) I am not using at all.
If I went back through that same metrics endpoint and highlighted everything I am not using, the picture would be striking.
Roughly 50% of the metrics endpoint response can be data that I am not using, so why am I paying for disk space to store it? And this is just for a "critical exporter", one from which I try to use as much information as possible; think about how many exporters you have and how much of the information from each of them you actually use.
Ok, so now the purpose and motivation of this post are clear, but what can we do about it?
Discovering the REST API
Prometheus has an awesome REST API that exposes all the information you could wish for. If you have ever used the Prometheus graphical interface (shown below), you have used the REST API, because that is what sits behind it.
Target view of the Prometheus Graphical Interface
We have all the documentation regarding the REST API in the Prometheus official documentation:
But what does this API provide us regarding the time-series database (TSDB) that Prometheus uses?
TSDB Admin APIs
We have a specific API to manage the TSDB, but in order to use it, we need to enable the Admin API. That is done by providing the --web.enable-admin-api flag when launching the Prometheus server.
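A minimal sketch of what that launch command looks like; the configuration file path is a placeholder:
prometheus --config.file=prometheus.yml --web.enable-admin-api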
If we are using the Prometheus Operator Helm chart to deploy it, we need to set the following item in our values.yaml:
## EnableAdminAPI enables Prometheus the administrative HTTP API which includes functionality such as deleting time series.
## This is disabled by default.
## ref: https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-admin-apis
##
enableAdminAPI: true
A lot of options become available when we enable this administrative API, but today we are going to focus on a single REST operation: stats. This is the only TSDB-related method that does not require the Admin API to be enabled. This operation, as we can read in the Prometheus documentation, returns the following items:
headStats: This provides the following data about the head block of the TSDB:
numSeries: The number of series.
chunkCount: The number of chunks.
minTime: The current minimum timestamp in milliseconds.
maxTime: The current maximum timestamp in milliseconds.
seriesCountByMetricName: This will provide a list of metrics names and their series count.
labelValueCountByLabelName: This will provide a list of the label names and their value count.
memoryInBytesByLabelName: This will provide a list of the label names and the memory used in bytes. Memory usage is calculated by adding the length of all values for a given label name.
seriesCountByLabelPair: This will provide a list of label-value pairs and their series count.
To access that API, we need to hit the following endpoint:
GET /api/v1/status/tsdb
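For example, with curl, assuming Prometheus is reachable on localhost:9090:
curl -s http://localhost:9090/api/v1/status/tsdb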
When I do that against my Prometheus deployment, I get something similar to this:
We can also check the same information if we use the new and experimental React User Interface on the following endpoint:
/new/tsdb-status
Graphical Visualization of top 10 series count by metric name in the new Prometheus UI
With that, you will get the top 10 series and labels inside your time-series database, so if some of them are not useful, you can just get rid of them using the normal approaches to drop a series or a label. This is great, but what if all the ones shown here are relevant? What can we do about it?
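As a reminder of what "dropping a series" looks like, here is a minimal metric_relabel_configs sketch for a scrape job; the regex matching the unused GC series is just an example:
metric_relabel_configs:
  - source_labels: [__name__]
    regex: 'jvm_gc_collection_seconds.*'
    action: drop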
Hmm, maybe we can use PromQL to monitor this (a dogfooding approach). If we would like to extract the same information using PromQL, we can do it with the following query:
topk(10, count by (__name__)({__name__=~".+"}))
Top 10 of metric series generated and stored in the time series database
And now we have all the power in our hands. For example, let's look not at the 10 most relevant series but at the 100 most relevant, or apply any other filter we need. For example, let's see the metrics related to the JVM that we discussed at the beginning. We will do that with the following PromQL query:
topk(100, count by (__name__)({__name__=~"jvm.+"}))
Top 100 metric series related to JVM metrics
So we can see that we have at least 150 series for metrics that I am not using at all. But let's do even better and look at the same data grouped by job name:
topk(10, count by (job,__name__)({__name__=~".+"}))
Result of checking the top 10 metric series count with the job that is generating them
Prometheus is one of the key systems in today's cloud architectures. It was the second project to graduate from the Cloud Native Computing Foundation (CNCF) after Kubernetes itself, and it is the monitoring solution par excellence for most workloads running on Kubernetes.
If you have already used Prometheus for some time, you know that it relies on a time-series database, so Prometheus storage is one of its key elements. In their own words from the Prometheus official page:
Storage | Prometheus
Every time series is uniquely identified by its metric name and optional key-value pairs called labels; a series is similar to a table in a relational model. Inside each of those series we have samples, which are similar to tuples, and each sample contains a float value and a millisecond-precision timestamp.
Default on-disk approach
By default, Prometheus uses a local-storage approach, storing all those samples on disk. This data is distributed across different files and folders that group different chunks of data.
So, we have folders to create those groups; by default they are two-hour blocks and can contain one or more files depending on the amount of data ingested in that period, as each folder contains all the samples for that specific time window.
Additionally, each folder has metadata files that help locate the metrics inside each of the data files.
A block is only persisted completely once it is finished; before that, the data is kept in memory and a write-ahead log (WAL) technique is used to recover it in case the Prometheus server crashes.
So, at a high level, the directory structure of a Prometheus server's data directory will look something like this:
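A sketch of that layout, based on the Prometheus storage documentation; the block directory names are example ULIDs:
./data
├── 01BKGV7JBM69T2G1BGBGM6KB12
│   └── meta.json
├── 01BKGTZQ1SYQJTR4PB43C8PD98
│   ├── chunks
│   │   └── 000001
│   ├── tombstones
│   ├── index
│   └── meta.json
├── chunks_head
│   └── 000001
└── wal
    ├── 000000002
    └── checkpoint.00000001
        └── 00000000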
Remote Storage Integration
Default on-disk storage is good, but it has some limitations in terms of scalability and durability, even considering the performance improvements in the latest versions of the TSDB. So, if we'd like to explore other options to store this data, Prometheus provides a way to integrate with remote storage locations.
It provides an API that allows writing the samples that are being ingested to a remote URL and, at the same time, reading sample data back from a remote URL, as shown in the picture below:
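In the Prometheus configuration, that integration boils down to the remote_write and remote_read sections; the endpoint URLs below are placeholders:
remote_write:
  - url: "https://remote-storage.example.com/api/v1/write"
remote_read:
  - url: "https://remote-storage.example.com/api/v1/read"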
As always with anything related to Prometheus, the number of adapters created using this pattern is huge, and they can be seen in detail at the following link:
Integrations | Prometheus
Summary
Knowing how Prometheus storage works is critical to understanding how we can optimize its usage to improve the performance of our monitoring solution and provide a cost-efficient deployment.
In the following posts, we will cover how to optimize the usage of this storage layer, making sure that only the metrics and samples that are important to us are stored, and also how to analyze which metrics use most of the time-series database, so we can make good decisions about which metrics should be dropped and which ones should be kept.
Find a way to redefine and reorganize the names of your Prometheus metrics to meet your requirements.
Prometheus has become the new standard when we talk about monitoring our modern application architectures, and we need to make sure we know all about its options so we can get the best out of it. I had been using it for some time until I came across a feature I was desperate to learn how to use, but I couldn't find it clearly described anywhere. As I didn't find it easily, I thought about writing a small article to show you how to do it without needing to spend the same time I did.
We have plenty of information about how to configure Prometheus and how to use some of the usual configuration options, as we can see on its official webpage [1]. I have already written about its configuration and about using it for several purposes, as you can see in other posts [2][3][4].
One of these configuration options is relabeling, and it is a great thing. Each exporter can have its own labels with its own meaning, and when you try to manage different technologies or components it becomes complex to make all of them match, even if all of them follow Prometheus's naming conventions [5].
But I had this situation, and I'm sure you have faced or will face it as well: I have similar metrics for different technologies that, for me, are the same thing, and I need to keep them under the same name, but because they belong to different technologies, they are not. So I needed to find a way to rename the metric, and the great thing is that you can do exactly that.
To do that, you just need a metric_relabel_configs entry. This configuration relabels (as the name already indicates) the labels of your Prometheus metrics before they are ingested, but it also allows us to use some special labels to do different things, and one of them is __name__. __name__ is a special label that lets you rename your Prometheus metrics before they are ingested into the Prometheus time-series database. After that point, the metric behaves as if it had carried that name from the beginning.
Using it is relatively easy; it works like any other relabel process, and I'd like to show you a sample of how to do it.
- source_labels: [__name__]
regex: 'jvm_threads_current'
target_label: __name__
replacement: 'process_thread_count'
This is a simple sample showing how we can rename the metric jvm_threads_current, which counts the threads inside the JVM, to something more generic, process_thread_count, so we can include the thread count of any process and use the new name as if it had been the original one.
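To place it in context, this is roughly where that snippet lives inside a scrape job; the job name and target are hypothetical:
scrape_configs:
  - job_name: 'bwce-metrics'
    static_configs:
      - targets: ['localhost:9095']
    metric_relabel_configs:
      - source_labels: [__name__]
        regex: 'jvm_threads_current'
        target_label: __name__
        replacement: 'process_thread_count'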
In previous posts, I've explained how to integrate TIBCO BusinessWorks 6.x / BusinessWorks Container Edition (BWCE) applications with Prometheus, one of the most popular monitoring systems for the cloud layer. Prometheus is one of the most widely used solutions to monitor your microservices inside a Kubernetes cluster. In this post, I will explain the steps to leverage Prometheus for applications running on TIBCO Cloud Integration (TCI).
TCI is TIBCO's iPaaS, and it primarily hides the application management complexity of an app from users. You only need your packaged application (a.k.a. EAR) and manifest.json, both generated by the product, to deploy the application.
Isn’t it magical? Yes, it is! As explained in my previous post related to Prometheus integration with BWCE, which allows you to customize your base images, TCI allows integration with Prometheus in a slightly different manner. Let’s walk through the steps.
TCI has its own embedded monitoring tools (shown below) to provide insights into Memory and CPU utilization, plus network throughput, which is very useful.
While the monitoring metrics provided out-of-the-box by TCI are sufficient for most scenarios, there are hybrid connectivity use-cases (application running on-prem and microservices running on your own cluster that could be on a private or public cloud) that might require a unified single-pane view of monitoring.
Import the Prometheus plugin by choosing the Import → Plug-ins and Fragments option and specifying the directory downloaded from the above-mentioned GitHub location (shown below).
Step two involves adding the Prometheus module previously imported to the specific application as shown below:
Step three is just to build the EAR file along with manifest.json.
NOTE: If the EAR doesn’t get generated once you add the Prometheus plugin, please follow the below steps:
Export the project with the Prometheus module to a zip file.
Remove the Prometheus project from the workspace.
Import the project from the zip file generated before.
Before you deploy the BW application on TCI, we need to enable an additional port on TCI to scrape the Prometheus metrics.
Step four is updating the manifest.json file.
By default, a TCI app using the manifest.json file only exposes one port to be consumed from outside (related to functional services) and another to be used internally for health checks.
For Prometheus integration with TCI, we need an additional port listening on 9095, so Prometheus server can access the metrics endpoints to scrape the required metrics for our TCI application.
We need to slightly modify the generated manifest.json file (of the BW app) to expose an additional port, 9095 (shown below).
Also, to tell TCI that we want to enable the Prometheus endpoint, we need to set a property in the manifest.json file. The property is TCI_BW_CONFIG_OVERRIDES, with the value BW_PROMETHEUS_ENABLE=true, as shown below:
We also need to add an additional line (propertyPrefix) in the manifest.json file as shown below.
Now we are ready to deploy the BW app on TCI, and once it is deployed we can see there are two endpoints.
If we expand the Endpoints options on the right (shown above), you can see that one of them is named “prometheus” and that’s our Prometheus metrics endpoint:
Just copy the prometheus URL and append it with /metrics (URL in the below snapshot) — this will display the Prometheus metrics for the specific BW app deployed on TCI.
Note: appending /metrics is not compulsory; the as-is URL for the Prometheus endpoint will also work.
In the list you will find the following kinds of metrics, which let you create incredible dashboards and analyses based on that information:
JVM metrics around memory used, GC performance and thread pools counts
CPU usage by the application
Process and Activity execution counts by Status (Started, Completed, Failed, Scheduled..)
Duration by Activity and Process.
With all this information available, you can create dashboards similar to the one shown below, in this case using Spotfire as the dashboard tool:
But you can also integrate those metrics with Grafana or any other tool that could read data from Prometheus time-series database.
Prometheus Monitoring for Microservices using TIBCO
In that post, we described that there were several ways to tell Prometheus about the services that are ready to be monitored, and we chose the simplest one at that moment, the static_config configuration, which essentially means:
Don’t worry Prometheus, I’ll let you know the IP you need to monitor and you don’t need to worry about anything else.
And this is useful for a quick test in a local environment, when you want to quickly test your Prometheus setup or you want to work on the Grafana side to design the best possible dashboard for your needs.
But this is not very useful for a real production environment, even more so when we're talking about a Kubernetes cluster where services go up and down continuously over time. To solve this, Prometheus allows us to define different ways to perform this "service discovery". In the official Prometheus documentation we can read a lot about the different service discovery mechanisms, but at a high level these are the main ones available:
Configuration | Prometheus
azure_sd_configs: Azure Service Discovery
consul_sd_configs: Consul Service Discovery
dns_sd_configs: DNS Service Discovery
ec2_sd_configs: EC2 Service Discovery
openstack_sd_configs: OpenStack Service Discovery
file_sd_configs: File Service Discovery
gce_sd_configs: GCE Service Discovery
kubernetes_sd_configs: Kubernetes Service Discovery
marathon_sd_configs: Marathon Service Discovery
nerve_sd_configs: AirBnB’s Nerve Service Discovery
serverset_sd_configs: Zookeeper Serverset Service Discovery
triton_sd_configs: Triton Service Discovery
static_config: Static IP/DNS for the configuration. No Service Discovery.
And if all these options are not enough for you and you need something more specific, there is an API available to extend Prometheus's capabilities and create your own service discovery mechanism. You can find more info about it here:
Implementing Custom Service Discovery | Prometheus
But this is not our case; for us, Kubernetes service discovery is the right choice. So, we're going to change the static configuration we had in the previous post into something like the sketch below:
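A minimal sketch of what that scrape job could look like; the keep/replace relabel rules here are illustrative assumptions, not the exact configuration from the original post:
scrape_configs:
  - job_name: 'bwce-metrics'
    kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
            - default
    relabel_configs:
      # keep only targets whose container port name looks like a Prometheus port
      - source_labels: [__meta_kubernetes_pod_container_port_name]
        regex: '.*prom.*'
        action: keep
      # copy the pod name into a friendlier label
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod
        action: replace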
As you can see, this is quite a bit more complex than the previous configuration, but it is not as complex as it might seem at first glance. Let's review it part by part.
- role: endpoints
namespaces:
names:
- default
It says that we're going to use the endpoints role for objects created under the default namespace, and then we specify the changes we need to make so Prometheus can find the metrics endpoints.
That means we want to perform a replace action on a label value, and we can do several things:
Rename the label, using target_label to set the name of the final label that we're going to create based on the source_labels.
Replace the value, using the regex parameter to define a regular expression over the original value and the replacement parameter to express the changes we want to apply to that value.
So now, after applying this configuration, when we deploy a new application in our Kubernetes cluster, like the project we can see here:
we will automatically see an additional target in our "bwce-metrics" job configuration.