Find a way to redefine and reorganize the names of your Prometheus metrics to meet your requirements
Prometheus has become the new standard when we're talking about monitoring our modern application architectures, and we need to know all about its options to get the best out of it. I had been using it for some time when I realized there was a feature I was desperate to know how to use, but I couldn't find it clearly defined anywhere. Since I didn't find it easily, I thought about writing a small article to show you how to do it without needing to spend the same time I did.
We have plenty of information about how to configure Prometheus and use some of the usual configuration plugins, as we can see on its official webpage [1]. I have even already written about configuring it and using it for several purposes, as you can see in other posts [2][3][4].
One of these configuration plugins is about relabeling, and this is a great thing. Each of the exporters can have its own labels with its own meanings, and when you try to manage different technologies or components it becomes complex to make all of them match together, even if all of them follow the naming convention that Prometheus has [5].
But I had this situation, and I'm sure you have gone or will go through it as well: I have similar metrics for different technologies that, for me, are the same, and I need to keep them under the same name, but as they belong to different technologies, they are not. So I needed to find a way to rename the metric, and the great thing is that you can do that.
To do that, you just need a metric_relabel_configs configuration. This configuration, as the name already indicates, relabels the labels of your Prometheus metrics, in this case before they are ingested, but it also allows us to use some notable terms to do different things. One of these notable terms is __name__: a special label that enables you to rename your Prometheus metrics before they are ingested into the Prometheus time-series database. After that point, it will be as if the metric had had that name since the beginning.
Using it is relatively easy, as it works like any other relabel process, and I'd like to show you a sample of how to do it:
- source_labels: [__name__]
  regex: 'jvm_threads_current'
  target_label: __name__
  replacement: 'process_thread_count'
Here is a simple sample showing how we can rename a metric named jvm_threads_current, which counts the threads inside the JVM, to something more generic that can also cover the threads of any process: a process_thread_count Prometheus metric that we can now use as if it were the original name.
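Just to put it in context, here is a sketch of how that rule sits inside a scrape job (the job name and target below are placeholders, not something you need to copy literally):

scrape_configs:
  - job_name: 'jvm-app'
    static_configs:
      - targets: ['localhost:9464']   # placeholder target
    metric_relabel_configs:
      - source_labels: [__name__]
        regex: 'jvm_threads_current'
        target_label: __name__
        replacement: 'process_thread_count'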
Apache Kafka seems to be the standard solution in today's architectures, but we should consider whether it is the right choice for our needs.
Nowadays, we're in a new age of Event-Driven Architecture, and this is not the first time we've lived through it. Before microservices and cloud, EDA was the new normal in enterprise integration. Based on different kinds of standards, there were protocols like JMS or AMQP used in broker-based products like TIBCO EMS, ActiveMQ, or IBM WebSphere MQ, so this approach is not something new.
With the rise of microservices architectures and the API-led approach, it seemed that we had forgotten about the importance of messaging systems, and we had to go through the same challenges we saw in the past to come to a new messaging solution to solve that problem. So, we're coming back to EDA and the pub-sub mechanism to help us decouple consumers and producers, moving from orchestration to choreography, and all these concepts fit better in today's world with more and more independent components that need cooperation and integration.
During this effort, we started to look at new technologies to help us implement that again. Still, with the new reality, we forgot about the heavy protocols and standards like JMS and started to think about other options. And we need to admit that we felt there was a new king in this area, one of those critical components that seem to be everywhere in today's architectures: Apache Kafka.
And don't get me wrong. Apache Kafka is fantastic, and it has been proven for a long time: a production-ready solution, performant, with impressive capabilities for replay and a powerful API to ease integration. But Apache Kafka has some challenges in this cloud-native world because it doesn't play so well with some of its rules.
If you have used Apache Kafka for some time, you are aware that there are particular challenges with it. Apache Kafka has an architecture that comes from its LinkedIn days in 2011, when Kubernetes or even Docker and container technologies were not a thing, and that makes running Apache Kafka (a purely stateful service) in a container fashion quite complicated. There are improvements using Helm charts and operators to ease the journey, but it still doesn't feel like the pieces integrate well in that fashion. Another thing is geo-replication, which, even with components like MirrorMaker, is not something that is widely used, works smoothly, and feels integrated.
Other technologies are trying to provide a solution for those capabilities, and one of them is another Apache Foundation project, donated by Yahoo!, named Apache Pulsar.
Don’t get me wrong; this is not about finding a new truth, that single messaging solution that is perfect for today’s architectures: it doesn’t exist. In today’s world, with so many different requirements and variables for the different kinds of applications, one size fits all is no longer true. So you should stop thinking about which messaging solution is the best one, and think more about which one serves your architecture best and fulfills both technical and business requirements.
We have covered different ways for general communication, with several specific solutions for synchronous communication (service mesh technologies and protocols like REST, GraphQL, or gRPC) and different ones for asynchronous communication. We need to go deeper into the asynchronous communication to find what works best for you. But first, let’s speak a little bit more about Apache Pulsar.
Apache Pulsar
Apache Pulsar, as mentioned above, was developed internally by Yahoo! and donated to the Apache Foundation. As stated on its official website, there are several key points to mention as we start exploring this option:
Pulsar Functions: Easily deploy lightweight compute logic using developer-friendly APIs without needing to run your stream processing engine
Proven in production: Apache Pulsar has run in production at Yahoo scale for over three years, with millions of messages per second across millions of topics
Low latency with durability: Designed for low publish latency (< 5ms) at scale with strong durability guarantees
Geo-replication: Designed for configurable replication between data centers across multiple geographic regions
Multi-tenancy: Built from the ground up as a multi-tenant system. Supports Isolation, Authentication, Authorization, and Quotas
Persistent storage: Persistent message storage based on Apache BookKeeper. Provides IO-level isolation between write and read operations
Client libraries: Flexible messaging models with high-level APIs for Java, C++, Python and GO
Operability: REST Admin API for provisioning, administration, tools, and monitoring. Deploy on bare metal or Kubernetes.
So, as we can see, in its design Apache Pulsar addresses some of the main weaknesses of Apache Kafka, such as geo-replication and the cloud-native approach.
Apache Pulsar supports the pub/sub pattern, but it also provides so many capabilities that it can act as a traditional queue messaging system as well, with its concept of exclusive subscriptions, where only one of the subscribers will receive the message. It also provides interesting concepts and features used in other messaging systems:
Dead Letter Topics: For messages that were not able to be processed by the consumer.
Persistent and Non-Persistent Topics: To decide whether or not you want to persist your messages while they are in transit.
Namespaces: To have a logical distribution of your topics, so applications can be grouped in namespaces, as we do, for example, in Kubernetes, and we can isolate some applications from others.
Failover: Similar to exclusive, but when the attached consumer fails to process, another one takes over processing the messages.
Shared: Provides a round-robin approach similar to a traditional queue messaging system, where all the subscribers are attached to the topic but only one receives each message, distributing the load among all of them.
Multi-topic subscriptions: To be able to subscribe to several topics using a regexp (similar to the Subject approach from TIBCO Rendezvous in the 90s, for example), which has been so powerful and popular.
But also, if you require features from Apache Kafka, you will still have similar concepts such as partitioned topics, key-shared subscriptions, and so on. So you have everything at hand to choose which kind of configuration works best for you and your specific use cases, and you also have the option to mix and match.
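As a quick sketch of how that choice looks with the Go client (client creation omitted, and the topic and subscription names are just placeholders), the pattern is selected with the subscription Type:

consumer, err := client.Subscribe(pulsar.ConsumerOptions{
    Topic:            "orders",
    SubscriptionName: "billing",
    // Exclusive, Failover, Shared or KeyShared select the messaging pattern
    Type: pulsar.Failover,
})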
Apache Pulsar Architecture
Apache Pulsar Architecture is similar to other comparable messaging systems today. As you can see in the picture below from the Apache Pulsar website, those are the main components of the architecture:
Brokers: One or more brokers handle incoming messages from producers and dispatch messages to consumers
So you can see this architecture is again quite similar to the Apache Kafka one, with the addition of a new concept: the BookKeeper cluster.
Brokers in Apache Pulsar are stateless components that mainly run two pieces:
HTTP Server that exposes a REST API for management and is used by consumers and producers for topic lookup.
TCP Server using a binary protocol called the dispatcher, used for all the data transfers. Usually, messages are dispatched out of a managed ledger cache for performance purposes, but if this cache grows too big, the broker will interact with the BookKeeper cluster for persistence.
To support the Global Replication (Geo-Replication), the Brokers manage replicators that tail the entries published in the local region and republish them to the remote regions.
The Apache BookKeeper cluster is used as persistent message storage. Apache BookKeeper is a distributed write-ahead log (WAL) system that manages when messages should be persisted. It also supports horizontal scaling based on load, with multi-log support. Not only the messages are persisted but also the cursors, which track each consumer's position on a specific topic (similar to the offset in Apache Kafka terminology).
Finally, the ZooKeeper cluster is used in the same role as in Apache Kafka: as the metadata configuration storage for the whole system.
Hello World using Apache Pulsar
Let's see how we can create a quick "Hello World" case using Apache Pulsar as the protocol, and to do that, we're going to try to implement it in a cloud-native fashion. So we will spin up a single-node Apache Pulsar cluster and deploy a producer application using Flogo technology and a consumer application using Go. Something similar to what you can see in the diagram below:
Diagram about the test case we’re doing
And we're going to try to keep it simple, so we will just use pure Docker this time. So, first of all, let's spin up the Apache Pulsar server, and to do that we will use the following command:
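Something like this should do it, assuming the official apachepulsar image in standalone mode (the image tag is an assumption):

docker run -it -p 6650:6650 -p 8080:8080 apachepulsar/pulsar:latest bin/pulsar standalone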
Now, we need to create simple applications, and for that, Flogo and Go will be used.
Let’s start with the producer, and in this case, we will use the open-source version to create a quick application.
First of all, we will just use the Web UI (dockerized) to do that. Run the command:
docker run -it -p 3303:3303 flogo/flogo-docker eula-accept
And we install a new contribution to enable the Pulsar publisher activity. To do that we will click on the “Install new contribution” button and provide the following URL:
// Receive messages from the channel. The channel returns a struct which contains the message and the consumer
// it was received from. It's not necessary here since we have a single consumer, but the channel could be
// shared across multiple consumers as well.
for cm := range channel {
	msg := cm.Message
	fmt.Printf("Received message msgId: %v -- content: '%s'\n",
		msg.ID(), string(msg.Payload()))
	consumer.Ack(msg)
}
}
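For reference, a complete, minimal version of that consumer could look like the sketch below; the broker URL, topic, and subscription name are assumptions:

package main

import (
	"fmt"
	"log"

	"github.com/apache/pulsar-client-go/pulsar"
)

func main() {
	// Connect to the standalone broker started earlier.
	client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// All messages for this consumer are delivered through this channel.
	channel := make(chan pulsar.ConsumerMessage, 100)
	consumer, err := client.Subscribe(pulsar.ConsumerOptions{
		Topic:            "hello-world",
		SubscriptionName: "go-consumer",
		MessageChannel:   channel,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer consumer.Close()

	for cm := range channel {
		msg := cm.Message
		fmt.Printf("Received message msgId: %v -- content: '%s'\n", msg.ID(), string(msg.Payload()))
		consumer.Ack(msg)
	}
}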
And after running both programs, you can see the following output. As you can see, we were able to make both applications communicate in an effortless flow.
This article is just a starting point, and we will continue talking about how to use Apache Pulsar in your architectures. If you want to take a look at the code we’ve used in this sample, you can find it here:
Serverless is already here. We know that. Even companies that are just starting their cloud journey by moving to container-based platforms already know that serverless is in their future, at least for some specific use cases.
Its simplicity, without needing to worry about anything related to the platform so you can just focus on the code, along with the economics that come with it, makes this computing model a game-changer.
The mainstream platform for this approach at this moment is AWS Lambda from Amazon. We have read and heard a lot about AWS Lambda, but all the platforms are going down that path: Google with Google Cloud Functions, Microsoft with Azure Functions. This is not just a thing for public cloud providers; if you're running a private cloud approach, you can also use this deployment mode with frameworks like Knative or OpenFaaS. If you need more details, we have some articles about this topic:
Deploying Flogo App on OpenFaaS
OpenFaaS is an alternative to enable the serverless approach in your infrastructure when you’re not running in the public cloud and you don’t have available those other options like AWS Lambda Functions or Azure Functions or even in public cloud, you’d like the features and customizations option it provides. OpenFaaS® (Functions as a Service) is […]
All of these platforms are updating and enhancing their options to execute any kind of code you want on them, and this is what Azure Functions announced some weeks ago: the release of a Custom Handler approach to support any language you could think of:
Azure Functions custom handlers
Learn to use Azure Functions with any language or runtime version.
And we're going to test that by doing what we like best: using Flogo.
The first thing we need to do is to create our Flogo Application. I’m going to use Flogo Enterprise to do that, but you can do the same using the Open Source version as well:
We're going to create a simple application, just a Hello World REST service, as you can see here:
You can find the JSON file of the Flogo application in the GitHub repository shown below:
GitHub - alexandrev/flogo-azure-function: Sample of Flogo Application running as a Azure Function
Sample of a Flogo Application running as an Azure Function.
But the main configurations are the following ones:
Server will listen on the port specified by the environment variable FUNCTIONS_HTTPWORKER_PORT
Server will listen for a GET request on the URI /hello-world
Response will be quite simple “Invoking a Flogo App inside Azure Functions”
Now that we have the JSON file, we only need to start creating the artifacts needed to run an Azure Function.
We will need to install the npm package for the Azure Functions. To do that we need to run the following command:
npm i -g azure-functions-core-tools@3 --unsafe-perm true
It is important to have the "@3" because that way we reference the version that includes this new Custom Handler logic.
We will create a new folder called flogo-func and run the following command inside that folder:
func init --docker
Now we should select node as the runtime to be used and javascript as the language. That could seem strange because we are going to use neither node, nor dotnet, nor python, nor powershell. But we select that to keep it simple, as we just want to focus on the Docker approach.
After that we just need to create a new function, and to do that we need to type the following command:
func new
In the CLI-based interface that the Azure Function Core Tools shows us, we just need to select HTTP trigger and provide a name.
In our case, we will use hello-world as the name to keep it similar to what we defined as part of our Flogo Application. We will end up with the following folder structure:
Now we need to open the folder that has been created, and we need to do several things:
First of all, we need to remove the index.js file, because this will not be a Node.js function and we don't need that file.
We need to copy the HelloWorld.json (our Flogo application) to the root folder of the function.
We need to change the host.json file to the following content:
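A sketch of what that content can look like for the HTTP-worker-based custom handler (it points at the engine executable we generate in the next step; treat the exact schema as an assumption depending on your Functions runtime version):

{
  "version": "2.0",
  "httpWorker": {
    "description": {
      "defaultExecutablePath": "engine-windows-amd64.exe"
    }
  }
}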
Now we need to generate the engine-windows-amd64.exe. To do that, we need to go to FLOGO_HOME on our machine, go to the folder FLOGO_HOME/flogo/<VERSION>/bin, and launch the following command:
./builder-windows-amd64.exe build
And you should get the engine-windows-amd64.exe as the output, as you can see in the picture below:
Now, you just need to copy that file inside your function's folder, and you should end up with the following folder structure:
And once you have that, you just need to run your function to be able to test it locally:
func start
After running that command you should see an output similar to the one shown below:
I'd just like to highlight the startup time for our Flogo application: around 15 milliseconds! Now, you only need to test it using any browser and go to the following URL:
http://localhost:7071/api/hello-world
This has been just the first step of our journey, but these were the steps needed to run our Flogo application as part of a serverless environment hosted on the Microsoft Azure platform!
We all know that with the rise of cloud-native development and architectures, we've seen Kubernetes-based platforms become the new standard, all focusing on new developments following the new paradigms and best practices: microservices, Event-Driven Architectures, new shiny protocols like GraphQL or gRPC, and so on.
This article is part of my comprehensive TIBCO Integration Platform Guide where you can find more patterns and best practices for TIBCO integration platforms.
OpenFaaS is an alternative to enable the serverless approach in your infrastructure when you're not running in the public cloud and don't have options like AWS Lambda or Azure Functions available, or even in the public cloud, when you'd like the features and customization options it provides.
OpenFaaS® (Functions as a Service) is a framework for building serverless functions with Docker and Kubernetes which has first-class support for metrics. Any process can be packaged as a function enabling you to consume a range of web events without repetitive boiler-plate coding.
GitHub - openfaas/faas: OpenFaaS - Serverless Functions Made Simple
OpenFaaS - Serverless Functions Made Simple. Contribute to openfaas/faas development by creating an account on GitHub.
There is good content on Medium about OpenFaaS, so I won't spend too much time on this, but I'd like to leave you some links here for reference:
Running serverless functions on premises using OpenFaas with Kubernetes
Earlier most of companies had monolithic architecture for their cloud applications (all the things in a single package). Nowadays companies…
An Introduction to Serverless DevOps with OpenFaaS | HackerNoon
DevOps isn’t about just doing CI/CD. But a CI/CD pipeline has an important role inside DevOps. I’ve been investing my time on recently and as I started creating multiple functions, I wanted an easy to use and accessible development and delivery flow, in other words a CI/CD pipeline. OpenFaaS One day…
We already have a lot of info about how to run a Flogo Application as a Lambda Function as you can see here:
https://www.youtube.com/watch?v=TysuwbXODQI
But… what about OpenFaaS? Can we run our Flogo application inside OpenFaaS? Sure! Let me explain how.
OpenFaaS is a very customizable framework to build zero-scale functions, and it can take some time to get familiar with its concepts. Everything is based on watchdogs: the components that listen for requests and are responsible for launching the forks that handle them:
We're going to use the new watchdog named of-watchdog, which is expected to become the default one in the future; all the info is here:
GitHub - openfaas/of-watchdog: Reverse proxy for STDIO and HTTP microservices
Reverse proxy for STDIO and HTTP microservices. Contribute to openfaas/of-watchdog development by creating an account on GitHub.
This watchdog provides several modes. One of them is named HTTP, and it is the default one; it is based on forwarding HTTP requests to the internal server running in the container. That fits perfectly with our Flogo application and means that the only thing we need is an HTTP Receive Request trigger in our Flogo application, and that's it.
The only things you need to configure are the method (POST) and the path (/) to be able to handle the requests. In our case, we're going to do a simple Hello World app, as you can see here:
To be able to run this application we need several things; let's go through them here:
First of all, we need to install the OpenFaaS environment. I'm going to skip the details of this process and just point you to the detailed tutorial about it:
Deploy OpenFaaS on Amazon EKS | Amazon Web Services
We’ve talked about FaaS (Functions as a Service) in Running FaaS on a Kubernetes Cluster on AWS Using Kubeless by Sebastien Goasguen. In this post, Alex Ellis, founder of the OpenFaaS project, walks you through how to use OpenFaaS on Amazon EKS. OpenFaaS is one of the most popular tools in the FaaS…
Now we need to create our template, and to do that, we are going to use a Dockerfile template. To create it, we're going to execute:
faas-cli new --lang dockerfile
We’re going to name the function flogo-test. And now we’re going to update the Dockerfile to be like this:
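A sketch of what such a Dockerfile can look like (the of-watchdog tag, the binary and JSON file names, and the internal port are assumptions):

FROM openfaas/of-watchdog:0.7.7 as watchdog

FROM alpine:3.11
COPY --from=watchdog /fwatchdog /usr/bin/fwatchdog

# Flogo engine binary and application JSON (names are placeholders)
COPY flogo-test /usr/bin/flogo-test
COPY flogo-test.json /flogo-test.json

# of-watchdog settings: forward HTTP requests to the internal Flogo server
ENV mode="http"
ENV upstream_url="http://127.0.0.1:9999"
ENV fprocess="/usr/bin/flogo-test"

CMD ["fwatchdog"]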
Most of this content is common for any other template using the new of-watchdog and the HTTP mode.
I’d like to highlight the following things:
We use several environment variables to define the behavior:
mode = HTTP, to define that this is the mode we're going to use
upstream_url = URL that we are going to forward the request to
fprocess = OS command that we need to execute, in our case means to run the Flogo App.
Other things are the same as what you would do to run Flogo apps in Docker:
Add the engine executable for your platform (Linux in most cases, as the base image is almost always Linux-based)
Add the JSON file of the application that you want to use.
We also need to change the yml file to look like this:
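A sketch of the resulting stack file (the gateway address and image name are placeholders):

provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  flogo-test:
    lang: dockerfile
    handler: ./flogo-test
    image: <your-registry>/flogo-test:latest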
Usually, when you're developing or running your container application, you will get to a moment when something goes wrong. But not in a way you can solve with your logging system and with testing.
A moment when there is some bottleneck, something that is not performing as well as you want, and you’d like to take a look inside. And that’s what we’re going to do. We’re going to watch inside.
BusinessWorks Container Edition provides such great features to do this that you need to use them in your favor, and you're going to thank me for the rest of your life. So, I don't want to spend one more minute on this; I'd like to start telling you right now.
The first thing we need to do is go inside the OSGi console of the container. So, we start by exposing port 8090, as you can see in the picture below.
Now, we can expose that port to our host using the port-forward command.
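For example, with kubectl (the pod name is a placeholder):

kubectl port-forward <bwce-pod-name> 8090:8090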
And as you can see, it says that statistics have been enabled for the echo application, so using that application name we're going to gather the statistics at the level we need.
And you can see the statistics at the process level, where the following metrics are available:
Process metadata (name, parent process and version)
Total instances by status (created, suspended, failed, and executed)
Execution time (total, average, min, max, most recent)
Elapsed time (total, average, min, max, most recent)
And we can get the statistics at the activity level:
And with that, you can detect any bottleneck you're facing in your application and also determine which activity or process is responsible for it, so you can solve it quickly.
Prometheus Monitoring for Microservices using TIBCO
We’re living a world with constant changes and this is even more true in the Enterprise Application world. I’ll not spend much time talking about things you already know, but just say that the microservices architecture approach and the PaaS solutions have been a game-changer for all enterprise integration technologies. This time I’d like to […]
In that post, we described that there were several ways to tell Prometheus about the services that are ready to monitor. And we chose the simplest one at that moment, the static_config configuration, which means:
Don’t worry Prometheus, I’ll let you know the IP you need to monitor and you don’t need to worry about anything else.
And this is useful for a quick test in a local environment, when you want to quickly test your Prometheus setup or you want to work on the Grafana part to design the best possible dashboard for your needs.
But this is not very useful for a real production environment, even more so when we're talking about a Kubernetes cluster where services are going up and down continuously. So, to solve this situation, Prometheus allows us to define different ways to perform this "service discovery" approach. In the official Prometheus documentation, we can read a lot about the different service discovery techniques, but at a high level these are the main ones available:
Configuration | Prometheus
An open-source monitoring system with a dimensional data model, flexible query language, efficient time series database and modern alerting approach.
azure_sd_configs: Azure Service Discovery
consul_sd_configs: Consul Service Discovery
dns_sd_configs: DNS Service Discovery
ec2_sd_configs: EC2 Service Discovery
openstack_sd_configs: OpenStack Service Discovery
file_sd_configs: File Service Discovery
gce_sd_configs: GCE Service Discovery
kubernetes_sd_configs: Kubernetes Service Discovery
marathon_sd_configs: Marathon Service Discovery
nerve_sd_configs: AirBnB’s Nerve Service Discovery
serverset_sd_configs: Zookeeper Serverset Service Discovery
triton_sd_configs: Triton Service Discovery
static_config: Static IP/DNS for the configuration. No Service Discovery.
And if all these options are not enough for you and you need something more specific, you have an API available to extend Prometheus's capabilities and create your own service discovery technique. You can find more info about it here:
Implementing Custom Service Discovery | Prometheus
An open-source monitoring system with a dimensional data model, flexible query language, efficient time series database and modern alerting approach.
But this is not our case; for us, the Kubernetes Service Discovery is the right choice. So, we're going to change the static configuration we had in the previous post:
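A sketch of what the new job can look like follows; the keep rule shown is just one plausible filter, not necessarily the exact rules of the original configuration:

- job_name: 'bwce-metrics'
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - default
  relabel_configs:
  - source_labels: [__meta_kubernetes_endpoint_port_name]
    action: keep
    regex: metrics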
As you can see, this is quite a bit more complex than the previous configuration, but it is not as complex as it might seem at first glance. Let's review it part by part.
- role: endpoints
  namespaces:
    names:
    - default
It says that we're going to use the endpoints role for endpoints created under the default namespace, and we're going to specify the changes we need to make so that Prometheus can find the metrics endpoints.
That means that we want to do a replace of the label value, and we can do several things:
Rename the label name using the target_label to set the name of the final label that we’re going to create based on the source_labels.
Replace the value using the regex parameter to define the regular expression for the original value and the replacement parameter that is going to express the changes that we want to do to this value.
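For example, a replace rule of this shape copies the namespace discovered by Kubernetes into a friendlier label (a sketch using one of the standard __meta_kubernetes_* labels):

- source_labels: [__meta_kubernetes_namespace]
  action: replace
  target_label: kubernetes_namespace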
So now, after applying this configuration, when we deploy a new application in our Kubernetes cluster, like the project we can see here:
We're automatically going to see an additional target in our "bwce-metrics" job configuration.
Flogo Enterprise is a great platform to build your microservices, and out of the box you're going to reach incredible performance numbers.
This article is part of my comprehensive TIBCO Integration Platform Guide where you can find more patterns and best practices for TIBCO integration platforms.
But even so, we're working in a world where every millisecond counts and every MB of memory counts, so it is important to know the tools we have at hand to tune our Flogo Enterprise application at a finer-grained level.
As you already know, Flogo is built on top of the Go programming language, so we are going to differentiate between the parameters that belong to the language itself and other parameters that are Flogo-specific.
All these parameters have to be defined as environment variables, so the way to apply them depends on how you set environment variables on your target platform (Windows, Linux, OSX, Docker, etc.).
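For example, in a plain shell or in Docker it could look like this (the values and the image name are illustrative):

# Plain shell (Linux/macOS)
export FLOGO_LOG_LEVEL=ERROR
export FLOGO_RUNNER_WORKERS=20
./my-flogo-app

# Docker
docker run -e FLOGO_LOG_LEVEL=ERROR -e FLOGO_RUNNER_WORKERS=20 my-flogo-app:latest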
Flogo OSS Specific Parameters
You can check all the parameters and their default values in the Flogo documentation:
Performance Related Settings
FLOGO_LOG_LEVEL: Allows you to set, at startup, the log level you want to use for the application execution. The default value is "INFO"; it can be increased to "DEBUG" for additional troubleshooting or analysis, and set to "WARN" or "ERROR" for production applications that need maximum performance by avoiding printing additional traces.
FLOGO_RUNNER_TYPE: Allows you to set, at startup, the type of the runner. The default value is POOLED.
FLOGO_RUNNER_WORKERS: Allows you to set, at startup, the number of Flogo workers that are going to execute logic. The default value is 5, and it can be increased when you're running on powerful hardware with better parallelism capabilities.
FLOGO_RUNNER_QUEUE: Allows you to set, at startup, the size of the runner queue that keeps in memory the requests to be handled by the workers. The default value is 50; the higher the number, the higher the memory consumption, but the workers' access to the tasks will be faster.
Other Settings
FLOGO_CONFIG_PATH: Sets the path of the config JSON file that is going to be used to run the application in case it is not embedded in the binary itself. The default value is flogo.json.
FLOGO_ENGINE_STOP_ON_ERROR: Sets the behavior of the engine when an internal error occurs. By default it is set to true, which means the engine will stop as soon as the error occurs.
FLOGO_APP_PROP_RESOLVERS: Sets how application properties are going to be gathered for the application. The value is the property resolver to use at runtime. By default it is None; additional information is included in the application properties documentation section.
FLOGO_LOG_DTFORMAT: Sets how dates are going to be displayed in the log traces. The default value is "2006-01-02 15:04:05.000".
Flogo Enterprise Specific Parameters
Even though all the Project Flogo properties are supported by Flogo Enterprise, the enterprise version includes additional properties that can be used to set additional behaviors of the engine.
FLOGO_HTTP_SERVICE_PORT: This property sets the port where the internal endpoints will be hosted. These internal endpoints are used for the healthcheck endpoint as well as metrics exposure and any other internal access provided by the engine.
FLOGO_LOG_FORMAT: This property allows us to define the format of our log traces. TEXT is the default value, but we can use JSON to make our traces be generated in JSON, for example, to be ingested by some kind of logging platform.
Go Programing Language Specific Parameters
GOMAXPROCS: Limits the number of operating system threads that can execute user-level Go code simultaneously. There is no limit to the number of threads that can be blocked in system calls on behalf of Go code; those do not count against the GOMAXPROCS limit.
GOTRACEBACK: Controls the amount of output generated when a Go program fails due to an unrecovered panic or an unexpected runtime condition. By default, a failure prints a stack trace for the current goroutine, eliding functions internal to the run-time system and then exits with exit code 2. The failure prints stack traces for all goroutines if there is no current goroutine or the failure is internal to the run-time. GOTRACEBACK=none omits the goroutine stack traces entirely. GOTRACEBACK=single (the default) behaves as described above. GOTRACEBACK=all adds stack traces for all user-created goroutines. GOTRACEBACK=system is like “all” but adds stack frames for run-time functions and shows goroutines created internally by the run-time. GOTRACEBACK=crash is like “system” but crashes in an operating system-specific manner instead of exiting.
GOGC: Sets the initial garbage collection target percentage. A collection is triggered when the ratio of freshly allocated data to live data remaining after the previous collection reaches this percentage. The default is GOGC=100. Setting GOGC=off disables the garbage collector entirely.
Testing is one of the main steps in your CI/CD lifecycle if you are using Flogo. You have probably done it previously in all your other developments, like Java, or even in BusinessWorks 6 using the bw6-maven-plugin:
This article is part of my comprehensive TIBCO Integration Platform Guide where you can find more patterns and best practices for TIBCO integration platforms.
GitHub – TIBCOSoftware/bw6-plugin-maven: Plug-in Code for Apache Maven and TIBCO ActiveMatrix BusinessWorks™
Plug-in Code for Apache Maven and TIBCO ActiveMatrix BusinessWorks™.
So, you're probably wondering… how is this going to be done in Flogo? OK! I'll tell you.
First of all, you need to keep in mind that Flogo Enterprise is a product that was designed with all those aspects in mind, so you don’t need to worry about it.
Regarding testing, when we need to include it inside a CI/CD lifecycle approach, the testing capabilities should meet the following requirements:
It should be defined in some artifacts.
It should be executed automatically
It should be able to check the outcome.
Flogo Enterprise includes testing capabilities by default in the Web UI, where you can not only test your flows from a debug/troubleshooting perspective but also generate the artifacts that will allow you to perform more sophisticated testing.
So, we need to go to our Web UI and when we’re inside a flow we have a “Start Testing” button:
And we can see all our Launch Configuration changes and, most importantly for this topic, we are able to export it and download it to our local machine:
Once everything is downloaded and we have the binary for our application, we can execute the tests automatically from the CLI using the following command:
This is going to generate an output file with the output of the execution test:
And if we open the file, we will get the exact same output the flow returned, so we can perform any assertion on it.
That was easy, right? Let's do some additional tweaks to avoid the need to go to the Web UI: you can generate the launch configuration using only the CLI.
To do that, you only need to execute the following command:
OAuth 2.0 is a protocol that allows users to authorize third parties to access their info without needing to share the user credentials. It usually relies on an additional system that acts as the Identity Provider, which the user authenticates against; once authenticated, you are provided with a secure piece of information carrying the user's privileges, and you can use that piece to authenticate your requests for some period.
This article is part of my comprehensive TIBCO Integration Platform Guide where you can find more patterns and best practices for TIBCO integration platforms.
OAuth v2.0 Authentication Flow Sample
OAuth 2.0 also defines a number of grant flows to adapt to different authentication needs. These authorization grants are the following ones:
Client Credentials
Authorization Code
Resource Owner Password Credentials
The decision to choose one flow or another depends on the needs of the invocation and, of course, it is a personal decision for each customer, but as a general guide, the approach shown in the Auth0 documentation is the usual recommendation:
Decision Graph to choose the Authorization Grant to use
These grants are based on JSON Web Tokens to transmit information between the different parties.
JWT
JSON Web Token is an industry standard for token generation defined in RFC 7519. It defines a secure way to transmit info between parties as a JSON object.
It is composed of three components (Header, Payload, and Signature) and can be signed using a symmetric cipher or a public/private key pair using RSA or ECDSA.
JWT Composition Sample
Use Case Scenario
So, OAuth 2.0 and JWT are the usual way to authenticate requests in the microservices world, so it is important to have this standard supported in your framework, and this is perfectly covered in Flogo Enterprise, as we're going to see in this test.
AWS Cognito Setup
We're going to use AWS Cognito as the Authorization Server, as it is quite easy to set up and it is one of the main actors in this kind of authentication. In Amazon's own words:
Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. Amazon Cognito scales to millions of users and supports sign-in with social identity providers, such as Facebook, Google, and Amazon, and enterprise identity providers via SAML 2.0.
In our case, we're going to do a pretty straightforward setup, and the steps are shown below:
Create a User Pool named “Flogo”
Create an App Client with a generated secret named “TestApplication”
Create a Resource Server named “Test” with identifier “http://flogo.test1” and with a scope named “echo”
Resource Server Configuration
Set the following App Client Settings as shown in the image below with the following details:
Cognito User Pool selected as Enable Identity Provider
Client credentials used as Allowed OAuth Flows
http://flogo.test1/echo selected as an enabled scope
App Client Setting Configuration
And that’s all the configuration needed at the Cognito level, and now let’s go back to Flogo Enterprise Web UI.
Flogo Set-up
Now, we are going to create a REST service. We're going to skip all the steps regarding how to create a REST service using Flogo, but you can take a look at the detailed steps in the post below:
Building your First Flogo Application
In the previous post, we introduce Flogo technology like one of the things in the cloud-native development industry and now we’re going to build our first Flogo Application and try to cover all the options that we describe in the previous post. NOTE .- If you’re new to Flogo and you didn’t read the previous post […]
We're going to create an echo service hosted at localhost:9999/hello/ that receives a path parameter after the hello, which is the name we'd like to greet, and we're going to establish the following restrictions:
We’re going to check the presence of a JWT Token
We’re going to validate the JWT Token
We're going to check that the JWT includes the http://flogo.test1/echo scope
If everything is OK, we're going to execute the service; if some prerequisite is not met, we're going to return a 401 Unauthorized error.
We’re going to create two flows:
Main flow that is going to have the REST Service
Subflow to do all the validations regarding JWT.
The main flow is quite straightforward: it is going to receive the request, execute the subflow, and, depending on its output, execute the service or not:
So, all the important logic is inside the other flow, which is going to do all the JWT validation; its structure is the one shown below:
We’re going to use the JWT activity hosted in GitHub available at the link shown below:
flogo-components/activity/jwt at master · ayh20/flogo-components
Contribute to ayh20/flogo-components development by creating an account on GitHub.
NOTE: If you don’t remember how to install a Flogo Enterprise extension take a look at the link below:
Installing Extensions in Flogo Enterprise
In previous posts, we’ve talked about capabilities of Flogo and how to build our first Flogo application, so at this moment if you’ve read both of them you have a clear knowledge about what Flogo provides and how easy is to create applications in Flogo. But in those capabilities, we’ve spoken about that one of […]
And after that, the configuration is quite easy, as the activity allows you to choose the action you want to perform with the token. In our case, "Verify", and we provide the token, the algorithm (in our case "RS256"), and the public key we're going to use to validate the signature of the token:
Test
Now, we’re going to launch the Flogo Enterprise Application from the binary generated:
And now, if we try to invoke the endpoint without providing any token, we get the expected 401 code response.
So, to get the access token, the first thing we need to do is send a request to the AWS Cognito endpoint (https://flogotest.auth.eu-west-2.amazoncognito.com/oauth2/token) using our app credentials:
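For a client_credentials grant, that request is a POST with the app client credentials in a Basic authorization header; a sketch with curl (the encoded credentials are a placeholder):

curl -X POST "https://flogotest.auth.eu-west-2.amazoncognito.com/oauth2/token" \
  -H "Authorization: Basic <base64(client_id:client_secret)>" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=client_credentials&scope=http://flogo.test1/echo"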
And this token has all the info about the client and its permissions; you can check it on the jwt.io webpage:
And to finally test it, we only need to add it to the first request we tried, as we can see in the picture below:
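In other words, the same call but with the access token in a Bearer header; a sketch with curl (the token and the greeting name are placeholders):

curl -H "Authorization: Bearer <access_token>" http://localhost:9999/hello/Alex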
Resources
GitHub – alexandrev/flogo-jwt-sample-medium: JWT Support in Flogo Enterprise Post Resource
JWT Support in Flogo Enterprise Post Resource. Contribute to alexandrev/flogo-jwt-sample-medium development by creating an account on GitHub.