Knowing a few bash hacks is one of the easiest ways to improve your performance. We spend many hours inside a shell and develop patterns and habits every time we log into a computer. For example, if you give the same task to two people with a similar skill level, they will probably use different tools and different commands to reach the same result.
And that’s because the number of options available for any task is so large that each of us learns one or two ways to do something, sticks to them, and automates them to the point that we don’t even think while typing them.
So, the idea today is to share a list of commands that I use all the time. You may already know them, but for me they are time savers every single day of my work life. So, let’s start with those.
1.- CTRL + R
This command is my favorite bash hack. It is the one I use all the time; as soon as I log into a remote machine, whether it’s a new one or one I’m coming back to, I use it for pretty much everything. Its only limitation is that it searches the command history and nothing else.
It autocompletes based on the commands you have already typed. It’s essentially the same as typing history | grep <something>, just faster and more natural for me.
This bash hack also lets me recall commands with paths whose exact subfolder names I don’t remember, that tricky command I run every two weeks to clean some process memory or apply some configuration, or a quick check of which machine I’m logged into at a specific moment.
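If you have never tried it, it looks like this: press CTRL + R at the prompt and start typing any fragment of a previous command, and bash shows the most recent match (the ssh command below is just a made-up example):
(reverse-i-search)`ssh': ssh admin@10.0.0.12
Press CTRL + R again to cycle through older matches, Enter to run the command, or ESC to edit it before running.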
2.- find + action
The find command is something we all know, but most of us probably use only a small part of the functionality it offers, and that’s a pity because this command is incredible and provides a lot of features. This time I’m just going to cover one specific topic: actions executed after locating the files we’re looking for.
We usually use the find command to find files or folders, which is obvious for a command with that name. But it also lets us add the -exec parameter to chain an action to be executed for each file that matches our criteria, for example, something like this:
Find all the YAML files in your current folder and move them to a different folder. You can do it directly with this command:
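A minimal sketch of what that could look like, assuming the destination folder is ./archive and it already exists:
find . -name '*.yaml' -exec mv {} ./archive/ \;
The {} placeholder is replaced by each matching file and the escaped semicolon closes the -exec action; you can swap mv for any other command, for example -exec ls -l {} \; to review what would be affected first.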
Such a simple and helpful bash hack.
3.- cd -
This one is like a CTRL-Z for your shell navigation. The command cd - takes us back to the previous folder we were in.
It is so valuable when we typed the wrong folder, or when we just want to switch quickly between two folders. It’s like the back button in your browser or CTRL-Z in your word processor.
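For example (the folders are just illustrative):
cd /etc/nginx        # jump somewhere to edit a config
cd /var/log/nginx    # jump to the logs
cd -                 # back to /etc/nginx
cd -                 # and back to /var/log/nginx again
Each cd - simply swaps the current folder with the previous one stored in $OLDPWD.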
Wrap up
I hope you love these commands as much as I do, and if you already knew them, please let me know in the responses which bash hacks are the most relevant for your daily work. It can be because you use them all the time, as I do with these, or because, even if you don’t use them that often, the times you do they save you a massive amount of time!
Learn how to manage observability requirements as part of your microservice ecosystem
“May you live in interesting times” is the English translation of a supposed Chinese curse, and it couldn’t be more accurate as a description of the times we’re living in regarding application architecture and application development.
All the changes from the cloud-native approach, including the new technologies that come with it such as containers, microservices, APIs, DevOps, and so on, have transformed the situation entirely for any architect, developer, or system administrator.
It’s as if you went to bed in 2003 and woke up in 2020: all the changes, all the new philosophies, but also all the unique challenges that come with those changes and new capabilities are things we need to deal with today.
I think we can all agree that the present is polyglot in terms of application development. Today you don’t expect any big company or enterprise to rely on a single technology or a single language to support all of its in-house products. We all follow the “right tool for the right job” principle and build a toolset of technologies to solve the different use cases and patterns we need to face.
But that agreement and movement also come with their own challenges regarding things we usually don’t think about, like tracing and observability in general.
When we use a single technology, everything is more straightforward. Defining a common strategy to trace your end-to-end flows is easy: you only need to embed the logic into the common development framework or library that all your developments use, define a common header structure carrying all the data needed to trace every request effectively, and define a standard protocol to send those traces to a central system that can store and correlate them and explain the end-to-end flow. But try to move that to a polyglot ecosystem: should I write my framework or library for each language or technology I need to use today, or might use in the future? Does that make sense?
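As an illustration of that "common header" idea, every service could simply forward a standard trace-context header such as the W3C traceparent one; the header value and the service URL below are only placeholders:
curl -H "traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01" http://orders-service/api/orders
As long as every framework in the chain copies that header into its outgoing calls, the tracing backend can stitch together all the spans that belong to the same request.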
But not only that: should I slow down the adoption of a new technology that could quickly help the business just because a shared team needs to provide this kind of standard component first? And that is the best case, the one where I have enough people who know the internals of my framework and have the skills in every language we’re adopting to do it quickly and efficiently. It seems unlikely, right?
So, new challenges call for new solutions. I have already talked about Service Mesh and the capabilities it provides from a communication perspective, and if you don’t remember, you can take a look at those posts:
Integrating Istio with BWCE Applications
But it also provides capabilities from other perspectives, and tracing, and observability in general, is one of them. When we cannot include those features in every technology we need to use, we can put them in a common layer that supports all of them, and that’s the case with Service Mesh.
A Service Mesh is the standard way for your microservices to communicate synchronously in an east-west fashion, covering service-to-service communication. So, if you can also include the tracing capability in that component, you get end-to-end tracing without needing to implement anything in each of the different technologies you use for your logic; you move from Figure A to Figure B in the picture below:
In-App Tracing logic implementation vs. Service Mesh Tracing Support
And that is what most Service Mesh technologies are doing. For example, Istio, one of the default choices when it comes to Service Mesh, includes an implementation of the OpenTracing standard that allows integration with any tool supporting that standard, so it can collect tracing information for any technology communicating across the mesh.
So, that mindset change allows us to easily integrate different technologies without requiring explicit support for those standards in each specific technology. Does that mean that implementing those standards in each technology is no longer required? Not at all; it is still relevant, because the technologies that also support those standards can provide even more insights. After all, the Service Mesh only knows part of the information, the flow that happens outside each technology, which is essentially a black-box approach. Adding support for the same standard inside each technology provides an additional white-box approach, as you can see graphically in the image below:
Merging White Box Tracing Data and Black Box Tracing Data
We already talked about the compliance of some technologies with the OpenTracing standard, like TIBCO BusinessWorks Container Edition, which you can revisit here:
OpenTracing support in TIBCO BusinessWorks Container Edition
So, support for these industry standards in each technology is still needed and is even a competitive advantage, because without developing your own tracing framework you can achieve a complete tracing data approach on top of what is already provided at the Service Mesh level itself.
Find a way to re-define and re-organize the name of your Prometheus metrics to meet your requirements
Prometheus has become the new standard when we talk about monitoring our modern application architectures, and we need to make sure we know all of its options to get the best out of it. I had been using it for some time until I ran into something I was desperate to do but couldn’t find clearly explained anywhere. Since I didn’t find it easily, I thought about writing a small article to show you how to do it without having to spend the same time I did.
There is plenty of information about how to configure Prometheus and use the usual configuration options, as we can see on its official webpage [1]. I have also already written about configuring it and using it for several purposes, as you can see in other posts [2][3][4].
One of these configuration options is relabeling, and it is a great thing. Each exporter can have its own labels, with its own meaning for them, and when you try to manage different technologies or components it becomes complex to make all of them match, even if all of them follow the Prometheus naming conventions [5].
But I had this situation, and I’m sure you have run or will run into it as well: I had similar metrics for different technologies that, for me, represent the same thing and should share the same name, but because they come from different technologies they don’t. So I needed a way to rename the metric, and the great thing is that you can do that.
To do that, you just need a metric_relabel_configs configuration. As its name indicates, it relabels the labels of your Prometheus metrics, in this case before they are ingested, but it also lets us use some special labels to do different things, and one of these special labels is __name__. __name__ is the label that holds the metric name, which enables you to rename your Prometheus metrics before they are ingested into the Prometheus time-series database. After that point, it is as if the metric had had that name from the beginning.
Using it is relatively easy; it works like any other relabel rule, and I’d like to show you a sample of how to do it:
- source_labels: [__name__]
  regex: 'jvm_threads_current'
  target_label: __name__
  replacement: 'process_thread_count'
Here is a simple sample showing how we can rename the metric jvm_threads_current, which counts the threads inside the JVM, into a more generic process_thread_count metric that covers the threads of the process, and from now on we can use it as if that had been its original name.
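For context, this is a minimal sketch of where that block lives inside prometheus.yml; the job name and target are just examples:
scrape_configs:
  - job_name: 'jvm-apps'
    static_configs:
      - targets: ['localhost:8080']
    metric_relabel_configs:
      - source_labels: [__name__]
        regex: 'jvm_threads_current'
        target_label: __name__
        replacement: 'process_thread_count'
Keep in mind that metric_relabel_configs runs on the scraped samples just before ingestion, while relabel_configs runs earlier against the target’s labels.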
Apache Kafka seems to be the standard solution in today’s architectures, but we should consider whether it is the right choice for our needs.
Nowadays, we’re in a new age of Event-Driven Architecture, and this is not the first time we’ve lived through one. Before microservices and cloud, EDA was the new normal in enterprise integration. Based on different kinds of standards, there were protocols like JMS or AMQP used in broker-based products like TIBCO EMS, ActiveMQ, or IBM WebSphere MQ, so this approach is not something new.
With the rise of microservices architectures and the API-led approach, it seemed we had forgotten about the importance of messaging systems, and we had to go through the same challenges we saw in the past before arriving at new messaging solutions. So we’re coming back to EDA and the pub/sub mechanism to help us decouple consumers and producers, moving from orchestration to choreography, and all these concepts fit better in today’s world with more and more independent components that need cooperation and integration.
During this effort, we started to look at new technologies to help us implement that again, but in the new reality we left behind heavy protocols and standards like JMS and started to consider other options. And we need to admit that there is a new king in this area, a component that seems to show up no matter what in today’s architectures: Apache Kafka.
And don’t get me wrong: Apache Kafka is fantastic. It has long been proven as a production-ready solution, performant, with impressive replay capabilities and a powerful API to ease integration. But Apache Kafka has some challenges in this cloud-native world because it doesn’t play so well with some of its rules.
If you have used Apache Kafka for some time, you are aware of its particular challenges. Apache Kafka’s architecture comes from its LinkedIn days in 2011, when Kubernetes, or even Docker and container technologies, were not a thing, which makes running Apache Kafka (a purely stateful service) in containers quite complicated. There are improvements, with Helm charts and operators easing the journey, but it still doesn’t feel like the pieces integrate well in that model. Another issue is geo-replication, which, even with components like MirrorMaker, is not something that is widely used, works smoothly, or feels well integrated.
Other technologies are trying to provide a solution for those capabilities, and one of them is another Apache Foundation project, donated by Yahoo!, named Apache Pulsar.
Don’t get me wrong; this is not about finding a new truth, that single messaging solution that is perfect for today’s architectures: it doesn’t exist. In today’s world, with so many different requirements and variables for the different kinds of applications, one size fits all is no longer true. So you should stop thinking about which messaging solution is the best one, and think more about which one serves your architecture best and fulfills both technical and business requirements.
We have covered different ways for general communication, with several specific solutions for synchronous communication (service mesh technologies and protocols like REST, GraphQL, or gRPC) and different ones for asynchronous communication. We need to go deeper into the asynchronous communication to find what works best for you. But first, let’s speak a little bit more about Apache Pulsar.
Apache Pulsar
Apache Pulsar, as mentioned above, was developed internally by Yahoo! and donated to the Apache Foundation. As stated on their official website, there are several key points to mention as we start exploring this option:
Pulsar Functions: Easily deploy lightweight compute logic using developer-friendly APIs without needing to run your stream processing engine
Proven in production: Apache Pulsar has run in production at Yahoo scale for over three years, with millions of messages per second across millions of topics
Low latency with durability: Designed for low publish latency (< 5ms) at scale with strong durability guarantees
Geo-replication: Designed for configurable replication between data centers across multiple geographic regions
Multi-tenancy: Built from the ground up as a multi-tenant system. Supports Isolation, Authentication, Authorization, and Quotas
Persistent storages: Persistent message storage based on Apache BookKeeper. Provides IO-level isolation between write and read operations
Client libraries: Flexible messaging models with high-level APIs for Java, C++, Python and GO
Operability: REST Admin API for provisioning, administration, tools, and monitoring. Deploy on bare metal or Kubernetes.
So, as we can see, by design Apache Pulsar addresses some of the main weaknesses of Apache Kafka, such as geo-replication and the cloud-native approach.
Apache Pulsar supports the pub/sub pattern, but it also provides enough capabilities to act as a traditional queue messaging system, with its concept of exclusive subscriptions where only one of the subscribers receives the messages. It also provides interesting concepts and features used in other messaging systems:
Dead Letter Topics: For messages that could not be processed by the consumer.
Persistent and Non-Persistent Topics: To decide whether or not you want to persist your messages while they are in transit.
Namespaces: To have a logical grouping of your topics, so applications can be grouped in namespaces as we do, for example, in Kubernetes, isolating some applications from others.
Failover: Similar to exclusive, but when the attached consumer fails, another one takes over processing the messages.
Shared: Provides a round-robin approach similar to a traditional messaging queue, where all the subscribers are attached to the topic but only one receives each message, distributing the load across all of them.
Multi-topic subscriptions: Subscribe to several topics using a regexp (similar to the subject approach from TIBCO Rendezvous in the 90s), which has been so powerful and popular.
But also, if you require features from Apache Kafka, you will still find similar concepts such as partitioned topics, key-shared subscriptions, and so on. So you have everything at hand to choose which kind of configuration works best for you and your specific use cases, and you also have the option to mix and match.
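If you want to play with those subscription types quickly, the pulsar-client CLI shipped with Pulsar lets you pick one when consuming; the topic and subscription names below are just examples:
bin/pulsar-client consume persistent://public/default/hello -s my-shared-sub -t Shared -n 0
Running that same command in several terminals gives you the round-robin behavior of a Shared subscription, while switching -t to Exclusive or Failover changes the delivery semantics accordingly.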
Apache Pulsar Architecture
Apache Pulsar Architecture is similar to other comparable messaging systems today. As you can see in the picture below from the Apache Pulsar website, those are the main components of the architecture:
Brokers: One or more brokers handle incoming messages from producers and dispatch messages to consumers
So you can see this architecture is quite similar to the Apache Kafka one, with the addition of a new concept: the BookKeeper cluster.
Brokers in Apache Pulsar are stateless components that mainly run two pieces:
An HTTP server that exposes a REST API for management and is used by producers and consumers for topic lookup.
A TCP server speaking a binary protocol, called the dispatcher, which is used for all data transfers. Messages are usually dispatched out of a managed ledger cache for performance reasons, and when the backlog outgrows that cache, the broker reads the entries back from the BookKeeper cluster.
To support the Global Replication (Geo-Replication), the Brokers manage replicators that tail the entries published in the local region and republish them to the remote regions.
The Apache BookKeeper cluster is used as persistent message storage. Apache BookKeeper is a distributed write-ahead log (WAL) system that handles message persistence. It supports horizontal scaling based on load and can manage many independent logs (ledgers). Not only the messages are persisted, but also the cursors, which track the consumer position on a specific topic (similar to the offset in Apache Kafka terminology).
Finally, a ZooKeeper cluster is used in the same role as in Apache Kafka: as metadata and configuration storage for the whole system.
Hello World using Apache Pulsar
Let’s see how we can create a quick “Hello World” case using Apache Pulsar as the protocol, and we’re going to try to implement it in a cloud-native fashion. So we will spin up a single-node Apache Pulsar cluster and deploy a producer application built with Flogo and a consumer application written in Go, something similar to what you can see in the diagram below:
Diagram about the test case we’re doing
And we’re going to keep it simple, so we will just use plain Docker this time. First of all, we spin up the Apache Pulsar server with the following command:
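A minimal sketch of that command, running a standalone broker in Docker (the image tag is just an example, pick a current one):
docker run -it -p 6650:6650 -p 8080:8080 apachepulsar/pulsar:2.5.0 bin/pulsar standalone
Port 6650 is the binary protocol port used by clients and 8080 is the HTTP admin and lookup port.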
Now, we need to create simple applications, and for that, Flogo and Go will be used.
Let’s start with the producer, and in this case, we will use the open-source version to create a quick application.
First of all, we will just use the Web UI (dockerized) to do that. Run the command:
docker run -it -p 3303:3303 flogo/flogo-docker eula-accept
And we install a new contribution to enable the Pulsar publisher activity. To do that we will click on the “Install new contribution” button and provide the following URL:
// Receive messages from the channel. The channel returns a struct which contains the message and the consumer
// it was received from. It's not necessary here since we have a single consumer, but the channel could be
// shared across multiple consumers as well.
for cm := range channel {
    msg := cm.Message
    fmt.Printf("Received message msgId: %v -- content: '%s'\n",
        msg.ID(), string(msg.Payload()))
    consumer.Ack(msg)
}
}
And after running both programs, you can see the following output; as you can see, both applications communicate in an effortless flow.
This article is just a starting point, and we will continue talking about how to use Apache Pulsar in your architectures. If you want to take a look at the code we’ve used in this sample, you can find it here:
Serverless is already here. We know that. Even companies that are just starting their cloud journey by moving to container-based platforms already know that serverless is in their future, at least for some specific use cases.
Its simplicity, not needing to worry about anything related to the platform and just focusing on the code, together with the economics that come with it, makes this computing model a game-changer.
The mainstream platform for this approach at the moment is AWS Lambda from Amazon; we have all read and heard a lot about AWS Lambda, but all the platforms are going down that path: Google with Cloud Functions, Microsoft with Azure Functions. And this is not just a thing for public cloud providers: if you’re running a private cloud you can use this deployment model with frameworks like Knative or OpenFaaS. If you need more details, we already have some articles about this topic:
Deploying Flogo App on OpenFaaS
All of these platforms keep updating and enhancing their options to execute any kind of code you want on them, and this is what Azure Functions announced some weeks ago: the release of a Custom Handler approach to support any language you could think of:
Azure Functions custom handlers
Learn to use Azure Functions with any language or runtime version.
And we’re going to test that by doing what we like best…
The first thing we need to do is to create our Flogo Application. I’m going to use Flogo Enterprise to do that, but you can do the same using the Open Source version as well:
We’re going to create a simple application, just a Hello World REST service, as you can see here:
You can find the JSON file of the Flogo application in the GitHub repository shown below:
GitHub - alexandrev/flogo-azure-function: Sample of Flogo Application running as a Azure Function
But the main configurations are the following ones:
Server will listen on the port specified by the environment variable FUNCTIONS_HTTPWORKER_PORT
Server will listen for a GET request on the URI /hello-world
Response will be quite simple “Invoking a Flogo App inside Azure Functions”
Now that we have the JSON file, we only need to create the artifacts needed to run an Azure Function.
We will need to install the npm package for the Azure Functions. To do that we need to run the following command:
npm i -g azure-functions-core-tools@3 --unsafe-perm true
It is important to include the “@3”, because that way we reference the version that includes this new Custom Handler logic.
We will create a new folder called flogo-func and run the following command inside it:
func init --docker
Now we should select node as the runtime and javascript as the language. That may seem strange because we are not going to use node, dotnet, python, or powershell at all, but we select it to keep things simple, since we just want to focus on the Docker approach.
After that we just need to create a new function, and to do that we need to type the following command:
func new
In the CLI-based interface that the Azure Function Core Tools shows us, we just need to select HTTP trigger and provide a name.
In our case, we will use hello-world as the name to keep it similar to what we defined as part of our Flogo Application. We will end up with the following folder structure:
Now we need to open the folder that has been created and do several things:
First of all, we need to remove the index.js file because we don’t need that file as this will not be a node JS function.
We need to copy the HelloWorld.json (our Flogo application) to the root folder of the function.
We need to change the host.json file to the following content:
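A hedged sketch of what that host.json can look like: the custom-handler preview that uses FUNCTIONS_HTTPWORKER_PORT configured an httpWorker section like the one below, while newer runtimes call the section customHandler, so adjust it to your Core Tools version:
{
  "version": "2.0",
  "httpWorker": {
    "description": {
      "defaultExecutablePath": "engine-windows-amd64.exe"
    }
  }
}
The defaultExecutablePath points to the Flogo engine executable we will generate in the next step.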
Now we need to generate the engine-windows-amd64.exe. To do that, we go to FLOGO_HOME on our machine, then to the folder FLOGO_HOME/flogo/<VERSION>/bin, and launch the following command:
./builder-windows-amd64.exe build
And you should get the engine-windows-amd64.exe as the output as you can see in the picture below:
Now, you just need to copy that file inside the folder of your function, and you should have the following folder structure as you can see here:
And once you have that, you just need to run your function to be able to test it locally:
func start
After running that command you should see an output similar to the one shown below:
I’d just like to highlight the startup time of our Flogo application: around 15 milliseconds! Now, you only need to test it using any browser by going to the following URL:
http://localhost:7071/api/hello-world
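Or, if you prefer the terminal, a quick curl works just as well:
curl http://localhost:7071/api/hello-world
and it should return the “Invoking a Flogo App inside Azure Functions” message we configured in the flow.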
This has been just the first step in our journey, but these were the steps needed to run our Flogo application as part of a serverless environment hosted on the Microsoft Azure platform!
We’re living in an age where technologies are switching and standards are changing all the time. Skip reading Medium/StackOverflow/Reddit for a while and you’ll find there are at least five new industry standards taking the place of the ones you already know (the ones that were released barely a year ago 🙂).
Do you still remember the old days when SOAP was the unbeatable format? How much time did we spend building SOAP services in our enterprises? REST replaced it as the new standard, but only a few years later we’re back in a new battle just for synchronous communication: gRPC and GraphQL are here to conquer everything again. It is crazy, huh?
But the situation is similar for asynchronous communication. Asynchronous communication has been here for a long time, long before Event-Driven Architecture or Streaming were really “cool” terms or things you had to be aware of.
We’ve been using these patterns in our companies for a long time. Big enterprises have used this model in their enterprise integrations for years: pub/sub-based protocols and technologies like TIBCO Rendezvous have been in use since the late 90s, and later we incorporated more standard approaches like JMS, using different kinds of servers to support all this event-based communication.
But now, with the cloud-native revolution and the need for distributed computing, more agility, and more scalability, centralized solutions are no longer enough, and we’ve seen an explosion in the number of options to communicate based on these patterns.
You could think this is the same situation we discussed at the beginning of this article regarding REST’s predominance and the new cutting-edge technologies trying to replace it, but this is something quite different, because experience has told us that one size doesn’t fit all.
You cannot find a single technology or component that covers all the communication needs of all your use cases. You can name any technology or protocol you want: Kafka, Pulsar, JMS, MQTT, AMQP, Thrift, FTL, and so on.
Think about each of them and you will probably find use cases where one technology plays better than the others, so it makes no sense to try to find a single-technology solution to cover all the needs. What is needed is a polyglot approach where you have different technologies that play well together and you use the one that works best for each use case (the right-tool-for-the-right-job approach), just as we do for the different technologies we deploy in our clusters.
We’re probably not going to use the same technology for a machine-learning-based microservice as for a streaming application, right? The same principle applies here.
But the problem when we talk about different technologies playing together is standardization. If we think about REST, gRPC, or GraphQL, even though they’re different, they share some common ground: they rely on the same base HTTP protocol, so it is easy to support all of them in the same architecture.
But this is not true for the asynchronous communication technologies, and I’d like to focus on standardization and specification today, because that’s what the AsyncAPI Initiative is trying to solve. To define what AsyncAPI is, I’d like to use their own words from their official website:
AsyncAPI is an open source initiative that seeks to improve the current state of Event-Driven Architectures (EDA). Our long-term goal is to make working with EDA’s as easy as it is to work with REST APIs. That goes from documentation to code generation, from discovery to event management. Most of the processes you apply to your REST APIs nowadays would be applicable to your event-driven/asynchronous APIs too.
So, their goal is to provide a set of tools to make life better in all those EDA architectures that companies already have or are starting to have, and everything pivots around one thing: the AsyncAPI specification.
Similar to the OpenAPI specification, it allows us to define a common interface for our EDA interfaces, and the most important part is that it is protocol-agnostic, so the same specification can be used for your MQTT-based API or your Kafka API. Let’s take a look at what an AsyncAPI specification looks like:
As you can see, it is very similar to OpenAPI 3.0, and that was done on purpose to ease the transition between OpenAPI 3.0 and AsyncAPI and to try to join both worlds together: it is all about APIs, no matter whether they’re synchronous or asynchronous, bringing the same ecosystem benefits from one to the other.
Show me the code!!
But let’s stop talking and start coding, and to do that I’d like to use one of the tools that, in my view, has the greatest support for AsyncAPI, and that is Project Flogo.
You probably remember some of the posts I’ve written about Project Flogo and TIBCO Flogo Enterprise as a great technology for your microservices development (low-code/all-code approach, Golang-based, with a lot of connectors and open-source extensions as well).
But today we’re going to use it to create our first AsyncAPI-compliant microservice, and we’re going to rely on it because it provides a set of extensions to support the AsyncAPI initiative, as you can see here:
GitHub – project-flogo/asyncapi: Flogo extensions to support AsyncAPI
So the first thing we’re going to do is create our AsyncAPI definition, and to make it simpler, we’re going to use the sample available from the AsyncAPI initiative with a simple change: we’re going to switch from the AMQP protocol to the Kafka protocol, because that is what’s cool these days, isn’t it? 😉
asyncapi: '2.0.0'
info:
  title: Hello world application
  version: '0.1.0'
servers:
  production:
    url: broker.mycompany.com
    protocol: kafka
    description: This is "My Company" broker.
    security:
      - user-password: []
channels:
  hello:
    publish:
      message:
        $ref: '#/components/messages/hello-msg'
  goodbye:
    publish:
      message:
        $ref: '#/components/messages/goodbye-msg'
components:
  messages:
    hello-msg:
      payload:
        type: object
        properties:
          name:
            type: string
          sentAt:
            $ref: '#/components/schemas/sent-at'
    goodbye-msg:
      payload:
        type: object
        properties:
          sentAt:
            $ref: '#/components/schemas/sent-at'
  schemas:
    sent-at:
      type: string
      description: The date and time a message was sent.
      format: datetime
  securitySchemes:
    user-password:
      type: userPassword
As you can see, it is quite simple: two operations, “hello” and “goodbye”, with an easy payload:
name: Name that we’re going to use for the greeting.
sentAt: The date and time a message was sent.
So the first thing we’re going to do is create a Flogo application that complies with that AsyncAPI specification:
git clone https://github.com/project-flogo/asyncapi.git
cd asyncapi/
go install
Now we have the generator installed, so we only need to execute it and provide our YML as input in the following command:
And it will create a HelloWorld application for us that we need to tweak a little bit. Just to get you up and running quickly, I’m sharing the code in my GitHub repository so you can borrow from it (but I really encourage you to take the time to look at the code and see the beauty of Flogo app development 🙂).
Now that we have the app, we have a simple dummy application that receives messages complying with the specification and, in our case, just logs the payload; this can be our starting point to build new event-driven microservices compliant with AsyncAPI.
So, let’s try it. To do so, we need a few things. First of all, we need a Kafka server running, and to get one quickly we’re going to rely on the following docker-compose.yml file:
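A minimal single-broker sketch along these lines is enough for this local test (the Confluent images, versions, and settings here are an assumption, not necessarily the exact file used originally):
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.4.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:5.4.0
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1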
And to run it we just need to fire the following command from the same folder where the docker-compose.yml file is located:
docker-compose up -d
And after doing that, we just need a sample application, and what better way than using Flogo again to create it; but this time, let’s use the graphical Web UI to create it right away:
Simple Flogo application that sends an AsyncAPI-compliant message each minute using Kafka as the protocol
So we just need to configure the Publish Kafka activity to provide the broker (localhost:9092), the topic (hello), and the message:
{
"name": "hello world",
"sentAt": "2020-04-24T00:00:00"
}
And that’s it! Let’s run it!!!:
First we start the AsyncAPI Flogo Microservice:
Async API Flogo Microservices Started!
And then we just launch the tester, that is going to send the same message each minute, as you can see in the picture below:
Sample Tester sending sample messages
And each time we send that message, it is received by our AsyncAPI Flogo microservice:
So, I hope this first introduction to the AsyncAPI world has been of interest to you, and don’t forget to take a look at more resources on their website:
GitHub – project-flogo/asyncapi: Flogo extensions to support AsyncAPI
GitHub – alexandrev/asyncapi-flogo-test: This is the material that supports the post on Medium regarding “Welcome to the AsyncAPI Revoluton” and provide the flogo code needed to run the same sample that is being shown in the article
We all know that with the rise of cloud-native development and architectures, Kubernetes-based platforms have become the new standard, with everything focused on new developments following the new paradigms and best practices: microservices, event-driven architectures, shiny new protocols like GraphQL or gRPC, and so on and so forth.
This article is part of my comprehensive TIBCO Integration Platform Guide where you can find more patterns and best practices for TIBCO integration platforms.
In previous posts, I’ve explained how to integrate TIBCO BusinessWorks 6.x / BusinessWorks Container Edition (BWCE) applications with Prometheus, one of the most popular monitoring systems for cloud layers. Prometheus is one of the most widely used solutions to monitor your microservices inside a Kubernetes cluster. In this post, I will explain steps to leverage Prometheus for integrating with applications running on TIBCO Cloud Integration (TCI).
TCI is TIBCO’s iPaaS and primarily hides the application management complexity from users. You only need your packaged application (a.k.a. the EAR) and the manifest.json, both generated by the product, to deploy the application.
Isn’t it magical? Yes, it is! As explained in my previous post related to Prometheus integration with BWCE, which allows you to customize your base images, TCI allows integration with Prometheus in a slightly different manner. Let’s walk through the steps.
TCI has its own embedded monitoring tools (shown below) to provide insights into Memory and CPU utilization, plus network throughput, which is very useful.
While the monitoring metrics provided out-of-the-box by TCI are sufficient for most scenarios, there are hybrid connectivity use-cases (application running on-prem and microservices running on your own cluster that could be on a private or public cloud) that might require a unified single-pane view of monitoring.
The first step is to import the Prometheus plugin by choosing the Import → Plug-ins and Fragments option and specifying the directory downloaded from the above-mentioned GitHub location (shown below).
Step two involves adding the Prometheus module previously imported to the specific application as shown below:
Step three is just to build the EAR file along with manifest.json.
NOTE: If the EAR doesn’t get generated once you add the Prometheus plugin, please follow the below steps:
Export the project with the Prometheus module to a zip file.
Remove the Prometheus project from the workspace.
Import the project from the zip file generated before.
Before you deploy the BW application on TCI, we need to enable an additional port on TCI to scrape the Prometheus metrics.
Step four is updating the manifest.json file.
By default, the manifest.json of a TCI app only exposes one port to be consumed from outside (for the functional services) and another one used internally for health checks.
For Prometheus integration with TCI, we need an additional port listening on 9095, so the Prometheus server can access the metrics endpoint and scrape the required metrics from our TCI application.
We need to slightly modify the generated manifest.json file (of BW app) to expose an additional port, 9095 (shown below) .
Also, to tell TCI that we want to enable the Prometheus endpoint, we need to set a property in the manifest.json file: the property is TCI_BW_CONFIG_OVERRIDES, and we provide the value BW_PROMETHEUS_ENABLE=true, as shown below:
We also need to add an additional line (propertyPrefix) in the manifest.json file as shown below.
Now we are ready to deploy the BW app on TCI, and once it is deployed we can see there are two endpoints.
If we expand the Endpoints options on the right (shown above), you can see that one of them is named “prometheus” and that’s our Prometheus metrics endpoint:
Just copy the prometheus URL and append it with /metrics (URL in the below snapshot) — this will display the Prometheus metrics for the specific BW app deployed on TCI.
Note: appending /metrics is not compulsory; the as-is URL for the Prometheus endpoint will also work.
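A quick way to check it from a terminal is to curl the endpoint (the hostname is a placeholder for the URL you copied from the TCI console):
curl https://<your-tci-prometheus-endpoint>/metrics
You should get back the plain-text Prometheus exposition format containing the metrics listed below.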
In that list you will find the following kinds of metrics, which let you build great dashboards and analysis on top of this information:
JVM metrics around memory used, GC performance and thread pools counts
CPU usage by the application
Process and Activity execution counts by Status (Started, Completed, Failed, Scheduled..)
Duration by Activity and Process.
With all this information available, you can create dashboards similar to the one shown below, in this case using Spotfire as the dashboard tool:
But you can also integrate those metrics with Grafana or any other tool that could read data from Prometheus time-series database.
OpenFaaS is an alternative for enabling the serverless approach in your infrastructure when you’re not running in the public cloud and don’t have options like AWS Lambda or Azure Functions available, or when, even in the public cloud, you’d like the features and customization options it provides.
OpenFaaS® (Functions as a Service) is a framework for building serverless functions with Docker and Kubernetes which has first-class support for metrics. Any process can be packaged as a function enabling you to consume a range of web events without repetitive boiler-plate coding.
GitHub - openfaas/faas: OpenFaaS - Serverless Functions Made Simple
There is good content on Medium about OpenFaaS, so I don’t want to spend too much time on this, but I’d like to leave you some links for reference:
Running serverless functions on premises using OpenFaas with Kubernetes
An Introduction to Serverless DevOps with OpenFaaS | HackerNoon
We already have a lot of info about how to run a Flogo Application as a Lambda Function as you can see here:
https://www.youtube.com/watch?v=TysuwbXODQI
But.. what about OpenFaaS? Can we run our Flogo application inside OpenFaaS? Sure! Let me explain to you how.
OpenFaaS is a very customizable framework for building scale-to-zero functions, and it can take some time to get familiar with its concepts. Everything is based on watchdogs, the components that listen for requests and are responsible for launching the processes that handle them:
We’re going to use the new watchdog, named of-watchdog, which is expected to become the default one in the future; all the info is here:
GitHub - openfaas/of-watchdog: Reverse proxy for STDIO and HTTP microservices
This watchdog provides several modes; one of them is named HTTP, it is the default one, and it is based on forwarding each HTTP request to an internal server running in the container. That fits perfectly with our Flogo application and means that the only thing we need is an HTTP receive request trigger in our Flogo application, and that’s it.
The only things you need to configure are the method (POST) and the path (/) to be able to handle the requests. In our case we’re going to build a simple Hello World app, as you can see here:
To run this application we need several things; let’s go through them here:
First of all, we need to install the OpenFaaS environment. I’m going to skip the details of this process and just point you to a detailed tutorial about it:
Deploy OpenFaaS on Amazon EKS | Amazon Web Services
Now we need to create our template and to do that, we are going to use a Dockerfile template. To create it we’re going to execute:
faas-cli new --lang dockerfile
We’re going to name the function flogo-test. And now we’re going to update the Dockerfile to be like this:
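A sketch of such a Dockerfile is shown below; the of-watchdog image tag, the Flogo file names, and the 9999 trigger port are assumptions you should adapt to your own build:
FROM openfaas/of-watchdog:0.7.7 as watchdog

FROM alpine:3.11
COPY --from=watchdog /fwatchdog /usr/bin/fwatchdog

# Flogo engine executable and application JSON (names are just examples)
COPY flogo-test /usr/bin/flogo-test
COPY hello-world.json /hello-world.json

# of-watchdog settings: forward each incoming request to the Flogo HTTP trigger
ENV mode="http"
ENV upstream_url="http://127.0.0.1:9999"
# Command that of-watchdog launches to start the Flogo engine
ENV fprocess="/usr/bin/flogo-test"

CMD ["fwatchdog"]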
Most of this content is common for any other template using the new of-watchdog and the HTTP mode.
I’d like to highlight the following things:
We use several environment variables to define the behavior:
mode = http, to define that we’re going to use this mode
upstream_url = the URL we are going to forward the requests to
fprocess = the OS command to execute; in our case, it starts the Flogo app
The rest is the same as what you would do to run Flogo apps in Docker:
Add the engine executable for your platform (Linux in most cases, as the base image is almost always Linux-based).
Add the JSON file of the application that you want to use.
We also need to change the function’s YAML file to look like this:
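A sketch of that stack file, assuming the function was created as flogo-test and using a placeholder image name and a local gateway:
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  flogo-test:
    lang: dockerfile
    handler: ./flogo-test
    image: <your-registry>/flogo-test:latest
With that in place, faas-cli up -f flogo-test.yml builds, pushes, and deploys the function.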
Usually, when you’re developing or running your container application, you will reach a moment when something goes wrong, but not in a way you can solve with your logging system or with testing.
A moment when there is some bottleneck, something that is not performing as well as you’d like, and you want to take a look inside. And that’s what we’re going to do: we’re going to look inside.
BusinessWorks Container Edition provides such great features for this that you need to use them in your favor; you’re going to thank me later. So, I don’t want to spend one more minute on this; let’s get started right now.
The first thing we need to do is get inside the OSGi console of the container. So, we start by exposing port 8090, as you can see in the picture below.
Now, we can expose that port to our host using the port-forward command:
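For example, if the container runs as a pod in Kubernetes (the pod name is a placeholder):
kubectl port-forward <bwce-app-pod> 8090:8090
After that you can reach the OSGi console on localhost:8090; if you’re running plain Docker instead, publishing the port with -p 8090:8090 achieves the same.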
And as you can see, it says that statistics have been enabled for the echo application, so using that application name we’re going to gather the statistics at each level.
First, the statistics at the process level, where you can see the following metrics:
Process metadata (name, parent process and version)
Total instances by status (created, suspended, failed, and executed)
Execution time (total, average, min, max, most recent)
Elapsed time (total, average, min, max, most recent)
And we can get the statistics at the activity level:
And with that, you can detect any bottleneck you’re facing in your application and be sure which activity or process is responsible for it, so you can solve it quickly.