Testing Flogo applications is one of the main steps in your CI/CD lifecycle if you are using Flogo. You have probably done it before in your other developments, such as Java projects, or even in BusinessWorks 6 using the bw6-maven-plugin:
This article is part of my comprehensive TIBCO Integration Platform Guide where you can find more patterns and best practices for TIBCO integration platforms.
GitHub – TIBCOSoftware/bw6-plugin-maven: Plug-in Code for Apache Maven and TIBCO ActiveMatrix BusinessWorks™
So, you're probably wondering: how is this done with Flogo? OK, I'll tell you.
First of all, keep in mind that Flogo Enterprise was designed with exactly these aspects in mind, so you don't need to worry about them.
Regarding testing, to fit into a CI/CD lifecycle these testing capabilities should meet the following requirements:
It should be defined in some artifacts.
It should be executed automatically.
It should be able to check the outcome.
Flogo Enterprise includes testing capabilities in the Web UI by default, where you can not only test your flows from a debug/troubleshooting perspective but also generate the artifacts that will allow you to perform more sophisticated testing.
So, we go to the Web UI and, once we're inside a flow, we have a "Start Testing" button:
There we can see our Launch Configuration and, most importantly for this topic, we can export it and download it to our local machine:
Once everything is downloaded and we have the binary for our application, we can execute the tests automatically from the CLI using the following command:
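A minimal sketch of what that invocation can look like is shown below; the binary name, the file names, and both flag names are illustrative assumptions rather than the documented CLI, so check the Flogo Enterprise documentation for the exact syntax of your version:

# Hypothetical flags and file names, shown only to illustrate the shape of the call
./MyFlogoApp-linux_amd64 --test launch-configuration.json --test-output test-output.json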
This is going to generate an output file with the result of the test execution:
And if we open the file, we get exactly the same output that the flow returned, so we can perform any assertion on it.
That was easy, right? Let's do some additional tweaks so you don't even need to go to the Web UI: you can generate the launch configuration using only the CLI.
To do that, you only need to execute the following command:
OAuth 2.0 is a protocol that allows users to authorize third parties to access their info without needing to share the user credentials. It usually relies on an additional system that acts as an Identity Provider, which the user authenticates against; once authenticated, you are provided with a secure piece of information carrying the user privileges, and you can use that piece to authenticate your requests for some period of time.
OAuth v2.0 Authentication Flow Sample
OAuth 2.0 also defines a number of grant flows to adapt to different authentication needs. These authorization grants are the following ones:
Client Credentials
Authorization Code
Resource Owner Password Credentials
The decision to choose one flow or another depends on the needs of the invocation and, of course, it is ultimately the customer's decision, but as a general rule the approach shown in the Auth0 documentation is the usual recommendation:
Decision Graph to choose the Authorization Grant to use
These grants rely on JSON Web Tokens to transmit information between the different parties.
JWT
JSON Web Token (JWT) is an industry standard for token generation defined in RFC 7519. It defines a secure way to transmit information between parties as a JSON object.
It is composed of three components (header, payload, and signature) and can be signed either with a symmetric cipher or with a public/private key pair using RSA or ECDSA.
JWT Composition Sample
Use Case Scenario
So, OAuth 2.0 and JWT are the usual way to authenticate requests in the microservices world, so it is important to have this standard supported in your framework, and it is fully covered in Flogo Enterprise, as we're going to see in this test.
AWS Cognito Setup
We're going to use AWS Cognito as the Authorization Server, as it is quite easy to set up and it is one of the main actors in this kind of authentication. In Amazon's own words:
Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. Amazon Cognito scales to millions of users and supports sign-in with social identity providers, such as Facebook, Google, and Amazon, and enterprise identity providers via SAML 2.0.
In our case, we're going to do a pretty straightforward setup, and the steps are shown below:
Create a User Pool named “Flogo”
Create an App Client with a generated secret named “TestApplication”
Create a Resource Server named “Test” with identifier “http://flogo.test1” and with a scope named “echo”
Resource Server Configuration
Set the following App Client Settings as shown in the image below with the following details:
Cognito User Pool selected as Enable Identity Provider
Client credentials used as Allowed OAuth Flows
http://flogo.test1/echo selected as an enabled scope
App Client Setting Configuration
And that's all the configuration needed at the Cognito level; now let's go back to the Flogo Enterprise Web UI.
Flogo Set-up
Now, we are going to create a REST service. We're going to skip the steps for building a REST service with Flogo, but you can take a look at the detailed steps in the post below:
Building your First Flogo Application
We're going to create an echo service hosted at localhost:9999/hello/ that receives a path parameter after /hello/, which is the name we'd like to greet, and we're going to establish the following restrictions:
We’re going to check the presence of a JWT Token
We’re going to validate the JWT Token
We're going to check that the JWT includes the http://flogo.test1/echo scope.
If everything is OK, we're going to execute the service; if some prerequisite is not met, we're going to return a 401 Unauthorized error.
We’re going to create two flows:
A main flow that is going to expose the REST service.
A subflow that does all the validations regarding the JWT.
MainFlow is quite straightforward: it receives the request, executes the subflow, and depending on its output it executes the service or not:
So, all the important logic is inside the other flow, which does all the JWT validation; its structure is shown below:
We’re going to use the JWT activity hosted in GitHub available at the link shown below:
flogo-components/activity/jwt at master · ayh20/flogo-components
NOTE: If you don’t remember how to install a Flogo Enterprise extension take a look at the link below:
Installing Extensions in Flogo Enterprise
And after that, the configuration is quite easy, as the activity allows you to choose the action you want to perform with the token. In our case we choose "Verify", and we provide the token, the algorithm (in our case "RS256"), and the public key we're going to use to validate the signature of the token:
Test
Now, we’re going to launch the Flogo Enterprise Application from the binary generated:
And now, if we try to call the endpoint without providing any token, we get the expected 401 response code:
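For example, from the command line (the host, port, and path match the echo service described above, and the name path parameter is just a sample value):

curl -i http://localhost:9999/hello/Alex
# Returns 401 Unauthorized because no JWT was provided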
So, to get the access token, the first thing we need to do is send a request to the AWS Cognito token endpoint (https://flogotest.auth.eu-west-2.amazoncognito.com/oauth2/token) using our app credentials:
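As a sketch, using curl with the client credentials grant (the client ID and secret are placeholders you take from the Cognito App Client created earlier; the scope is the one defined in the Resource Server):

curl -X POST "https://flogotest.auth.eu-west-2.amazoncognito.com/oauth2/token" \
  -u "<client_id>:<client_secret>" \
  --data-urlencode "grant_type=client_credentials" \
  --data-urlencode "scope=http://flogo.test1/echo"
# The JSON response contains an access_token field holding the JWT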
This token carries all the info about the client and its permissions; you can check it on the jwt.io webpage:
And to finally test it, we only need to add it to the first request we tried as we can see in the picture below:
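Assuming the token travels in the standard Authorization header as a Bearer token (adjust the header name if your flow reads it from somewhere else), the call looks like this:

curl -i http://localhost:9999/hello/Alex \
  -H "Authorization: Bearer <access_token>"
# With a valid token carrying the echo scope, the service responds with the greeting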
Resources
GitHub – alexandrev/flogo-jwt-sample-medium: JWT Support in Flogo Enterprise Post Resource
In previous posts, we've talked a lot about the capabilities of Flogo and how to do different kinds of things with it, and one message was always underlined there: performance, performance, performance… but how does Flogo performance compare with other modern programming languages?
We've covered the basics of Flogo Enterprise development in the previous articles, but there is an important topic that hasn't been discussed so far, and that is Flogo error handling. We always think everything is going to work the way we planned and everything is going to go down the green path, but most of the time it doesn't, so you need to prepare your flows to handle these situations.
If you're used to TIBCO BusinessWorks development you're going to feel at home, because most of the ways of doing things are pretty much the same, so any logic you apply to your developments can be applied here. Let's cover the basics first.
Error Handler
A Flogo error handler is the main way Flogo developers handle the errors that happen during flow execution, and it has been this way in every version up to Flogo Enterprise 2.6.0, because until that moment it was the only way to do it.
The error handler is a separate error flow that is invoked when something goes wrong, similar to a catch flow in TIBCO BusinessWorks development. When you click the Error Handler button you enter a new flow with a predefined starter named error, as you can see in the picture below:
The data that starts this flow is the name of the activity that failed and the failure message, and then you can apply any handling logic you need until the flow returns. The activities and logic you can use are exactly the same as in your main flow.
Error Condition
Since Flogo Enterprise 2.6.0 a new way to handle errors has been included. As we said, until that point any error triggered an invocation of the error handler, but this doesn't cover every scenario in which an error can happen.
As Flogo Enterprise started with simple microservice use cases, the error handler was enough, but now, as the power and features Flogo provides keep increasing, new methods to catch errors and act on them are needed to cover these new scenarios. So, when you create a branch condition you can now choose between three options: Success, Success with Condition, and Error.
To do that, you only need to click on the engine button that appears when creating a branch from the activity that generates the error:
And you can choose one of the three options, as you can see in the image below:
As branches are cumulative, we can have an activity with several branches of different types:
But you can only add one Error-type branch per activity; if you try to add another, the option is disabled so you will not be able to do it:
Throw Error Activity
All this content has been focused on how to handle errors when they happen, but you may also need the other way around: to detect something, do some checks, and decide that the situation should be handled as an error.
To do that you have the Throw Error activity, where you can pass the data of the error you want to raise through a two-field interface, one field for the error message and another for the error data, as you can see in the picture below:
Reuse is one of the most important capabilities in application development, and it has been implemented so well in Flogo Enterprise that you're going to be amazed at how great it is. Let's take a look at Flogo subflows.
In Flogo the concept of a subflow doesn't exist when you create an application; you can only create flows. If you remember, a flow is created based on triggers and actions:
So, what is a subflow in Flogo? A flow without the triggers. That simple. Does that mean I need to create a new flow, removing any trigger or using a special trigger, so it can be reused in a different flow? No. Not needed.
Any flow can be reused as a subflow as-is. The only difference is that when you invoke it as a subflow its triggers are not executed. That simple. So, yes, you can reuse any flow you've created without doing anything at all. Let's see how.
Remember the application that we’ve created in the past post about GraphQL:
We have a flow named Mutation_asignUser where we need to gather the info of the user and the company, the same thing we were already doing in the flows Query_currentUser and Query_company. So, how easy is it to reuse them? Let's take a look at Mutation_asignUser in more detail:
Mutation_asignUser with highlighted subflows usage
The activities with the red box around them are where we're calling our "subflows", and it is as easy as including an activity called "Start a SubFlow".
This activity asks for the flow to run, which can be any flow you have in your application (flows from other applications cannot be included as subflows).
And once you select the flow, the input and output are populated based on the flow interface. Easy, right?
Let’s hack it! and start creating more Flogo apps!!!!
Last month, during KubeCon 2019 Europe in Barcelona, OpenTracing announced its merger with the OpenCensus project to create a new standard named OpenTelemetry that is going to go live in September 2019.
OpenTracing, OpenCensus Merge into a Single New Project, OpenTelemetry - The New Stack
So, I think it would be awesome to take a look at the OpenTracing capabilities we have available in TIBCO BusinessWorks Container Edition.
Today's world is complex in terms of how our architectures are defined and managed. Concepts from recent years like containers, microservices, and service mesh give us the option to reach a new level of flexibility, performance, and productivity, but they also come with a management cost we need to deal with.
Years ago architectures were simpler and the service concept was just starting out, but even then a few issues began to arise regarding monitoring, tracing, logging, and so on. In those days everything was solved with a development framework that all our services included, because all of our services were developed by the same team with the same technology, and within that framework we could make sure things were handled properly.
Now we rely on standards for these kinds of things, and for tracing we rely on OpenTracing. I don't want to spend time talking about what OpenTracing is, as they have a full Medium account where they explain it much better than I ever could, so please take some minutes to read about it.
Distributed Tracing in 10 Minutes
The only statement I want to make here is the following one:
Tracing is not Logging, and please be sure you understand that.
Tracing is about sampling; it's about how flows are performing and whether everything is working, not about whether a specific request for a given customer ID was processed correctly… that's logging, not tracing.
So OpenTracing and its different implementations like Jaeger or Zipkin are the way we can implement tracing today in a really easy way, and this is not something you can only do in code-based development languages: you can also do it with zero-code tools for developing cloud-native applications like TIBCO BusinessWorks Container Edition, and that's what I'd like to show you today. So, let the match begin…
The first thing I'd like to do is show you the scenario we're going to implement, which is the one shown in the image below:
We are going to have two REST services that call each other, and we're going to export all the traces to an external Jaeger component; later we can use its UI to analyze the flow in a graphical and easy way.
So, the first thing we need to do is develop the services which, as you can see in the pictures below, are quite simple because they are not the main purpose of our scenario.
Once we have our Docker images based on those applications we can start, but before we launch our applications we need to launch our Jaeger system. You can read all the info about how to do it in the link below:
Getting started
But in the end we only need to run the following command:
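A typical way to do that is to run the Jaeger all-in-one image; the command below uses the standard Jaeger defaults (16686 for the UI, 6831/udp for the agent, and 5778 for the sampling endpoint), and the container is named jaeger so the application containers can link to it later:

docker run -d --name jaeger \
  -p 16686:16686 -p 6831:6831/udp -p 5778:5778 \
  jaegertracing/all-in-one:latest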
And now we're ready to launch our applications. The only thing we need to do (because, as you saw, we didn't do anything special in our developments and they were quite straightforward) is to add the following environment variables when we launch our containers.
And… that's it. We launch our containers with the following commands and wait until the applications are up and running:
docker run -ti -p 5000:5000 --name provider -e BW_PROFILE=Docker -e PROVIDER_PORT=5000 -e BW_LOGLEVEL=ERROR --link jaeger -e BW_JAVA_OPTS="-Dbw.engine.opentracing.enable=true" -e JAEGER_AGENT_HOST=jaeger -e JAEGER_AGENT_PORT=6831 -e JAEGER_SAMPLER_MANAGER_HOST_PORT=jaeger:5778 provider:1.0
docker run --name consumer -ti -p 6000:6000 -e BW_PROFILE=Docker --link jaeger --link provider -e BW_JAVA_OPTS="-Dbw.engine.opentracing.enable=true" -e JAEGER_AGENT_HOST=jaeger -e JAEGER_AGENT_PORT=6831 -e JAEGER_SAMPLER_MANAGER_HOST_PORT=jaeger:5778 -e CONSUMER_PORT=6000 -e PROVIDER_HOST=provider consumer:1.0
Once they're running, let's generate some requests! To do that I'm going to use a SoapUI project to generate a stable load for 60 seconds, as you can see in the image below:
And now we go to the Jaeger UI URL, where we can see the following as soon as we click the Search button:
And then, if we zoom in on a specific trace:
That's pretty amazing, but that's not all: if you search the UI for the data of these traces, you can see technical data from your BusinessWorks Container Edition flows, as shown in the picture below:
But… what if you want to add your own custom tags to those traces? You can do that as well! Let me explain how.
Since BusinessWorks Container Edition 2.4.4 you will find a new tab in all your activities named "Tags", where you can add the custom tags you want this activity to include. For example, a custom ID that is going to be propagated through the whole process can be defined as you can see here.
And if you take a look at the data we have in the system, you can see that all of these traces include this data:
You can take a look at the code in the following GitHub repository:
GitHub - alexandrev/bwce-opentracing-customtags
I don't want to write a full article about what GraphQL is and the advantages it has compared with REST, especially when there are so many articles on Medium about that topic, so please take a look at the following ones:
A Beginner’s Guide to GraphQL
So, in summary, GraphQL is a different protocol to define your API interfaces with another approach in mind. Let's see how we can include this kind of interface in our Flogo flows. We're going to play with a GraphQL schema that defines our API (a sketch of it appears after the notes below).
If this is the first time you see a GraphQL schema, let me give you some clarifications:
The schema is split into three parts: queries, mutations, and the model.
Queries are GET-style requests to retrieve information. In our case, we have two queries: currentUser and company.
Mutations are POST/PUT-style requests to modify information. In our case, we have three mutations: registerUser, registerCompany, and asignUser.
The model contains the different objects and types our queries and mutations interact with.
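Putting those three parts together, a reconstruction of the schema looks roughly like the sketch below; the exact field names and types are assumptions derived from the flows described later, not the original file:

type User {
  id: ID
  name: String
  email: String
  company: Company
}

type Company {
  id: ID
  name: String
}

type Query {
  currentUser(email: String!): User
  company(name: String!): Company
}

type Mutation {
  registerUser(name: String!, email: String!): User
  registerCompany(name: String!): Company
  asignUser(email: String!, company: String!): User
}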
So, now we are going to do the hard work in the Flogo environment, and we start by creating a new application that we're going to call GraphQL_Sample_1.
Creation form of the new app named GraphQL_Sample_V1
Now we have an empty application. Up to this point it wasn't hard, right? OK, let's see how it goes. Now we are going to create a new flow, and we can choose between three options:
Empty flow
Swagger specification
GraphQL Schema
So we choose the GraphQL Schema option, upload the file, and it generates a skeleton of all the flows needed to support this specification. A new flow is generated for each of the queries and each of the mutations we've defined; as we have two (2) queries and three (3) mutations, that gives five (5) flows in total, as you can see in the picture below:
Autogenerated flows based on GraphQL schema
As you can see, the flows have been generated following this naming convention:
<TYPE>_<name>
Where <TYPE> can be either Mutation or Query and <name> is the name this component has in the GraphQL Schema.
Now everything is done regarding the GraphQL part, and we only need to provide content for each of the flows. In this sample I'm going to rely on a PostgreSQL database to store all the info regarding users and companies, but the content is going to be very straightforward.
Query_currentUser: This flow queries PostgreSQL for the user data and returns it together with the company the user belongs to. If the user doesn't belong to any company, we gather only the user data, and if the user is not present we return an empty object.
Query_currentUser flow
Query_company: This flow queries PostgreSQL for the company data and returns it; if the company is not present it returns an empty object.
Mutation_registerUser: This flow inserts a user into the database, and if its email already exists it returns the existing data to the consumer.
Mutation_registerCompany: This flow inserts a company into the database, and if its name already exists it returns the existing data to the consumer.
Mutation_asignUser: This flow assigns a user to a company; to do that it retrieves the user data based on its email, does the same for the company, and updates PostgreSQL accordingly.
OK, now we have our app built; let's see how we can test it and play with the GraphQL API we've built. So… it's showtime!
First, we're going to build the app. As you probably know, you can choose between different kinds of builds: a Docker image or an OS-based package. In my case, I'm going to generate a Windows build to keep the process simple, but you can choose whatever works for you.
To do that we go to the app menu, click on Build, and choose the Windows option:
Build option to windows
And once the build is done we're going to have a new EXE file in our Downloads folder. Yes, that easy! And now, how to launch it? Even easier… just execute the EXE file and… it's done!
GraphQL Flogo app running in Windows console
As you can see in the picture above, we're listening for requests on port 7879 on the /graphql path. So, let's open a Postman client and start sending requests to it!
We're going to start with the queries. To be able to return data I've inserted a sample record in the database with the email test@test.com, so if I now try to retrieve it, I can do this:
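For instance, the same query can be sent from the command line instead of Postman; the field names inside the query are the same assumptions used in the schema sketch above:

curl -X POST http://localhost:7879/graphql \
  -H "Content-Type: application/json" \
  -d '{"query": "{ currentUser(email: \"test@test.com\") { name email company { name } } }"}'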
In previous posts we've talked about the capabilities of Flogo and how to build our first Flogo application, so if you've read both of them you have a clear idea of what Flogo provides and how easy it is to create applications with it.
Among those capabilities, we mentioned that one of the strengths of Flogo is how easy it is to extend the default capabilities it provides. Flogo extensions let you increase both the integration capabilities and the compute capabilities of the product, and they're built using Go. You can create different kinds of extensions:
Triggers: Mechanism to activate a Flogo flow (usually known as Starters)
Activities/Actions: Implementation logic that you can use inside your Flogo flows.
There are different kinds of extensions depending on how they’re provided and the scope they have.
TIBCO Flogo Enterprise Connectors: These are the connectors provided directly by TIBCO Software for customers using TIBCO Flogo Enterprise. They are released through TIBCO eDelivery like all the other products and components from TIBCO.
Flogo Open Source Extensions: These are the extensions developed by the community, usually stored in GitHub repositories or any other publicly available version control system.
TIBCO Flogo Enterprise Custom Extensions: These are the equivalent of the Flogo open source extensions, but built to be used in TIBCO Flogo Enterprise or TIBCO Cloud Integration (TIBCO's iPaaS); they follow the requirements defined in the Flogo Enterprise documentation and provide a few more configuration options about how the extension is displayed in the UI.
Installation using TIBCO Web UI
In this article we're going to cover how to work with all of them in our environment, and you're going to see that the procedure is pretty much the same; the main difference is how to get the deployable object.
We need to install some extensions, and for our case we're going to use both kinds: a connector provided by TIBCO for connecting to GitHub, and an open source activity built by the community to manage file operations.
First, we're going to start with the GitHub connector, the Flogo Connector for GitHub, which is downloaded through TIBCO eDelivery just as you did with Flogo Enterprise. Once you have the ZIP file, you need to add it to your installation, and to do that we go to the Extensions page.
And we click on Upload and provide the ZIP file we've downloaded with the GitHub connector.
We click on the "Upload and compile" button and wait until the compilation process finishes; after that, we should notice that we have an additional trigger available, as you can see in the picture below:
So, we already have our GitHub trigger, but we still need our file activities, so now we're going to do the same exercise with a different connector. In this case, we're going to use an open source activity hosted in Leon Stigter's GitHub repository: we're going to download the full flogo-components repository and upload a ZIP file to the Extensions page as we did before.
We extract the full repository, go to the activity folder, and generate a ZIP file from the folder named "writetofile"; that is the ZIP file we're going to upload to our Extensions page:
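From the command line, creating that ZIP can look like this (assuming the repository has been cloned or unzipped into the current directory; depending on your Flogo Enterprise version you may need the folder itself or only its contents at the root of the archive):

cd flogo-components/activity
zip -r writetofile.zip writetofile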
The repository structure is pretty much the same for all these open source repositories: they are usually named flogo-components and contain two main folders:
activity: Folder that groups all the different activities available in this repository.
trigger: Folder that groups all the different triggers available in this repository.
Each of these folders contains a folder for each of the activities and triggers implemented in the repository, as you can see in the picture below:
And each of them has the same structure:
activity.json: Describes the model of the activity (name, description, author, input settings, output settings).
activity.go: Holds the Go code that implements the capability the activity exposes.
activity_test.go: Holds the tests the activity needs so it is ready to be used by other developers and users.
NOTE: Extensions for TIBCO Flogo Enterprise have an additional file named activity.ts, a TypeScript file that defines the UI validations that should be performed for the activity.
And once we have the file, we can upload it the same way we did with the previous extension.
Using CLI to Install
Also, if we're using the Flogo CLI we can install the extension directly from the URL of the activity folder, without needing to provide the ZIP file. To do that we need to enable the Installer Manager using the following command:
<FLOGO_HOME>/tools/installmgr.bat
And that is going to build a Docker image that provides a CLI tool with the following commands:
Installer Manager usage menu
Install: Installs Flogo Enterprise, Flogo connectors, services, etc. in the current installation directory.
Uninstall: Uninstalls Flogo Enterprise, Flogo connectors, and services from the current installation directory.
TIBCO Connector installation using Install Manager CLI
And this process can be used with an official Connector as well as an OSS Extension
OSS Extension Installation using Install Manager CLI
Probes are how we tell Kubernetes that everything inside the pod is working as expected. Kubernetes has no fine-grained way to know what's happening inside each container or whether it is healthy, which is why it needs help from the container itself.
At first you might think you can do this with the ENTRYPOINT of your Dockerfile: you only specify one command to run inside each container, so if that process is running, everything is healthy, right? OK… fair enough…
But is this always true? Does a running process at the OS/container level mean that everything is working fine? Think about an Oracle database for a minute: imagine it has an issue with shared memory and stays in an initializing status forever. Kubernetes checks the command, finds that it is running, and tells the whole cluster: OK! Don't worry! The database is working perfectly, go ahead and send your queries to it!
This could happen with similar components like a web server or even with an application itself, but it is especially common when you have servers that can host deployments, like BusinessWorks Container Edition itself. And that's why this is very important for us as developers and even more important for us as administrators. So, let's start!
The first thing we're going to do is build a BusinessWorks Container Edition application. As this is not the main purpose of this article, we're going to use the same one I created for the BusinessWorks Container Edition and Istio integration, which you can find here.
So, this is a quite simple application that exposes a SOAP web service. All applications in BusinessWorks Container Edition (as well as in BusinessWorks Enterprise Edition) have their own status, so you can ask them whether they're Running or not; that is something the internal BusinessWorks Container Edition "engine" knows. (NOTE: We're going to use the word engine to simplify when talking about the internals of BWCE. In detail, the component that knows the status of the application is the internal AppNode the container starts, but let's keep it simple for now.)
Kubernetes Probes
In Kubernetes there is the concept of a "probe" to perform health checks on your containers. This is done by configuring liveness probes or readiness probes.
Liveness probe: Kubernetes uses liveness probes to know when to restart a container. For example, liveness probes could catch a deadlock, where an application is running but unable to make progress.
Readiness probe: Kubernetes uses readiness probes to know when a container is ready to start accepting traffic. A pod is considered ready when all of its containers are ready. One use of this signal is to control which pods are used as backends for Services; when a pod is not ready, it is removed from Service load balancers.
Even though there are two types of probes, for BusinessWorks Container Edition both are handled the same way. The idea is the following: as long as the application is Running you can start sending traffic to it, and when it is not running we need to restart the container, which makes things simpler for us.
Implementing Probes
Each BusinessWorks Container Edition application that is started has an out-of-the-box way to know whether it is healthy or not. This is done through a special endpoint published by the engine itself:
http://localhost:7777/_ping/
So, if we have a normal BusinessWorks Container Edition application deployed on our Kubernetes cluster, as we had for the Istio integration, we get logs similar to these:
Starting traces of a BusinessWorks Container Edition Application
As you can see, the logs say that the application is started. Since we can't launch a curl request from inside the container (we haven't exposed port 7777 to the outside yet and curl is not installed in the base image), the first thing we're going to do is expose it to the rest of the cluster.
To do that we change the Deployment.yml file we have been using to this one:
Deployment.yml file with the 7777 port exposed
Now, we can go to any container in the cluster that has curl installed (or use any other way to launch a request) and call the endpoint; we get an HTTP 200 code and the message "Application is running".
Successful execution of _ping endpoint
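From a pod that has curl available, the check looks like this (replace the placeholder with your pod's cluster IP, and note the trailing slash, as explained in the note below):

curl -i http://<pod-ip>:7777/_ping/
# Expected: HTTP 200 with the body "Application is running"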
NOTE: If you forget the trailing slash and invoke _ping instead of _ping/, you're going to get an HTTP 302 Found code with the final location, as you can see here:
HTTP 302 response when pointing to _ping instead of _ping/
OK, let's see what happens if we now stop the application. To do that we're going to go inside the container and use the OSGi console.
Once you're inside the container, you execute the following command:
ssh -p 1122 equinox@localhost
It is going to ask for credentials; use the default password 'equinox'. After that it gives you the chance to create a new user, and you can use whatever credentials work for you. In my example I'm going to use admin / adminadmin. (NOTE: The minimum length for a password is eight (8) characters.)
If we execute frwk:la it shows the applications deployed, in our case only one, as it should be in a BusinessWorks Container Edition application:
To stop it, we first execute the following command to list all the OSGi bundles currently running in the system:
frwk:lb
Now we find the bundles that belong to our application (at least two bundles: one per BW module and another for the application):
Showing bundles inside the BusinessWorks Container Application
And now we can stop them using felix:stop <ID>, so in my case I need to execute the following commands:
stop “603”
stop “604”
Commands to stop the bundles that belong to the application
And now the application is stopped
OSGi console showing the application as Stopped
So, if we now try to launch the same curl command we executed before, we get the following output:
Failed execution of ping endpoint when Application is stopped
As you can see, we get an HTTP 500 error, which means something is wrong. If we now start the application again using the start bundle command (the equivalent of the stop command we used before) for both bundles of the application, you will see that the application says it is running again:
And the command returns the HTTP 200 output, as it should, with the message "Application is running".
So, now that we know how the _ping/ endpoint works, we only need to add it to our Kubernetes deployment.yml file. We modify our deployment file again to be something like this:
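A minimal sketch of the relevant container section is shown below; the container name, image, and timing values are illustrative assumptions, while the path, port, and initialDelaySeconds are the pieces discussed in this article:

    containers:
      - name: bwce-app
        image: <your-bwce-image>
        ports:
          - containerPort: 7777   # only needed for the manual curl tests shown above
        livenessProbe:
          httpGet:
            path: /_ping/
            port: 7777
          initialDelaySeconds: 60
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /_ping/
            port: 7777
          initialDelaySeconds: 30
          periodSeconds: 10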
NOTE: The initialDelaySeconds parameter is quite important to make sure the application has a chance to start before the probe begins executing. If you don't set this value you can get a reboot loop in your container.
NOTE: The example shows port 7777 as an exposed port, but this is only needed for the manual steps we did before; it will not be needed in a real production environment.
So now we deploy the YML file again, and once the application is running we try the same approach; but now, as we have the probes defined, as soon as I stop the application the container is going to be restarted. Let's see!
As you can see in the picture above, after the application is stopped the container is restarted, and because of that we get kicked out of the container.
So, that's all. I hope this helps you set up your probes, and if you need more details, please take a look at the Kubernetes documentation about httpGet probes to see all the configuration options you can apply to them.