Linkerd as the Solution to Your Communication Challenges in a Microservices Architecture

Linkerd, a CNCF-hosted service mesh, provides many of the features that today's microservices architectures need.

Photo by Diz Play on Unsplash

If you are reading this, you are probably already aware of the challenges that come with a microservices architecture, either because you have read about them or because you are facing them firsthand right now.

One of the most common challenges is networking and communication. With the explosion of components that need to talk to each other and the ephemeral nature of cloud-native deployments, many features that used to be nice-to-haves are now a necessity.

Concepts like service registry and service discovery, service authentication, dynamic routing policies, and circuit breaker patterns are no longer things that only the cool companies do; they are basic requirements for mastering microservices as part of a cloud-native platform. This is where the service mesh is gaining popularity as a solution that addresses most of these challenges and provides the features that are needed.

If you remember, a while ago I already covered that topic and introduced Istio as one of the options we have:

But that project, created by Google and IBM, is not the only option you have to provide those capabilities. The Linkerd project, part of the Cloud Native Computing Foundation (CNCF), provides similar features.

How to install Linkerd

To start using Linkerd, the first thing we need to do is install the software. That involves two installations: one on the Kubernetes cluster and another on the host.

To install the CLI on the host, go to the releases page, download the edition for your OS, and install it.

I am using a Windows-based system in my sample, so I use Chocolatey to install the client.
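A minimal sketch of that step (assuming the Chocolatey package is named linkerd2, which may differ in your setup):

choco install linkerd2

After doing so, I can see the version of the CLI by typing the following command: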

linkerd version

And you will get an output similar to this:

PS C:\WINDOWS\system32> linkerd.exe version
Client version: stable-2.8.1
Server version: unavailable

Now we need to do the installation on the Kubernetes cluster, and to do so, we use the following command:

linkerd install | kubectl apply -f -

And you will get an output similar to this one:

PS C:\WINDOWS\system32> linkerd install | kubectl apply -f -
namespace/linkerd created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-identity created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-identity created
serviceaccount/linkerd-identity created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-controller created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-controller created
serviceaccount/linkerd-controller created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-destination created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-destination created
serviceaccount/linkerd-destination created
role.rbac.authorization.k8s.io/linkerd-heartbeat created
rolebinding.rbac.authorization.k8s.io/linkerd-heartbeat created
serviceaccount/linkerd-heartbeat created
role.rbac.authorization.k8s.io/linkerd-web created
rolebinding.rbac.authorization.k8s.io/linkerd-web created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-web-check created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-web-check created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-web-admin created
serviceaccount/linkerd-web created
customresourcedefinition.apiextensions.k8s.io/serviceprofiles.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/trafficsplits.split.smi-spec.io created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-prometheus created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-prometheus created
serviceaccount/linkerd-prometheus created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-proxy-injector created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-proxy-injector created
serviceaccount/linkerd-proxy-injector created
secret/linkerd-proxy-injector-tls created
mutatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-proxy-injector-webhook-config created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-sp-validator created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-sp-validator created
serviceaccount/linkerd-sp-validator created
secret/linkerd-sp-validator-tls created
validatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-sp-validator-webhook-config created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-tap created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-tap-admin created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-tap created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-tap-auth-delegator created
serviceaccount/linkerd-tap created
rolebinding.rbac.authorization.k8s.io/linkerd-linkerd-tap-auth-reader created
secret/linkerd-tap-tls created
apiservice.apiregistration.k8s.io/v1alpha1.tap.linkerd.io created
podsecuritypolicy.policy/linkerd-linkerd-control-plane created
role.rbac.authorization.k8s.io/linkerd-psp created
rolebinding.rbac.authorization.k8s.io/linkerd-psp created
configmap/linkerd-config created
secret/linkerd-identity-issuer created
service/linkerd-identity created
deployment.apps/linkerd-identity created
service/linkerd-controller-api created
deployment.apps/linkerd-controller created
service/linkerd-dst created
deployment.apps/linkerd-destination created
cronjob.batch/linkerd-heartbeat created
service/linkerd-web created
deployment.apps/linkerd-web created
configmap/linkerd-prometheus-config created
service/linkerd-prometheus created
deployment.apps/linkerd-prometheus created
deployment.apps/linkerd-proxy-injector created
service/linkerd-proxy-injector created
service/linkerd-sp-validator created
deployment.apps/linkerd-sp-validator created
service/linkerd-tap created
deployment.apps/linkerd-tap created
configmap/linkerd-config-addons created
serviceaccount/linkerd-grafana created
configmap/linkerd-grafana-config created
service/linkerd-grafana created
deployment.apps/linkerd-grafana created

Now we can check that the installation has been done properly using the command:

linkerd check

And if everything has been done properly, you will get an output like this one:

PS C:\WINDOWS\system32> linkerd check
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API
kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version
linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ controller pod is running
√ can initialize the client
√ can query the control plane API
linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
√ control plane PodSecurityPolicies exist
linkerd-identity
----------------
√ certificate config is valid
√ trust anchors are using supported crypto algorithm
√ trust anchors are within their validity period
√ trust anchors are valid for at least 60 days
√ issuer cert is using supported crypto algorithm
√ issuer cert is within its validity period
√ issuer cert is valid for at least 60 days
√ issuer cert is issued by the trust anchor

Then we can open the Linkerd dashboard using the following command:

linkerd dashboard
Dashboard initial web page after a clean Linkerd installation

Deployment of the apps

We will use the same apps that we used some time ago to deploy Istio, so if you want to recall what they do, take another look at that article.

I have uploaded the code to my GitHub repository, and you can find it here: https://github.com/alexandrev/bwce-linkerd-scenario

To deploy, you need your Docker images pushed to a registry; I will use Amazon ECR as mine.

So I need to build and push those images with the following commands:

docker build -t provider:1.0 .
docker tag provider:1.0 938784100097.dkr.ecr.eu-west-2.amazonaws.com/provider-linkerd:1.0
docker push 938784100097.dkr.ecr.eu-west-2.amazonaws.com/provider-linkerd:1.0
docker build -t consumer:1.0 .
docker tag consumer:1.0 938784100097.dkr.ecr.eu-west-2.amazonaws.com/consumer-linkerd:1.0
docker push 938784100097.dkr.ecr.eu-west-2.amazonaws.com/consumer-linkerd:1.0

And after that, we are going to deploy the images on the Kubernetes cluster:

kubectl apply -f .\provider.yaml
kubectl apply -f .\consumer.yaml
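One note here: for the apps to be meshed and report Linkerd metrics, the Linkerd proxy sidecar has to be injected into the pods. A minimal sketch of that step (assuming the deployments are named provider-v1 and consumer-v1, matching the pod names you will see later):

kubectl get deploy provider-v1 -o yaml | linkerd inject - | kubectl apply -f -
kubectl get deploy consumer-v1 -o yaml | linkerd inject - | kubectl apply -f -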

And now we can see those apps in the Linkerd Dashboard under the default namespace:

Image showing the provider and consumer app as linked applications

And now, we can reach the consumer endpoint using the following command:

kubectl port-forward pod/consumer-v1-6cd49d6487-jjm4q 6000:6000

And if we hit the endpoint, we get the expected reply from the provider.
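For example, with the port-forward running, a request like this should come back with the provider's response (the root path is just a placeholder; use whatever resource the consumer exposes):

curl http://localhost:6000/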

Sample response provided by the provider

And in the dashboard, we can see the stats of the provider:

Linkerd dashboard showing the stats of the flow

Also, Linkerd provides a Grafana dashboard by default where you can see more metrics; you can get there using the Grafana link on the dashboard.

Grafana link on the Linkerd Dashboard

When you open it, you will see something like the dashboard shown below:

Grafana dashboard showing the linkerd statistics

Summary

Throughout this process, we have seen how easily we can deploy the Linkerd service mesh in our Kubernetes cluster and how applications can integrate and interact with it. In the next posts, we will dive into the more advanced features that will help us with the new challenges that come with a microservices architecture.

Event Streaming, API and Data Integration: 3 Pillars You Should Master On the Cloud

Event Streaming, API, and Data are the three musketeers that cover all the aspects of mastering integration in the cloud.

Photo by Simon Rae on Unsplash

Enterprise Application Integration has been one of the most challenging topics in the IT landscape since the beginning of time. As soon as the number of systems and applications in big corporations started to grow, it became an issue that had to be addressed. The efficiency of this process also defines which companies succeed and which fail, as cooperation between applications becomes critical to respond at the pace the business demands.

I usually like to use the “road analogy” to define this:

It doesn't matter if you have the fastest cars; if you don't have proper roads, you will not get anywhere.

This situation generated a lot of investment from companies, and many vendors and products were launched to support it. Solution after solution emerged: EAI, ESB, SOA, middleware, distributed integration platforms, cloud-native solutions, and iPaaS.

Each of these approaches provided a solution to the challenges of its time. As the rest of the industry evolved, the solutions changed to adapt to the new reality (containers, microservices, DevOps, API-led, event-driven...).

So, what is the situation today? Today there is a widespread misconception that integration is the same as APIs, and that an API means a synchronous, HTTP-based API (REST, gRPC, GraphQL). But it is much more than this.

Photo by Tolga Ulkan on Unsplash

1.- API

API-led design is certainly key to an integration solution, especially the philosophical approach behind it: each component we create today is built with collaboration in mind, to work with existing and future components to benefit the business in an easy and agile way. This transcends the protocol discussion completely.

APIs cover many different kinds of solutions, from the familiar REST APIs to AsyncAPI for describing event-based APIs.

2.- Event Streaming

Asynchronous communication is essential when you are talking about big enterprises and many different applications. Think of requirements like a pub-sub approach to increase independence among services and apps, or flow control to manage high-demand flows that can exceed an application's throttling limits, especially when SaaS solutions are involved.

You may think this is a very opinionated view, but it is also something most providers in this space have acknowledged through their actions:

  • At first, AWS released SNS/SQS, its original messaging services, as its only solution in this space.
  • In November 2017, AWS released Amazon MQ, another message-queueing service, to cover the scenarios that SQS could not.
  • In May 2019, AWS released Amazon MSK, a managed service for Kafka, to provide streaming data distribution and processing capabilities.

And that is because when we move away from smaller applications and migrate from a monolithic approach to a microservices one, more patterns and more requirements appear, and, as integration solutions have shown in the past, supporting them is critical.

3.- Data Integration

Usually, when we talk about integration, we talk about Enterprise Application Integration because of this bias from the past; even I use the term EAI to cover this topic. But in recent years, the focus has shifted to how data is distributed across the company rather than how applications are integrated, because what really matters is the data they exchange and how we can transform that raw data into insights that help us know our customers better, optimize our processes, or discover new opportunities.

Until recently, this part was handled separately from the integration solutions. You probably relied on a dedicated ETL (Extract-Transform-Load) tool to move data from one database to another, or to a different kind of storage like a data warehouse, so your data scientists could work with it.

But again, the demand for agility means this needs to change, and the same principles integration applies to bring agility to the business now also apply to how we exchange data. We try to avoid purely technical data movement and instead ease access to, and the right organization of, that data. Data virtualization and data streaming are the core capabilities that address those challenges, providing an optimized solution for how data is distributed.

Summary

My main goal with this article is to make you aware that integrating your applications is about much more than the REST APIs you expose, perhaps behind an API gateway, and that the needs can be very different. The stronger your integration platform is, the stronger your business will be.

Top 3 Apps From SetApp to Boost Your Productivity as a Software Developer

SetApp provides a fantastic set of tools that can boost your productivity as a Software Developer.

Photo by Tianyi Ma on Unsplash

Last week, I swapped the PC laptop I had used extensively for the last few years for a new MacBook Pro, and I have entered a new world. I have used OS X environments in the past (I had my first MacBook in 2008 and my second one in 2016), so I am not new to the Apple ecosystem, but even so, things change quickly in the app industry, especially over the last four years.

So, when I faced the Big Sur login screen in front of me, I wondered how I could equip myself, and I remembered SetApp. I discovered SetApp a long time ago because one of the main podcasters I listen to, Emilio Cano, is a very supportive fan of SetApp and takes every chance he gets to talk about its benefits.

So I decided to give it a try, and I could not be happier that I did. But before I start talking about the apps, I would like to give a summary of what SetApp is, using their own words from their official website:

Setapp is a cross-platform suite of apps, solving your daily tasks. Stay in your flow, anywhere.

So, it is like a Netflix for apps: you pay a monthly subscription and automatically get access to paid apps, and they keep adding new ones to the catalog for you to use.

As a software developer, I will focus this post on the apps that help me in my daily job. Here are the three that help me the most:

1.- Shimo — An Awesome VPN Client

In these days of remote work, we need to connect to several VPNs each day to access our company environment or even customer environments. If, like me, you work for several customers every day, switching from one customer VPN to another is a routine task, and if you can do it fast, you optimize your time.

Shimo is a VPN client that supports all the main protocols companies use: Cisco, Juniper, OpenVPN… everything you might need.

VPN options that Shimo provides to you (screenshot by the author)

You can connect to more than one VPN at a time if they do not overlap, and you can quickly connect to or disconnect from any VPN right from the menu bar.

2.- Paste — The Ultimate Clipboard

This app is key for any developer, and for anyone who uses a computer. Paste is simply how the clipboard should be: an enhanced clipboard with history, so you can go back and select something you copied yesterday and need to recover.

And let's be honest: as software developers, one of our main tricks is Ctrl+C, Ctrl+V. It comes in handy for everything: a snippet of code a colleague shared with you, that UNIX command you always forget, or the username somebody sent you by email or Slack.

Screenshot from Paste taken by the author

3.- DevUtils

This one is a clear choice: a tool named DevUtils has to be on this list. But what is DevUtils? It is a collection of all those tools you always search for on the internet to do simple but frequent tasks.

Tasks like encoding or decoding Base64, testing a regular expression, converting UNIX timestamps, formatting JSON, debugging a JWT, and much more… How many times do you google one of these tasks? How much time can you save by having all of that in your dock all the time? The answer is simple: a lot!

Screenshot from DevUtils taken by the Author

Summary

There are many more apps in the SetApp catalog. At the time of writing, the number is up to 210 apps covering all aspects of your life, including one of the best-selling apps in the App Store. But I wanted to focus on the ones I use most in my life as a software developer, and if you are like me, you will find them awesome!

#TIBFAQS: TIBCO BW Impaired Status: How to solve it?

Learn the main reasons behind an Impaired status and how you can perform troubleshooting to identify and solve the error.

Photo by Charles Deluvio on Unsplash

This is another post in the #TIBFAQS series. As a reminder of what this is all about: you can submit your questions regarding TIBCO development issues or doubts, and I try to provide an answer here to help the community of TIBCO developers out there.

So, today I am going to start with one of the most common issues when we work with BusinessWorks: I go to deploy my application or test it locally, and I get this log trace and nothing after it: TIBCO BW Impaired Status.

Impaired status error message

This is one of the usual situations a junior BusinessWorks developer runs into, and one of the biggest time sinks in troubleshooting. Let's learn some tricks today so this message never again stops you on your journey to production.

What is the cause of this error?

This error means that the BusinessWorks runtime is not able to satisfy all the dependencies among the components required to start. As you probably know, in BusinessWorks each of an application's components is managed independently, and they reference each other.

For example, the Application depends on the Application Module and the Shared Modules; the Application Module can have a dependency on a JMS Connection, and so on.

Situations that can raise this error

Let's take a look now at the situations that can raise this error and how to solve them.

1.- Missing module or incompatible versions

One usual situation that leads to this problem is a missing module or an incompatible module version. In that case, the referencing component waits for a module, or a specific version of a module, to be started, but that module is missing or a different version is starting.

2.- Not valid shared connections

Another option is when some component needs to establish a connection to another technology, such as a JDBC connection, a JMS connection, a Kafka connection, or any other of the more than 200 connectors available, and that shared connection is not valid.

3.- Missing Starter component in the Module Descriptors

The last of the usual suspects is when you have a Starter component declared in the Module Descriptors, but that process is not available inside the EAR file you are deploying. That dependency is never satisfied, which leads to a never-ending Impaired status.

How to detect which component is missing?

To help you detect which situation you are in, you have an incredible tool at your disposal: the la command from the OSGi console interface.

This command lists the applications deployed in a specific AppNode or container and gives us their details, including the reason for an Impaired situation.
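Using it is as simple as typing the command at the OSGi console prompt (output omitted here; as described above, it prints each deployed application with its state and, for an impaired one, the dependency that is not satisfied):

la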

How to run the OSGi console depends on your deployment model, but you can read all the information about it in the link below:

Summary

I hope you find this helpful in solving the TIBCO BW Impaired status in your apps, and if you are facing this issue right now, you now have the information not to be stopped by it. If you would like to submit your questions, feel free to use one of the following options:

  • Twitter: You can send me a mention or a DM at @alexandrev, or even just use the hashtag #TIBFAQs, which I monitor.
  • Email: You can send me an email to alexandre.vazquez at gmail.com with your question.
  • Instagram: You can send me a DM on Instagram at @alexandrev

Top 4 Unexpected Benefits Of My Runner’s Life

Let me walk you through the benefits

Photo by Jenny Hill on Unsplash

I had never been a runner until recently. I was always a team-sports guy; I have always enjoyed competition and teamwork, so running was not my thing. But at some point, I needed to do something to change my life.

The reason for that is clear: after gaining more than 20 kg through my college days and adult life, my weight was affecting my health both physically and mentally.

Now, after two marathons, several half-marathons, and a lot of 10K races, I can share my experience and especially some of the unexpected benefits of joining this movement. If you are not into running yet, you are probably not aware of them; if you already are, I hope reading this puts a smile on your face.

#1 You Will Have Your Own Moment To Find Yourself

Running is a sport that you can practice by yourself and pretty much anywhere and everywhere. The only thing that you need to start is a pair of sneakers and your motivation to go out and start running.

That means you are on your own, in a private moment with your own mind. I discovered this is a great time to organize my head: not only to release all the stress of my daily life and embrace a moment of my own to be happier, but also to be healthier.

I do not have a clear routine regarding this topic or any meditation technique that I do. It depends on what I need at each moment.

Some days I need to listen to music to let everything go after a bad day at work, or to charge my batteries to start my day fully; other days I listen to my podcasts in an intimate way that lets me focus on their content much more than I probably could in my normal life.

But other days, I use that time to focus on solving a problem I have at work (how to fix a production issue, how to implement a feature I was not clear about, how to frame a presentation I need to give…) or even to think about my life and goals for the future.

#2 You Will Be Part Of A Community

Runners are all part of the same community, and no one can change that. Even people like me, who always run alone and are not part of any runners' club that meets for scheduled training, belong to this brotherhood/sisterhood.

It is a strange experience when you get up for your morning run and, for a few seconds, you meet another person doing the same thing you are. There is an odd connection you feel with that person. You probably even greet them without knowing them, just because you both know you are part of the same secret club and share something in common.

This is even greater if you join other runner communities such as running publications, podcasts, or a runners' club, as I was saying. It doesn't matter what pace you keep or how many marathons you have done: as soon as you put on your sneakers and hit the road, you are part of the club. You have joined the force.

#3 You Will Feel The Competition Spirit

There is nothing comparable to the feeling of running in a big event. I had the opportunity to do two marathons before the pandemic put all of us on hold, Madrid 2019 and Valencia 2019, and the experiences were the best of my life: the crowd supporting you all the way, all the other runners suffering with you and, at the same time, enjoying every kilometer from start to finish.

And there are all the people you see every 10K, whom you find again each Sunday morning when everyone else is just getting up and brewing their coffee while you are on the road trying to beat your personal best.

Because each time you run, you are trying to beat yourself. You are trying to be the best version of yourself, and that helps you in every aspect of your life.

#4 You Will Find Another Way To Discover The City

I usually do two kinds of training sessions, depending on my schedule and what my body and mind are demanding that day. Specific workouts (intervals, fartlek, and so on) need to be done in a particular location, but most of the time I just run without needing to be anywhere specific, and that helps me discover the city in a very different way.

I follow what I call "the green-lights path": I start without any specific itinerary in mind and follow the green lights at the crossings, because I hate stopping when I am running. That makes me run down streets I have never been on (it is incredible how little you know about the city you live in). And if you are in another city for leisure or business, it is an incredible way to discover it, to fully embrace and connect with it, and to feel its soul in a very different way.

Summary

I hope these highlights plant a seed if you were thinking about giving the runner's life a try; maybe at this very moment you are looking at the sneakers in your wardrobe, waiting for them to become part of your routine from now on. And if you are already a runner, I hope you enjoyed this and maybe agree with some of these great benefits that are not usually talked about when we read about running.

3 Unusual Tools To Boost Your Developer Productivity

A non-VS Code list for software engineers

Tent in the desert
Photo by Clarisse Meyer on Unsplash.

This is not going to be one of those articles about tools that can help you develop code faster. If you’re interested in that, you can check out my previous articles regarding VS Code extensions, linters, and other tools that make your life as a developer easier.

My job is not only about software development but also about solving issues that my customers have. While their issues can be code-related, they can also be an operations error or even a design problem.

I usually tend to define my role as a lone ranger. I go out there without knowing what I will face, and I need to be ready to adapt, solve the problem, and make customers happy. This experience has helped me to develop a toolchain that is important for doing that job.

Let’s dive in!


1. MobaXterm

This is the best tool to manage different connections to different servers (SSH access for a Linux server, RDP for a Windows server, etc.). Here are some of its key features:

  • Graphical SSH port-forwarding for those cases when you need to connect to a server you don't have direct access to (see the sketch after this list).
  • Easy identity management to save the passwords for the different servers. You can organize them hierarchically for ease of access, especially when you need to access so many servers for different environments and even different customers.
  • SFTP automatic connection when you connect to an SSH server. It lets you download and upload files as easily as dropping files there.
  • Automatic X11 forwarding, so you can launch graphical applications from your Linux servers without configuring anything or using a separate X server like Xming.
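As a point of reference, the graphical tunnels map to what you would otherwise type by hand with the standard ssh client; a minimal sketch with made-up hosts, forwarding a remote database port through a jump host:

ssh -L 5432:internal-db:5432 user@jump-host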
MobaXterm in action

2. Beyond Compare

There are so many tools to compare files, and I think I have used all of them — from standalone applications like WinMerge, Meld, Araxis, KDiff, and others to extensions for text editors like VS Code and Notepad++.

However, none of those can compare to the one and only Beyond Compare.

I discovered Beyond Compare when I started working in software engineering back in 2010, and it is a tool that comes with me to every project. I use it every day. So, what makes this tool different from the rest?

It is simply the best tool for any comparison because it does not just compare text and folders. It does that perfectly, but it can also browse into and compare the contents of ZIP files, JAR files, and so on. This is very important when we want to check whether two JAR files uploaded to DEV and PROD are the same version, or whether a ZIP file has the right content once it is uploaded.

Beyond Compare 4 in action

3. Vi Editor

This is the most important one — the best text editor of all time — and it is available pretty much on every server.

It is a command-line text editor with a huge number of shortcuts that allow you to be very productive when you are inside a server checking logs and configuration files to see where the problem is.

For a long time, I have had a Vi cheat sheet printed out to make sure I master the most important shortcuts and thus increase my productivity while fighting behind enemy lines (the customer's servers).
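For reference, here are a few of the standard shortcuts from that cheat sheet:

i          enter insert mode before the cursor
Esc        return to normal mode
/error     search forward for "error" (press n for the next match)
dd         delete the current line
G          jump to the end of the file (handy when reading logs)
:wq        save the file and quit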

Vi text editor
Vim (Vi IMproved), the ultimate text editor

#TIBFAQS: Failed to Read Profile from [/tmp/tmp/pcf.substvar]

Photo by Shahadat Rahman on Unsplash

This is another post in the #TIBFAQS series. As a reminder of what this is all about: you can submit your questions regarding TIBCO development issues or doubts, and I try to provide an answer here to help the community of TIBCO developers out there.

So, today I am going to cover what I think is one of the most common issues when we work with BusinessWorks Container Edition and deploy into our target environment: a trace similar to this one:

Error trace regarding I am not able to read the profile

What is the cause of this error?

This error means that the BusinessWorks runtime is not able to read and process the properties file needed to start the application. That means the error concerns the configuration of the application and not the application itself. So, the good news: your code seems to be fine at this point.

As you probably know, TIBCO BW applications have long used an XML file to hold the configuration values they start with. In the case of BusinessWorks Container Edition, this file is stored at /tmp/tmp/pcf.substvar, and it is populated from several sources depending on how you manage your configuration.

As you know, you have several options to manage your configuration in cloud-based environments: environment variables, ConfigMaps, or configuration management systems such as Spring Cloud Config or Consul. So it is important to have a clear understanding of what you are using.

The error, then, is that the file contains something that is not valid: either the content is wrong, or the runtime is not able to understand it.

Situations that can raise this error

Let's take a look now at the situations that can raise this error and how we can solve them.

1.- Incompatible BW-runtime vs EAR versions

Usually, EAR files are compatible across different BusinessWorks runtimes, but only when the runtime is at least as current as the EAR. What I mean is: if I generate my application with BWCE 2.5.0, I can run it on runtime 2.5.0, 2.5.3, or 2.6.0 without any issue, but if I try to run it on an older version like 2.4.2, I can get this error because the EAR file has some "new things" that the runtime cannot understand.

So it is important to validate that the runtime version you are using is the expected one, and update it if that is not the case.

2.- XML special characters that need to be escaped

This situation only applies to versions before 2.5.0, but if you are running an older version, you can also get this error because a property value contains an XML character that needs to be escaped. Characters like '<' or '&' are the usual culprits. If you are on a more recent version, you don't need to escape them because they are escaped automatically.

So, depending on the version you are using, update your property values accordingly.
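As an illustration (the property name and surrounding file structure are simplified here; only the escaping matters), a raw value like a&b<c would need to be stored in the substvar file as:

<name>db.password</name>
<value>a&amp;b&lt;c</value>

On 2.5.0 and later, you can keep the raw value, and the runtime escapes it for you.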

Summary

I hope you find this helpful, and if you are facing this issue right now, you have the information not to be stopped by it. If you would like to submit your questions, feel free to use one of the following options:

  • Twitter: You can send me a mention or a DM at @alexandrev, or even just use the hashtag #TIBFAQs, which I monitor.
  • Email: You can send me an email to alexandre.vazquez at gmail.com with your question.
  • Instagram: You can send me a DM on Instagram at @alexandrev

AWS Prometheus Service to Provide More Availability to Your Monitoring Solution

Learn what Amazon Managed Service for Prometheus provides and how you can benefit from it.

Photo by Casey Horner on Unsplash

Monitoring is one of the hot topics when we talk about cloud-native architectures. Prometheus is a graduated Cloud Native Computing Foundation (CNCF) open-source project and one of the industry-standard solutions when it comes to monitoring your cloud-native deployment, especially when Kubernetes is involved.

Following its philosophy of providing managed services for some of the most-used open-source projects, fully integrated with the AWS ecosystem, AWS has released (in preview at the time of writing this article) Amazon Managed Service for Prometheus (AMP).

The first thing is to define what Amazon Managed Service for Prometheus is and what features it provides. This is Amazon's definition of the service:

A fully managed Prometheus-compatible monitoring service that makes it easy to monitor containerized applications securely and at scale.

And I would like to spend some time on some parts of this sentence.

  • Fully managed service: This will be hosted and handled by Amazon, and we just interact with it through an API, as we do with other Amazon services like EKS, RDS, MSK, SQS/SNS, and so on.
  • Prometheus-compatible: Even though this is not a pure Prometheus installation, the API is compatible, so Prometheus clients such as Grafana and others that read data from Prometheus will keep working without changing their interfaces.
  • Service at scale: As part of the managed service, Amazon takes care of the solution's scalability. You don't need to choose an instance type or decide how much RAM or CPU you need; AWS handles that for you.

So, that sounds perfect. You may be thinking you can delete your Prometheus server and start using this service instead. Maybe you are even typing something like helm delete prom… WAIT, WAIT!!

At this point, AMP is not going to replace your local Prometheus server; it integrates with it. That means your Prometheus server acts as a scraper for the scalable monitoring solution that AMP provides, as you can see in the picture below:

Reference Architecture for Amazon Prometheus Service

So yes, you are still going to need a Prometheus server, but all the complexity is offloaded to the managed service: storage configuration, high availability, API optimization, and so on are provided to you out of the box.

Ingesting information into Amazon Managed Service for Prometheus

At this moment, there are two ways to ingest data into the Amazon Prometheus Service:

  • From an existing Prometheus server, using the remote_write capability and configuration, so each series scraped by the local Prometheus is also sent to the Amazon Prometheus Service (see the sketch after this list).
  • Using AWS Distro for OpenTelemetry, with the Prometheus Receiver and the AWS Prometheus Remote Write Exporter components.
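As a sketch of the first option, this is roughly what the remote_write block in prometheus.yml looks like; the region and workspace ID are placeholders, and keep in mind that requests to AMP must be signed with AWS SigV4 (recent Prometheus versions support this natively, while earlier setups relied on a signing proxy):

remote_write:
  - url: https://aps-workspaces.eu-west-2.amazonaws.com/workspaces/ws-EXAMPLE/api/v1/remote_write
    sigv4:
      region: eu-west-2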

Summary

So this is a way to get an enterprise-grade installation, leveraging all the knowledge AWS has in hosting and managing solutions at scale with optimized performance, while you focus on the components you need to get metrics ingested into the service.

I am sure this will not be AWS's last move in observability and metrics management, and that they will keep putting more tools into developers' and architects' hands to define optimized solutions as easily as possible.

Why GraphQL? 3 Clear Benefits Explained

Three benefits of using GraphQL for your APIs that you should take into consideration.

Photo by Mika Baumeister on Unsplash

We all know that APIs are the new standard when we develop any piece of software. All the latest paradigms are based on a distributed set of components created with collaboration in mind: they need to work together to provide more value to the whole ecosystem.

On the technical side, API has become a synonym for REST/JSON as the de facto standard for exposing them. But that is not the only option, even in the synchronous request/reply world, and we are starting to see a shift in this by-default selection of REST as the only choice in this area.

GraphQL has emerged as a working alternative since Facebook introduced it in 2015. Over its five years of existence, its adoption has grown beyond Facebook's walls, but it is still far from mainstream use, as the following Google Trends graph shows:

Google Trend graph showing interest in REST vs. GraphQL in the last five years

But I think this is a great moment to look again at the benefits GraphQL can bring to the APIs in your ecosystem. You can start the new year by introducing a technology that provides you and your enterprise with clear benefits. So, let's take a look at them.

1.- More flexible style to meet different client profile needs

I want to start this point with a small jump to the past, when REST was introduced. REST was not always the standard we used to create our APIs, or web services as we called them back then. A W3C standard, SOAP, was the leader, and REST replaced it by improving on several points.

Above all, the much lighter weight of the protocol compared to SOAP made a difference, especially when mobile devices started to be part of the ecosystem.

That is the situation today, and GraphQL is a further step in that direction of flexibility. GraphQL allows each consumer to decide what part of the data they would like to retrieve, so the same interface serves different applications, and each of them still gets an optimized result because they decide what to obtain each time.
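For example, against a hypothetical customer API (the schema and field names here are made up for illustration), a mobile client could ask for only the fields it needs:

query {
  customer(id: "42") {
    name
    email
  }
}

A web client could send the same query with more fields added, and the provider does not have to change anything.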

2.- More loosely coupled approach with the service provider

Another important topic is the dependency between the consumer of the API and the provider. We all know that paradigms like microservices focus on this: we aim for as much independence as possible among our components.

It is true that REST does not create a strong link between components. Still, the interface is fixed, so each time we modify it by adding or changing a field, we can affect consumers even if they do not need that field at all.

GraphQL, thanks to letting each client select the fields it wants to obtain, makes the evolution of the API itself much easier and at the same time gives the components much more independence: only changes that directly impact the data a client needs can affect it, while the rest are completely transparent to it.

3.- More structured and defined specification

One of the aspects that marked the rise of REST as a widely used approach was the lack of standards to structure and define its behavior. We had several attempts: RAML, or even just "samples as specification", then Swagger, and finally the OpenAPI specification. But that long "unstructured" period means REST APIs can be built in very different ways.

Each developer or service provider can build a REST API with a different approach and philosophy, which generates noise and is difficult to standardize. GraphQL is based on a GraphQL schema that defines the types managed by the API and the operations you can perform on them, in two main groups: queries and mutations. That means all GraphQL APIs, no matter who develops them, follow the same philosophy, because it is built into the core of the specification itself.
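As a small sketch of that structure (a made-up domain again), a GraphQL schema declares its types and groups its operations into queries and mutations:

type Customer {
  id: ID!
  name: String!
  email: String
}

type Query {
  customer(id: ID!): Customer
}

type Mutation {
  updateCustomerEmail(id: ID!, email: String!): Customer
}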

Summary

After reading this article, you are probably saying: so that means I should remove all my REST APIs and start building everything in GraphQL? And my answer to that is… NO!

The goal of this article is to make you aware of the benefits this different way of defining APIs provides, so you can add it to your tool belt. Next time you create an API, think about the topics described here and reach a conclusion: "hmm, I think GraphQL is the better pick for this specific situation," or the other way around, "I am not going to get any benefit on this specific API, so I'd rather use REST."

The idea is that you now have the knowledge to apply to your specific case and choose based on it, because nobody is better placed than you to decide what is best for your use case.

How to Find the Biggest Publications on Medium to Share Your Articles

Follow these tips to select the right publication for your article and guarantee the best impact.

Photo by Daria Nepriakhina on Unsplash

When we write an article on Medium and would like to share it with our audience, we all know that using a publication is the best way to do it. A publication is like a media site or a traditional magazine that hosts articles on a topic and shares them with its subscribers. The bigger the publication, the more people will be notified that your article is available, and the better the chance that your articles show up on Medium users' home pages. So selecting the right publication is crucial.

Find the followers of any publication

The first thing we need to do to find the number of followers of a publication is to select the publication we would like to check. As an example, I am going to use one of the main ones, The Innovation. Go to its main page: https://medium.com/the-innovation

You will not see the number of followers directly in the menu bar, as shown in the image below:

Main page for The Innovation Medium publication

Some publications show their follower count in the top menu bar, but not all of them do. Still, you can find the number of followers of any publication with this small trick: take the publication's main URL and add /latest to it. You will land on a page that always has the same layout for every publication, and as part of that layout, the right sidebar shows the number of followers:
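For example, for The Innovation, that URL would be:

https://medium.com/the-innovation/latest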

The Latest page for The Innovation publication showing the number of followers

So now you can check the number of followers of any publication on Medium, and the same page also tells you how long it has been since the last post was published. If the publication is not publishing anything new, your story will never reach all of its followers, so also check the date of the latest post on that page.

Find the biggest publications today

We now know how to find the number of followers of a single publication, but doing that publication by publication is tedious. Even worse, we need to already know which publications to check, and I don't think anyone on Medium knows them all well enough to be sure they have covered the relevant ones. So, how can we do this faster and better? Easy. Here is the magic trick you were waiting for.

Just go to this URL: https://medium.com/search/publications?q=*. This search returns all the publications (* acts as a wildcard meaning "everything"), sorted by the number of followers, as you can see here:

Publication search results sorting by Followers

To show the number of followers, I grabbed the first 20 and applied the trick from step one to get the following table:

Top 14 publications in Medium sorted by the number of followers they have

Find the right one for you

Now that we have all the data in our hands, we just need to select the publication that suits the article we would like to submit, based on the topic we are covering and the perspective and style of our story. It is highly recommended to look at each publication's creator guidelines and adapt the content to what the publication's audience expects. That way, we maximize our success with the audience.