Sysstat Metrics and Tools: How to Get an Awesome Graphical Analysis?


Discover kSAR and SARChart, two critical utilities that deserve a place in your administration and troubleshooting toolbelt.

There was a time when we didn’t have public cloud providers offering every kind of service through a unified platform and experience covering all the aspects of our technical needs in an IT environment, and sysstat metrics were key back then.

There was a time when AWS CloudWatch, Azure Monitor, and Prometheus were not a thing, and we had to deal with Linux servers without a complete portal providing all the metrics we could need.

There was a time… that is still the present for many customers and organizations all over the world. They still need to deal with this situation, and you probably face it now or will in the future. So, let’s see what we can do about it.

Introducing sysstat

For decades, the standard way to extract usage metrics from a Linux server has been sysstat. In the words of its official web page, this is what sysstat is:

The sysstat utilities are a collection of performance monitoring tools for Linux. These include sar, sadf, mpstat, iostat, tapestat, pidstat, cifsiostat and sa tools

Sysstat is an ancient but reliable piece of software whose maintainer continues to update it even today... while keeping the same webpage since the beginning 🙂

Sysstat is old but powerful; its many options have saved my life at a lot of customers and provided plenty of handy information exactly when I needed it. But today, I am going to talk about one specific utility from the collection: sar.

sar is the command to query the performance metrics of an existing machine. Just typing sar is enough to start seeing awesome things: it gives you the CPU metrics for the whole day, for each of the CPUs your machine has, split by the kind of usage (user, system, idle).

Execution of the sar command on a local machine

But these metrics are not the only thing you can get. Other options are available:

  • sar -r: Provides memory metrics.
  • sar -q: Provides load metrics.
  • sar -n: Provides network metrics.
  • sar -A: Provides ALL the metrics.
  • sar -f /var/log/sysstat/sa[day-of-the-month]: Provides the metrics for that day of the month instead of the current day.
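For ad-hoc checks, sar's plain-text output also pipes nicely into standard tools. A minimal sketch, using a fabricated two-interval sample standing in for real sar -u output:

```shell
# Fabricated sample in the shape of "sar -u" output (not real data)
cat > /tmp/sar_sample.txt <<'EOF'
12:00:01 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle
12:10:01 AM     all      2.31      0.00      0.45      0.12      0.00     97.12
12:20:01 AM     all     10.05      0.00      1.10      0.30      0.00     88.55
EOF
# Flag every interval where the CPU was idle less than 90% of the time
awk '$NF ~ /^[0-9.]+$/ && $NF < 90 {print $1, $2, "idle:", $NF}' /tmp/sar_sample.txt
# -> 12:20:01 AM idle: 88.55
```

The same idea works for any of the metric families above, since every sar report keeps the timestamp in the first columns and the values in fixed positions.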

There are many more options that you can use on a daily basis, so if you need something specific, take a look at the man page for the sar command.

But we are all visual people, right? It is true that spotting trends and evolution is harder in text mode, and you can only see one day of data at a time. So let's take a look at the options to handle that challenge:

kSAR

Logo from the kSAR application (https://sourceforge.net/projects/ksar/)

kSAR is a Java-based frontend that uses the Swing library to represent the data from sar visually. It is portable, so you only need the JAR file to execute it. And you can invoke it in several ways:

  • Providing the file you got from a machine where you executed the sar command.
  • Connecting over SSH to a remote machine and running the command you need.
Graphical visualization of the sar metrics using kSAR

SARChart

What about when you are on a machine where you don't have the rights to install any application, even a portable one like kSAR, or maybe you only have your tablet available? In that case, we have SARChart.

Homepage of the SARChart application (https://sarchart.dotsuresh.com/)

SARChart is a web application that provides a graphical analysis of sar files. You only need to upload the file to get a complete, good-looking graphical analysis of your data covering all its aspects. Additionally, all the work is done on the client side, without sending any of your data to any server.

CPU usage analysis provided by SARChart

Summary

I hope you find these tools interesting if you didn't know about them, and I also hope they can help you with your daily work, or at least earn a place in your toolset so they are at your disposal when you need them.

Linkerd Service Mesh Explained: Solving Microservice Communication Challenges


The CNCF-sponsored service mesh Linkerd provides many of the features that today's microservices architectures need.

If you are reading this, you are probably already aware of the challenges that come with a microservices architecture, either because you have read about them or because you are facing them firsthand right now.

One of the most common challenges is networking and communication. With the explosion of components that need to communicate and the ephemeral nature of cloud-native development, many features that in the past were just nice-to-haves are now a necessity.

Concepts like service registry and service discovery, service authentication, dynamic routing policies, and circuit breaker patterns are no longer things that only the cool companies do; they are basics you must master in the new microservices architecture as part of a cloud-native platform. This is where the service mesh is growing in popularity, as a solution to most of these challenges that provides the features needed.

If you remember, a long time ago I already covered that topic when I introduced Istio as one of the options we have:

But this project created by Google and IBM is not the only option that you have to provide those capabilities. As part of the Cloud Native Computing Foundation (CNCF), the Linkerd project provides similar features.

How to install Linkerd

To start using Linkerd, the first thing we need to do is install the software, and that requires two installations: one on the Kubernetes cluster and another one on the host.

To install it on the host, you need to go to the releases page, download the edition for your OS, and install it.

I am using a Windows-based system in my sample, so I use Chocolatey to install the client. After doing so, I can see the version of the CLI by typing the following command:

linkerd version

And you will get an output that will say something similar to this:

PS C:\WINDOWS\system32> linkerd.exe version
Client version: stable-2.8.1
Server version: unavailable

Now we need to do the installation on the Kubernetes cluster, and to do so, we use the following command:

linkerd install | kubectl apply -f -

And you will get an output similar to this one:

PS C:\WINDOWS\system32> linkerd install | kubectl apply -f -
namespace/linkerd created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-identity created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-identity created
serviceaccount/linkerd-identity created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-controller created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-controller created
serviceaccount/linkerd-controller created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-destination created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-destination created
serviceaccount/linkerd-destination created
role.rbac.authorization.k8s.io/linkerd-heartbeat created
rolebinding.rbac.authorization.k8s.io/linkerd-heartbeat created
serviceaccount/linkerd-heartbeat created
role.rbac.authorization.k8s.io/linkerd-web created
rolebinding.rbac.authorization.k8s.io/linkerd-web created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-web-check created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-web-check created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-web-admin created
serviceaccount/linkerd-web created
customresourcedefinition.apiextensions.k8s.io/serviceprofiles.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/trafficsplits.split.smi-spec.io created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-prometheus created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-prometheus created
serviceaccount/linkerd-prometheus created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-proxy-injector created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-proxy-injector created
serviceaccount/linkerd-proxy-injector created
secret/linkerd-proxy-injector-tls created
mutatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-proxy-injector-webhook-config created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-sp-validator created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-sp-validator created
serviceaccount/linkerd-sp-validator created
secret/linkerd-sp-validator-tls created
validatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-sp-validator-webhook-config created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-tap created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-tap-admin created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-tap created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-tap-auth-delegator created
serviceaccount/linkerd-tap created
rolebinding.rbac.authorization.k8s.io/linkerd-linkerd-tap-auth-reader created
secret/linkerd-tap-tls created
apiservice.apiregistration.k8s.io/v1alpha1.tap.linkerd.io created
podsecuritypolicy.policy/linkerd-linkerd-control-plane created
role.rbac.authorization.k8s.io/linkerd-psp created
rolebinding.rbac.authorization.k8s.io/linkerd-psp created
configmap/linkerd-config created
secret/linkerd-identity-issuer created
service/linkerd-identity created
deployment.apps/linkerd-identity created
service/linkerd-controller-api created
deployment.apps/linkerd-controller created
service/linkerd-dst created
deployment.apps/linkerd-destination created
cronjob.batch/linkerd-heartbeat created
service/linkerd-web created
deployment.apps/linkerd-web created
configmap/linkerd-prometheus-config created
service/linkerd-prometheus created
deployment.apps/linkerd-prometheus created
deployment.apps/linkerd-proxy-injector created
service/linkerd-proxy-injector created
service/linkerd-sp-validator created
deployment.apps/linkerd-sp-validator created
service/linkerd-tap created
deployment.apps/linkerd-tap created
configmap/linkerd-config-addons created
serviceaccount/linkerd-grafana created
configmap/linkerd-grafana-config created
service/linkerd-grafana created
deployment.apps/linkerd-grafana created

Now we can check that the installation has been done properly using the command:

linkerd check

And if everything has been done properly, you will get an output like this one:

PS C:\WINDOWS\system32> linkerd check
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API
kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version
linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ controller pod is running
√ can initialize the client
√ can query the control plane API
linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
√ control plane PodSecurityPolicies exist
linkerd-identity
----------------
√ certificate config is valid
√ trust anchors are using supported crypto algorithm
√ trust anchors are within their validity period
√ trust anchors are valid for at least 60 days
√ issuer cert is using supported crypto algorithm
√ issuer cert is within its validity period
√ issuer cert is valid for at least 60 days
√ issuer cert is issued by the trust anchor

Then we can see the dashboard from Linkerd using the following command:

linkerd dashboard
Dashboard initial web page after a clean Linkerd installation

Deployment of the apps

We will use the same apps that we used some time ago to deploy Istio, so if you want to remember what they do, take another look at that article.

I have uploaded the code to my GitHub repository, and you can find it here: https://github.com/alexandrev/bwce-linkerd-scenario

To deploy, you need to have your Docker images pushed to a Docker registry; I will use Amazon ECR as my Docker repository.

So I need to build and push those images with the following commands:

docker build -t provider:1.0 .
docker tag provider:1.0 938784100097.dkr.ecr.eu-west-2.amazonaws.com/provider-linkerd:1.0
docker push 938784100097.dkr.ecr.eu-west-2.amazonaws.com/provider-linkerd:1.0
docker build -t consumer:1.0 .
docker tag consumer:1.0 938784100097.dkr.ecr.eu-west-2.amazonaws.com/consumer-linkerd:1.0
docker push 938784100097.dkr.ecr.eu-west-2.amazonaws.com/consumer-linkerd:1.0

And after that, we are going to deploy the images on the Kubernetes cluster:

kubectl apply -f .\provider.yaml
kubectl apply -f .\consumer.yaml
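For the pods to be meshed (and show their stats in the dashboard), the Linkerd proxy has to be injected into them. As a sketch of what a manifest like consumer.yaml could look like (the deployment name, labels, and port here are assumptions based on the commands in this post; the `linkerd.io/inject` annotation is Linkerd's standard way to request automatic injection):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: consumer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: consumer
  template:
    metadata:
      labels:
        app: consumer
      annotations:
        # Ask Linkerd's proxy injector to add the sidecar to these pods
        linkerd.io/inject: enabled
    spec:
      containers:
        - name: consumer
          image: 938784100097.dkr.ecr.eu-west-2.amazonaws.com/consumer-linkerd:1.0
          ports:
            - containerPort: 6000
```

Alternatively, an already-deployed, unannotated workload can be meshed afterwards with a pipeline like `kubectl get deploy consumer -o yaml | linkerd inject - | kubectl apply -f -`.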

And now we can see those apps in the Linkerd Dashboard on the default namespace:

Image showing the provider and consumer app as linked applications

And now, we can reach the consumer endpoint using the following command:

kubectl port-forward pod/consumer-v1-6cd49d6487-jjm4q 6000:6000

And if we reach the endpoint, we get the expected reply from the provider.

Sample response provided by the provider

And in the dashboard, we can see the stats of the provider:

Linkerd dashboard showing the stats of the flow

Also, Linkerd provides a Grafana dashboard by default where you can see more metrics; you can get there using the Grafana link that the Linkerd dashboard has.

Grafana link on the Linkerd Dashboard

When you enter it, you will see something like the dashboard shown below:

Grafana dashboard showing the linkerd statistics

Summary

With all this process, we have seen how easily we can deploy a Linkerd service mesh in our Kubernetes cluster and how applications can integrate and interact with it. In the next posts, we will dive into the more advanced features that will help us with the new challenges that come with a microservices architecture.

Event Streaming, APIs, and Data Integration: The 3 Core Pillars of Cloud Integration


Event streaming, APIs, and data are the three musketeers that cover all the aspects of mastering integration in the cloud.

Enterprise Application Integration has been one of the most challenging topics in the IT landscape since the beginning of time. As soon as the number of systems and applications in big corporations started to grow, it became an issue we had to address. The efficiency of this process also defines which companies succeed and which fail, as cooperation between applications becomes critical to respond at the pace the business demands.

I usually like to use the “road analogy” to define this:

It doesn’t matter if you have the fastest cars; if you don’t have proper roads, you will not get anywhere.

This situation generated a lot of investment from companies, and a lot of vendors and products were launched to address it. Solutions started to emerge: EAI, ESB, SOA, middleware, distributed integration platforms, cloud-native solutions, and iPaaS.

Each of these approaches provided a solution for the challenges of its time. As the rest of the industry evolved, the solutions changed to adapt to the new reality (containers, microservices, DevOps, API-led, event-driven...).

So, what is the situation today? Today, there is a widespread misconception that integration is the same as APIs, and also that an API means a synchronous HTTP-based API (REST, gRPC, GraphQL). But it is much more than this.

Photo by Tolga Ulkan on Unsplash

1.- API

API-led is key to the integration solution for sure, especially the philosophical approach behind it: each component we create today is built with collaboration in mind, to work with existing and future components for the benefit of the business in an easy and agile way. This transcends the protocol discussion completely.

APIs cover all the different kinds of solutions, from the existing REST APIs to AsyncAPI, which covers event-based APIs.
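To give a taste of what that looks like in practice, here is a hand-written AsyncAPI sketch (the service name, channel, and payload fields are hypothetical examples, not from any real system) describing an event-based API much like OpenAPI describes a REST one:

```yaml
asyncapi: '2.0.0'
info:
  title: Orders Service      # hypothetical example service
  version: '1.0.0'
channels:
  order/created:             # consumers subscribe to order-created events
    subscribe:
      message:
        payload:
          type: object
          properties:
            orderId:
              type: string
            amount:
              type: number
```

The point is that the contract-first discipline we already apply to REST endpoints carries over to events unchanged.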

2.- Event Streaming

Asynchronous communication is needed because, when you are talking about big enterprises and many different applications, the patterns and the requirements make it essential. Think of requirements like a pub-sub approach to increase the independence among services and apps, or flow control to manage the execution of high-demand flows that could exceed the throttling limits of applications, especially when talking about SaaS solutions.

You may think this is a very opinionated view, but at the same time, it is something most of the providers in this space have realized, judging by their actions:

  • AWS released SNS/SQS, its first messaging systems, originally as its only solution.
  • In November 2017, AWS released Amazon MQ, another queue-based messaging system, to cover the scenarios that SQS cannot cover.
  • In May 2019, AWS released Amazon MSK, a managed service for Kafka, to provide streaming data distribution and processing capabilities.

And that situation arises because, when we move away from smaller applications and migrate from a monolithic approach to a microservices application, more patterns and more requirements are needed. As integration solutions have shown in the past, this is critical for them.

3.- Data Integration

Usually, when we talk about integration, we talk about Enterprise Application Integration because of this bias from the past. Even I use the term EAI to cover this topic, because that is how we usually refer to these solutions. But in recent years, the focus has moved to how data is distributed across the company rather than how applications are integrated, because what is really important is the data they exchange and how we can transform that raw data into insights we can use to know our customers better, optimize our processes, or discover new opportunities.

Until recently, this part was handled apart from the integration solutions. You probably relied on a dedicated ETL (Extract-Transform-Load) tool that helped move the data from one database to another, or to a different kind of storage such as a data warehouse, so your data scientists could work with it.

But again, agility has forced this to change, and all the principles integration applies to give the business more agility also apply to how we exchange data. We try to avoid the purely technical movement of data and instead try to ease access to it and organize it properly. Data virtualization and data streaming are the core capabilities that address those challenges, providing an optimized solution for how the data is distributed.

Summary

My main expectation with this article is to make you aware that integrating your applications involves much more than the REST API you are exposing, maybe through some API gateway, and that the needs can be very different. The stronger your integration platform is, the stronger your business will be.

Top 3 Apps From SetApp to Boost Your Productivity as a Software Developer


SetApp provides a fantastic set of tools that can boost your productivity as a Software Developer.

I just switched the PC laptop I had used extensively for the last few years for a new MacBook Pro last week, and I have entered a new world. I have used OS X environments in the past (I had my first MacBook in 2008 and my second one in 2016), so I am not new to the ecosystem, but even so, things change quickly in the app industry, especially in the last four years.

So, when I faced the login screen of Big Sur, I wondered how to equip myself, and I remembered SetApp. I discovered SetApp a long time ago because one of the main podcasters I listen to, Emilio Cano, is a very supportive fan of SetApp and uses any chance he gets to talk about its benefits.

So I decided to give it a try, and I could not be happier for doing so. But before I start talking about the apps, I would like to summarize what SetApp is, using their own words from their official website:

Setapp is a cross-platform suite of apps, solving your daily tasks. Stay in your flow, anywhere.

So, it is like a Netflix for apps: you pay a monthly subscription and automatically get access to paid apps, and they keep adding new ones to their catalog for you to use.

As a software developer, I will focus this post on the apps that help me in my daily job, and here are the three (3) that help me the most:

1.- Shimo — An Awesome VPN Client

In these days of remote work, we need to connect to several VPNs each day to access our company environment or even a customer environment. If you are like me and work daily with several customers, switching from one customer VPN to another is a daily task, and if you can do it fast, you optimize your time.

Shimo is a VPN client that supports all the main protocols companies use: Cisco, Juniper, OpenVPN... everything you could need.

VPN options that Shimo provides to you (screenshot by the author)

You can connect to more than one VPN at a time if they do not overlap, and you also get a quick way to connect to or disconnect from any VPN from the menu bar.

2.- Paste — The Ultimate Clipboard

This is an app that is key for any developer, and really for anyone who uses a computer. Paste is just how the clipboard should be: an enhanced clipboard with a history, so you can go back and select something you copied yesterday and need to recover.

And let’s be honest: as software developers, one of our main tricks is CTRL+C, CTRL+V. It can be needed for everything: a snippet of code a colleague shared with you, that UNIX command you always forget, or the username somebody sent you by email or Slack.

Screenshot from Paste taken by the author

3.- DevUtils

This is a clear choice: a tool named DevUtils has to be on this list. But what is DevUtils? It is a collection of all those tools you always look for on the internet to do simple but common tasks.

Tasks like encoding or decoding Base64, testing a regular expression, converting UNIX timestamps, formatting JSON, debugging JWTs, and much more... How many times do you google one of these tasks? How much time can you save by just having them in your Dock all the time? The answer is simple: a lot!

Screenshot from DevUtils taken by the Author
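For comparison, here is what a couple of those same quick jobs look like done the old way from a terminal (GNU coreutils assumed), which also shows why having them one click away is so convenient:

```shell
# Base64 encode and decode
printf '%s' 'hello world' | base64           # -> aGVsbG8gd29ybGQ=
printf '%s' 'aGVsbG8gd29ybGQ=' | base64 -d   # -> hello world
# What date hides behind a UNIX timestamp (GNU date syntax)
date -u -d @1609459200
```

DevUtils wraps these and many more into one offline app, so nothing you paste into it ever leaves your machine.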

Summary

There are a lot more apps in the SetApp catalog. At the time of writing this article, the number goes up to 210 apps covering all aspects of your life, including some of the best-selling apps in the App Store. But I wanted to focus on the ones I use most in my life as a software developer, and if you are like me, you will find them awesome!

How to Fix TIBCO BusinessWorks Impaired Status (Root Causes & Troubleshooting Guide)


Learn the main reasons behind an Impaired status and how to troubleshoot it to identify and solve the error.

Photo by Charles Deluvio on Unsplash

This is another post in the #TIBFAQS series. As a reminder of what this is all about: you can submit your questions regarding TIBCO development issues or doubts, and I try to provide an answer here to help the community of TIBCO developers out there.

This article is part of my comprehensive TIBCO Integration Platform Guide where you can find more patterns and best practices for TIBCO integration platforms.

So, today I am going to start with one of the most common issues when we work with BusinessWorks: I go to deploy my application, or to test it locally, and I get this log trace and nothing after that: TIBCO BW Impaired status.

Impaired status error message

This is one of the usual situations for a junior BusinessWorks developer and one of the reasons so much time is spent on troubleshooting. Let’s pick up some tricks today, so this message will never again stop you on your journey to production.

What is the cause of this error?

This error means that the BusinessWorks runtime is not able to satisfy all the dependencies among the components needed to start. As you probably know, in BusinessWorks each of an application's components is managed independently, and they reference each other.

For example, the Application depends on the Application Module and the Shared Module. The Application module can have a dependency on a JMS Connection and so on.

Situations that can raise this error

Let’s take a look now at the situations that can raise this error and how to solve them.

1.- Missing module or incompatible versions

One usual situation that can lead to this problem is a missing module or incompatible versions of the modules. In that case, the referencing component waits for a module, or a specific version of a module, to be started. But this module is missing, or another version of it is starting.

2.- Invalid shared connections

Another option is when some of the components need to establish a connection with other technologies, such as JDBC connections, JMS connections, Kafka connections, or any other of the more than 200 connectors available, and that connection is not valid.

3.- Missing Starter component in the Module Descriptors

The last of the usual suspects is when you have a Starter component in the Module Descriptors, but that process is not available inside the EAR file you are deploying. That dependency is never satisfied, which leads to an endless Impaired status.

How to detect what component is missing?

To help you detect which situation you are in, you have an incredible tool at your disposal: the la command from the OSGi console interface.

This command lists the applications deployed in the specific AppNode or container and gives us their details, including the reason for an Impaired situation.


How to run the OSGi console depends on your deployment model, but you can read all the information about it in the link below:

Summary

I hope you find this useful for solving the TIBCO BW Impaired status on your apps, and if you are one of those facing this issue right now, you now have the information you need to not be stopped by it. If you would like to submit your questions, feel free to use one of the following options:

  • Twitter: You can send me a mention at @alexandrev on Twitter or a DM or even just use the hashtag #TIBFAQs that I will monitor.
  • Email: You can send me an email to alexandre.vazquez at gmail.com with your question.
  • Instagram: You can send me a DM on Instagram at @alexandrev

3 Unusual Developer Tools That Seriously Boost Productivity (Beyond VS Code)


A non-VS Code list for software engineers

This is not going to be one of those articles about tools that can help you develop code faster. If you’re interested in that, you can check out my previous articles regarding VS Code extensions, linters, and other tools that make your life as a developer easier.

My job is not only about software development but also about solving issues that my customers have. While their issues can be code-related, they can be an operation error or even a design problem.

I usually tend to define my role as a lone ranger. I go out there without knowing what I will face, and I need to be ready to adapt, solve the problem, and make customers happy. This experience has helped me to develop a toolchain that is important for doing that job.

Let’s dive in!


1. MobaXterm

This is the best tool to manage different connections to different servers (SSH access for a Linux server, RDP for a Windows server, etc.). Here are some of its key features:

  • Graphical SSH port-forwarding for those cases when you need to connect to a server you don’t have direct access to.
  • Easy identity management to save the passwords for the different servers. You can organize them hierarchically for ease of access, especially when you need to access so many servers for different environments and even different customers.
  • SFTP automatic connection when you connect to an SSH server. It lets you download and upload files as easily as dropping files there.
  • Automatic X11 forwarding so you can launch graphical applications from your Linux servers without needing to configure anything or use other X servers like Xming.
MobaXterm in action

2. Beyond Compare

There are so many tools to compare files, and I think I have used all of them — from standalone applications like WinMerge, Meld, Araxis, KDiff, and others to extensions for text editors like VS Code and Notepad++.

However, none of those can compare to the one and only Beyond Compare.

I discovered Beyond Compare when I started working in software engineering in 2010, and it is a tool that comes with me on each project I have. I use it every day. So, what makes this tool different from the rest?

It is simply the best tool for making any comparison because it does not just compare text and folders. It does that perfectly, but at the same time, it also compares ZIP files (while browsing their content), JAR files, and so on. This is very important when we want to check whether two JAR files uploaded to DEV and PROD are the same version of a tool, or whether a ZIP file has the right content when it is uploaded.

Beyond Compare 4

3. Vi Editor

This is the most important one — the best text editor of all time — and it is available pretty much on every server.

It is a command-line text editor with a huge number of shortcuts that allows you to be very productive when you are inside a server checking logs and configuration files to see where the problem is.

For a long time, I have had a Vi cheat sheet printed out to make sure I master the most important shortcuts and thus increase my productivity while fighting behind enemy lines (the customer's servers).

Vi text editor
VIM — Vi IMproved, the ultimate text editor.

How to Fix “Failed to Read Profile from /tmp/tmp/pcf.substvar” in TIBCO BWCE


This is another post in the #TIBFAQS series. As a reminder of how it works: you can submit your questions about TIBCO development issues or doubts, and I try to answer them here to help the community of TIBCO developers out there.

This article is part of my comprehensive TIBCO Integration Platform Guide where you can find more patterns and best practices for TIBCO integration platforms.

So, today I am going to start with what I think is one of the most common issues when working with BusinessWorks Container Edition: while deploying into the target environment, you get a trace similar to this one:

Error trace showing that the profile cannot be read

What is the cause of this error?

This error means that the BusinessWorks runtime is not able to read and process the properties file needed to start the application. That means the problem is in the application’s configuration and not in the application itself. So, the good news: your code seems to be fine at this point.

As you probably know, TIBCO BW applications have used an XML file to hold their startup configuration values for a very long time. In the case of BusinessWorks Container Edition, this file is stored at /tmp/tmp/pcf.substvar, and it is populated from several sources depending on how you manage your configuration.

As you know, you have several options to manage your configuration in cloud-based environments: environment variables, ConfigMaps, or configuration management systems such as Spring Cloud Config or Consul. So it is important that you have a clear understanding of which one you are using.

So the error means that the file contains something invalid: either the content is wrong, or the runtime is not able to understand it.

Situations that can raise this error

Let’s take a look now at the situations that can raise this error and how we can solve them.

1.- Incompatible BW-runtime vs EAR versions

Usually, EAR files are compatible with different BusinessWorks runtimes, but only when the runtime is at least as recent as the EAR. That means if I generate my application with BWCE 2.5.0, I can run it with runtime 2.5.0, 2.5.3, or 2.6.0 without any issue, but if I try to run it with an older version like 2.4.2, I can get this error because the EAR file has some “new things” that the runtime is not able to understand.

So it is important to validate that the runtime version you are using is the expected one, and to update it if that is not the case.

2.- XML special characters that need to be escaped

This situation only applies to versions before 2.5.0, but if you are running an older version, you can also get this error because a property value contains an XML character that needs to be escaped. Characters like ‘<’ or ‘&’ are the most common ones to trigger this error. If you are using a more recent version, you don’t need to escape them because they are escaped automatically.
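As a minimal illustration (the element shown here is just a sketch of a property entry, not the exact substvar schema), a value containing ‘&’ would need to be written as an XML entity on those pre-2.5.0 versions:

```xml
<!-- Raw '&' breaks XML parsing on older runtimes -->
<value>user&password</value>

<!-- Escaped form that the runtime can read -->
<value>user&amp;password</value>
```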

So, depending on the version you are using, update your property values accordingly.

Summary

I hope you find this interesting, and if you are one of those facing this issue, now you have the information not to be stopped by it. If you would like to submit your questions, feel free to use one of the following options:

  • Twitter: You can send me a mention at @alexandrev on Twitter or a DM or even just using the hashtag #TIBFAQs that I will monitor.
  • Email: You can send me an email to alexandre.vazquez at gmail.com with your question.
  • Instagram: You can send me a DM on Instagram at @alexandrev
#TIBFAQS: Failed to read profile from [/tmp/tmp/pcf.substvar]

Amazon Managed Service for Prometheus Explained: High-Availability Monitoring on AWS


Learn what Amazon Managed Service for Prometheus provides and how you can benefit from it.

Monitoring is one of the hot topics when we talk about cloud-native architectures. Prometheus is a graduated Cloud Native Computing Foundation (CNCF) open-source project and one of the industry-standard solutions when it comes to monitoring your cloud-native deployment, especially when Kubernetes is involved.

Following its own philosophy of providing managed services for some of the most used open-source projects, fully integrated with the AWS ecosystem, AWS has released a new service, in preview at the time of writing this article: Amazon Managed Service for Prometheus (AMP).

The first thing is to define what Amazon Managed Service for Prometheus is and what features it provides. This is the Amazon definition of the service:

A fully managed Prometheus-compatible monitoring service that makes it easy to monitor containerized applications securely and at scale.

And I would like to spend some time on some parts of this sentence.

  • Fully managed service: This will be hosted and handled by Amazon, and we are just going to interact with it through an API, as we do with other Amazon services like EKS, RDS, MSK, SQS/SNS, and so on.
  • Prometheus-compatible: That means that even if this is not a pure Prometheus installation, the API is going to be compatible. So Prometheus clients such as Grafana and others that pull information from Prometheus will work without changing their interfaces.
  • Service at scale: Amazon, as part of the managed service, will take care of the solution’s scalability. You don’t need to define an instance type or how much RAM or CPU you need; this is handled by AWS.

So, that sounds perfect. You may think you can delete your Prometheus server and start using this service instead. Maybe you are even typing something like helm delete prom… WAIT WAIT!!

Because at this point, AMP is not going to replace your local Prometheus server; it integrates with it. That means your Prometheus server is going to act as a scraper for the scalable monitoring solution that AMP provides, as you can see in the picture below:

Reference Architecture for Amazon Prometheus Service

So, you are still going to need a Prometheus server, that is right, but all the complexity is offloaded to the managed service: storage configuration, high availability, API optimization, and so on are provided to you out of the box.

Ingesting information into Amazon Managed Service for Prometheus

At this moment, there are two ways to ingest data into the Amazon Prometheus Service:

  • From an existing Prometheus server, using the remote_write capability and configuration, so each series scraped by the local Prometheus is also sent to the Amazon Prometheus Service.
  • Using AWS Distro for OpenTelemetry, combining the Prometheus Receiver and the AWS Prometheus Remote Write Exporter components to achieve the same result.

Summary

So this is a way to get an enterprise-grade installation, leveraging all the knowledge that AWS has in hosting and managing this solution at scale and optimizing its performance. You can focus on the components you need to get the metrics ingested into the service.

I am sure this will not be AWS’s last move in observability and metrics management. They will continue to put more tools into developers’ and architects’ hands to define optimized solutions as easily as possible.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Why Use GraphQL? 3 Key Benefits Over REST APIs Explained


3 benefits of using GraphQL in your API that you should take into consideration.

We all know that APIs are the new standard when we develop any piece of software. All the latest paradigms are based on a distributed set of components, created with collaboration in mind, that need to work together to provide more value to the whole ecosystem.

On the technical side, an API has become a synonym for REST/JSON as the new standard to expose it. But this is not the only option, even in the synchronous request/reply world, and we are starting to see a shift away from this by-default selection of REST as the only choice in this area.

GraphQL has emerged as a viable alternative since Facebook introduced it in 2015. During its five years of existence, its adoption has grown outside Facebook’s walls, but it is still far from mainstream, as the following Google Trends graph shows:

Google Trend graph showing interest in REST vs. GraphQL in the last five years

But I think this is a great moment to look again at the benefits that GraphQL can bring to the APIs in your ecosystem. You can start the new year by introducing a technology that can provide you and your enterprise with clear benefits. So, let’s take a look at them.

1.- More flexible style to meet different client profile needs.

I want to start this point with a small jump to the past, to when REST was introduced. REST was not always the standard for creating our APIs, or Web Services as we called them at that point. A W3C standard, SOAP, was the leader then, and REST replaced it by improving on several points.

The much lighter weight of the protocol compared to SOAP made a difference, especially when mobile devices started to be part of the ecosystem.

That is still the situation today, and GraphQL is a further step in that direction of flexibility. GraphQL allows each consumer to decide which part of the data they want to retrieve, so the same interface can serve different applications; each of them still gets an optimized approach because they can decide what to obtain each time.
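As a sketch of that idea, against a hypothetical API (the user type and its fields are illustrative, not a real schema), a mobile client could request only the two fields it renders:

```graphql
# Only name and avatarUrl travel over the wire; every other
# field of the user type stays on the server.
query {
  user(id: "42") {
    name
    avatarUrl
  }
}
```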

2.- More loosely coupled approach with the service provider

Another important topic is the dependency between the consumer of the API and its provider. We all know that paradigms like microservices focus on that concern: we aim to get as much independence as possible among our components.

It is true that REST does not create a big link between the components. Still, the interface is fixed, which means that each time we modify it by adding or changing a field, we can affect consumers even if they do not need that field at all.

GraphQL, by letting consumers select the fields they want to obtain, makes the evolution of the API itself much easier and at the same time gives the components much more independence: only changes that affect the data a client actually needs can have an impact on it; the rest are completely transparent to it.

3.- More structured and defined specification

One of the aspects that marked the rise of REST as a widely used protocol was the lack of standards to structure and define its behavior. We had several attempts: RAML, or even just “samples as specification”, then Swagger, and finally the OpenAPI specification. But that “unstructured” period means that REST APIs can be built in very different ways.

Each developer or service provider can build a REST API with a different approach and philosophy, which generates noise and makes standardization difficult. GraphQL, in contrast, is based on a GraphQL Schema that defines the types managed by the API and the operations you can perform on them, in two main groups: queries and mutations. That ensures that all GraphQL APIs, no matter who develops them, follow the same philosophy, as it is included in the core of the specification itself.
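A minimal sketch of what such a schema looks like (the type and operation names are hypothetical): the types the API manages plus the two root groups, queries and mutations:

```graphql
type User {
  id: ID!
  name: String!
}

type Query {
  # read operations
  user(id: ID!): User
}

type Mutation {
  # write operations
  renameUser(id: ID!, name: String!): User
}
```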

Summary

After reading this article, you are probably saying: so that means I should remove all my REST APIs and start building everything in GraphQL? And my answer to that is… NO!

The goal of this article is to make you aware of the benefits that this different way of defining APIs provides, so you can add it to your toolbelt. Next time you create an API, think about the topics described here and reach a conclusion: either “I think GraphQL is the better pick for this specific situation”, or the other way around, “I am not going to get any benefit in this specific API, so I would rather use REST.”

The idea is that you now have the knowledge to apply to your specific case and choose accordingly, because nobody is better placed than you to decide what is best for your use case.

SOA Principles That Still Matter in Cloud-Native Architecture


The development world has changed a lot, but that does not mean everything from the past is invalid. Learn which principles you should still be aware of.

The world changes fast, and in IT, it changes even faster. We all know that, and it usually means we need to face new challenges and find new solutions. Examples of that are the trends we have seen in recent years: Containers, DevSecOps, Microservices, GitOps, Service Mesh…

But at the same time, we know that IT moves in cycles, in the sense that the challenges we face today are evolutions of challenges that were addressed in the past. The main goal is to avoid re-inventing the wheel and to avoid making the same mistakes as the people before us.

So, I think it is worth reviewing the principles that Service-Oriented Architecture (SOA) gave us over the last decades and seeing which ones are still relevant today.

Principles Definition

I will use the principles from Thomas Erl’s SOA Principles of Service Design and the definitions that we can find in the Wikipedia article:

1.- Service Abstraction

Design principle that is applied within the service-orientation design paradigm so that the information published in a service contract is limited to what is required to effectively utilize the service.

The main goal behind this principle is that a service consumer should not be aware of the particular component providing the service. The main advantage of that approach is that if we need to change the current service provider, we can do it without impacting the consumers. This is still totally relevant today for different reasons:

  • Service-to-service communication: Service meshes and similar projects provide service registry and service discovery capabilities based on this same principle, so the consumer does not need to know which pod provides the functionality.
  • SaaS “protection mode” enabled: Some backend systems are here to stay, even if they now come in more modern setups such as SaaS platforms. That model also makes it easier to move away from or replace the SaaS application providing the functionality. But all that flexibility is not real if the SaaS application is totally coupled with the rest of the microservices and cloud-native applications in your landscape.

2.- Service Autonomy

Design principle that is applied within the service-orientation design paradigm, to provide services with improved independence from their execution environments.

We all know the importance of the service isolation that cloud-native development patterns provide, based on containers’ ability to deliver independence among execution environments.

Each service should have its own execution context isolated as much as possible from the execution context of the other services to avoid any interference between them.

So this principle is not only still relevant today; it is encouraged by today’s paradigms as the normal way to do things because of the benefits it has shown.

3.- Service Statelessness

Design principle that is applied within the service-orientation design paradigm, in order to design scalable services by separating them from their state data whenever possible.

Stateless microservices do not maintain their own state across calls. The service receives a request, handles it, and replies to the requesting client. If some state information needs to be stored, this should be done externally to the microservice, using a data store such as a relational database, a NoSQL database, or any other way to keep information outside the microservice.

4.- Service Composability

Design of services that can be reused in multiple solutions that are themselves made up of composed services. The ability to recompose the service is ideally independent of the size and complexity of the service composition.

We all know that re-usability is not one of the principles behind microservices, because it is argued that re-usability works against agility: when a service is shared among many parties, we do not have an easy way to evolve it.

But composability is more about leveraging existing services to create new ones, which is the same approach we follow with the API Orchestration & Choreography paradigm: the agility of building on existing services to create composite services that meet the business’s innovation targets.

Summary

Cloud-native application development paradigms are a smooth evolution of existing principles. We should leverage the ones that are still relevant, provide an updated view of them, and update the ones that need it.

In the end, what we do in this industry each day is take one more step in the long journey that is its history, leveraging all the work that has been done in the past and learning from it.