Top 3 Apps From SetApp to Boost Your Productivity as a Software Developer

SetApp provides a fantastic set of tools that can boost your productivity as a Software Developer.

Last week I swapped the PC laptop I had used extensively for the last few years for a new MacBook Pro, and I have entered a new world. I have used macOS environments in the past (I had my first MacBook in 2008 and my second one in 2016), so I am not new to the ecosystem, but even so, things change quickly in the app industry, especially in the last four years.

So, when I found myself in front of the Big Sur login screen, I wondered how I should equip myself, and I remembered Setapp. I discovered Setapp a long time ago because one of the main podcasters I listen to, Emilio Cano, is a big fan of Setapp and takes every chance he gets to talk about its benefits.

So I decided to give it a try, and I could not be happier that I did. But before I start talking about the apps, I would like to summarize what Setapp is, using their own words from the official website:

Setapp is a cross-platform suite of apps, solving your daily tasks. Stay in your flow, anywhere.

So, it is like a Netflix for apps: you pay a monthly subscription and automatically get access to paid apps, and they keep adding new ones to the catalog for you to use.

As a software developer, I will focus this post on the apps that help me in my daily job. Here are the three that help me the most:

1.- Shimo — An Awesome VPN Client

In these days of remote work, we need to connect to several VPNs each day to access our company environment or even a customer environment. If, like me, you work with several customers every day, switching from one customer VPN to another is a daily task, and if you can do it fast, you save time.

Shimo is a VPN client that supports all the main protocols companies use: Cisco, Juniper, OpenVPN… everything you might need.

VPN options that Shimo provides to you (screenshot by the author)

You can connect to more than one VPN at a time if their networks do not overlap, and you can quickly connect to or disconnect from any VPN right from the menu bar.

2.- Paste — The Ultimate Clipboard

This is an app that is key for any developer, and really for anyone who uses a computer. Paste is just how the clipboard should be: an enhanced clipboard with a history, so you can go back and grab something you copied yesterday and need to recover.

And let's be honest: as software developers, one of our main tricks is Ctrl+C, Ctrl+V. We need it for everything: a snippet of code a colleague shared with us, that UNIX command we always forget, or the username somebody sent over email or Slack.

Screenshot from Paste taken by the author

3.- DevUtils

This is a clear choice. A tool named DevUtils has to be on this list. But what is DevUtils? It is a collection of all those tools you always end up searching the internet for to do simple but frequent tasks.

Tasks like Base64 encoding and decoding, a regular expression tester, a UNIX time converter, a JSON formatter, a JWT debugger, and much more. How many times do you google one of these tasks? How much time can you save by having them in your Dock all the time? The answer is simple: a lot!
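
To give an idea of the kind of one-off tasks it bundles, here are a few of them sketched as shell one-liners (the exact flags vary between the macOS and GNU versions of these commands):

echo 'SGVsbG8sIHdvcmxkIQ==' | base64 --decode       # Base64 decode (flag may be -d or -D depending on the base64 version)
date -r 1609459200                                   # UNIX timestamp to a readable date (BSD/macOS syntax; GNU date uses -d @1609459200)
echo '{"a":1,"b":[2,3]}' | python3 -m json.tool      # JSON pretty-print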

Screenshot from DevUtils taken by the Author

Summary

There are many more apps in the Setapp catalog. At the time of writing, it is up to 210 apps covering all aspects of your life, including one of the best-selling apps in the App Store. But I wanted to focus on the ones I use most in my life as a software developer, and if you are like me, you will find them awesome!

How to Fix TIBCO BusinessWorks Impaired Status (Root Causes & Troubleshooting Guide)

Learn the main reasons behind an Impaired status and how you can perform troubleshooting to identify and solve the error.

Photo by Charles Deluvio on Unsplash

This is another post in the #TIBFAQS series. As a reminder of what this is all about: you can submit your questions regarding TIBCO development issues or doubts, and I try to provide an answer here to help the community of TIBCO developers out there.

This article is part of my comprehensive TIBCO Integration Platform Guide where you can find more patterns and best practices for TIBCO integration platforms.

So, today I am going to start with one of the most common issues when we work with BusinessWorks: I go to deploy my application or test it locally, and I get this log trace and nothing after it: TIBCO BW Impaired status.

Impaired status error message

This is one of the usual situations for a junior BusinessWorks developer and one of the reasons you spend so much time troubleshooting. Let's pick up some tricks today so this message never again stops you on your journey to production.

What is the cause of this error?

This error means that the BusinessWorks runtime is not able to satisfy all the dependencies among the components needed to start. As you probably know, in BusinessWorks each of the application's components is managed independently, and they reference each other.

For example, the Application depends on the Application Module and the Shared Module. The Application module can have a dependency on a JMS Connection and so on.

Situations that can raise this error

Let’s take a look now at the situation that can raise this error and solve it.

1.- Missing module or incompatible versions

One usual situation that can lead to this problem is a missing module or incompatible module versions. In that case, the referencing component waits for a module, or a specific version of a module, to be started, but that module is missing or a different version is starting.

2.- Invalid shared connections

Another situation is when some of the components need to establish a connection with other technologies, such as JDBC connections, JMS connections, Kafka connections, or any other of the more than 200 connectors available, and that shared connection cannot be established.

3.- Missing Starter component in the Module Descriptors

The last of the usual suspects is when you have a Starter component in the Module Descriptors, but that process is not available inside the EAR file you are deploying. That dependency is never satisfied, which leads to a never-ending Impaired status.

How to detect what component is missing?

To help you detect which situation you are in, you have an incredible tool at your disposal: the la command from the OSGi console.

This command lists the applications deployed in the specific AppNode or container and gives us their details, including the reason for an Impaired status.

How to run the OSGi console depends on your deployment model, but you can read all the information about it in the link below:

Summary

I hope you find this useful for solving the TIBCO BW Impaired status in your apps, and if you are facing this issue right now, you now have the information you need not to be stopped by it. If you would like to submit your questions, feel free to use one of the following options:

  • Twitter: You can send me a mention at @alexandrev on Twitter or a DM or even just use the hashtag #TIBFAQs that I will monitor.
  • Email: You can send me an email to alexandre.vazquez at gmail.com with your question.
  • Instagram: You can send me a DM on Instagram at @alexandrev

3 Unusual Developer Tools That Seriously Boost Productivity (Beyond VS Code)

A non-VS Code list for software engineers

This is not going to be one of those articles about tools that can help you develop code faster. If you’re interested in that, you can check out my previous articles regarding VS Code extensions, linters, and other tools that make your life as a developer easier.

My job is not only about software development but also about solving the issues my customers have. While those issues can be code-related, they can also be an operations error or even a design problem.

I usually define my role as that of a lone ranger: I go out there without knowing what I will face, and I need to be ready to adapt, solve the problem, and make the customer happy. This experience has helped me build a toolchain that is essential for doing that job.

Let’s dive in!


1. MobaXterm

This is the best tool to manage different connections to different servers (SSH access for a Linux server, RDP for a Windows server, etc.). Here are some of its key features:

  • Graphical SSH port forwarding for those cases when you need to connect to a server you don't have direct access to (see the sketch after this list).
  • Easy identity management to save the passwords for the different servers. You can organize them hierarchically for ease of access, especially when you need to reach many servers across different environments and even different customers.
  • Automatic SFTP connection when you connect to an SSH server, which lets you download and upload files as easily as dragging and dropping them.
  • Automatic X11 forwarding so you can launch graphical applications from your Linux servers without needing to configure anything or use other X servers like Xming.
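
As a rough idea of what the graphical SSH port forwarding does under the hood, this is the plain OpenSSH equivalent (host names and ports here are just placeholders):

# Reach a database that is only visible from the jump host, through a local tunnel on port 5433
ssh -L 5433:db.internal.example.com:5432 user@jump-host.example.com
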
MobaXterm in action

2. Beyond Compare

There are so many tools to compare files, and I think I have used all of them — from standalone applications like WinMerge, Meld, Araxis, KDiff, and others to extensions for text editors like VS Code and Notepad++.

However, none of those can compare to the one and only Beyond Compare.

I discovered Beyond Compare when I started working in software engineering in 2010, and it is a tool that comes with me to every project. I use it every day. So, what makes this tool different from the rest?

It is simply the best tool for making any kind of comparison because it does not just compare text and folders. It does that perfectly, but it also compares ZIP files (letting you browse their content), JAR files, and so on. This is very important when we want to check whether two JAR files uploaded to DEV and PROD are the same version, or whether a ZIP file has the right content when it is uploaded.

Beyond Compare in action

3. Vi Editor

This is the most important one: the best text editor of all time, and it is available on pretty much every server.

It is a command-line text editor with a huge number of shortcuts that allow you to be very productive when you are inside a server checking logs and configuration files to see where the problem is.

For a long time, I have kept a Vi cheat sheet printed out to make sure I master the most important shortcuts and thus increase my productivity while fighting behind enemy lines (the customer's servers).
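
To give a flavor, a few of the standard Vi/Vim commands that cover most log-digging sessions:

/error          search forward for "error" (n / N jump to the next / previous match)
gg  and  G      jump to the start / end of the file
dd, yy, p       delete a line, yank (copy) a line, paste
u               undo the last change
:wq  and  :q!   save and quit / quit without saving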

VIM (Vi IMproved), the ultimate text editor

How to Fix “Failed to Read Profile from /tmp/tmp/pcf.substvar” in TIBCO BWCE

This is another post in the #TIBFAQS series. As a reminder of what this is all about: you can submit your questions regarding TIBCO development issues or doubts, and I try to provide an answer here to help the community of TIBCO developers out there.

This article is part of my comprehensive TIBCO Integration Platform Guide where you can find more patterns and best practices for TIBCO integration platforms.

So, today I am going to cover one of what I think are the most common issues when we work with BusinessWorks Container Edition and deploy to our target environment: a trace similar to this one:

Error trace showing that the profile could not be read

What is the cause of this error?

This error means that the BusinessWorks runtime is not able to read and process the properties file needed to start the application. That means your error concerns the configuration of the application and not the application itself. So, the good news: your code seems to be fine at this point.

As you probably know, all TIBCO BW applications have for a very long time used an XML file to hold the configuration values they start with. In BusinessWorks Container Edition, this is the file stored at /tmp/tmp/pcf.substvar, and it is populated from several sources depending on how you manage your configuration.

As you know, you have several options to manage your configuration in cloud-based environments: environment variables, ConfigMaps, configuration management systems such as Spring Cloud Config or Consul… So it is important that you have a clear understanding of what you are using.

So the error means that the file has something in its content that is not valid, either because it is wrong or because the runtime is not able to understand it.

Situations that can raise this error

Let's take a look now at the situations that can raise this error and how we can solve them.

1.- Incompatible BW-runtime vs EAR versions

Usually, EAR files are compatible with different BusinessWorks runtimes, but only when the runtime is more recent than the EAR. That means that if I generate my application with BWCE 2.5.0, I can run it with runtime 2.5.0, 2.5.3, or 2.6.0 without any issue, but if I try to run it with an older version like 2.4.2, I can get this error because the EAR file has some “new things” that the runtime is not able to understand.

So it is important to validate that the runtime version you are using is the expected one, and to update it if that is not the case.

2.- XML special characters that need to be escaped

This situation only applies to versions before 2.5.0, but if you are running an older version, you can also get this error because your property value contains an XML character that needs to be escaped. Characters like ‘<’ or ‘&’ are the ones that most often generate this error (for example, a value like user&password has to be written as user&amp;password). If you are using a more recent version, you don't need to escape them because they are escaped automatically.

So, depending on the version you are using, update your property values accordingly.

Summary

I hope you find this interesting, and if you are facing this issue, you now have the information not to be stopped by it. If you would like to submit your questions, feel free to use one of the following options:

  • Twitter: You can send me a mention at @alexandrev on Twitter or a DM, or even just use the hashtag #TIBFAQs, which I will monitor.
  • Email: You can send me an email to alexandre.vazquez at gmail.com with your question.
  • Instagram: You can send me a DM on Instagram at @alexandrev

Amazon Managed Service for Prometheus Explained: High-Availability Monitoring on AWS

Learn what Amazon Managed Service for Prometheus provides and how you can benefit from it.

Monitoring is one of the hot topics when we talk about cloud-native architectures. Prometheus is a graduated Cloud Native Computing Foundation (CNCF) open-source project and one of the industry-standard solutions when it comes to monitoring your cloud-native deployment, especially when Kubernetes is involved.

Following its philosophy of providing managed services for some of the most-used open-source projects, fully integrated with the AWS ecosystem, AWS has released (in preview at the time of writing this article) Amazon Managed Service for Prometheus (AMP).

The first thing is to define what Amazon Managed Service for Prometheus is and what features it provides. This is Amazon's definition of the service:

A fully managed Prometheus-compatible monitoring service that makes it easy to monitor containerized applications securely and at scale.

And I would like to spend some time on some parts of this sentence.

  • Fully managed service: This will be hosted and handled by Amazon, and we will just interact with it through APIs, as we do with other Amazon services like EKS, RDS, MSK, SQS/SNS, and so on.
  • Prometheus-compatible: Even if this is not a pure Prometheus installation, the API is compatible. So Prometheus clients, whether Grafana or any other tool that pulls information from Prometheus, will keep working without changing their interfaces.
  • Service at scale: Amazon, as part of the managed service, takes care of the solution's scalability. You don't need to define an instance type or how much RAM or CPU you need; this is handled by AWS.

So, that sounds perfect. You might think that you can delete your Prometheus server and start using this service instead. Maybe you are even typing something like helm delete prom… WAIT, WAIT!!

Because at this point, this is not going to replace your local Prometheus server; it integrates with it. That means your Prometheus server will act as a scraper for the whole scalable monitoring solution that AMP provides, as you can see in the picture below:

Reference Architecture for Amazon Prometheus Service

So, you still need a Prometheus server, that is right, but all the complexity is offloaded to the managed service: storage configuration, high availability, API optimization, and so on are provided to you out of the box.

Ingesting information into Amazon Managed Service for Prometheus

At this moment, there are two ways to ingest data into the Amazon Prometheus Service:

  • From an existing Prometheus server, using the remote_write capability and configuration. This means each series scraped by the local Prometheus is sent to the Amazon Prometheus Service (see the sketch after this list).
  • Using AWS Distro for OpenTelemetry, which integrates with this service using the Prometheus Receiver and the AWS Prometheus Remote Write Exporter components.
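
A rough sketch of the first option in prometheus.yml is shown below; the workspace URL is a placeholder you get when creating your AMP workspace, and the authentication setup depends on your Prometheus version (recent versions support AWS SigV4 signing natively, while older setups used a signing proxy sidecar):

# prometheus.yml (fragment), with a hypothetical AMP workspace URL
remote_write:
  - url: https://aps-workspaces.eu-west-1.amazonaws.com/workspaces/ws-EXAMPLE/api/v1/remote_write
    sigv4:
      region: eu-west-1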

Summary

So this is a way to get an enterprise-grade installation, leveraging all the knowledge AWS has in hosting and managing this solution at scale with optimized performance, while you focus on the components you need to get the metrics ingested into the service.

I am sure this will not be the last move from AWS in observability and metrics management. I am sure they will continue to put more tools into developers' and architects' hands to define optimized solutions as easily as possible.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Why Use GraphQL? 3 Key Benefits Over REST APIs Explained

3 benefits of using GraphQL in your API that you should take into consideration.

We all know that APIs are the new standard when we develop any piece of software. All the latest paradigms are based on a distributed set of components created with collaboration in mind: they need to work together to provide more value to the whole ecosystem.

On the technical side, an API has become a synonym for exposing REST/JSON interfaces, the new de facto standard. But this is not the only option, even in the synchronous request/reply world, and we are starting to see a shift in this by-default selection of REST as the only choice in this area.

GraphQL has emerged as a solid alternative since Facebook introduced it in 2015. During these five years of existence, its adoption has grown outside Facebook's walls, but it is still far from mainstream use, as the following Google Trends graph shows:

Google Trend graph showing interest in REST vs. GraphQL in the last five years

But I think this is a great moment to look again at the benefits that GraphQL can provide to the APIs in your ecosystem. You can start the new year by introducing a technology that can bring you and your enterprise clear benefits. So, let's take a look at them.

1.- A more flexible style to meet the needs of different client profiles

I want to start this point with a small jump to the past, to when REST was introduced. REST was not always the standard we used to create our APIs, or Web Services as we called them at that point. A W3C standard, SOAP, was the leader, and REST replaced it by focusing on several points.

Above all, the weight of the protocol, much lighter than SOAP, made the difference, especially when mobile devices started to be part of the ecosystem.

That is where we are today, and GraphQL takes that flexibility one step further. GraphQL allows each client to decide which part of the data it would like to consume, so different applications can use the same interface, and each of them still gets an optimized result because it decides what it wants to obtain each time.
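
For illustration, here are two hypothetical clients hitting the same (made-up) GraphQL endpoint but asking only for the fields each one needs:

# A mobile client only needs a couple of fields
curl -s -X POST https://api.example.com/graphql -H "Content-Type: application/json" -d '{"query":"{ user(id: \"42\") { name avatarUrl } }"}'

# A back-office dashboard asks the same API for a richer view
curl -s -X POST https://api.example.com/graphql -H "Content-Type: application/json" -d '{"query":"{ user(id: \"42\") { name email orders { id total } } }"}'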

2.- More loosely coupled approach with the service provider

Another important topic is the dependency between the consumer of the API and the provider. We all know that paradigms like microservices focus on this: we aim to get as much independence as possible among our components.

It is true that REST does not create a strong link between the components. Still, the interface is fixed, which means that every time we modify that interface by adding a new field or changing an existing one, we can affect consumers even if they do not need that field at all.

GraphQL, because each client selects the fields it wants to obtain, makes the evolution of the API itself much easier and at the same time gives the components much more independence: only changes that have a clear impact on the data a client actually needs can affect it; the rest is completely transparent to it.

3.- More structured and defined specification

One of the issues during the rise of REST as a widely used protocol was the lack of a standard to structure and define its behavior. We had several attempts: RAML, or even just “samples as specification”, then Swagger, and finally the OpenAPI specification. But that “unstructured” period means a REST API can be built in very different ways.

Each developer or service provider can create a REST API with a different approach and philosophy, which generates noise and makes standardization difficult. GraphQL is based on a GraphQL schema that defines the types managed by the API and the operations you can perform on them, in two main groups: queries and mutations. That way, all GraphQL APIs, no matter who develops them, follow the same philosophy, as it is already included in the core of the specification itself.
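
As a minimal, hypothetical example of such a schema, with the operations split into queries (reads) and mutations (writes):

type Order {
  id: ID!
  total: Float!
}

type Query {
  order(id: ID!): Order               # read operations
}

type Mutation {
  createOrder(total: Float!): Order   # write operations
}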

Summary

After reading this article, you are probably saying: so that means I should remove all my REST APIs and start building everything in GraphQL. And my answer to that is… NO!

The goal of this article is for you to be aware of the benefits this different way of defining APIs provides, so you can add it to your tool belt. The next time you create an API, think about the topics described here and reach a conclusion, either “I think GraphQL is the better pick for this specific situation” or, the other way around, “I am not going to get any benefit on this specific API, so I would rather use REST.”

The idea is that you now have the knowledge to apply to your specific case and choose based on that, because nobody is better placed than you to decide what is best for your use case.

SOA Principles That Still Matter in Cloud-Native Architecture

The development world has changed a lot, but that does not mean everything from the past is no longer valid. Learn which principles you should still keep in mind.

The world changes fast, and in IT, it changes even faster. We all know that, and it usually means we need to face new challenges and find new solutions. Examples of this are the trends we have seen in the last years: containers, DevSecOps, microservices, GitOps, service mesh…

But at the same time, we know that IT moves in cycles, in the sense that the challenges we face today are evolutions of challenges that have been addressed in the past. The main goal is to avoid reinventing the wheel and to avoid making the same mistakes as the people before us.

So, I think it is worth reviewing the principles that Service-Oriented Architecture (SOA) gave us over the last decades and seeing which ones are still relevant today.

Principles Definition

I will use the principles from Thomas Erl's SOA Principles of Service Design and the definitions that we can find in the Wikipedia article:

1.- Service Abstraction

Design principle that is applied within the service-orientation design paradigm so that the information published in a service contract is limited to what is required to effectively utilize the service.

The main goal behind this principle is that a service consumer should not be aware of the particular component providing the service. The main advantage of that approach is that if we need to change the current service provider, we can do it without impacting those consumers. This is still totally relevant today for different reasons:

  • Service-to-service communication: Service meshes and similar projects provide service registry and service discovery capabilities based on the same principle, so consumers do not need to know which pod is providing the functionality.
  • SaaS “protection mode”: Some backend systems are here to stay, even if they are now set up in more modern ways as SaaS platforms. That flexibility also makes it easier to move away from or change the SaaS application providing the functionality. But all that flexibility is not real if that SaaS application is totally coupled with the rest of the microservices and cloud-native applications in your landscape.

2.- Service Autonomy

Design principle that is applied within the service-orientation design paradigm, to provide services with improved independence from their execution environments.

We all know the importance of the service isolation that cloud-native development patterns provide based on containers’ capabilities to provide independence among execution environments.

Each service should have its own execution context isolated as much as possible from the execution context of the other services to avoid any interference between them.

So this is still relevant today, and it is even encouraged by today's paradigms as the new normal way of doing things because of the benefits it has shown.

3.- Service Statelessness

Design principle that is applied within the service-orientation design paradigm, in order to design scalable services by separating them from their state data whenever possible.

Stateless microservices do not maintain their own state across calls. The service receives a request, handles it, and replies to the client requesting the information. If some state needs to be stored, this should be done externally to the microservice, using an external data store such as a relational database, a NoSQL database, or any other way to store information outside the microservice.

4.- Service Composability

Design of services that can be reused in multiple solutions that are themselves made up of composed services. The ability to recompose the service is ideally independent of the size and complexity of the service composition.

We all know that reusability is not one of the principles behind microservices; the argument is that reusability works against agility, because when a service is shared among many parties, we do not have an easy way to evolve it.

But this principle is more about leveraging existing services to create new ones, which is the same approach we follow with the API orchestration and choreography paradigm: the agility of building on existing services to create composite services that meet the innovation targets of the business.

Summary

Cloud-native application development paradigms are a smooth evolution of existing principles. We should leverage the ones that are still relevant, give them an updated view, and update the ones that need it.

In the end, what we do each day in this industry is take one more step on the long journey that is its history; we build on all the work that has been done in the past, and we learn from it.

TIBCO Development FAQs, Solutions & Best Practices

Improve your knowledge of TIBCO technology and solve the issues you are facing in your daily development tasks

TIBFAQS is here! This new year I would like to start several initiatives, and I hope you will walk with me on this journey. As you may know, I am a TIBCO Architect, so in my daily activities I get a lot of questions and inquiries about how to do different things with TIBCO technology, from TIBCO BusinessWorks to TIBCO EMS or TIBCO Spotfire.

This article is part of my comprehensive TIBCO Integration Platform Guide where you can find more patterns and best practices for TIBCO integration platforms.

I have noticed that some of these questions are similar from one customer to another, so I would like to use this platform to share all this knowledge so we can all benefit in our daily activities and use the technology more efficiently.

1.- How is this going to work?

I will take some of the common TIBCO development questions I am aware of and create periodic articles covering them in detail, with a sample application showing the problem and the solution. All the code will be available in my GitHub repository for your own reference.

https://github.com/alexandrev

2.- Can I send you my questions?

Yes, sure!! That would be amazing! As I said, I would like to create an engaged community around these posts so we can all benefit from it. I would love to see your questions, and you can send them to me in the following ways:

  • Twitter: You can send me a mention at @alexandrev on Twitter or a DM, or even just use the hashtag #TIBFAQs, which I will monitor.
  • Email: You can send me an email to alexandre.vazquez at gmail.com with your question.
  • Instagram: You can send me a DM on Instagram at @alexandrev

3.- Where is this going to start?

This will start in late January. The idea is to publish at least one article every two weeks, but that will depend a lot on the engagement this initiative gets. The more you share and talk about this initiative with your peers, and the more questions you send me, the more articles I will create.

4.- What is next?

Starting today, you can send your questions and share your feedback about this initiative, and you can follow this blog while you wait for the articles to come! Let's do it together!

Photo by Brett Jordan on Unsplash

CICD Docker: Top 3 Reasons to Use Containers in Your DevSecOps Pipeline

Improve the performance and productivity of your DevSecOps pipeline using containers.

CICD Docker refers to the approach most companies are using to introduce containers into the build and pre-deployment phases and implement part of the CICD pipeline. Let's see why.

DevSecOps is the new normal for deployments at scale in large enterprises to meet the pace required by digital business nowadays. These processes are orchestrated using a CICD orchestration tool that acts as the brain of the process. The usual tools for this job are Jenkins, Bamboo, Azure DevOps, GitLab, and GitHub.

In the traditional approach, we have different worker servers executing the stages of the DevOps process (Code, Build, Test, Deploy), and for each of them we need different kinds of tools and utilities to do the job. For example, to get the code, we may need git installed; to do the build, we may rely on Maven or Gradle; to test, we may use SonarQube; and so on.

CICD Docker Structure and the relationship between Orchestrator and Workers

So, in the end, we need a set of tools to do the job successfully, and that also requires some management. These days, with the rise of cloud-native development and the container approach in the industry, this is also affecting the way you build your pipelines: containers become part of each stage.

In most CI orchestrators, you can define a container image to run any step of your DevSecOps process, and let me tell you, it is great if you do so, because it brings a lot of benefits that you need to be aware of.
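
As an example of what this looks like in practice, here is a hypothetical GitLab CI job (one of the orchestrators mentioned above) where the build step runs inside an off-the-shelf Maven image instead of relying on software installed on the worker; the image tag is just illustrative:

# .gitlab-ci.yml (fragment): the job runs inside a container, so the runner needs no Maven installation
build:
  image: maven:3.8-openjdk-11
  script:
    - mvn -B clean package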

1.- Much more scalable solution

One of the problems when you use a single orchestrator as the central element in your company is that it is used by a lot of different technologies (open source and proprietary, code-based and visual development, and so on), which means you need to manage a lot of things and install a lot of software on the workers.

Usually, what you do is define some workers to build certain kinds of artifacts, as in the image shown below:

Worker distribution based on its own capabilities

That is great because it allows you to segment the build process and does not require all the software to be installed on all machines, even when some of it may be incompatible.

But what happens if we need to deploy a lot of applications of one of the types shown in the picture above, like TIBCO BusinessWorks applications? Then you are limited by the number of workers that have the software installed to build and deploy them.

With a container-based approach, all the workers are available because no software needs to be installed: you just pull the Docker image, and that's it. You are only limited by the infrastructure you use, and if you adopt a cloud platform as part of the build process, even those limitations disappear. Your time to market and deployment pace improve.

2.- Easy to maintain and extend

If you remove the need to install software on and manage the workers, because they are spun up when needed and deleted when they are not, and all you need to do is create a container image that does the job, then the time and effort the teams need to spend maintaining and extending the solution drops considerably.

You also remove any upgrade process for the components involved in the steps, as they follow the usual container image lifecycle.

3.- Avoid Orchestrator lock-in

As we rely on containers to do most of the job, the work needed to move from one DevOps solution to another is small, and that gives us the control to choose at any moment whether the solution we are using is the best one for our use case and context, or whether we should move to a more optimized one, without having to justify big investments to do so.

You get control back, and you can even adopt a multi-orchestrator approach if needed, using the best solution for each use case and getting the benefits of each of them at the same time without having to fight against any of them.

Summary

All the benefits we know from cloud-native development paradigms and containers are relevant not only for application development but also for the other processes we use in our organizations, and your DevSecOps pipeline is one of them. Start that journey today to get all those advantages in the build process, and don't wait until it is too late. Enjoy your day. Enjoy your life.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Kubernetes Autoscaling: Learn How to Scale Your Kubernetes Deployments Dynamically

Discover the different options to scale your platform based on the traffic load you receive

When talking about Kubernetes, you’re always talking about the flexibility options that it provides. Usually, one of the topics that come into the discussion is the elasticity options that come with the platform — especially when working on a public cloud provider. But how can we really implement it?

Before we start to show how to scale our Kubernetes platform, we need to do a quick recap of the options that are available to us:

  • Cluster Autoscaler: When the load of the whole infrastructure reaches its peak, we can handle it by creating new worker nodes to host more service instances.
  • Horizontal Pod Autoscaler: When the load for a specific pod or set of pods reaches its peak, we deploy new instances to ensure the overall availability that we need.

Let’s see how we can implement these using one of the most popular Kubernetes-managed services, Amazon’s Elastic Kubernetes Services (EKS).


Setup

The first thing that we’re going to do is create a cluster with a single worker node to demonstrate the scalability behavior easily. And to do that, we’re going to use the command-line tool eksctl to manage an EKS cluster easily.

To be able to create the cluster, we’re going to do it with the following command:

eksctl create cluster --name=eks-scalability --nodes=1 --region=eu-west-2 --node-type=m5.large --version 1.17 --managed --asg-access

After a few minutes, we will have our own Kubernetes cluster with a single node to deploy applications on top of it.

Now we’re going to create a sample application to generate load. We’re going to use TIBCO BusinessWorks Application Container Edition to generate a simple application. It will be a REST API that will execute a loop of 100,000 iterations acting as a counter and return a result.

BusinessWorks sample application to show the scalability options

And we will use the resources available in this GitHub repository:

We will build the container image and push it to a container registry. In my case, I am going to use my Amazon ECR instance to do so, and I will use the following commands:

docker build -t testeks:1.0 .
docker tag testeks:1.0 938784100097.dkr.ecr.eu-west-2.amazonaws.com/testeks:1.0
docker push 938784100097.dkr.ecr.eu-west-2.amazonaws.com/testeks:1.0

Once the image is pushed to the registry, we will deploy the application on top of the Kubernetes cluster using this command:

kubectl apply -f .\testeks.yaml

After that, we will have our application deployed there, as you can see in the picture below:

Image deployed on the Kubernetes cluster

So, now we can test the application. To do so, I will make port 8080 available using a port-forward command like this one:

kubectl port-forward pod/testeks-v1-869948fbb-j5jh7 8080:8080

With that, I can see and test the sample application using the browser, as shown below:

Swagger UI tester for the Kubernetes sample application

Horizontal pod autoscaling

Now, we need to start defining the autoscale rules, and we will start with the Horizontal Pod Autoscaler (HPA) rule. We will need to choose the resource that we would like to use to scale our pod. In this test, I will use the CPU utilization to do so, and I will use the following command:

kubectl autoscale deployment testeks-v1 --min=1 --max=5 --cpu-percent=80

That command will scale the deployment testeks-v1 from one (1) instance up to five (5) instances when CPU utilization goes above 80%.
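
For reference, the declarative equivalent of that imperative command would be an HPA manifest roughly like this (assuming the deployment is named testeks-v1, as above):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: testeks-v1
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: testeks-v1
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80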

If we now check the status of the HPA (for example, with kubectl get hpa), we will see something similar to the image below:

HPA rule definition for the application using CPU utilization as the key metric

If we check the TARGETS column, we will see this value: <unknown>/80%. That means 80% is the target that triggers new instances, and the current usage is unknown.

This is because we do not have anything deployed on the cluster to collect metrics for each of the pods. To solve that, we need to deploy the Metrics Server, and we will follow the Amazon AWS documentation:

So, running the following command, we will have the Metrics Server installed.

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml

And after doing that, if we check again, we can see that the current usage has replaced the <unknown>:

Current resource utilization after installing the Metrics Server on the Kubernetes cluster

Now that this works, I am going to start sending requests using a load tester inside the cluster. I will use the sample app defined below:

To deploy, we will use a YAML file with the following content:

https://gist.github.com/BetterProgramming/53181f3aa7bee7b7e3adda7c4ed8ca40#file-deploy-yaml

And we will deploy it using the following command:

kubectl apply -f tester.yaml

After doing that, we will see the current utilization increase. After a few seconds, the HPA will start spinning up new instances until it reaches the maximum number of pods defined in the rule.

Pods increasing when the load exceeds the target defined in previous steps.

Then, as soon as the load decreases, the extra instances are deleted.

Pods are deleted as soon as the load decreases.

Cluster autoscaling

Now, we need to see how we can implement the Cluster Autoscaler using EKS. We will use the information that Amazon provides:

https://github.com/alexandrev/testeks

The first step is to deploy the Cluster Autoscaler, and we will do it using the following command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml

Then we will run this command:

kubectl -n kube-system annotate deployment.apps/cluster-autoscaler cluster-autoscaler.kubernetes.io/safe-to-evict="false"

Then we will edit the deployment to provide the name of the cluster that we are managing. To do that, we will run the following command:

kubectl -n kube-system edit deployment.apps/cluster-autoscaler

When your default text editor opens with the text content, you need to make the following changes:

  • Set your cluster name in the placeholder available.
  • Add these additional properties:
- --balance-similar-node-groups
- --skip-nodes-with-system-pods=false
Deployment edits that are needed to configure the Cluster Autoscaler

Now we need to run the following command:

kubectl -n kube-system set image deployment.apps/cluster-autoscaler cluster-autoscaler=eu.gcr.io/k8s-artifacts-prod/autoscaling/cluster-autoscaler:v1.17.4

The only thing left is to define the autoscaling policy. To do that, we will use the AWS console:

  • Enter the EC2 service page in the region where we have deployed the cluster.
  • Select the Auto Scaling Groups option.
  • Select the Auto Scaling Group that was created as part of the EKS cluster-creation process.
  • Go to the Automatic Scaling tab and click on the Add Policy button.
Autoscaling policy option in the EC2 Service console

Then we should define the policy. We will use the Average CPU utilization as the metric and set the target value to 50%:

Autoscaling policy creation dialog

To validate the behavior, we will generate load using the tester as we did in the previous test and validate the node load using the following command:

kubectl top nodes
kubectl top nodes’ sample output

Now we deploy the tester again. As we already have it deployed in this cluster, we need to delete it first to deploy it again:

kubectl delete -f .\tester.yaml
kubectl apply -f .\tester.yaml

As soon as the load starts, new nodes are created, as shown in the image below:

kubectl top nodes showing how nodes have been scaled up

After the load finishes, we go back to the previous situation:

kubectl top nodes showing how nodes have been scaled down

Summary

In this article, we have shown how to scale a Kubernetes cluster dynamically, both at the worker-node level using the Cluster Autoscaler and at the pod level using the Horizontal Pod Autoscaler. That gives us all the options we need to create a truly elastic and flexible environment, able to adapt to the needs of each moment in the most efficient way.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.