API Terminology Explained: Why We Are Misusing the Term “API” Everywhere


When marketing steals a technical word, madness follows and its meaning changes completely.

API is the next term on the list. The same pattern repeats with technical terms when they go beyond the really techy forums and reach a more "mainstream" level in the industry. As soon as this happens, the term starts to lose its meaning and becomes a wildcard word that means very different things to very different people. If you don't believe me, come with me through this set of examples.

You can argue that terms need to evolve and that the same word can mean different things as the industry continues to evolve, and that is true. For example, the term "package" used to refer to the way software was bundled so it could be shared, usually through mail or an FTP server as a TAR file. It was redefined with the emergence of package managers in the '90s, and again later with artifact management to handle dependencies through approaches such as Maven, npm, and so on.

But I am not talking about these examples. I am talking about when a term gets used a lot because it sounds fancy and implies evolution or modernization, so people use it as much as possible, even to mean different things. One of these terms is API.

API stands for Application Programming Interface, and as its name states, it is an interface. Since the early days of computing, it has been used to describe the contract of an application program and how you need to interact with it. However, the term was mainly used by libraries to define their contract for the other applications that needed their capabilities.
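To make the distinction concrete, here is a minimal Java sketch (all the names are illustrative, not from any real library): the interface is the API, the contract; the class behind it is just one possible implementation, and consumers depend only on the contract.

```java
// The API: the contract a library publishes for its consumers.
public interface PaymentApi {
    Receipt charge(String customerId, long amountInCents);

    record Receipt(String transactionId, long amountInCents) {}
}

// One possible implementation; consumers never reference this class directly.
class InMemoryPaymentService implements PaymentApi {
    @Override
    public Receipt charge(String customerId, long amountInCents) {
        // Illustrative logic only: generate a fake transaction id.
        return new Receipt(java.util.UUID.randomUUID().toString(), amountInCents);
    }
}

// A consumer programs against the API, so the implementation can be swapped freely.
class CheckoutFlow {
    private final PaymentApi payments;

    CheckoutFlow(PaymentApi payments) {
        this.payments = payments;
    }

    void completeOrder(String customerId, long totalInCents) {
        PaymentApi.Receipt receipt = payments.charge(customerId, totalInCents);
        System.out.println("Charged " + receipt.amountInCents() + " -> " + receipt.transactionId());
    }
}
```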

So, if we want to show this in graphical form, this is what API refers to:


With the rise of REST services and mobile apps, the term API expanded beyond its original usage and became an everyday word, because every developer needs some API to do their work, from common capabilities such as authentication down to the specific capabilities their application requires.

The explosion of services exposing their own API required a way to manage those interfaces centrally, especially once we started publishing some of these capabilities to the outside world. We needed to secure them, identify who was using them and at what level, and give developers a way to find the documentation they needed to consume those services. That is what drove the rise of API Management solutions.

Then microservices came along to revolutionize how applications are built. Now we have many more services, each providing its own API, to the point where we pretty much have one service per capability, and therefore one API per capability, as you can see in the picture below:


The term API became so popular that some people started using it to refer both to the interface and to the whole service implementing it, which is leading to a lot of confusion. Because of that, when we talk about API development today, we can be talking about very different things:

  • We can be talking about the definition and model of the interface itself and its management.
  • We can be talking about a service implementation that exposes an API to be consumed and managed appropriately.
  • We can even be talking about a service that uses several APIs as part of its capability implementation.

The main problem with using the same term to refer to so many different things is that the word loses all its meaning, which complicates our understanding in any conversation. That leads to many problems we could avoid simply by using the proper words and keeping the buzz and marketing a little bit out of technical conversations.

Why Apache NetBeans Is Still a Great Java IDE in 2025 (Despite IntelliJ’s Popularity)


Discover the reasons why, to me, Apache NetBeans is still the best Java IDE you can use

Let me start from the beginning. I have been a Java developer since my time at university. Even though I first learned another, lesser-known programming language (Modula-2), I quickly jumped to Java for all the different assignments and pretty much every task on my journey as a student and later as a software engineer.

I was always looking for the best IDE I could find to speed up my programming tasks. The main choice at university was Eclipse, but I have never been an Eclipse fan, and that became a problem.

If you are in the enterprise software industry, you have noticed that pretty much every developer tool is based on Eclipse, because its licensing and the community behind it make it the default option. But I never thought Eclipse was a great IDE: it was too flexible and, at the same time, too complex.

That is when I discovered NetBeans. I think the first version I tried was in the 3.x branch, when Sun Microsystems was developing it. To me, it was much better than Eclipse. True, the number of available plugins was not comparable with Eclipse's, but the things it did, it did awesomely.

If I had to explain why NetBeans was better than Eclipse at that time, the main reasons would probably be these:

  • Simplicity in the run configuration: I still think most Java IDEs make things too complex just to run the code. NetBeans lets you simply hit Run without having to create and configure a Run Configuration (you can do it, but you are not required to).
  • Better look and feel: This is more of a personal preference, but I prefer NetBeans' default configuration over Eclipse's.

Because of that, NetBeans became my default application for Java programming. But then Oracle came, and things changed a little. With Oracle's acquisition of Sun Microsystems, NetBeans stalled like many other open source projects: for years there were few updates and little progress.

It is not that they deprecated the product, but Oracle had a different IDE at the time, JDeveloper, which was its main choice. That is easy to understand. I stayed loyal to NetBeans even though another big player had joined the competition: IntelliJ IDEA.

This is the fashionable option, the one most developers use today for Java programming, and I can understand why. I have tried it several times, trying to feel what others felt, and I have read the different articles, and I acknowledge some of the advantages of the solution:

  • Better performance: It is clear that the IDE's response time is better with IntelliJ IDEA than with NetBeans, because it does not carry an almost 20-year legacy and could start from scratch using modern approaches for the GUI.
  • Lower memory usage: Let's be honest, all IDEs consume tons of memory. No one does great here (unless you are talking about text editors with a Java compiler, which is a different story), but NetBeans does require more resources to run properly.

So I made the switch and started using the solution from JetBrains, but it never stuck with me, because to me it is still too complex: a lot of fancy things, but less focus on the ones I need. Or maybe, because I was so used to how NetBeans does things, I could not make the mental switch required to adopt a new tool.

And then… when everything seemed lost, something awesome happened: NetBeans was donated to the Apache Foundation and became Apache NetBeans. It felt like a new life for the tool, bringing simple things like dark mode and keeping the solution up to date with the progress in Java development.

So, today, Apache NetBeans is still my preferred IDE, and I could not vouch more for this awesome tool. These are the main points I would like to raise here:

  • Better Maven management: To me, the simplicity with which you can manage your Maven project in NetBeans is in a league of its own. It is simple and efficient: you can add a new dependency without going to the pom.xml file and update dependencies on the fly.
  • Run configuration: Again, this is still a differentiator. When I am quickly coding some new utility, I do not like wasting time creating a run configuration or adding the Maven exec plugin to my pom.xml just to run the software I have written. Instead, I just click Run, a green button, and let the magic begin.
  • Nothing missing: Things evolve fast in the Java programming world, but even today I have never felt that my NetBeans IDE was missing a capability I could get by moving to a more modern alternative. So, no trade-offs at this level.

So, I am aware that my choice is probably due to a biased view of the situation. After all, this has been my main solution for more than a decade now, and I am just used to it. But I consider myself an open person, and if I saw a clear difference, I would not have second thoughts about ditching NetBeans, as I did with many other solutions in the past (Evernote, OneNote, Apple Mail, Gmail, KDE Basket, Things, Wunderlist…).

So, if you are curious to see how Apache NetBeans has progressed, take a look at the latest version and give it a try. Or, if you feel you do not connect with your current tool, give NetBeans another try. Maybe you have the same biased view that I have!

Event-Driven Architecture: Enhancing the Responsiveness of Your Enterprise To Succeed


Event-driven architecture provides more agility to meet the changes of an increasingly demanding customer ecosystem.

The market is shifting at a speed that requires us to be ready to change very quickly. Customers are becoming more and more demanding, and we need to be able to deliver what they expect. To do so, we need an architecture that is responsive enough to adapt at the required pace.

Event-Driven Architectures (usually just referred to as EDA) are architectures where events are the crucial element and where we design components ready to handle those events in the most efficient way: an architecture that is ready to react to what is happening around us instead of forcing a specific path on our customers.

This approach provides a lot of benefits to enterprises because of its characteristics, but at the same time it requires a different mindset and a different set of components in place.

What is an Event?

Let's start at the beginning. An event is anything that can happen and is important to you. Think about a user navigating through an e-commerce website: everything they do is an event. If they land on the e-commerce site through a referral link, that is an event.

Events happen not only in virtual life but in real life too. A person walking into a hotel lobby is an event, approaching the reception desk to check in is another, walking to their room is another… everything is an event.

Events in isolation provide a small piece of information, but together they can provide a lot of valuable information about our customers: their preferences, their expectations, and their needs. All of that helps us provide the most customized experience to each one of our customers.
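As a tiny illustration, such an event can be modeled as a simple immutable value in Java; the type and fields below are just assumptions based on the e-commerce example above.

```java
import java.time.Instant;

// An immutable event: what happened, to whom, and when.
public record PageVisited(String customerId, String pageUrl, String referral, Instant occurredAt) {

    // Convenience factory that timestamps the event at creation time.
    static PageVisited now(String customerId, String pageUrl, String referral) {
        return new PageVisited(customerId, pageUrl, referral, Instant.now());
    }
}

// Example usage: the referral landing described above becomes one concrete event.
// PageVisited event = PageVisited.now("customer-42", "/home", "newsletter-campaign");
```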

EDA vs Traditional Architectures

Traditional architectures work in pull mode: a consumer sends a request to a service, that service calls other components to perform its logic, gets the answer, and replies back. Everything is pre-defined.

Events work differently because they work in push mode: events are sent, and that's it. An event could trigger one action, many actions, or none. You have a series of components waiting and listening until the event, or the sequence of events, they need appears in front of them; when it does, each component triggers its logic and, as part of that execution, generates one or more events that can be consumed in turn.

Pull vs Push mode for Communication.

To build an Event-Driven Architecture, the first thing we need is event-driven components: software components that are activated by events and that also generate events as part of their processing logic. At the same time, this sequence of events becomes the way to complete complex flows in a cooperative mode, without the need for a mastermind component that is aware of the whole flow from end to end.

You just have components that know that when something specific happens, they need to do their part of the job, and other components listen to their output and are activated in turn.

This approach is called choreography because it works the same way as a ballet company, where each dancer can be doing different moves, but each one knows exactly what to do, and all of them together, in sync, produce the whole piece.
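Here is a minimal sketch of that choreography idea using a plain in-memory bus (the service names and event strings are hypothetical): no component knows the whole flow, each one only reacts to the events it cares about and may publish new ones.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class ChoreographyDemo {

    // A very small in-memory event bus: listeners subscribe, publishers emit.
    static class EventBus {
        private final List<Consumer<String>> listeners = new ArrayList<>();
        void subscribe(Consumer<String> listener) { listeners.add(listener); }
        void publish(String event) { listeners.forEach(l -> l.accept(event)); }
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();

        // Each component only knows which event activates it and which event it emits.
        bus.subscribe(event -> {
            if (event.startsWith("OrderPlaced")) {
                System.out.println("Payment service: charging for " + event);
                bus.publish("PaymentCompleted:" + event);
            }
        });
        bus.subscribe(event -> {
            if (event.startsWith("PaymentCompleted")) {
                System.out.println("Shipping service: preparing shipment for " + event);
            }
        });

        // Nobody orchestrates the flow end to end; the chain emerges from the events.
        bus.publish("OrderPlaced:order-1001");
    }
}
```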

Layers of an Event-Driven Architecture

Now that we have software components activated by events, we need some structure around them in our architecture to cover all the needs of event management, so we need to handle the following layers:

Layers of the Event-Driven Architecture
  • Event Ingestion: We need a series of components that help us receive events into our systems. As explained, there are tons of ways to send events, so it is important to offer flexibility and options in that process. Adapters and APIs are crucial here to make sure all the events can be gathered and become part of the system.
  • Event Distribution: We need an event bus that acts as our event ocean, where all the events flow so they can activate all the components listening for them.
  • Event Processing: We need a series of components that listen to all the events being sent and make them meaningful. These components act as security guards: they filter out the events that are not important, enrich the events they receive with context information from other systems or data sources, and transform the format of some events to make them easy to understand for all the components waiting for them (see the sketch after this list).
  • Event Action: We need a series of components listening to those events and ready to react to what they see on the event bus; as soon as they detect what they are expecting, they start executing their logic and send their output back to the bus to be used by somebody else.
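Here is a rough sketch of that event-processing layer (the event fields and the enrichment source are assumptions, not part of any specific product): it drops irrelevant events, enriches the rest with context, and normalizes the format before the acting components see them.

```java
import java.util.Map;
import java.util.Optional;

public class EventProcessor {

    // Raw event as it arrives on the bus; enriched event as the acting components expect it.
    record RawEvent(String type, String customerId, String payload) {}
    record EnrichedEvent(String type, String customerId, String customerSegment, String payload) {}

    // Hypothetical context source (in reality a CRM, a database, another service...).
    private final Map<String, String> segmentsByCustomer;

    EventProcessor(Map<String, String> segmentsByCustomer) {
        this.segmentsByCustomer = segmentsByCustomer;
    }

    // Filter out what is irrelevant, enrich the rest with context information.
    Optional<EnrichedEvent> process(RawEvent event) {
        if (!event.type().equals("OrderPlaced")) {
            return Optional.empty(); // not important for the downstream components
        }
        String segment = segmentsByCustomer.getOrDefault(event.customerId(), "unknown");
        return Optional.of(new EnrichedEvent(event.type(), event.customerId(), segment, event.payload()));
    }
}
```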

Summary

Event-Driven Architecture can provide a much more agile and flexible ecosystem in which companies can address today's challenges and deliver a compelling experience to users and customers. At the same time, it provides more agility to the technical teams, which can create components that collaborate while staying loosely coupled, making both the components and the teams more autonomous.

Event Streaming, APIs, and Data Integration: The 3 Core Pillars of Cloud Integration


Event Streaming, API, and Data are the three musketeers that cover all the aspects of mastering integration in the cloud.

Enterprise Application Integration has been one of the most challenging IT landscape topics since the beginning. As soon as the number of systems and applications in big corporations started to grow, integration became an issue that had to be addressed. The efficiency of this process also defines which companies succeed and which ones fail, as cooperation between applications becomes critical to respond at the pace the business demands.

I usually like to use the “road analogy” to define this:

It doesn't matter if you have the fastest cars; if you don't have proper roads, you will not get anywhere.

This situation generated a lot of investment from companies, and a lot of vendors and products were launched to support it. Over time, several kinds of solutions emerged: EAI, ESB, SOA, middleware, distributed integration platforms, cloud-native solutions, and iPaaS.

Each of these approaches provided a solution to the challenges of its time. As the rest of the industry evolved, the solutions changed to adapt to the new reality (containers, microservices, DevOps, API-led, event-driven…).

So, what is the situation today? Today there is a widespread misconception that integration is the same as APIs, and that an API means a synchronous HTTP-based API (REST, gRPC, GraphQL). But it is much more than that.

Photo by Tolga Ulkan on Unsplash

1.- API

API-led design is certainly key to any integration solution, especially if you focus on the philosophical approach behind it. Each component that we create today is built with collaboration in mind, to work with existing and future components and benefit the business in an easy and agile way. This transcends the protocol discussion completely.

APIs cover all kinds of solutions, from the familiar REST APIs to AsyncAPI definitions that describe event-based APIs.

2.- Event Streaming

Asynchronous communication is essential because of the patterns and requirements that appear when you are dealing with big enterprises and many different applications: requirements like a pub-sub approach to increase independence among services and apps, or flow control to manage the execution of high-demand flows that can exceed the throttling limits of applications, especially when talking about SaaS solutions.
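To make the pub-sub decoupling concrete, here is a minimal sketch using the Apache Kafka Java client (the broker address, topic name, and group id are assumed values): the producer publishes an event without knowing who, if anyone, will consume it, and any number of consumer groups can process the stream at their own pace.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PubSubSketch {
    public static void main(String[] args) {
        // Producer: publishes the event and moves on, unaware of any consumer.
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("orders", "order-1001", "{\"status\":\"PLACED\"}"));
        }

        // Consumer: an independent app (or several) subscribes and processes at its own pace.
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "billing-service"); // each group gets its own copy of the stream
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(List.of("orders"));
            consumer.poll(Duration.ofSeconds(1))
                    .forEach(record -> System.out.println("Received: " + record.value()));
        }
    }
}
```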

You might think this is a very opinionated view, but at the same time it is something that most providers in this space have acknowledged through their actions:

  • Initially, AWS released SNS/SQS, its first messaging services, as its only messaging solution.
  • In November 2017, AWS released Amazon MQ, another queue-based messaging system, to cover the scenarios that SQS cannot cover.
  • In May 2019, AWS released Amazon MSK, a managed service for Apache Kafka, to provide streaming data distribution and processing capabilities.

And that is because, when we move away from smaller applications and migrate from a monolithic approach to a microservices-based application, more patterns and more requirements appear and, as integration solutions have shown in the past, supporting them is critical.

3.- Data Integration

Usually, when we talk about integration, we talk about Enterprise Application Integration because we carry that bias from the past; even I use the term EAI to cover this topic, because that is how we usually refer to these solutions. But in recent years the focus has moved to how data is distributed across the company rather than how applications are integrated, because what really matters is the data they exchange and how we can transform that raw data into insights we can use to know our customers better, optimize our processes, or discover new opportunities.

Until recently, this part was handled separately from the integration solutions. You probably relied on a dedicated ETL (Extract-Transform-Load) tool that moved the data from one database to another, or to a different kind of storage such as a data warehouse, so your data scientists could work with it.
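For reference, that traditional ETL pattern boils down to something like the following Java sketch (the JDBC URLs, credentials, table names, and the transformation are placeholders, not real systems):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class NightlyEtlJob {
    public static void main(String[] args) throws Exception {
        // Extract from the operational database, Load into the warehouse (connection details are placeholders).
        try (Connection source = DriverManager.getConnection("jdbc:postgresql://ops-db/sales", "etl", "secret");
             Connection warehouse = DriverManager.getConnection("jdbc:postgresql://dwh-db/analytics", "etl", "secret");
             Statement extract = source.createStatement();
             ResultSet rows = extract.executeQuery("SELECT id, amount_cents, country FROM orders");
             PreparedStatement load = warehouse.prepareStatement(
                     "INSERT INTO fact_orders (order_id, amount_eur, country) VALUES (?, ?, ?)")) {

            while (rows.next()) {
                // Transform: convert cents to a decimal amount before loading.
                load.setLong(1, rows.getLong("id"));
                load.setBigDecimal(2, java.math.BigDecimal.valueOf(rows.getLong("amount_cents"), 2));
                load.setString(3, rows.getString("country"));
                load.addBatch();
            }
            load.executeBatch();
        }
    }
}
```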

But again, the need for agility means this has to change, and all the principles integration applies to give the business more agility also apply to how we exchange data. We try to avoid purely technical data movements and instead ease access to the data and its proper organization. Data virtualization and data streaming are the core capabilities that address those challenges, providing an optimized solution for how the data is distributed.

Summary

My main goal with this article is to make you aware that, when you think about integrating your applications, it is about much more than the REST API you expose, perhaps through some API gateway; the needs can be very different. The stronger your integration platform is, the stronger your business will be.

SOA Principles That Still Matter in Cloud-Native Architecture


The development world has changed a lot, but that does not mean everything from the past is no longer valid. Learn which principles you should still be aware of.

The world changes fast, and in IT it changes even faster. We all know that, and it usually means we need to face new challenges and find new solutions. Examples of that are the trends we have seen in recent years: containers, DevSecOps, microservices, GitOps, service mesh…

But at the same time, we know IT moves in cycles: the challenges we face today are evolutions of challenges that have already been addressed in the past. The main goal is to avoid reinventing the wheel and to avoid making the same mistakes as the people before us.

So, I think it is worth reviewing the principles that Service-Oriented Architecture (SOA) gave us over the last decades and seeing which ones are still relevant today.

Principles Definition

I will use the principles from Thomas Erl's SOA Principles of Service Design and the definitions that we can find in the Wikipedia article:

1.- Service Abstraction

Design principle that is applied within the service-orientation design paradigm so that the information published in a service contract is limited to what is required to effectively utilize the service.

The main goal behind this principle is that a service consumer should not be aware of the particular component providing the service. The main advantage of that approach is that, when we need to change the current service provider, we can do it without impacting the consumers. This is still totally relevant today for several reasons:

  • Service-to-service communication: Service meshes and similar projects provide service registry and service discovery capabilities based on the same principle, so consumers do not need to know which pod is providing the functionality.
  • SaaS "protection mode": Some backend systems are here to stay, even if they are now set up in more modern ways as SaaS platforms. That flexibility also makes it easier to move away from or replace the SaaS application providing the functionality. But that flexibility is not real if the SaaS application is totally coupled with the rest of the microservices and cloud-native applications in your landscape.

2.- Service Autonomy

Design principle that is applied within the service-orientation design paradigm, to provide services with improved independence from their execution environments.

We all know the importance of the service isolation that cloud-native development patterns achieve, based on containers' ability to provide independence among execution environments.

Each service should have its own execution context isolated as much as possible from the execution context of the other services to avoid any interference between them.

So this principle is still relevant today; in fact, today's paradigms encourage it as the normal way of doing things because of the benefits it has shown.

3.- Service Statelessness

Design principle that is applied within the service-orientation design paradigm, in order to design scalable services by separating them from their state data whenever possible.

Stateless microservices do not keep their own state inside the service across calls. The service receives a request, handles it, and replies to the client that requested the information. If some state needs to be stored, it should be stored outside the microservice, using an external data store such as a relational database, a NoSQL database, or any other mechanism that keeps the information outside the microservice.
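A small sketch of what that looks like in practice (the store interface and names are purely illustrative): the service keeps no mutable per-request state of its own; anything it needs to remember lives behind an external store, so any instance can handle any call.

```java
import java.util.Optional;

public class StatelessCartService {

    // The only state lives outside the service, behind this abstraction
    // (a relational database, a NoSQL store, a cache... the service does not care).
    interface CartStore {
        Optional<String> find(String cartId);
        void save(String cartId, String serializedCart);
    }

    private final CartStore store; // a dependency, not mutable per-request state

    public StatelessCartService(CartStore store) {
        this.store = store;
    }

    // Each request is self-contained: load state, apply the change, persist it, reply.
    public String addItem(String cartId, String itemId) {
        String cart = store.find(cartId).orElse("");
        String updated = cart.isEmpty() ? itemId : cart + "," + itemId;
        store.save(cartId, updated);
        return updated;
    }
}
```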

4.- Service Composability

Design of services that can be reused in multiple solutions that are themselves made up of composed services. The ability to recompose the service is ideally independent of the size and complexity of the service composition.

We all know that reusability is not one of the principles behind microservices; the argument is that reusability works against agility, because when a service is shared among many parties, there is no easy way to evolve it.

But this principle is more about leveraging existing services to create new ones, which is the same approach we follow with the API orchestration and choreography paradigms: the agility of building on existing services to create composite services that meet the business's innovation targets.
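As a rough illustration of that composition idea (all the interfaces and names here are hypothetical), a new capability can be assembled by reusing two existing services instead of reimplementing them:

```java
public class TravelBookingService {

    // Existing services, reused as-is; their implementations are already in production.
    interface FlightService { String bookFlight(String from, String to); }
    interface HotelService  { String bookRoom(String city, int nights); }

    private final FlightService flights;
    private final HotelService hotels;

    public TravelBookingService(FlightService flights, HotelService hotels) {
        this.flights = flights;
        this.hotels = hotels;
    }

    // The composite capability: a new product assembled from existing services.
    public String bookTrip(String from, String to, int nights) {
        String flightRef = flights.bookFlight(from, to);
        String hotelRef = hotels.bookRoom(to, nights);
        return "trip[" + flightRef + "," + hotelRef + "]";
    }
}
```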

Summary

Cloud-native application development paradigms are a smooth evolution of the existing principles. We should leverage the ones that are still relevant, give them an updated view, and adapt the ones that need it.

In the end, what we do each day in this industry is take one more step in the long journey that is its history, leveraging all the work that has been done in the past and learning from it.

Why IT Investments Fail: The Real Reason Companies Don’t Achieve Expected Benefits


Achieving the benefits of an IT investment is much more than just buying or deploying a technology. Learn how you can be prepared for it.

If there is a single truth this year has made clear to most businesses, it is that we live in a digital world. It is not the future anymore.

To be ready for this present, companies across all verticals have invested a lot in technology. They have heard about all the benefits the latest technology developments have provided to some companies, and they would like the same benefits.

But after trying to put the same principles and tools in place for a while, they are not seeing the benefits. Sure, they see some improvement, but nothing compared to what they were expecting. Why is this happening? Why are some of these companies unable to unlock these achievements?

A tool is a tool, nothing more.

Any technology, principle, or tool, whether we are talking about a new paradigm like containerization or serverless, or a tool like an API Management platform or a new Event-Driven Architecture, is just a tool in the hands of people.

And, in the end, what matters most is the way those humans work and how they use the tools at hand to achieve the optimal benefits. Companies have had computers for maybe 30 years now; do you remember what the initial usage of those computers was like? Do you think people were using them at the optimum level back then? Well, this is the same thing.

You should not expect that just because you have installed a tool, deployed a new technology, or bought a new SaaS application, from that exact moment the life of your company will change and you will unlock all the benefits that come with it. It is the same story as a planner: it will not make you more productive just because you have one.

Yes, it is a requirement, but it is far from being the only step you need to take to make that investment successful.

What matters is your thinking

A new paradigm in IT requires a different way of thinking, and a feeling of trust in that paradigm, to be able to unlock those benefits.

If you do not think that way, you will be the one stopping the progress and blocking the benefits you could get. And that is always hard at the beginning. Early on, if we had a formula in Excel and the same one on paper and they disagreed, we believed the one on the computer was wrong.

Today it is the other way around: we know for sure the computer is doing it right, so we look for our own mistake to get the same result.

Some IT managers now have the same feeling about other techniques, and they try to manage and control them using the same principles they have always applied. And let's be honest: that is normal and human, because when we face something similar, we all try to use the patterns we know and the ones that have proven successful in the past.

But, let's be honest: do you think Netflix or Uber succeeded using the same patterns and rules that had been used in the past? Of course not.

But maybe you are thinking that is not a fair comparison, because your company or your vertical is not at stake or in the middle of a revolution; you just need small changes to get those benefits. You do not need to do everything from scratch. And that is true.

In the end, what matters is whether you are ready to take the leap of faith into the void: to walk into the jungle with just your gut and the knowledge you have gathered so far to guide you along the way.

Be a researcher

In reality, the jump into the void is needed, but it is more about the way you think: being ready to open your mind and leave behind some preconceptions you may have. In the end, this is more about being Marie Curie than Indiana Jones.

Researchers and scientists always need to be open to different ways of doing things. They have their foundation, their experience, the knowledge of everything that has been done in the past, but to go a step further they need to think outside the box and be open to things that were not true several years ago, or that were not the right way to do it until now, because they are going further than anyone has gone before.

IT is similar. You are not stepping into the complete unknown, but inside your company you may be the one who needs to guide everyone else along that route, being open to the idea that the old rules may not apply to this new revolution, and ready to leave some old practices behind in order to unlock bigger benefits.

Summary

In the end, when you adopt a new technology, you need to think about the implications that technology brings in order to make it successful and even optimize the benefits you can get from it.

Think about others who have walked that path and learn from what they did right and wrong, so you can be prepared and also realistic. If you are not going to make the change the technology requires in your organization, the investment makes no sense. You need to work first on preparing your organization to be ready for the change; that is the moment to walk into the jungle and get all the benefits waiting for you.

4 Reasons Low-Code Applications Boost Developer Productivity and Business Agility


How to truly achieve agility in your organization by focusing on what matters to your business and multiplying the productivity of your development team

Fashion is cyclical, and the same thing happens in software engineering. We live in a world where each innovation seems similar to one from the past that we already went through some time ago. That is because what we are doing is refining, over and over, solutions to the same problems.

For the last few years we have lived through a "developer is the new black" wave, where anything related to writing code is seen as excellent. Devs are even portrayed now as cool characters like the ones from Silicon Valley (the HBO show) instead of the ones you make fun of, as in The IT Crowd.

But now it seems we are going back to a new rise of what are called low-code (or no-code) applications.

A low-code application platform is a piece of software that helps you build your applications or services without needing to write code in any programming language; instead, you drag and drop boxes that represent what you would like to do rather than writing it yourself.

That provides advantages that are now back on the table. Let's take a look at those advantages in more detail.

1.- Provides more agility

This one is clear. No matter how high-level your programming language is, no matter how many archetypes you have to generate your project skeleton or which frameworks and libraries you use, typing is always slower than dragging some boxes onto a blank canvas and connecting them with links.

And I am a terminal guy and a vi power user, so I appreciate the power of the keyboard. But let's be honest and ask one question:

How many of the keywords you type in your code provide value to the business, and how many are just needed for technical reasons?

Not only things like exception handling, auditing, logging, service discovery, and configuration management, but also loop structures, function signatures, variable definitions, class definitions, and so on…

With low-code, you can truly focus on the business value you are trying to deliver instead of spending time managing technical plumbing.
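As a caricature of that ratio (purely illustrative code), notice how little of the following method is actual business logic; almost everything else is the technical ceremony a low-code platform would handle for you:

```java
import java.util.logging.Logger;

public class DiscountCalculator {

    private static final Logger LOG = Logger.getLogger(DiscountCalculator.class.getName());

    public double applyDiscount(double orderTotal, boolean loyalCustomer) {
        LOG.info("applyDiscount called");                 // technical: logging
        if (orderTotal < 0) {                             // technical: input validation
            throw new IllegalArgumentException("orderTotal must be >= 0");
        }
        try {
            // The single line that actually carries business value:
            return loyalCustomer ? orderTotal * 0.90 : orderTotal;
        } catch (RuntimeException e) {                     // technical: error handling
            LOG.severe("discount calculation failed: " + e.getMessage());
            throw e;
        } finally {
            LOG.info("applyDiscount finished");            // technical: logging
        }
    }
}
```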

2.- Easier to maintain

One month after going to production, only the developer and God know what the code does. After a year, only God knows…

Coding is awesome, but code is at the same time complex to maintain, mainly in enterprises where developers shift from one project to another and from one department to another, and new people are onboarded all the time to maintain and evolve the code.

Those who have been in the industry for some time know the situation where people say: "I prefer not to touch that because we don't know what it does," or "We cannot migrate this mainframe application because we don't know if we will be able to capture all the functionality it provides."

And that is bad for several reasons. First of all, the code is costly and complex to maintain; second, it also prevents you from evolving at the pace you want.

3.- Safer and Cost-Effective

Don't get me wrong about this: hand-written code can be as safe as any low-code application. That is clear because, in the end, a low-code app ends up generating the same kind of binary or bytecode to be executed.

The problem is that this depends on the skills of the programmer. Even though programming and developers are cool right now, needing a large number of devs on your team implies that not all of them will be as experienced and skilled as you would like.

Reality is much more complex, and you also need to deal with your budget and find a way to get the best out of your team.

Using a low-code platform, the quality of the base components is guaranteed: they are verified by a vendor and improved by dedicated teams incorporating feedback from customers all over the world, which makes them safer.

4.- As ready as a code-based solution for specific needs

One of the myths told against low-code is that it is suitable for generic workloads and use cases but cannot be adapted and optimized for your specific needs.

Regarding this usual push-back, first of all we need to address the misconception about how specific our software really needs to be. In the end, the cases where you need to do something so specific that it is not covered by the options available out of the box are so rare that they are hard to justify. Are you going to make 99% of your development slower only to be able to do 1% of it quicker? How many of your workloads are really different from what other companies in the same industry are doing?

But, for the sake of the discussion, let's assume that is true and you need a single piece of logic that a low-code platform does not provide out of the box. OK: low-code means that you do not need to write code, not that you cannot.

Most platforms support adding code when needed to cover these cases. So, even then, you still have the same tools to make it specific without losing all the advantages in your daily activities.

Summary

Low-code applications are one of the solutions at your disposal to improve your agility and productivity in development and keep up with the pace of change in your business.

The solutions in this space are not new, but they have been renewed to adapt to modern development paradigms (microservices, container-based, API-led, event-driven…), so you are not going to miss anything; you will just get more time to provide even more value to your business.

Log Aggregation Architecture Explained: 3 Reasons You Need It Today


Log aggregation is no longer a commodity but a critical component of container-based platforms

Log management does not seem like a very exciting topic. It is not the kind of topic you see and say: "Oh! Amazing! This is what I have been dreaming about my whole life." No, I am aware this is not that fancy, but that does not make it less critical than other capabilities your architecture needs to have.

Since the beginning, we have used log files as the single trustworthy data source when we needed to troubleshoot our applications, find out what failed in a deployment, or investigate any other action involving a computer.

The procedure was easy:

  • Launch "something".
  • "Something" fails.
  • Check the logs.
  • Change something.
  • Repeat.

And we have been doing it that way for a long, long time. Even with more robust error handling and management approaches, such as audit systems, we still go back to the logs when we need fine-grained detail about an error: looking for a stack trace, for more detail about the error that was recorded in the audit system, or for more data than just the error code and description returned by a REST API.

Systems started to grow and architectures became more complicated, but even so, we kept using the same method over and over. You are aware of log aggregation architectures like the ELK stack, commercial solutions like Splunk, or even SaaS offerings like Loggly, but you think they are just not for you.

They are expensive to buy or expensive to set up, you know your ecosystem very well, and it is easier to just jump onto a machine and tail the log file. You probably also have your own toolbox of scripts to do this as quickly as anyone can open Kibana and search for an instance ID to see the error for a specific transaction.

OK, I need to tell you something: it is time to change, and I am going to explain why.

Things are changing, and the new IT paradigms share some common ground:

  • You are going to have more components, each running isolated with its own log files and data.
  • Deployments will be more frequent in your production environment, which means things will go wrong more often (in a controlled way, but more often).
  • Technologies are going to coexist, so logs are going to be very different in terms of patterns and layouts, and you need to be ready for that.

So, let's discuss three arguments that I hope will make you think differently about log management architectures and approaches.

1.- Your approach just doesn’t scale

Your approach is excellent for traditional systems. How many machines do you manage? 30? 50? 100? You can handle that quite well. Now imagine a container-based platform for a typical enterprise. An average number could be around 1,000 containers just for business purposes, not counting architecture or basic services. Are you ready to go container by container, checking 1,000 log streams, to find the error?

Even if that were possible, are you going to become the bottleneck for your company's growth? How many container logs can you keep track of? 2,000? As I said at the beginning, that just does not scale.

2.- Logs are not there forever

Now, you have read the first point and are probably talking back at the screen you are reading this on: "Come on! I already know that logs don't stay around; they get rotated, they get lost, and so on."

Yes, that is true, and it is even more important in a cloud-native approach. With container-based platforms, logs are ephemeral, and if we follow the twelve-factor app manifesto, there is no log file at all: all log traces should be printed to standard output, and that's it.
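As a minimal sketch of that twelve-factor style using only the JDK's java.util.logging, the application writes every trace to standard output and lets the platform decide where the logs end up:

```java
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;
import java.util.logging.StreamHandler;

public class StdoutLoggingApp {

    private static final Logger LOG = Logger.getLogger(StdoutLoggingApp.class.getName());

    public static void main(String[] args) {
        // No log file: every trace goes to stdout, where the container runtime
        // (and the log aggregation pipeline behind it) can pick it up.
        StreamHandler stdout = new StreamHandler(System.out, new SimpleFormatter());
        stdout.setLevel(Level.ALL);
        LOG.setUseParentHandlers(false); // drop the default stderr console handler
        LOG.addHandler(stdout);

        LOG.info("order-service started");
        LOG.warning("payment retry #2 for order-1001");
        stdout.flush(); // StreamHandler buffers, so flush before the process exits
    }
}
```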

And when are the logs deleted? When the container fails… and which records are the ones you need the most? Those from the containers that failed.

So, if you don’t do anything, the log traces that you need the most are the ones that you’re going to lose.

3.- You need to be able to predict when things are going to fail

But logs are not only useful when something has already gone wrong; they are also the way to detect when something is about to go wrong and to predict when things are going to fail. You need to be able to aggregate that data so you can generate information and insights from it, and run ML models to detect whether everything is going as expected or something different is happening that could lead to an issue before it happens.

Summary

I hope these arguments have convinced you that, even for a small company or a small system, you need to set up a log aggregation approach now rather than waiting for another moment, when it will probably be too late.

Increased agility through modern digital connectivity


Find out how TIBCO Cloud Integration can help you increase business agility by connecting all your apps, devices, and data no matter where they are hosted

We live in a world where the number of digital assets that need to be integrated, the types of assets, and where they are hosted are all exploding. We’ve transitioned away from a simple enterprise landscape where all of our systems were hosted in a single datacenter, and the number of systems was small. If you still remember those days, you probably could name all the systems that you maintained. Could you imagine doing that today?

This has changed completely. Businesses today are operating more and more on apps and data rather than on manual, documented processes, and that has increased the demands to have them connected together to support the operations of the business. How does a traditional IT team keep up with all connectivity requests coming from all areas of the business to ensure these assets are fully integrated and working seamlessly?

Additionally, the business environment has changed completely. Today everything is hyper-accelerated. You can no longer wait six months to get your new marketing promotions online, or to introduce new digital services.

This is because markets change constantly over time. At times they grow, and at other times they contract. This forces enterprises to change how they do business rapidly.

So, if we need one word to summarize everything we need from an application architecture to make sure it can help us meet our business requirements, that word is "agility". And architectural agility creates business agility.

Different IT paradigms have been adopted to help increase architectural agility from different perspectives that provide a quick way to adapt, connect, and offer new capabilities to customers:

  • Infrastructure Agility: Based on cloud adoption, cloud providers offer an agile way to immediately tap into the infrastructure capacity required, allowing for rapid innovation by quickly creating new environments and deploying new services on-demand.
  • Operation & Management Agility: SaaS-based applications allow you to adopt best-of-breed business suites without having to procure and manage the underlying infrastructure, as you do in your on-premises approach. This allows you to streamline and accelerate the operations of your business.
  • Development Agility: Based on the application technologies that create small, highly scalable components of software that can be evolved, deployed, and managed in an autonomous way. This approach embeds integration capabilities directly within deployed applications, making integration no longer a separate layer but something that is built-in inside each component. Microservices, API-led development, and event-driven architecture concepts play an essential role and expand the people involved in the development process.

So, all of these forms of agility help build an application architecture that is highly agile, able to respond quickly to changes in the environment within which it operates. And you can achieve all of them with TIBCO® Cloud Integration (TCI).

TCI is an Integration Platform-as-a-Service (iPaaS), a cloud-based integration solution that makes it extremely easy for you to connect all your assets together no matter where they’re hosted. It is a SaaS offering that runs on both AWS and Microsoft Azure, so you don’t have to manage the underlying infrastructure to make sure the integration assets that are critical to your business are always available and scale to any level of demand.

From the development perspective, TCI provides you all the tools needed for your business to develop and connect all your digital assets — including your apps, data sources, devices, business suites, processes, and SaaS solutions — using the most modern standards within an immersive experience.

It addresses integration patterns from traditional approaches, such as data replication, to modern approaches including API-led and event-driven architectures. It also supports the latest connectivity standards such as REST, GraphQL, AsyncAPI, and gRPC. And to reduce the time-to-market of your integrations, it also includes a significant number of pre-packaged connectors that simplify connectivity to legacy and modern business suites, data sources, and more, no matter if they reside in your data center or in the cloud. These connectors are easily accessible within a connector marketplace embedded directly within the user experience to be used across the whole platform.

TCI improves team-based development. With TIBCO® Cloud Mesh, accessible via TCI, your integrators can easily share, discover, and reuse digital assets created across the enterprise within TIBCO Cloud — such as APIs and apps — and utilize them very quickly within integrations in a secure way without the need to worry about technical aspects.

This capability promotes the reuse of existing assets and better collaboration among teams. Combined with pre-packed connectors which are directly accessible within TCI, the development time to introduce new integrations is significantly reduced.

Easily access pre-packaged connectors within an embedded connector marketplace

TCI also expands the number of people in your business who can create integrations, with multiple development experiences tailored for different roles, each bringing their own experience and skills. Now not only can integration specialists participate in the integration process, but developers, API product owners, and citizen integrators can as well.

This dramatically increases business agility because your various business units can create integrations in a self-service manner, collaborate to provide solutions even if they span across business units, and reduce their dependencies on overburdened IT teams. This frees up your integration specialists to focus on providing integration best practices for your enterprise and architecting a responsive application architecture.

TCI addresses a number of integration use cases including:

  1. Connecting apps, data, and devices together that reside anywhere (e.g., on-premises, SaaS, private/public cloud)
  2. Designing, orchestrating, and managing APIs & microservices
  3. Rearchitecting inflexible monolithic apps into highly scalable cloud-native apps
  4. Building event-driven apps that process streams of data (e.g., from IoT devices or Apache Kafka)

TCI also provides detailed insights on the performance and execution status of your integrations so you can optimize them as needed or easily detect and solve any potential issues with them. This ensures that business processes that depend on your integrations are minimally disrupted.

Get at-a-glance views of application execution and performance details.
Drill down for expanded insights on application execution histories and performance trends.

By bringing more people into your integration process, empowering them with an immersive view that helps them seamlessly work together on your integrations, and providing capabilities such as TIBCO Cloud Mesh and pre-packaged connectors within a unified connector marketplace that accelerates integration development, your digital business can be connected and reconnected very quickly to respond to changing markets, which greatly increases your business agility.

To experience how easily you can connect all of your digital assets together to boost your business agility, sign up for a free 30-day trial of TIBCO Cloud Integration today.

Sign up for the free trial at https://www.tibco.com/products/cloud-integration

TIBCO Cloud Integration is a service provided within the TIBCO Connected Intelligence Platform, which provides a complete set of capabilities to connect your business.

API Management vs Service Mesh: Differences, Use Cases, and When You Need Both


Service Mesh vs. API Management solutions: are they the same? Are they compatible? Are they rivals?

When we talk about communication in a distributed, cloud-native world, and especially about container-based architectures built on Kubernetes platforms like AKS, EKS, OpenShift, and so on, two technologies generate a lot of confusion because they seem to cover the same capabilities: service meshes and API Management solutions.

It has been a controversial topic where different bold statements have been made: some people think these technologies work together in a complementary way, others believe they are trying to solve the same problems in different ways, and some even think one is just the evolution of the other for the new cloud-native architectures.

API Management Solutions

API Management solutions have been part of our architectures for a long time. They are a crucial component of any architecture created today following API-led principles, and they are an evolution of the pre-existing API gateways, which were themselves an evolution of the pure proxies of the late '90s and early 2000s.

An API Management solution is a critical component of your API strategy because it enables your company to work with an API-led approach, and that is much more than its technical aspect. We usually reduce the API-led approach to the technical side: the API-based development, the microservices we create, and the collaborative spirit we apply today to any piece of software deployed to production.

But it is much more than that. API-led architecture is about creating products from our APIs, providing all the artifacts (technical and non-technical) we need for that conversion. A quick, non-exhaustive list of those artifacts includes:

  • API Documentation Support
  • Package Plans Definition
  • Subscription capabilities
  • Monetization capabilities
  • Self-Service API Discovery
  • Versioning capabilities

Traditionally, an API Management solution also comes with API gateway capabilities embedded to cover the technical aspect as well, providing some other capabilities at a more technical level:

  • Exposition
  • Routing
  • Security
  • Throttling

Service Mesh

Service mesh is more of a buzzword these days, a technology that is trending because it was created to solve some of the challenges inherent to the microservices and container approach and everything under the cloud-native label.

In this case, it comes from the technical side, so it is much more of a bottom-up approach: it exists to solve technical problems and to provide a better experience to developers and system administrators in this new, much more complicated world. So what challenges has this transition created? Let's take a look at them:

Service registry and discovery is one of the critical capabilities we need to cover, because the elastic paradigm of the cloud-native world means services change location from time to time: they are started on new machines when needed and removed when there is not enough load to justify their presence. It is essential to provide an easy way to manage this new reality, something we did not need in the past when our services were bound to a specific machine or set of machines.
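From the application's point of view, this is what that abstraction buys you (a sketch; the "inventory" service name is hypothetical): the code calls a stable logical name, and the platform's registry and discovery layer, or the mesh sidecar, resolves it to whichever healthy instance is running right now.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class InventoryClient {

    private final HttpClient http = HttpClient.newHttpClient();

    public String stockFor(String sku) throws Exception {
        // "inventory" is a logical service name, not a fixed host: the cluster DNS /
        // service registry (or the mesh sidecar) maps it to a healthy instance,
        // so this code never changes when pods are rescheduled or scaled.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://inventory/stock/" + sku))
                .GET()
                .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}
```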

Security is another important topic in any architecture we build today, and the polyglot approach we have incorporated makes it even more challenging, because we need a secure way for our services to communicate that works with any technology we use now or may use in the future. And we are not talking just about pure authentication but also authorization: in service-to-service communication we also need a way to check whether the microservice calling another one is allowed to do so, and to do that in an agile way that does not cancel out the advantages your cloud-native architecture provides by design.

Routing requirements have also changed in these new architectures. If you remember how we usually deploy in traditional architectures, we typically aim for a zero-downtime approach (when possible) but follow a very standard procedure: deploy a new version, validate that it works, and open the traffic to everyone. Today's requirements call for much more complex paradigms, and service mesh technologies support rollout strategies like A/B testing, weight-based routing, and canary deployments.

Rival or Companion?

So, after this quick look at the purpose of these technologies and the problems they try to solve, are they rivals or companions? Should we choose one or the other, or place both of them in our architecture?

As always, the answer to those questions is the same: "It depends!" It depends on what you are trying to do, what your company is trying to achieve, what you are building…

  • An API Management solution is needed as long as you are implementing an API strategy in your organization. Service mesh technology is not trying to fill that gap. It can provide the technical capabilities traditionally handled by the API gateway component, but that is just one of the elements of an API Management solution; the other parts, which provide the management and governance capabilities, are not covered by any service mesh today.
  • A service mesh is needed if you have a cloud-native architecture based on a container platform that relies heavily on HTTP for synchronous communication. It provides so many technical capabilities that make your life more manageable that, as soon as you include it in your architecture, you cannot live without it.
  • A service mesh only provides its capabilities within a container platform. So, if you have a more heterogeneous landscape, as most enterprises do today (a container platform but also SaaS applications, systems still on-prem, and traditional architectures, all providing capabilities you would like to leverage as part of your API products), you will need to include an API Management solution.

So, these technologies can play together in a complete architecture to cover different kinds of requirements, especially when we are talking about complex, heterogeneous architectures that need an API-led approach.

In upcoming articles, we will cover how to integrate both technologies from the technical point of view and how data flows among the different components of the architecture.