How To Develop APIs Efficiently?

Learn some tips about creating your APIs efficiently while dealing with the realities of actual work.

Photo by Edho Pratama on Unsplash

When creating an API to expose a capability or integrate different systems, there are mainly two ways to do it: the contract-first or the contract-last approach. The difference is the methodology you follow to create the API.

In a contract-first approach, the definition of the contract is the starting point. It does not matter which language or technology you are using. This reality has been the same since the beginning of distributed systems, in the times of RMI and CORBA, and continues to be the same in the extraordinary times of gRPC and GraphQL.

You start with the definition of the contract between both parties: the one that exposes the capability and the initial consumer of the information. That implies defining several aspects of it (a minimal contract sketch follows the list):

  • Purpose of the operations.
  • Fields that each operation has.
  • Return information depending on each scenario.
  • Error information reported, and so on.
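
To make those aspects concrete, here is a minimal sketch of how they could land in an OpenAPI document. The path, fields, and responses are hypothetical, just for illustration:

openapi: 3.0.3
info:
  title: Orders API                          # hypothetical contract
  version: 1.0.0
paths:
  /orders/{orderId}:
    get:
      summary: Retrieve a single order       # purpose of the operation
      parameters:
        - name: orderId                      # fields the operation has
          in: path
          required: true
          schema:
            type: string
      responses:
        '200':                               # return information per scenario
          description: The order was found
          content:
            application/json:
              schema:
                type: object
                properties:
                  id:
                    type: string
                  status:
                    type: string
        '404':                               # error information reported
          description: No order exists with that id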

After that, you will start to design the API itself and the implementation to meet the definition agreed upon between the parties.

This approach has several advantages and disadvantages, but today it is the most accepted way of developing APIs. Among its advantages, we can highlight the following ones:

  • Reducing Rework: As you start by defining the contract, you can quickly validate that all parties are OK with it before writing any implementation. That avoids re-coding or rework caused by a misunderstanding or by an adaptation of expectations, and it makes you more efficient.
  • Separation of Duties: It also provides separation of duties for both parties, the provider and the consumers, because as soon as you have the contract, both teams can start working from it. You can even provide a mock so the consumer can test any scenario quickly without waiting for the actual service to be created, as sketched right after this list.
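
And the mock does not need to be anything sophisticated. Here is a minimal sketch in Go of a throwaway stub that returns a canned response for a hypothetical endpoint of the agreed contract; every path and field in it is illustrative:

package main

import (
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	// Throwaway mock of the agreed contract so the consumer team can start
	// integrating before the real service exists.
	http.HandleFunc("/orders/", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		// Always answer with the same canned payload from the contract.
		json.NewEncoder(w).Encode(map[string]any{
			"id":     "42",
			"status": "CONFIRMED",
		})
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}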

But the contract-first approach has some requirements, or assumptions, for success that are not easy to meet in a real-world scenario. This situation is expected: there are a lot of methodologies, tips, or pieces of advice that you learn while studying that are not applicable in real life. To validate that comment, let me ask you a question:

Did you ever create an API whose interface was 100% the same at the end as the one you started with?

The answer to that question in my case is “No, never.” You can think that I am a lousy API designer, and you may be right. I am sure that most people reading this article would define their contracts much better than I do, but that is not the point. When we are in the implementation phase, we usually detect something we didn’t think about in the design phase, or when we do the low-level design, we find concepts we hadn’t contemplated that make another solution better suited for the scenario. Either way, you will impact the API, and that has a cost.

You can try to mitigate that risk by spending more time in the contract definition phase to make sure everything is well considered, or even by creating some prototypes to ensure that the API generated will be the final one. But if you do this, you are just lowering the probability of a change, never removing it, and at the same time, you are reducing the benefits of the approach.

One of the critical points we commented on above was efficiency. If you spend more time on the definition phase, the process becomes less efficient. We also commented on the great benefit of separation of duties: while the interface creation time is extended, the time both teams need to wait before they can work on their parts is extended too.

But switching to the other approach will not provide much benefit either. It can lead to even more expensive work, because you will get no validation from the customer until the API is implemented. And again, another question:

Did you ever share something with your customer for the first time and they didn’t ask for any change?

Again, the answer is the same: “No, never.” And that cost will always be higher than the cost of changing the definition because, as you know, a change is more costly the later you detect it in the development cycle, and the increase is not linear. It is much closer to an exponential rise.

So, what is my recommendation here? Follow the contract-first approach and accept real life. Do your best shot at defining the API, get an agreement between the parties, and if you detect something that can impact the API, notify the parties as soon as possible. In the end, this is nothing more than an iterative approach applied to the API definition as well, and there is nothing wrong with that.

Let’s be honest: there is no silver bullet that will provide a green path in your daily work, and that is the great thing about this job and why we enjoy it so much. In each of our work decisions, as in any other aspect of life, there are so many aspects, situations, and details that always impact the awesome, beautiful methodology you saw in an article, a paper, a class, or a tweet.

Is gRPC As Fast Compared to REST As All the Industry Claims?

Let’s find out whether gRPC, a protocol rising as one of the strong alternatives to traditional REST services, can show all the benefits that people are claiming.

Photo by Omar Flores on Unsplash

If you have been around the tech industry lately, you know that gRPC is becoming one of the most popular protocols for integration among components, mainly microservices, because of its benefits compared with other standard solutions such as REST or SOAP.

There are other alternatives, such as GraphQL, that are also becoming more popular every day, but today’s focus is on gRPC. If you would like to look at GraphQL’s benefits, take a look at the article linked below:

Why Should You Use GraphQL for your APIs?

So, what are the main benefits usually attributed to gRPC, and why are companies such as Netflix or Uber using it?

  • Lightweight messages
  • High performance
  • Streaming pattern support

So it seems a good alternative: a renovated version of the traditional remote procedure call that has been in use since the ’90s. But let’s try it in a real use case to measure the benefits everyone is claiming, especially regarding performance and message weight. I decided to define a very simple request/response scenario between two applications and test it with a normal REST call and with a gRPC call.

Simple Test Scenario Definition

Technology Stack

We are going to use TIBCO Flogo to create the applications, a visual no-code approach that simplifies application generation. If you would like a more detailed look at this technology, please check the post below:

TIBCO Flogo Introduction

So, we are going to create two applications. The first one will be activated on a scheduled basis every 100 ms, and it will call the second application using gRPC. The second one just returns hard-coded data to the calling application, to avoid any third-party system impacting the performance measure.

Regarding the data we are going to transmit, this will be a simple hello-world approach. The first application will send a name to the second application, which will return “Hello, <name>, this is my gRPC (or REST) application” so it can be printed to the console.

REST Approach

Below are the applications for the test case, defined using TIBCO Flogo:

Flogo Applications for the REST case

As you can see, it is simple and intuitive: the first application is activated by a trigger, with a REST Invoke activity and then a Log Message to print what has been received. The second application is even simpler; it just exposes the REST API and returns the hard-coded data.

gRPC Approach

The gRPC approach will be a little more involved because we need to create the protobuf definition for the gRPC client and server. So we will start with a simple definition of the Hello service, as you can see below:

Protobuf definition for the gRPC Test Scenario
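
Since the original definition is shown as an image, here is a minimal protobuf sketch reconstructed from the description above; the service, message, and field names are my assumptions, not necessarily the ones used in the test:

syntax = "proto3";

package hello;

// A single unary RPC: the client sends a name, the server greets back.
service HelloService {
  rpc SayHello (HelloRequest) returns (HelloReply);
}

message HelloRequest {
  string name = 1;  // name used for the greeting
}

message HelloReply {
  string message = 1;  // e.g., "Hello, <name>, this is my gRPC application"
}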

And based on that, we can generate both the client and the server applications for this simple test:

gRPC Apps in TIBCO Flogo

As you can see, the applications are very similar to the REST ones, just swapping one protocol for the other. That is one of the awesome things about TIBCO Flogo: we can have a simple implementation without knowing the details of the newest protocols while still getting all the advantages they provide.

Test Results

After 100 executions of the REST service, these are the metrics we were able to get using the Prometheus exporter that the tool provides:

Prometheus metrics for the REST Scenario Execution

So we have around 4 ms for the client flow and 0.16 ms for the REST service itself; those are already low numbers. Do you really think a gRPC version could improve on them? Let’s see. Here are the same metrics for 100 invocations of the gRPC flows:

Prometheus metrics for the gRPC Scenario Execution

As you can see, the improvement is awesome even for a simple service running on localhost. The gRPC service clocked in at 0.035 ms vs. the 0.159 ms of the REST version, an improvement of 77.98% over the REST API, which is just incredible. But what about the client? It went from 4.066 ms to 0.89 ms, which means another 78.1% improvement.

Graphical Representation of Both Scenario Executions

So the rationale goes: if this can be achieved with a simple service where the data exchanged is pretty much nothing, what can it do when the payload is big? The possibilities are just unimaginable.

Summary

We tested the good things we have heard online about gRPC, the method that most cutting-edge technologies are using today, and we have been impressed by just a simple scenario comparing it with the performance of a REST interface. For sure, gRPC has its cons like any other option, but in terms of performance and message optimization, the data speaks for itself: it is just amazing. Stay tuned for new tests covering more gRPC benefits, and also some of its cons, to see if it could be a great option for your next development!

API Terminology: We Are Using the Term API Wrong, and It Is Driving Me Crazy

When marketing steals a technical word, it leads to madness and a complete change of its meaning.

Photo by Tengyart on Unsplash

API is the next one on the list. It is always the same pattern with technical terms when they go beyond the really techy forums and reach a more “mainstream” level in the industry. As soon as this happens, the term starts to lose its meaning and becomes a wildcard word that can mean very different things to very different people. If you don’t believe me, come with me through this set of examples.

You can argue that terms need to evolve and that the same word can mean different things as the industry evolves, and that is true. For example, the term “package,” which in the past referred to a way of packaging software to share it, usually through mail or an FTP server as a TAR file, was redefined with the emergence of package managers in the ’90s, and after that with artifact management to handle dependencies using approaches such as Maven, npm, and so on.

But I am not talking about those examples. I am talking about when a term is used a lot because it is fancy and conveys evolution or modernization, so people try to use it as much as possible, even to mean different things. One of these terms is API.

API stands for Application Programming Interface, and as its name states, it is an interface. Since the beginning of computing, it has been used to reference the contract of a specific application program and how you need to interact with it. The term was mainly used by libraries to define their contract for other applications that needed their capabilities.

So if we would like to show this in graphical form, the API refers to the contract between the application that needs a capability and the library that provides it.

With the rise of REST services and mobile apps, the term API expanded beyond its normal usage and became a common word in today’s world, because all devs need some API to do their work, from common capabilities such as authentication to the concrete capabilities they need to perform their tasks.

The explosion of services exposing their own APIs required a way to provide central management of those interfaces, especially once we started to publish some of these capabilities to the outside world. We needed to secure them, identify who was using them and at what level, and give devs a way to find the documentation needed to use the services. And because of that, we saw the rise of API Management solutions.

Then microservices came to revolutionize how applications are built, which means we now have more services, each providing its own API, to the point that we pretty much have one service per capability, and therefore one API per capability.

And the usage of “API” became so popular that some people started to use the term to refer both to the interface and to the whole service implementing that interface, which has led, and is still leading, to a lot of confusion. Because of that, when we talk now about API development, we can be talking about very different things:

  • We can talk about the definition and model of the interface itself and its management.
  • We can talk about a service implementation with an API exposed to be used and managed appropriately.
  • We can even talk about a service that uses several APIs as part of its capability implementation.

And the main problem when we use the same term to refer to so many different things is that the word loses all its meaning, complicating our understanding in any conversation. That leads to many problems we could avoid by just using the proper words and trying to keep the buzz and the marketing a little bit out of technical conversations.

Event-Driven Architecture: Enhancing the Responsiveness of Your Enterprise To Succeed

Event-Driven architecture provides more agility to meet the changes of a more demanding customer ecosystem.

Photo by Kristopher Roller on Unsplash

The market is shifting at a speed that requires being ready to change very quickly. Customers are becoming more and more demanding, and we need to be able to deliver what they expect. To do so, we need an architecture responsive enough to adapt at the required pace.

Event-Driven Architectures (usually just referred to as EDA) are architectures where events are the crucial part and where we design components ready to handle those events in the most efficient way: an architecture ready to react to what’s happening around us instead of just setting a fixed path for our customers.

This approach provides a lot of benefits to enterprises because of its characteristics, but at the same time, it requires a different mindset and a different set of components in place.

What is an Event?

Let’s start at the beginning. An event is anything that can happen and that is important to you. If you think about a scenario where a user is just navigating through an e-commerce website, everything he does is an event. If he lands on the e-commerce site because he followed a referral link, that is an event.

Events not only happen in virtual life but in real life too. A person walking into a hotel lobby is an event; going to the reception desk to check in is another; walking to his room is another. Everything is an event.

Events in isolation provide a small piece of information but together they can provide a lot of valuable information about the customers, their preferences, their expectations, and also their needs. And all of that will help us to provide the most customized experience to each one of our customers.

EDA vs Traditional Architectures

Traditional architectures work in pull mode: a consumer sends a request to a service, that service calls other components to execute its logic, gathers the answers, and replies back. Everything is pre-defined.

Events work in a different way because they work in push mode. Events are sent, and that’s it; an event could trigger one action, many actions, or none. You have a series of components waiting and listening until the event, or the sequence of events, they need appears in front of them. When it does, each component triggers its logic and, as part of that execution, generates one or more events that can be consumed again.

Pull vs Push mode for Communication.

To build an Event-Driven Architecture, the first thing we need is event-driven components: software components that are activated by events and that also generate events as part of their processing logic. At the same time, this sequence of events becomes the way to complete complex flows in a cooperative mode, without the need for a mastermind component that is aware of the whole flow from end to end.

You just have components that know that when this happens, they need to do their part of the job, and other components will listen to their output and be activated in turn.

This approach is called choreography because it works the same way as a ballet company, where each dancer can be doing different moves, but each of them knows exactly what to do, and all together, in sync, they generate the whole piece.
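
To make the idea concrete, here is a minimal Go sketch of two choreographed components. In-memory channels stand in for the event bus, and all the event and component names are illustrative:

package main

import "fmt"

// Event is a minimal event envelope flowing through the bus.
type Event struct{ Type, Data string }

func main() {
	// Two "topics" on a toy in-memory bus; a real EDA would use a broker.
	orders := make(chan Event, 1)
	payments := make(chan Event, 1)
	done := make(chan struct{})

	// Payment component: it only knows that it reacts to order-placed
	// events and emits payment-charged events. It knows nothing about shipping.
	go func() {
		ev := <-orders
		fmt.Println("payment: charging for", ev.Data)
		payments <- Event{"payment-charged", ev.Data}
	}()

	// Shipping component: it only knows that it reacts to payment-charged events.
	go func() {
		ev := <-payments
		fmt.Println("shipping: dispatching", ev.Data)
		close(done)
	}()

	// An event enters the system and the choreography unfolds by itself,
	// with no mastermind component orchestrating the flow end to end.
	orders <- Event{"order-placed", "order-42"}
	<-done
}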

Layers of an Event-Driven Architecture

Now that we have software components activated by events, we need some structure around them in our architecture to cover all the needs of event management, so we need to handle the following layers:

Layers of the Event-Driven Architecture
  • Event Ingestion: We need a series of components that help us receive and introduce events into our systems. As we explained, there are tons of ways to send events, so it is important to offer flexibility and options in that process. Adapters and APIs are crucial here to make sure all the events can be gathered and become part of the system.
  • Event Distribution: We need an Event Bus that acts as our event ocean, where all the events flow so they can activate all the components listening for them.
  • Event Processing: We need a series of components that listen to all the events being sent and make them meaningful. These components act like security guards: they filter out the events that are not important, enrich the events they receive with context information from other systems or data sources, and transform the format of some events to make them easy to understand for all the components waiting for them.
  • Event Action: We need a series of components listening to those events, ready to react to what is seen on the Event Bus. As soon as they detect the events they expect, they start executing their logic and send their output back to the bus to be used by somebody else.

Summary

Event-Driven Architecture can provide a much more agile and flexible ecosystem where companies can address the current challenge of providing a compelling experience to users and customers. At the same time, it gives technical teams more agility, allowing them to create components that work in collaboration but remain loosely coupled, making both the components and the teams more autonomous.

Why GraphQL? 3 Clear Benefits Explained

3 benefits of using GraphQL in your API that you should take into consideration.

Photo by Mika Baumeister on Unsplash

We all know that APIs are the new standard when we develop any piece of software. All the latest paradigms are based on a distributed set of components created with a collaborative approach in mind: they need to work together to provide more value to the whole ecosystem.

On the technical side, exposing an API has become a synonym for using REST/JSON as the new standard. But this is not the only option, even in the synchronous request/reply world, and we are starting to see a shift in this by-default selection of REST as the only choice in this area.

GraphQL has emerged as an alternative that works well since Facebook introduced it in 2015. During these five years of existence, its adoption has been growing outside Facebook’s walls, but it is still far from general-public use, as the following Google Trends graph shows:

Google Trends graph showing interest in REST vs. GraphQL in the last five years

But I think this is a great moment to look again at the benefits that GraphQL can provide to the APIs in your ecosystem. You can start the new year by introducing a technology that can provide you and your enterprise with clear benefits. So, let’s take a look at them.

1.- More flexible style to meet different client profile needs.

I want to start this point with a small jump to the past, to when REST was introduced. REST was not always the standard we used to create our APIs, or Web Services as we called them at that point. A W3C standard, SOAP, was the leader there, and REST replaced it by focusing on several points.

The much lighter weight of the protocol compared with SOAP made a difference, especially when mobile devices started to be part of the ecosystem.

That is the situation today, and GraphQL is an additional step further in that direction of being more flexible. GraphQL allows each client to decide which part of the data it would like to use, so the same interface serves different applications. Each of them still gets an optimized approach because each can decide what it wants to obtain every time, as the sample queries below show.
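
For instance, two clients can hit the same hypothetical product API and ask for different shapes of the same data; the schema and field names here are illustrative:

# A mobile client asks only for what its small card view needs...
query MobileProductCard {
  product(id: "42") {
    name
    price
  }
}

# ...while a web client asks for much more, against the same interface.
query WebProductPage {
  product(id: "42") {
    name
    price
    description
    reviews {
      rating
      comment
    }
  }
}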

2.- More loosely coupled approach with the service provider

Another important topic is the dependency between the consumer of the API and the provider. We all know that paradigms like microservices focus on that concern: we aim to get as much independence as possible among our components.

It is true that REST does not create a strong link between components. Still, the interface is fixed, which means that each time we modify it by adding a new field or changing one, we can affect consumers even if they do not need that field at all.

GraphQL, through its ability to let clients select the fields they would like to obtain, makes the evolution of the API itself much easier and at the same time provides much more independence for the components: only changes that clearly impact the data a client needs can affect it, while the rest is completely transparent to it.

3.- More structured and defined specification

One of the aspects that marked the rise of REST as a widely used protocol was the lack of standards to structure and define its behavior. We had several attempts, using RAML or even just “samples as specification”, then Swagger, and finally the OpenAPI Specification. But that long “unstructured” period means a REST API can be built in very different ways.

Each developer or service provider can build a REST API with a different approach and philosophy, which generates noise and makes standardization difficult. GraphQL is based on a GraphQL schema that defines the types managed by the API and the operations you can perform on them, in two main groups: queries and mutations. That makes all GraphQL APIs, no matter who develops them, follow the same philosophy, as it is already included in the core of the specification itself, as in the small sketch below.
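
Here is a minimal, hypothetical example of that structure, with the types and the two operation groups in one place:

# Types managed by the API.
type Product {
  id: ID!
  name: String!
  price: Float!
}

# Read operations.
type Query {
  product(id: ID!): Product
}

# Write operations.
type Mutation {
  updatePrice(id: ID!, price: Float!): Product
}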

Summary

After reading this article, you are probably saying: so that means I should remove all my REST APIs and start building everything in GraphQL. And my answer to that is… NO!

The goal of this article is for you to be aware of the benefits this different way of defining APIs provides, so you can add it to your tool belt. Next time you create an API, think about the topics described here and reach a conclusion: either “I think GraphQL is the better pick for this specific situation” or, the other way around, “I am not going to get any benefit in this specific API, so I would rather use REST.”

The idea is that you now know how to apply this to your specific case and choose based on that, because nobody is better placed than you to decide what is best for your use case.

Increased agility through modern digital connectivity

Find out how TIBCO Cloud Integration can help you increase business agility by connecting all your apps, devices, and data, no matter where they are hosted.

We live in a world where the number of digital assets that need to be integrated, the types of assets, and where they are hosted are all exploding. We’ve transitioned away from a simple enterprise landscape where all of our systems were hosted in a single datacenter, and the number of systems was small. If you still remember those days, you probably could name all the systems that you maintained. Could you imagine doing that today?

This has changed completely. Businesses today are operating more and more on apps and data rather than on manual, documented processes, and that has increased the demands to have them connected together to support the operations of the business. How does a traditional IT team keep up with all connectivity requests coming from all areas of the business to ensure these assets are fully integrated and working seamlessly?

Additionally, the business environment has changed completely. Today everything is hyper-accelerated. You can no longer wait six months to get your new marketing promotions online, or to introduce new digital services.

This is because markets change constantly: at times they grow, and at other times they contract. This forces enterprises to change how they do business rapidly.

So, if we need to summarize everything we need from an application architecture to make sure it can help us meet our business requirements, that word is “agility”. And architectural agility creates business agility.

Different IT paradigms have been adopted to help increase architectural agility from different perspectives that provide a quick way to adapt, connect, and offer new capabilities to customers:

  • Infrastructure Agility: Based on cloud adoption, cloud providers offer an agile way to immediately tap into the infrastructure capacity required, allowing for rapid innovation by quickly creating new environments and deploying new services on-demand.
  • Operation & Management Agility: SaaS-based applications allow you to adopt best-of-breed business suites without having to procure and manage the underlying infrastructure, as you do in your on-premises approach. This allows you to streamline and accelerate the operations of your business.
  • Development Agility: Based on the application technologies that create small, highly scalable components of software that can be evolved, deployed, and managed in an autonomous way. This approach embeds integration capabilities directly within deployed applications, making integration no longer a separate layer but something that is built-in inside each component. Microservices, API-led development, and event-driven architecture concepts play an essential role and expand the people involved in the development process.

So, all of these forms of agility help build an application architecture that is highly agile: able to respond quickly to changes in the environment within which it operates. And you can achieve all of them with TIBCO® Cloud Integration (TCI).

TCI is an Integration Platform-as-a-Service (iPaaS), a cloud-based integration solution that makes it extremely easy for you to connect all your assets together no matter where they’re hosted. It is a SaaS offering that runs on both AWS and Microsoft Azure, so you don’t have to manage the underlying infrastructure to make sure the integration assets that are critical to your business are always available and scale to any level of demand.

From the development perspective, TCI provides you all the tools needed for your business to develop and connect all your digital assets — including your apps, data sources, devices, business suites, processes, and SaaS solutions — using the most modern standards within an immersive experience.

Easily access all of your applications within an immersive user experience.

It addresses integration patterns from traditional approaches, such as data replication, to modern approaches including API-led and event-driven architectures. It also supports the latest connectivity standards such as REST, GraphQL, AsyncAPI, and gRPC. And to reduce the time-to-market of your integrations, it includes a significant number of pre-packaged connectors that simplify connectivity to legacy and modern business suites, data sources, and more, whether they reside in your data center or in the cloud. These connectors are easily accessible within a connector marketplace embedded directly in the user experience and usable across the whole platform.

TCI improves team-based development. With TIBCO® Cloud Mesh, accessible via TCI, your integrators can easily share, discover, and reuse digital assets created across the enterprise within TIBCO Cloud — such as APIs and apps — and utilize them very quickly within integrations in a secure way without the need to worry about technical aspects.

This capability promotes the reuse of existing assets and better collaboration among teams. Combined with the pre-packaged connectors directly accessible within TCI, this significantly reduces the development time to introduce new integrations.

Easily access pre-packaged connectors within an embedded connector marketplace

TCI also expands the number of people in your business who can create integrations, with multiple development experiences tailored to different roles, each bringing their own experience and skills. Now not only integration specialists can participate in the integration process, but developers, API product owners, and citizen integrators as well.

This dramatically increases business agility because your various business units can create integrations in a self-service manner, collaborate to provide solutions even if they span across business units, and reduce their dependencies on overburdened IT teams. This frees up your integration specialists to focus on providing integration best practices for your enterprise and architecting a responsive application architecture.

TCI addresses a number of integration use cases including:

  1. Connecting apps, data, and devices together that reside anywhere (e.g., on-premises, SaaS, private/public cloud)
  2. Designing, orchestrating, and managing APIs & microservices
  3. Rearchitecting inflexible monolith apps into highly scalable cloud-native apps.
  4. Building event-driven apps that process streams of data (e.g., from IoT devices or Apache Kafka)

TCI also provides detailed insights on the performance and execution status of your integrations so you can optimize them as needed or easily detect and solve any potential issues with them. This ensures that business processes that depend on your integrations are minimally disrupted.

Get at-a-glance views of application execution and performance details.
Drill down for expanded insights on application execution histories and performance trends.

By bringing more people into your integration process, empowering them with an immersive view that helps them work together seamlessly on your integrations, and providing capabilities such as TIBCO Cloud Mesh and pre-packaged connectors within a unified connector marketplace that accelerates integration development, your digital business can be connected and reconnected very quickly to respond to changing markets, which greatly increases your business agility.

To experience how easily you can connect all of your digital assets together to boost your business agility, sign up for a free 30-day trial of TIBCO Cloud Integration today.

Sign up for the free trial at https://www.tibco.com/products/cloud-integration

TIBCO Cloud Integration is a service provided within the TIBCO Connected Intelligence Platform, which provides a complete set of capabilities to connect your business.

Technology wars: API Management Solution vs Service Mesh

Service Mesh vs. API Management Solution: are they the same? Are they compatible? Are they rivals?

Photo by Alvaro Reyes on Unsplash

When we talk about communication in a distributed, cloud-native world, and especially about container-based architectures built on Kubernetes platforms like AKS, EKS, OpenShift, and so on, two technologies generate a lot of confusion because they seem to cover the same capabilities: Service Mesh and API Management solutions.

It has been a controversial topic where different bold statements have been made: people who think these technologies work together in a complementary mode, others who believe they are trying to solve the same problems in different ways, and even people who think one is just the evolution of the other for the new cloud-native architecture.

API Management Solutions

API Management solutions have been part of our architectures for a long time. They are a crucial component of any architecture created today following the principles of API-led architecture, and they are an evolution of the pre-existing API Gateways, which we included as an evolution of the pure proxies of the late ’90s and early 2000s.

An API Management solution is a critical component of your API strategy because it enables your company to work with an API-led approach. And that is much more than its technical aspect. We usually reduce the API-led approach to the technical side: the API-based development, the microservices we are creating, and the collaborative spirit we apply today to any piece of software deployed to production.

But it is much more than that. API-led architecture is about creating products from our APIs, providing all the artifacts (technical and non-technical) needed for that conversion. A quick, non-exhaustive list of those artifacts includes the following:

  • API Documentation Support
  • Package Plans Definition
  • Subscription capabilities
  • Monetization capabilities
  • Self-Service API Discovery
  • Versioning capabilities

Traditionally, the API Management solution also comes with API Gateway capabilities embedded to cover the technical aspect as well, providing some other capabilities at a more technical level:

  • Exposure
  • Routing
  • Security
  • Throttling

Service Mesh

Service Mesh is more of a buzzword these days, a technology that is trending because it was created to solve some of the challenges inherent to the microservice and container approach and everything under the cloud-native label.

In this case, it comes from the technical side, so it is much more of a bottom-up approach: it exists to solve a technical problem and tries to provide a better experience to developers and system administrators in this new, much more complicated world. And what are the challenges created in this transition? Let’s take a look at them:

Service registry and discovery is one of the critical capabilities we need to cover, because the elastic paradigm of the cloud-native world means services switch their location from time to time, being started on new machines when needed and removed when there is not enough load to require their presence. So it is essential to provide a way to easily manage that new reality, which we did not need in the past when our services were bound to a specific machine or set of devices.

Security is another important topic in any architecture we create today, and the polyglot approach we have incorporated into our architectures makes it challenging: we need a secure way for our services to communicate that works with any technology we are using now and any we may use in the future. And we are not talking just about pure authentication but also authorization, because in service-to-service communication we also need a way to check whether the microservice calling another one is allowed to do so, and to do that in an agile way so as not to lose the advantages that your cloud-native architecture provides by design.

Routing requirements have also changed in these new architectures. If you remember how we used to deploy in traditional architectures, we typically aimed for a zero-downtime approach (when possible) but followed a very standard procedure: deploy the new version, validate that it works, and open the traffic to everyone. Today’s requirements call for much more complex paradigms, and Service Mesh technologies support rollout strategies like A/B testing, weight-based routing, and canary deployments, as in the sketch below.
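
As an illustration, this is a minimal sketch of weight-based routing expressed with Istio, one popular service mesh; it assumes a service named reviews whose v1 and v2 subsets are defined in a companion DestinationRule:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90    # 90% of the traffic keeps hitting the stable version
        - destination:
            host: reviews
            subset: v2
          weight: 10    # 10% canary traffic goes to the new version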

Rival or Companion?

So, after this quick view of the purpose of these technologies and the problems they try to solve, are they rivals or companions? Should we choose one or the other, or place both of them in our architecture?

As always, the answer to those questions is the same: “It depends!” It depends on what you’re trying to do, what your company is trying to achieve, and what you’re building.

  • An API Management solution is needed as long as you’re implementing an API strategy in your organization. Service Mesh technology is not trying to fill that gap. It can provide technical capabilities covering what has traditionally been done by the API Gateway component, but that is just one of the elements of an API Management solution. The other parts, which provide the management and governance capabilities, are not covered by any Service Mesh today.
  • A Service Mesh is needed if you have a cloud-native architecture based on a container platform that relies heavily on HTTP for synchronous communication. It provides so many technical capabilities that make your life more manageable that, as soon as you include it in your architecture, you cannot live without it.
  • A Service Mesh only provides its capabilities within a container platform. So, if you have a more heterogeneous landscape, as most enterprises do today (a container platform but also other platforms like SaaS applications, systems still on-prem, and traditional architectures, all of them providing capabilities you’d like to leverage as part of your API products), you will need to include an API Management solution.

So, these technologies can play together in a complete architecture to cover different kinds of requirements, especially when we’re talking about complex heterogeneous architectures that need to include an API-led approach.

In upcoming articles, we will cover how to integrate both technologies from the technical perspective and how the data flows among the different components of the architecture.

Welcome to the AsyncAPI Revolution!

Photo by Tarik Haiga on Unsplash

We’re living in an age where technologies are switching and standards are changing all the time. You skip reading Medium/StackOverflow/Reddit for a bit, and you find there are at least five new industry standards taking the place of the existing ones you knew (the ones that were released barely a year ago 🙂).

Do you still remember the old days when SOAP was the unbeatable format? How much time did we spend building SOAP services in our enterprises? REST replaced it as the new standard, but just a few years later we’re back in a new battle, and that is just for synchronous communication: gRPC and GraphQL are here to conquer everything again. It is crazy, huh?

But the situation is similar for asynchronous communication. Asynchronous communication has been here for a long time, even long before the terms Event-Driven Architecture or Streaming were “cool” terms or things to be aware of.

We’ve been using these patterns in our companies for a long time. Big enterprises have used this model in their enterprise integrations for ages. Pub/sub-based protocols and technologies like TIBCO Rendezvous have been in use since the late ’90s, and then we incorporated more standard approaches like JMS, using different kinds of servers to handle all this event-based communication.

But now, with the cloud-native revolution and the need for distributed computing, more agility, and more scalability, centralized solutions are not valid anymore, and we’ve seen an explosion in the number of options to communicate based on these patterns.

You could think this is the same situation we discussed at the beginning of this article regarding REST’s predominance and new cutting-edge technologies trying to replace it, but this is something quite different, because experience has told us that a single size doesn’t fit all.

You cannot find a single technology or component that can cover all the communication needs of all your use cases. You can name any technology or protocol that you want: Kafka, Pulsar, JMS, MQTT, AMQP, Thrift, FTL, and so on.

Think about each of them, and you will probably find use cases where one technology plays better than the others, so it makes no sense to try to find a single-technology solution to cover all the needs. What is needed is a polyglot approach, where different technologies play well together and you use the one that works best for your use case (the right tool for the right job), just as we do with the different technologies we deploy in our clusters.

We’re probably not going to use the same technology for a machine-learning-based microservice as for a streaming application, right? The same principle applies here.

But the problem when we try to make different technologies play together is standardization. If we think about REST, gRPC, or GraphQL, even though they are different, they share some common ground: they rely on the same base HTTP protocol as a standard, so it is easy to support all of them in the same architecture.

But this is not true for asynchronous communication technologies. And I’d like to focus on standardization and specification today, because that’s what the AsyncAPI Initiative is trying to solve. To define what AsyncAPI is, I’d like to use their own words from their official website:

AsyncAPI is an open source initiative that seeks to improve the current state of Event-Driven Architectures (EDA). Our long-term goal is to make working with EDA’s as easy as it is to work with REST APIs. That goes from documentation to code generation, from discovery to event management. Most of the processes you apply to your REST APIs nowadays would be applicable to your event-driven/asynchronous APIs too.

So, their goal is to provide a set of tools for a better world in all those EDA architectures that all companies have, or are starting to have, at this moment, and everything pivots around one thing: the AsyncAPI specification.

Similar to the OpenAPI specification, it allows us to define a common interface for our EDA interfaces, and the most important part is that it is multi-protocol: the same specification can be used for your MQTT-based API or your Kafka API. Let’s take a look at what an AsyncAPI specification looks like:

AsyncAPI 2.0 Definition (from https://www.asyncapi.com/docs/getting-started/coming-from-openapi/)

As you can see, it is very similar to OpenAPI 3.0, and they did that on purpose, to ease the transition between OpenAPI 3.0 and AsyncAPI and to try to join both worlds together. It is all just about APIs, no matter whether they’re synchronous or asynchronous, with the same ecosystem benefits carried from one to the other.

Show me the code!!

But let’s stop talking and start coding. To do that, I’d like to use one of the tools that, in my view, has the greatest support for AsyncAPI, and that’s Project Flogo.

You probably remember the different posts I’ve written about Project Flogo and TIBCO Flogo Enterprise as a great technology for your microservices development (low-code/all-code approach, Golang-based, with a lot of connectors and open-source extensions as well).

But today we’re going to use it to create our first AsyncAPI-compliant microservice, relying on the set of extensions it provides to support the AsyncAPI initiative.

So the first thing we’re going to do is create our AsyncAPI definition. To keep it simple, we’re going to use the sample available on the AsyncAPI site with one small change: we’re going to switch from the AMQP protocol to the Kafka protocol, because Kafka is cool these days, isn’t it? 😉

asyncapi: '2.0.0'
info:
  title: Hello world application
  version: '0.1.0'
servers:
  production:
    url: broker.mycompany.com
    protocol: kafka
    description: This is "My Company" broker.
    security:
      - user-password: []
channels:
  hello:
    publish:
      message:
        $ref: '#/components/messages/hello-msg'
  goodbye:
    publish:
      message:
        $ref: '#/components/messages/goodbye-msg'
components:
  messages:
    hello-msg:
      payload:
        type: object
        properties:
          name:
            type: string
          sentAt:
            $ref: '#/components/schemas/sent-at'
    goodbye-msg:
      payload:
        type: object
        properties:
          sentAt:
            $ref: '#/components/schemas/sent-at'
  schemas:
    sent-at:
      type: string
      description: The date and time a message was sent.
      format: date-time
  securitySchemes:
    user-password:
      type: userPassword

As you can see, it is something simple: two operations, “hello” and “goodbye”, with easy payloads:

  • name: Name that we’re going to use for the greeting.
  • sentAt: The date and time a message was sent.

So the first thing we’re going to do is create a Flogo application that complies with that AsyncAPI specification:

git clone https://github.com/project-flogo/asyncapi.git
cd asyncapi/
go install

Now we have the generator installed, so we only need to execute it, providing our YAML file as input, with the following command:

asyncapi -input helloworld.yml -type flogodescriptor

And it will create a HelloWorld application for us that we need to tweak a little bit. To get you up and running quickly, I’m sharing the code in my GitHub repository so you can borrow it from there (but I really encourage you to take the time to look at the code and see the beauty of Flogo app development 🙂):

https://github.com/project-flogo/asyncapi

Now that we have the app, we have a simple dummy application that can receive messages complying with the specification and, in our case, just log the payload. This can be our starting point to build new event-driven microservices compliant with AsyncAPI.

So, let’s try it. To do so, we need a few things. First of all, we need a Kafka server running, and to get one quickly we’re going to leverage the following docker-compose.yml file:

version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper:3.4.6
    expose:
    - "2181"
  kafka:
    image: wurstmeister/kafka:2.11-2.0.0
    depends_on:
    - zookeeper
    ports:
    - "9092:9092"
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181

And to run it, we just need to fire the following command from the same folder where we saved this file as docker-compose.yml:

docker-compose up -d
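
If you want to check that the broker is up before continuing, the wurstmeister image ships the standard Kafka CLI scripts on its PATH, so an optional sanity check could look like this (listing the existing topics through ZooKeeper):

docker-compose exec kafka kafka-topics.sh --list --zookeeper zookeeper:2181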

After doing that, we just need a sample application to produce messages, and what better than using Flogo again to create it? This time, let’s use the graphical viewer to build it right away:

Simple Flogo application sending an AsyncAPI-compliant message each minute using Kafka as the protocol

So we just need to configure the Publish Kafka activity with the broker (localhost:9092), the topic (hello), and the message:

{
  "name": "hello world",
  "sentAt": "2020-04-24T00:00:00"
}

And that’s it! Let’s run it!

First, we start the AsyncAPI Flogo microservice:

AsyncAPI Flogo microservice started!

And then we launch the tester, which is going to send the same message each minute, as you can see in the picture below:

Sample tester sending sample messages

And each time we send that message, it is received by our AsyncAPI Flogo microservice:

So, I hope this first introduction to the AsyncAPI world has been of interest to you. Don’t forget to take a look at more resources on their website: https://www.asyncapi.com