Why IT Investments Fail: The Real Reason Companies Don’t Achieve Expected Benefits

Achieving the benefits of an IT investment takes much more than buying or deploying a technology. Learn how you can prepare for that.

If there is a single truth this year has made clear to most businesses, it is that we live in a digital world. It’s not the future anymore.

To be ready for this present, companies across all verticals have invested heavily in technology. They’ve heard about the benefits the latest technologies have delivered to some companies, and they want the same for themselves.

But after putting the same principles and tools in place, they’re not seeing the benefits. Sure, there has been some improvement, but nothing compared to what they were expecting. Why is this happening? Why are some of these companies unable to unlock those achievements?

A tool is a tool, nothing more.

Any technology principle or tool, whether we’re talking about a new paradigm like containerization or serverless, or a platform like API Management or Event-Driven Architecture, is just a tool in the hands of people.

And, in the end, what matters most is the way those humans work and how they use the tools at hand to get the best out of them. Companies have had computers for maybe 30 years now. Do you remember how those computers were used at the beginning? Do you think people were using them at their optimum level back then? The same thing applies here.

You shouldn’t expect that just because you’ve installed a tool, deployed a new technology, or bought a new SaaS application, the life of your company will change from that exact moment and you’ll unlock all the benefits that come with it. It’s the same story as an agenda: it won’t make you more productive just because you own one.

Yes, it is a requirement, but it is far from the only step you need to take to make that investment a success.

What matters is your thinking

A new paradigm in IT requires a different way of thinking, and a degree of trust in that paradigm, to unlock its benefits.

If you don’t, you will be the one stopping progress and blocking the benefits you could get. And that’s always hard at the beginning. In the early days, if we had a formula in Excel and the same one on paper, we believed the one on the computer was wrong.

Today it’s the other way around. We know the computer is doing it right, so we look for our own mistake when the results don’t match.

Some IT managers now have the same feeling about newer techniques, and they try to manage and control them using the same principles they’ve always applied. And let’s be honest: that’s normal and that’s human, because we all reach for the patterns we know, the ones that proved successful in the past, when we face something similar.

But, let’s be honest: do you think Netflix or Uber succeeded by using the same patterns and rules that had been used in the past? Of course not.

But maybe you’re thinking that’s not a fair comparison, because your company or your vertical is not at stake or in the middle of a revolution; you just need small changes to get those benefits. You don’t need to do everything from scratch. And that’s true.

In the end, what matters is whether you’re ready to take the leap of faith into the void: to walk into the jungle with only your gut and the knowledge you’ve gathered so far to guide you along the path.

Be a researcher

In reality, the jump into the void is needed, but it is more about the way you think. It means being ready to open your mind and leave behind some of the preconceptions you may have. In the end, this is closer to being Marie Curie than Indiana Jones.

Researchers and scientists always need to be open to different ways of doing things. They have their foundations, their experience, and the knowledge of everything done in the past, but to go a step further they need to think outside the box and stay open to things that weren’t true several years ago, or that weren’t considered the right way to do it until now. Because they’re going further than anyone has gone before.

IT is similar. You’re not stepping into the complete unknown, but inside your company you may be the one who needs to guide everyone else along that route, staying open to the idea that the old rules may not apply to this new revolution and being ready to leave some old practices behind in order to unlock bigger benefits.

Summary

In the end, when you adopt a new technology, you need to think about everything that technology requires in order to make it successful and to get the most benefit from it.

Think about others who have walked that path and learn from what they got right and what they got wrong, so you can be prepared and also be realistic. If you’re not going to make the change the technology requires in your organization, the investment makes no sense. You need to work first on preparing your organization for the change; that is the moment to step into the jungle and collect all the benefits waiting for you.

4 Reasons Low-Code Applications Boost Developer Productivity and Business Agility

How to truly achieve agility in your organization by focusing on what matters to your business and multiplying the productivity of your development team

Fashion is cyclical, and the same happens in software engineering. We live in a world where each innovation seems similar to one from the past that we moved on from some time ago. That’s because what we’re doing is refining, over and over, solutions to the same problems.

For the last few years we’ve lived through a “developer is the new black” rise, where anything related to writing code is considered excellent. Devs are now seen as cool characters like the ones from Silicon Valley (the HBO show) instead of the ones you make fun of, as in The IT Crowd.

But now it seems we’re seeing a new rise of what are called Low-Code (or No-Code) Applications.

A Low-Code Application platform is a piece of software that helps you build your applications or services without writing code in any programming language; instead, you drag and drop boxes that represent what you’d like to do rather than writing it yourself.

That approach provides advantages that are now back on the table. Let’s take a look at them in more detail.

1.- Provides more agility

That’s clear. No matter how high-level your programming language is, no matter how many archetypes you have to generate your project skeleton, or which frameworks and libraries you use, typing is always slower than dragging some boxes onto a blank canvas and connecting them with links.

And I’m a terminal guy and a vi power user, so I appreciate the power of the keyboard. But let’s be honest and ask one question:

How many of the keywords you type in your code provide value to the business, and how many are needed only for technical reasons?

Not only things like exception handling, auditing, logging, service discovery, and configuration management, but also loop structures, function signatures, variable definitions, class definitions, and so on…

With low-code, you can truly focus on the business value you’re trying to deliver instead of spending time managing technical plumbing.

2.- Easier to maintain

One month after going to production, only the developer and God know what the code does. After a year, only God knows…

Coding is awesome, but it is also complex to maintain, especially in enterprises where developers shift from one project to another and from one department to another, and new people are onboarded all the time to maintain and evolve the code.

And those who have been in the industry for a while know the situations where people say: “I prefer not to touch that because we don’t know what it’s doing,” or “We cannot migrate this mainframe application because we don’t know if we’ll be able to capture all the functionality it provides.”

And that’s bad for several reasons. First, it is costly and complex to maintain; second, it prevents you from evolving at the pace you want.

3.- Safer and Cost-Effective

Don’t get me wrong: hand-written code can be as safe as any low-code application. That’s clear because, in the end, any low-code app ends up generating the same kind of binary or bytecode to be executed.

The problem is that this depends on the skills of the programmer. Even though programming and developers are cool right now, you need a large number of devs on your team, which implies that not all of them will be as experienced and skilled as you’d like them to be.

Reality is much more complex, and you also need to deal with your budget and find a way to get the best out of your team.

Using a low-code application platform, the quality of the base components is guaranteed: they are verified by the vendor and improved by dedicated teams incorporating feedback from customers all over the world, which makes it safer.

4.- As ready as a code-based solution for specific needs

One of the myths told against low-code is that it is only suitable for generic workloads and use cases, and not capable of being adapted and optimized for your specific needs.

Regarding this usual pushback, first of all, we need to address the misconception about how specialized our software really needs to be. In the end, the cases where you need to do something so specific that it is not covered by the options available out of the box are so rare that they are hard to justify. Are you going to make 99% of your development slower only to make 1% of it quicker? How many of your workloads are really different from what other companies in the same industry are doing?

But, for the sake of the discussion, let’s assume that’s true and you need a single piece of logic the low-code application doesn’t provide out of the box. OK: low-code means that you don’t need to write code, not that you cannot.

Most platforms support adding code when needed to cover these cases. So, even then, you still have the tools to make it specific without losing the advantages in your daily activities.

Summary

Low-code applications are one of the solutions at your disposal to improve the agility and productivity of your development so it can meet the pace of change in your business.

The solutions in that space are not new, but they’ve been renewed to adapt to modern development paradigms (microservices, container-based, API-led, event-driven…), so you won’t miss anything; you’ll just gain more time to provide even more value to your business.

Serverless Benefits Explained: Business Advantages, Cost Efficiency, and Trade-offs

Learn about the advantages and disadvantages of a serverless approach to make the right decision for your tech stack

Serverless is a topic many people are passionate about, and it has evolved a lot since its conception. It started as an approach to getting rid of servers, not on the technical side (obviously) but on the management side.

The idea behind it is to let developers, and IT people in general, focus on creating code and deploying it on some kind of managed infrastructure. The usual comparison is with container-based platforms: with managed Kubernetes services like EKS or AKS, you’re still responsible for the workers that run your load. You don’t need to worry about the management components, but you still need to handle and manage some parts of the infrastructure.

This approach was also incorporated into other systems, such as iPaaS and pure SaaS offerings around low-code or no-code, and the concept of Function as a Service appeared to mark the difference in this approach. So, the first question is: what’s the main difference between a function that you deploy on your serverless provider and an application that you run on top of your iPaaS?

It depends on the perspective you want to analyze it from.

I’m going to skip the technical details about scale to zero capabilities, warm-up techniques, and so on, and focus on the user perspective. The main difference is how these services are going to charge you based on your usage.

An iPaaS or similar SaaS offering will charge you per application instance or something similar. But a serverless/Function-as-a-Service platform charges you based on how much you use that function, which means you’re charged based on the number of requests your function receives.

That’s a game-changer. It is the most literal implementation of the optimized-operations and elasticity approach: it’s direct and clear that you’re going to pay only for the use you make of the platform. The economics are excellent. Take a look at the AWS Lambda offering today:

The AWS Lambda free usage tier includes 1M free requests per month and 400,000 GB-seconds of compute time per month.

And after that first million requests, they charge you $0.20 for every additional million requests.
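
To get a feel for the numbers, here is a rough back-of-the-envelope check you can run in any shell. The request volume is made up, and it only covers the per-request fee quoted above, not the GB-second compute charge:

# a hypothetical month with 10 million requests to a single function
REQUESTS=10000000
FREE_TIER=1000000
PRICE_PER_MILLION=0.20
echo "scale=2; ($REQUESTS - $FREE_TIER) / 1000000 * $PRICE_PER_MILLION" | bc
# prints 1.80, so roughly $1.80 for the month in request fees alone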

Reading the sentences above, you’re probably thinking, “That’s perfect. I’m going to migrate everything to that approach!”

Not so fast. This architectural style is not valid for every service you may have. While the economics are excellent, these services come with limitations, and there are anti-patterns where you should avoid this option.

First, let’s start with the restrictions most cloud providers define for these kinds of services:

  • Execution time: This is usually limited by your cloud provider to a maximum number of seconds per execution. That’s fair: if you’re charged per request and you could do all your work in a single request that takes 10 hours to complete using the provider’s resources, that’s probably not fair to the provider!
  • Memory resources: Also limited, for the same reasons.
  • Interface payload: Some providers also limit the payload that you can receive or send as part of one function — something to take into consideration when you’re defining the architecture style for your workload.

In a moment, we’ll look at the anti-patterns, or when you should avoid this architecture and approach.

But first, a quick introduction to how this works at the technical level. This solution can be very economical because, while your function is not processing any requests, it is not loaded into the provider’s systems. So it’s not using any resources at all (just a tiny bit of storage, which is negligible) and not generating any cost for the cloud provider. But that also means that when someone needs to execute your function, the system has to retrieve it, launch it, and then process the request. That’s usually called the “warm-up time,” and its duration depends on the technology you use.

  • Low-latency services and services with critical response times: If your service needs low latency or an aggressive response time, this approach is probably not going to work for you because of the warm-up time. Yes, there are workarounds, but they rely on sending dummy requests to keep the function loaded, and they generate additional cost.
  • Batch or scheduled processes: This model is designed for stateless APIs in the cloud-native world. If you have a batch process that can take a long time, perhaps scheduled at night or over the weekend, this is probably not the best place to run that kind of workload.
  • Massive services: If you pay by request, it’s important to evaluate the number of requests that your service is going to receive to make sure that the numbers are still on your side. If you have a service with millions of requests per second, this probably isn’t going to be the best approach for your IT budget.

In the end, serverless/Function as a Service is great because of its simplicity and how economical it is. At the same time, it’s not a silver bullet that covers all your workloads.

You need to balance it with other architectural approaches and container-based platforms, or iPaaS and SaaS offerings, to keep your toolbox full of options to find the best solution for each workload pattern that your company has.

Log Aggregation Architecture Explained: 3 Reasons You Need It Today

Log aggregation is no longer a commodity but a critical component of container-based platforms

Log management doesn’t seem like a very exciting topic. It’s not the kind of topic that makes you say: “Oh! Amazing! This is what I’ve been dreaming about my whole life.” No, I’m aware it’s not that fancy, but that doesn’t make it less critical than the other capabilities your architecture needs to have.

Since the beginning of time, we’ve used log files as the single trustworthy data source when it comes to troubleshooting applications, figuring out what failed in a deployment, or any other action involving a computer.

The procedure was easy:

  • Launch “something”
  • “something” fails
  • Check the logs
  • Change something
  • Repeat

And we’ve been doing it that way for a long, long time. Even with more robust error handling and management approaches, like an audit system, we still go back to the logs when we need fine-grained detail about an error: to look for a stack trace, for more detail about the error that was recorded in the audit system, or for more data than just the error code and description returned by a REST API.

Systems started to grow and architectures became more complicated, but even so, we ended up with the same method over and over. You’re aware of log aggregation architectures like the ELK stack, commercial solutions like Splunk, or even SaaS offerings like Loggly, but you think they’re just not for you.

They’re expensive to buy or expensive to set up, you know your ecosystem very well, and it’s easier to just jump into a machine and tail the log file. You probably even have your own toolbox of scripts that lets you do this as quickly as anyone could open Kibana and search for an instance ID to see the error for a specific transaction.

Ok, I need to tell you something: It’s time to change, and I’m going to explain to you why.

Things are changing, and IT and all its new paradigms share some common ground:

  • You’re going to have more components, each running in isolation with its own log files and data.
  • Deployments to your production environment will be more frequent, which means things will go wrong more often (in a controlled way, but more often).
  • Technologies are going to coexist, so logs are going to be very different in terms of patterns and layouts, and you need to be ready for that.

So, let’s go through these three arguments, which I hope will make you think differently about log management architectures and approaches.

1.- Your approach just doesn’t scale

Your approach works fine for traditional systems. How many machines do you manage? 30? 50? 100? And you handle them quite well. Now imagine a container-based platform for a typical enterprise. An average number could be around 1,000 containers just for business purposes, not counting architecture or basic services. Are you ready to go container by container through 1,000 log streams to find the error?

Even if that’s possible, are you going to become the bottleneck for the growth of your company? How many container logs can you keep track of? 2,000? As I said at the beginning, that just doesn’t scale.

2.- Logs are not there forever

Now, having read the first point, you’re probably talking to the screen you’re reading this on: “Come on! I already know logs don’t stay there; they get rotated, they get lost, and so on.”

Yeah, that’s true, and it’s even more important in a cloud-native approach. On container-based platforms logs are ephemeral, and if we follow the twelve-factor app manifesto there is no log file at all: all log traces should be printed to standard output, and that’s it.

And when are the logs deleted? When the container dies. And which records do you need the most? The ones from the containers that failed.

So, if you don’t do anything, the log traces that you need the most are the ones that you’re going to lose.
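
If you want to see this for yourself on any Kubernetes cluster you have access to, the effect is easy to reproduce (the pod name below is just a placeholder):

kubectl logs my-app-7d4b9c-xk2p1        # works while the pod is alive
kubectl delete pod my-app-7d4b9c-xk2p1
kubectl logs my-app-7d4b9c-xk2p1        # fails with "not found": those traces are gone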

3.- You need to be able to predict when things are going to fail

But logs are not only useful when something has already gone wrong; they are also the way to detect, and even predict, when something is about to fail. For that, you need to aggregate the data so you can turn it into information and insights, and run ML models that detect whether everything is going as expected or something unusual is happening that could lead to an issue before it happens.

Summary

I hope these arguments have convinced you that even for a small company, or even just for your own system, you need to set up a log aggregation approach now, and not wait for a moment when it will probably be too late.

Increased agility through modern digital connectivity

Find out how TIBCO Cloud Integration can help you increase business agility by connecting all your apps, devices, and data no matter where they are hosted

We live in a world where the number of digital assets that need to be integrated, the types of assets, and where they are hosted are all exploding. We’ve transitioned away from a simple enterprise landscape where all of our systems were hosted in a single datacenter, and the number of systems was small. If you still remember those days, you probably could name all the systems that you maintained. Could you imagine doing that today?

This has changed completely. Businesses today are operating more and more on apps and data rather than on manual, documented processes, and that has increased the demands to have them connected together to support the operations of the business. How does a traditional IT team keep up with all connectivity requests coming from all areas of the business to ensure these assets are fully integrated and working seamlessly?

Additionally, the business environment has changed completely. Today everything is hyper-accelerated. You can no longer wait six months to get your new marketing promotions online, or to introduce new digital services.

This is because markets change constantly. At times they grow, and at other times they contract. This forces enterprises to rapidly change how they do business.

So, if we had to summarize in one word everything we need from an application architecture to help us meet our business requirements, that word would be “agility”. And architectural agility creates business agility.

Different IT paradigms have been adopted to help increase architectural agility from different perspectives that provide a quick way to adapt, connect, and offer new capabilities to customers:

  • Infrastructure Agility: Based on cloud adoption, cloud providers offer an agile way to immediately tap into the infrastructure capacity required, allowing for rapid innovation by quickly creating new environments and deploying new services on-demand.
  • Operation & Management Agility: SaaS-based applications allow you to adopt best-of-breed business suites without having to procure and manage the underlying infrastructure, as you do in your on-premises approach. This allows you to streamline and accelerate the operations of your business.
  • Development Agility: Based on the application technologies that create small, highly scalable components of software that can be evolved, deployed, and managed in an autonomous way. This approach embeds integration capabilities directly within deployed applications, making integration no longer a separate layer but something that is built-in inside each component. Microservices, API-led development, and event-driven architecture concepts play an essential role and expand the people involved in the development process.

So, all of these forms of agility help build an application architecture that is highly agile, able to respond quickly to changes in the environment in which it operates. And you can achieve all of them with TIBCO® Cloud Integration (TCI).

TCI is an Integration Platform-as-a-Service (iPaaS), a cloud-based integration solution that makes it extremely easy for you to connect all your assets together no matter where they’re hosted. It is a SaaS offering that runs on both AWS and Microsoft Azure, so you don’t have to manage the underlying infrastructure to make sure the integration assets that are critical to your business are always available and scale to any level of demand.

From the development perspective, TCI provides all the tools your business needs to develop and connect all your digital assets, including your apps, data sources, devices, business suites, processes, and SaaS solutions, using the most modern standards within an immersive experience.

It addresses integration patterns from traditional approaches, such as data replication, to modern approaches including API-led and event-driven architectures. It also supports the latest connectivity standards such as REST, GraphQL, AsyncAPI, and gRPC. And to reduce the time-to-market of your integrations, it also includes a significant number of pre-packaged connectors that simplify connectivity to legacy and modern business suites, data sources, and more, no matter whether they reside in your data center or in the cloud. These connectors are easily accessible within a connector marketplace embedded directly in the user experience and can be used across the whole platform.

TCI improves team-based development. With TIBCO® Cloud Mesh, accessible via TCI, your integrators can easily share, discover, and reuse digital assets created across the enterprise within TIBCO Cloud — such as APIs and apps — and utilize them very quickly within integrations in a secure way without the need to worry about technical aspects.

This capability promotes the reuse of existing assets and better collaboration among teams. Combined with pre-packaged connectors that are directly accessible within TCI, the development time to introduce new integrations is significantly reduced.

(Image: Easily access pre-packaged connectors within an embedded connector marketplace)

TCI also expands the number of people in your business who can create integrations, with multiple development experiences tailored to different roles and their particular experience and skills. Now not only can integration specialists participate in the integration process, but developers, API product owners, and citizen integrators can as well.

This dramatically increases business agility because your various business units can create integrations in a self-service manner, collaborate to provide solutions even if they span across business units, and reduce their dependencies on overburdened IT teams. This frees up your integration specialists to focus on providing integration best practices for your enterprise and architecting a responsive application architecture.

TCI addresses a number of integration use cases including:

  1. Connecting apps, data, and devices that reside anywhere (e.g., on-premises, SaaS, private/public cloud)
  2. Designing, orchestrating, and managing APIs & microservices
  3. Rearchitecting inflexible monolithic apps into highly scalable cloud-native apps
  4. Building event-driven apps that process streams of data (e.g., from IoT devices or Apache Kafka)

TCI also provides detailed insights on the performance and execution status of your integrations so you can optimize them as needed or easily detect and solve any potential issues with them. This ensures that business processes that depend on your integrations are minimally disrupted.

(Image: Get at-a-glance views of application execution and performance details)
(Image: Drill down for expanded insights on application execution histories and performance trends)

By bringing more people into your integration process, empowering them with an immersive view that helps them work together seamlessly on your integrations, and providing capabilities such as TIBCO Cloud Mesh and pre-packaged connectors within a unified connector marketplace that accelerates integration development, your digital business can be connected and reconnected very quickly to respond to changing markets, which greatly increases your business agility.

To experience how easily you can connect all of your digital assets together to boost your business agility, sign up for a free 30-day trial of TIBCO Cloud Integration today.

Sign up for the free trial at https://www.tibco.com/products/cloud-integration

TIBCO Cloud Integration is a service provided within the TIBCO Connected Intelligence Platform, which provides a complete set of capabilities to connect your business.

Managed Container Platforms Explained: 3 Key Business Benefits You Can’t Ignore

Managed container platforms provide advantages to any system inside any company. Take a look at the three critical ones.

Managed container platforms are disrupting everything. We’re living in a time when development and the IT landscape are changing: new paradigms like microservices and containers have been out there for the last few years, and if we trust the reality that blog posts and articles paint today, we’re all already using them all the time.

Have you seen any recent blog posts about how to develop a J2EE application running on an on-prem Tomcat server? Probably not. The closest you’ll find is probably how to containerize your Tomcat-based application.

But you know what? Most companies still work that way. Even if every company has a new digital approach in some departments, they also have others that are more traditional.

So it seems we need a different way to translate the main advantages of a container-based platform into language that lets them see the tangible benefits they can get from it and have that “Hey, this can work for me!” kind of reaction.

1.- You will get all components isolated and updated more quickly

That’s one of the great things about container-based platforms compared with previous approaches like application-server-based platforms. When you have an application server cluster, you still have one cluster with several applications. So you usually apply some isolation: keep related applications together, provide independent infrastructure for the critical ones, and so on.

But even then, at some level, the applications remain coupled, so an issue with one application can bring down another in ways the business didn’t expect.

With a container-based platform, each application lives in its own bubble, so any issue or error affects that application and nothing more. Platform stability is a priority for every company and every department inside it. Just ask yourself: do you want to put an end to those domino chains of failure? How much would your operations improve? How much would your happiness increase?

Additionally, with the container approach you get smaller components. Each of them does a single task, providing a single capability to your business, which makes it much easier to update, test, and deploy to production. That, in the end, leads to more frequent deployments to the production environment and reduces the time to market for your business capabilities.

You will be able to deploy faster and have more stable operations simultaneously.

2.- You will optimize the use of your infrastructure

Costs: everything is about costs. There isn’t a single conversation with a customer who isn’t trying to pay less for their infrastructure. So let’s face it: we should be able to run operations in an optimized way, and if our infrastructure cost goes up, it should be because our business is growing.

Container-based platforms allow you to optimize infrastructure in two different ways, based on two main concepts: elasticity and infrastructure sharing.

Elasticity means I only have the infrastructure I need to support the load I have at this moment. If the load increases, my infrastructure grows to handle it, and once the peak goes away, it shrinks back to what is actually needed.

Infrastructure sharing is about using the free portion of each server to deploy other applications. Imagine a traditional approach where I have two servers for my set of applications. I probably don’t run those servers at 100% usage, because I need some spare capacity to react when the load increases; I probably run at 60-70%, which means around 30% is free. If different departments with different systems each keep 30% of their infrastructure free, how much of our infrastructure are we just throwing away? How many dollars, euros, or pounds are you throwing out of the window?
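
As a quick, made-up illustration of how fast that adds up:

# 5 departments, 2 servers each, roughly 30% of every server sitting idle
echo "5 * 2 * 0.30" | bc
# prints 3.00, so about three full servers of paid capacity doing nothing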

Container-based platforms don’t need specific tools or software installed on the host to run different kinds of applications: everything resides inside the container, so I can use any free capacity to deploy other applications, making more efficient use of the infrastructure.

3.- You will not need infrastructure for administration

Every system that is big enough has some resources dedicated to managing it. Most recommended architectures place those components isolated from your runtime components, so that any administration or maintenance issue cannot affect your runtime workloads. That means dedicated infrastructure for something that isn’t directly helping your business. Of course, you can explain to any business user that you need a machine to provide those required capabilities, but it is still additional infrastructure (and additional cost) for components that are not helping the business.

Managed container platforms take that problem away: you only provide the infrastructure you need to run your workloads, and the administration capabilities are given to you for free or for a low fee. In addition, you don’t even need to worry about keeping those administration features available and working properly, because that is delegated to the provider itself.

Wrap up and next steps

As you can see, these are very tangible benefits that are not tied to a specific industry or development focus. Of course, there are many more we could add to this list, but these are the critical ones that affect any company in any industry worldwide. So please take your time to think about how these capabilities can help improve your business. And not only that: take the time to quantify how much they will improve it. How much can you save? How much can you gain from this approach?

And when you have a solid business case based on this approach in front of you, you will get all the support and courage you need to move forward along that route! I wish you a peaceful transition!

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Harbor Registry Explained: Securing Container Images in Kubernetes and DevSecOps

Learn how you can include the Harbor registry in your DevSecOps toolset to increase the security and manageability of your container-based platform

With the transition to a more agile development process, the number of deployments has increased exponentially. That situation has made it quite complex to keep pace: we’re not just deploying code to production more often to provide the capabilities the business requires, we also need to be able to do it securely and safely.

That need is leading toward the DevSecOps idea: including security as part of the DevOps culture and practices, as a way to ensure safety from the beginning of development and across all the standard steps from the developer machine to the production environment.

In addition, because of the container paradigm we have a more polyglot approach, with different kinds of components running on our platform using different base images, packages, libraries, and so on. We need to make sure they’re still secure to use, and we need tools that let us govern that in a natural way. That is where components like Harbor come in.

Harbor is a CNCF project, at the incubating stage at the moment of writing this article, and it provides several capabilities for managing container images from a project perspective. It offers a project-based approach with its own Docker registry, and also a ChartMuseum if we’d like to use Helm charts as part of our project development. But it includes security features too, and those are the ones we’re going to cover in this article:

  • Vulnerability scanning: Harbor allows you to scan all the Docker images registered in its repositories to check whether they contain vulnerabilities. It also automates that process, so every time we push a new image it is scanned automatically. It also lets us define policies that prevent pulling images with vulnerabilities and set the level of vulnerability (low, medium, high, or critical) we’re willing to tolerate. By default it comes with Clair as the scanner, but you can plug in others as well.
  • Signed images: The Harbor registry provides the option to deploy Notary as one of its components so that images can be signed during the push process, ensuring no modifications have been made to the image.
  • Tag immutability and retention rules: The Harbor registry also lets you define tag immutability and retention rules to make sure nobody can replace an image with a different one using the same tag.

The Harbor registry itself runs on Docker, so you can run it locally using Docker and docker-compose by following the procedure available on its official web page. It also supports being installed on top of your Kubernetes platform using the available Helm chart and operator.
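
As a minimal sketch of the Kubernetes route (the chart values shown are assumptions and the hostname is a placeholder; check the official Harbor Helm chart documentation for the options that match your cluster):

helm repo add harbor https://helm.goharbor.io
helm repo update
# expose Harbor through an ingress and tell it its external URL
helm install harbor harbor/harbor \
  --namespace harbor --create-namespace \
  --set expose.type=ingress \
  --set externalURL=https://harbor.example.com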

Once the tool is installed, we have access to the web portal UI, and we can create a project with repositories inside it.

(Image: Project list inside the Harbor Portal UI)

As part of the project configuration, we can define the security policies that we’d like to provide to each project. That means that different projects can have different security profiles.

(Image: Security settings inside a project in the Harbor Portal UI)
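
For reference, pushing an image into one of those projects is just the standard Docker workflow against the registry (the hostname and project name below are placeholders):

docker login harbor.example.com
docker tag myapp:1.0 harbor.example.com/myproject/myapp:1.0
docker push harbor.example.com/myproject/myapp:1.0
# if scan-on-push is enabled for the project, the vulnerability scan
# starts as soon as the push finishes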

And once we push a new image to a repository that belongs to that project, we see the following details:

(Image: Scan results for the pushed image in the Harbor Portal UI)

In this case, I’ve pushed a TIBCO BusinessWorks Container Edition application that doesn’t contain any vulnerabilities, and the portal shows exactly that, along with where the image was checked.

If we open the details, we can also check additional information, such as whether the image has been signed, or trigger a new scan.

(Image: Image details inside the Harbor Portal UI)

Summary

These are just a few of the features Harbor provides from a security perspective. Harbor is much more than that, so we’ll probably cover more of its features in future articles. I hope that, based on what you’ve read today, you’ll give it a chance and start introducing it into your DevSecOps toolset.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

API Management vs Service Mesh: Differences, Use Cases, and When You Need Both

Service Mesh vs. API Management: are they the same thing? Are they compatible? Are they rivals?

When we talk about communication in a distributed cloud-native world, and especially about container-based architectures built on Kubernetes platforms like AKS, EKS, OpenShift, and so on, two technologies generate a lot of confusion because they seem to cover the same capabilities: Service Mesh and API Management solutions.

It has been a controversial topic, with all kinds of bold statements: people who think these technologies work together in a complementary way, others who believe they’re trying to solve the same problems in different ways, and even people who think one is just the evolution of the other for the new cloud-native architecture.

API Management Solutions

API Management solutions have been part of our architectures for a long time. They are a crucial component of any architecture created today following API-led principles, and they’re an evolution of the pre-existing API Gateways, which were themselves an evolution of the pure proxies of the late ’90s and early 2000s.

An API Management solution is a critical component of your API strategy because it enables your company to work with an API-led approach. And that is much more than the technical aspect. We usually reduce the API-led approach to its technical side: the API-based development, the microservices we create, and the collaborative spirit we apply today to any piece of software deployed to production.

But it is much more than that. API-led architecture is about creating products from our APIs, providing all the artifacts (technical and non-technical) we need to make that conversion. A quick, non-exhaustive list of those artifacts:

  • API Documentation Support
  • Package Plans Definition
  • Subscription capabilities
  • Monetization capabilities
  • Self-Service API Discovery
  • Versioning capabilities

Traditionally, the API Management solution also comes with API Gateway capabilities embedded to cover the technical side as well, providing some additional capabilities at a more technical level:

  • Exposition
  • Routing
  • Security
  • Throttling

Service Mesh

Service Mesh is more of a buzzword these days, a technology that is trending because it was created to solve some of the challenges inherent to the microservice and container approach and everything under the cloud-native label.

In this case, it comes from the technical side, so it is much more of a bottom-up approach: it exists to solve technical problems and to provide a better experience for the developers and system administrators working in this new, much more complicated world. So what are the challenges created by this transition? Let’s take a look at them.

Service registry & discovery is one of the critical things we need to cover. The elastic paradigm of the cloud-native world means services change location from time to time, starting on new machines when needed and being removed when there isn’t enough load to justify their presence. So it is essential to provide a way to easily manage that new reality, something we didn’t need in the past when our services were bound to a specific machine or set of machines.

Security is another important topic in any architecture we create today, and the polyglot approach we’ve incorporated makes it even more challenging: we need a secure way for our services to communicate that works with any technology we’re using now and any we might use in the future. And we’re not talking just about authentication but also authorization, because in service-to-service communication we also need a way to check whether the microservice calling another one is allowed to do so, and to do it in an agile way that doesn’t cancel out the advantages your cloud-native architecture provides by design.

Routing requirements have also changed in these new architectures. If you remember how we used to deploy in traditional architectures, we typically looked for a zero-downtime approach (when possible) but followed a very standard procedure: deploy the new version, validate that it works, and open the traffic to everyone. Today the requirements call for much more complex paradigms, and Service Mesh technologies support rollout strategies like A/B testing, weight-based routing, and canary deployments.
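
To make that concrete, here is a minimal sketch of weight-based routing assuming Istio happens to be your mesh, sending 90% of the traffic to v1 and 10% to a canary v2 (the service and subset names are placeholders, and a DestinationRule defining those subsets is assumed to exist):

kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: orders
spec:
  hosts:
    - orders
  http:
    - route:
        - destination:
            host: orders
            subset: v1
          weight: 90
        - destination:
            host: orders
            subset: v2
          weight: 10
EOF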

Rival or Companion?

So, after this quick review of the purpose of these technologies and the problems they try to solve, are they rivals or companions? Should we choose one or the other, or place both of them in our architecture?

As always, the answer to those questions is the same: “It depends!” It depends on what you’re trying to do, what your company is trying to achieve, and what you’re building.

  • An API Management solution is needed as long as you’re implementing an API strategy in your organization. Service Mesh technology is not trying to fill that gap. It can provide the technical capabilities traditionally covered by the API Gateway component, but that is just one element of an API Management solution; the other parts, which provide the management and governance capabilities, are not covered by any Service Mesh today.
  • A Service Mesh is needed if you have a cloud-native architecture based on a container platform that relies heavily on HTTP for synchronous communication. It provides so many technical capabilities that make your life more manageable that, as soon as you include it in your architecture, you cannot live without it.
  • A Service Mesh only provides its capabilities within a container platform. So if you have a more heterogeneous landscape, as most enterprises do today (a container platform, but also SaaS applications, systems still on-prem, and traditional architectures, all providing capabilities you’d like to expose as part of your API products), you will need an API Management solution as well.

So, these technologies can play together in a complete architecture to cover different kinds of requirements, especially when we’re talking about complex, heterogeneous architectures that need to include an API-led approach.

In upcoming articles, we will cover how we can integrate both technologies from the technical aspect and how the data flow among the different components of the architecture.

Top 3 Bash Hacks To Boost Your Performance

Find the list of bash performance hacks I use all the time and see how they can help you save a lot of time in your daily job.

Knowing a few bash hacks is one of the ways to improve your performance. We spend many hours inside a shell and develop patterns and habits every time we log into a computer. For example, if you take two people with a similar skill level and give them the same task, they will probably use different tools and follow different routes to reach the same result.

And that’s because the number of options available to do any task is so large that each of us learns one or two ways to do something, sticks to them, and automates them to the point where we’re not even thinking while typing.

So, the idea today is to share a list of commands I use all the time. You’re probably aware of them already, but for me they are daily time-savers in my work life. Let’s get started.

1.- CTRL + R

This is my favorite bash hack. It’s the one I use all the time; as soon as I log into a remote machine, whether it’s new to me or one I’m coming back to, I use it for pretty much everything. The only limitation is that it only searches through the command history.

It autocompletes based on what you’re typing, using commands you’ve already run. It’s the same as typing history | grep <something>, just faster and more natural for me.

This bash hack also lets me recover paths when I don’t remember the exact subfolder name, bring back that tricky command I only run every two weeks to clean some process memory or apply some configuration, or simply work out which machine I’m logged into at a specific moment.
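
For anyone who has never tried it, the interaction looks roughly like this (the command shown is just an example):

# press CTRL+R at the prompt and start typing:
#   (reverse-i-search)`ssh': ssh admin@10.1.2.3 -p 2222
# CTRL+R again cycles to older matches, Enter runs the command,
# and the arrow keys let you edit it before running it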

2.- find + action

The find command is something we all know, but most of you probably use only a fraction of the functionality it offers. And that’s a pity, because the command is incredible and provides so many features. This time, though, I’m just going to cover one specific topic: actions performed after locating the files we’re looking for.

We usually use the find command to find files or folders, which is obvious for a command with that name. But it also lets us add the -exec parameter to chain an action to be executed for each file that matches our criteria. For example:

Say you want to find all the YAML files in your current folder and move them to a different folder. You can do it directly with this command:

find . -name "*.yaml" -exec mv {} /tmp/folder/yamlFiles/ \;
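
A couple more variations of the same pattern that I keep reusing (the paths are placeholders):

# delete rotated logs older than 30 days
find /var/log/myapp -name "*.log" -mtime +30 -delete
# list every YAML file under the current folder that mentions "replicas"
find . -name "*.yaml" -exec grep -l "replicas" {} \;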

3.- cd -

Such a simple and helpful bash hack, a sort of CTRL+Z for your shell. The command cd - takes us back to the previous folder we were in.

It’s so valuable when we’ve jumped to the wrong folder, or when we just want to switch quickly between two folders. It’s like the back button in your browser or CTRL+Z in your word processor.
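
A quick example of the round trip:

cd /var/log/nginx     # go dig into some logs
cd /etc/nginx         # jump to the configuration folder
cd -                  # back to /var/log/nginx
cd -                  # and back to /etc/nginx again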

Wrap up

I hope you love these commands as much as I do, and if you already knew them, let me know in the responses which bash hacks are the most relevant for your daily work. Maybe because you use them all the time, as I do with these, or because even if you don’t use them that often, when you do, they save you a massive amount of time!

Observability in Polyglot Microservice Architectures: Tracing Without Friction

Learn how to manage observability requirements as part of your microservice ecosystem

“May you live in interesting times” is the English translation of the famous Chinese curse, and it couldn’t be a better description of the times we’re living in when it comes to application architecture and application development.

All the changes brought by the cloud-native approach, including the new technologies that come with it, like containers, microservices, APIs, DevOps, and so on, have transformed the situation entirely for any architect, developer, or system administrator.

It’s as if you went to bed in 2003 and woke up in 2020: all the changes, all the new philosophies, but also all the new challenges that come with those changes and capabilities, are things we need to deal with today.

I think we can all agree that the present is polyglot in terms of application development. Today you don’t expect any big company or enterprise to find a single technology or a single language to support all of its in-house products. We all follow and agree on the “right tool for the right job” principle, building a toolset of technologies to solve the different use cases and patterns we need to face.

But that agreement and movement also come with challenges around things we usually don’t think about, like tracing and observability in general.

When we use a single technology, everything is more straightforward. Defining a common strategy to trace your end-to-end flows is easy: you only need to embed the logic into the common development framework or library all your developments use. You probably define a standard set of headers with all the data you need to trace requests effectively, and a standard protocol to send those traces to a central system that can store and correlate them and reconstruct the end-to-end flow. But try to move that to a polyglot ecosystem: should I write my framework or library for each language or technology I need to use now, or might use in the future? Does that make sense?

And not only that: should I slow down the adoption of a new technology that could quickly help the business because a shared team needs to provide these standard components first? That’s assuming the best case, in which I have enough people who know the internals of my framework and have the skills in all the languages we’re adopting to do it quickly and efficiently. It seems unlikely, right?

So, new challenges call for new solutions. I’ve already been talking about Service Mesh and the capabilities it provides from a communication perspective; if you don’t remember, you can take a look at those posts:

But it also provides capabilities from other perspectives, and tracing and observability is one of them. When we cannot include those features in every technology we need to use, we can place them in a common layer that supports all of them, and that’s exactly the case with a Service Mesh.

The Service Mesh is the standard way for your microservices to communicate synchronously in an east-west fashion, covering service-to-service communication. So, if you can also include the tracing capability in that component, you get end-to-end tracing without needing to implement anything in each of the different technologies you use to build your logic. You move from Figure A to Figure B in the picture below:

(Image: In-app tracing logic implementation vs. Service Mesh tracing support)

And that’s what most Service Mesh technologies are doing. For example, Istio, one of the default choices when it comes to Service Mesh, includes an implementation of the OpenTracing standard, which allows integration with any tool that supports the standard so it can collect the tracing information for any technology communicating across the mesh.
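
As a rough illustration, the context that lets the sidecars stitch spans together travels as a set of headers on each request. The example below uses the Zipkin/B3 header names that Istio propagates; the service name and ID values are placeholders:

curl http://orders.default.svc.cluster.local/api/orders \
  -H "x-request-id: 7c2e9f3a-1b2c-4d5e-8f90-123456789abc" \
  -H "x-b3-traceid: 463ac35c9f6413ad48485a3953bb6124" \
  -H "x-b3-spanid: a2fb4a1d1a96d312" \
  -H "x-b3-sampled: 1"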

That mind-shift allows us to easily integrate different technologies without needing special support for those standards in each specific technology. Does that mean that implementing those standards in each technology is no longer required? Not at all; it is still relevant, because the technologies that also support those standards can provide even more insight. After all, the Service Mesh only knows part of the picture: the flow happening outside each technology, something similar to a black-box approach. Adding support for the same standard inside each technology provides an additional white-box view, as you can see graphically in the image below:

(Image: Merging white-box tracing data and black-box tracing data)

We’ve already talked about the compliance of some technologies with the OpenTracing standard, such as TIBCO BusinessWorks Container Edition, which you can revisit here:

So, support for these industry standards in each technology is still needed, and is even a competitive advantage, because without having to develop your own tracing framework you can achieve a complete tracing data approach on top of what is already provided at the Service Mesh level.