CI/CD Docker: Top 3 Reasons to Use Containers in Your DevSecOps Pipeline

Improve the performance and productivity of your DevSecOps pipeline using containers.

CI/CD Docker refers to the approach most companies are adopting: introducing containers into the build and pre-deployment phases to implement part of the CI/CD pipeline. Let’s see why.

DevSecOps is the new normal for deployments at scale in large enterprises that need to meet the pace of today’s digital business. These processes are driven by a CI/CD orchestration tool that acts as the brain of the pipeline. The usual tools for this job are Jenkins, Bamboo, Azure DevOps, GitLab, and GitHub Actions.

In the traditional approach, we have different worker servers covering the stages of the DevOps process: Code, Build, Test, and Deploy, and each stage needs different tools and utilities to do its job. For example, to get the code, we need Git installed. To do the build, we can rely on Maven or Gradle; to analyze and test the code, we can use SonarQube; and so on.

Figure: CI/CD Docker structure and the relationship between the orchestrator and the workers

So, in the end, we need a whole set of tools to run the pipeline successfully, and that toolset requires management of its own. Nowadays, with the rise of cloud-native development and the container-first approach across the industry, this shift is also changing the way you build your pipelines: containers become part of each stage.

In most CI orchestrators, you can define a container image to run as any step of your DevSecOps process, and doing so brings several benefits that you should be aware of.
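As a sketch of what this looks like in practice, here is a hypothetical GitLab CI pipeline where every job declares the container image it runs in (job names, image tags, and commands are illustrative):

```yaml
stages:
  - build
  - analyze

build-app:
  stage: build
  image: maven:3.9-eclipse-temurin-17   # build tooling comes from the image, not the runner
  script:
    - mvn -B package

code-analysis:
  stage: analyze
  image: sonarsource/sonar-scanner-cli  # scanner is pulled on demand; nothing installed on the runner
  script:
    - sonar-scanner -Dsonar.projectKey=my-app
```

The runner only needs a container runtime; every tool the pipeline uses travels inside an image.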

1.- A much more scalable solution

When an orchestrator is the central element in your company and serves many different technologies (open source and proprietary, code-based and visual development, and so on), you end up managing a lot of things and installing a lot of software on the workers.

Usually, you dedicate certain workers to building certain kinds of artifacts, as in the image shown below:

Figure: Worker distribution based on each worker’s capabilities

That is great because it segments the build process and avoids installing every tool on every machine, especially when some tools are incompatible with each other.

But what happens if we need to deploy many applications of one of the types shown in the picture above, such as TIBCO BusinessWorks applications? You will be limited by the number of workers that have the required software installed to build and deploy them.

With a container-based approach, all workers are available because no pre-installed software is needed: you just pull the Docker image, and that’s it. You are limited only by your infrastructure, and if you adopt a cloud platform as part of the build process, even those limits disappear. Your time to market and deployment pace improve.

2.- Easier to maintain and extend

If workers no longer need to be installed and managed, because they are spun up on demand and deleted when no longer needed, and all you have to do is create a container image that does the job, the time and effort teams spend maintaining and extending the solution drops considerably.

You also remove any separate upgrade process for the components involved in each step, since they follow the usual container image lifecycle.
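For instance, a team-specific builder image can be a small Dockerfile on top of an official base, so upgrading the build tooling becomes “bump the tag and rebuild” (the base image and extra package here are illustrative):

```dockerfile
# Hypothetical builder image: official Maven base plus one extra CLI tool
FROM maven:3.9-eclipse-temurin-17

RUN apt-get update \
 && apt-get install -y --no-install-recommends jq \
 && rm -rf /var/lib/apt/lists/*
```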

3.- Avoid Orchestrator lock-in

Because the containers do most of the job, the work needed to move from one DevOps solution to another is small. That gives us the freedom to decide, at any moment, whether the solution we are using is still the best one for our use case and context, or whether we should move to a better-optimized one, without having to justify big investments to do so.

You get the control back, and you can even adopt a multi-orchestrator approach if needed: using the best solution for each use case and getting the benefits of each of them at the same time, without having to fight against any of them.

Summary

All the benefits that we know from cloud-native development paradigms and containers apply not only to application development but also to other processes in our organization, your DevSecOps pipeline among them. Start that journey today to get those advantages in your build process; don’t wait until it’s too late. Enjoy your day. Enjoy your life.

Serverless Benefits Explained: Business Advantages, Cost Efficiency, and Trade-offs

Learn about the advantages and disadvantages of a serverless approach to make the right decision for your tech stack

Serverless is a topic many people are passionate about, and it has evolved a lot since its conception. It started as an approach to getting rid of servers: not on the technical side (obviously), but on the management side.

The idea behind it is to let developers, and IT people in general, focus on writing code and deploying it onto some kind of managed infrastructure. The usual contrast is with container-based platforms. With managed Kubernetes services such as EKS or AKS, you don’t need to worry about the control plane, but you’re still responsible for the worker nodes running your load. So, again, you have to handle and manage some parts of the infrastructure.

This approach was also incorporated into other systems, like iPaaS and pure SaaS offerings in the low-code/no-code space, and the concept of function as a service (FaaS) is what sets serverless apart. So, the first question is: what’s the main difference between a function that you deploy on your serverless provider and an application that you run on top of your iPaaS?

It depends on the perspective that you want to analyze.

I’m going to skip the technical details about scale-to-zero capabilities, warm-up techniques, and so on, and focus on the user perspective. The main difference is how these services charge you for your usage.

An iPaaS or similar SaaS offering charges you per application instance or something similar. A serverless/function-as-a-service platform charges you for the usage you make of the function: the number of requests it receives and, typically, the compute time each one consumes.

That’s a game-changer. It’s the purest implementation of the optimized-operations-and-elasticity approach: it’s direct and clear that you pay only for the use you make of the platform. The economics are excellent. Take a look at the AWS Lambda offering today:

The AWS Lambda free usage tier includes 1M free requests per month and 400,000 GB-seconds of compute time per month.

And after that first million requests, they charge you $0.20 for every additional million requests.
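As a rough sketch of that pricing model, here is the arithmetic in code, using only the free-tier figures quoted above and ignoring the GB-second compute charges (real prices may change over time):

```python
def lambda_request_cost(requests: int,
                        free_requests: int = 1_000_000,
                        price_per_million: float = 0.20) -> float:
    """Estimate the per-request portion of a monthly AWS Lambda bill.

    Uses the free-tier figures quoted above; ignores GB-second compute
    charges, and actual prices may differ.
    """
    billable = max(0, requests - free_requests)
    return billable / 1_000_000 * price_per_million

# 10 million requests in a month: the first million is free, 9 million are billed.
print(round(lambda_request_cost(10_000_000), 2))  # → 1.8
```

Half a million requests a month would cost nothing at all for the request component, which is exactly why the economics feel so attractive.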

Reading the sentences above, you’re probably thinking, “That’s perfect. I’m going to migrate everything to that approach!”

Not so fast. This architecture style is not valid for every service you may have. While the economics are excellent, these services come with limitations, and there are anti-patterns where you should avoid this option.

First, let’s start with the restrictions most cloud providers define for these kinds of services:

  • Execution time: This is usually limited by your cloud provider to a maximum number of seconds per invocation. That’s fair: if you’re charged per request and you could do all the work in a single request that takes 10 hours of the provider’s resources, that’s probably not fair to the provider!
  • Memory resources: Also limited, for the same reasons.
  • Interface payload: Some providers also limit the payload that you can receive or send as part of one function — something to take into consideration when you’re defining the architecture style for your workload.

In a moment, we’ll look at the anti-patterns: the cases where you should avoid this architecture and approach.

But first, a quick introduction to how this works at the technical level. This solution can be very economical because, while your function is not processing any requests, it is not loaded in the provider’s systems. So it’s not using any resources at all (just a tiny bit of storage, which is negligible) and not generating any cost for the cloud provider. But that also means that when someone needs to execute your function, the system has to retrieve it, launch it, and process the request. That delay is usually called the “warm-up time” (or cold start), and its duration depends on the technology you use.
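A minimal sketch of why this matters, using an AWS-Lambda-style Python handler (the `load_config` work and the counter are stand-ins for any expensive startup logic):

```python
import time

INIT_COUNT = 0

def load_config() -> dict:
    """Stand-in for expensive startup work: loading SDK clients,
    parsing configuration, opening connection pools, and so on."""
    global INIT_COUNT
    INIT_COUNT += 1
    time.sleep(0.1)  # simulate slow initialization (the "cold start" cost)
    return {"ready": True}

# Module-level code runs once per container, during the cold start.
CONFIG = load_config()

def handler(event, context):
    # Per-request code: runs on every invocation, warm or cold.
    return {"status": 200, "ready": CONFIG["ready"]}

# Two invocations on the same warm container: initialization ran only once.
handler({}, None)
handler({}, None)
print(INIT_COUNT)  # → 1
```

The first request on a fresh container pays the initialization price; subsequent requests on a warm container skip it, which is exactly the latency trade-off described above.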

  • Low-latency services and services with critical response times: If your service needs low latency or an aggressive response time, this approach probably isn’t going to work for you because of the warm-up time. Yes, there are workarounds, but they rely on dummy requests to keep the function loaded, and they generate additional cost.
  • Batch or scheduled processes: Serverless functions are meant for the stateless APIs of the cloud-native world. If you have a batch process that can take a long time, perhaps scheduled at night or on weekends, a function is probably not the best place to run that kind of workload (remember the execution-time limit).
  • Massive services: If you pay by request, it’s important to evaluate the number of requests that your service is going to receive to make sure that the numbers are still on your side. If you have a service with millions of requests per second, this probably isn’t going to be the best approach for your IT budget.

In the end, serverless/function as a service is great because of its simplicity and how economical it is. At the same time, it’s not a silver bullet that covers all your workloads.

You need to balance it with other architectural approaches and container-based platforms, or iPaaS and SaaS offerings, to keep your toolbox full of options to find the best solution for each workload pattern that your company has.