This article will cover how to quickly integrate Grafana with an LDAP server to increase the security of your application.
Grafana is one of the most widely used dashboard tools, mainly on observability platforms for cloud workloads. But no such tool is complete until you can configure it to meet your security standards. Security is becoming more and more important, especially when we are talking about any cloud-related component, so it should always be something that any cloud architect has in mind when defining or implementing any piece of software today.
And at that point, LDAP integration is one of the main things you will always need to do. I don’t know any enterprise, big or small, that allows the use of any tool with a graphical user interface unless it is connected to the corporate directory.
So, let’s see how we can implement this with a popular tool such as Grafana. In my case, I’m going to use a containerized version of Grafana, but the steps remain the same no matter the deployment model.
The task that we are going to perform is based on three steps:
Enable the LDAP settings
Basic LDAP settings configuration
Group mapping configuration
The first thing we need to modify is grafana.ini, adding the following snippet:
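A minimal sketch of that snippet, assuming the default packaged paths (adjust config_file to wherever you place ldap.toml):

[auth.ldap]
enabled = true
config_file = /etc/grafana/ldap.toml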
This enables LDAP-based authentication and sets the location of the LDAP configuration file. That ldap.toml file holds all the LDAP-specific configuration:
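A minimal ldap.toml sketch follows; the host, credentials, DNs, and group names are placeholder values for a hypothetical directory, so replace them with your own:

[[servers]]
host = "ldap.example.org"
port = 389
use_ssl = false
bind_dn = "cn=admin,dc=example,dc=org"
bind_password = "admin_password"
search_filter = "(cn=%s)"
search_base_dns = ["dc=example,dc=org"]

[servers.attributes]
name = "givenName"
surname = "sn"
username = "cn"
member_of = "memberOf"
email = "email"

[[servers.group_mappings]]
group_dn = "cn=admins,dc=example,dc=org"
org_role = "Admin"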
The first part covers the primary connection: we establish the host and port, the admin (bind) user, and its password. After that, we need search_filter and search_base_dns to define the users that can access the system.
Finally, we have another section that defines the mapping between the LDAP attributes and the Grafana attributes, so that Grafana can gather the user data from LDAP.
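Since I’m running the containerized version, one way to apply both files is to mount them into the image’s default locations (a sketch; the grafana/grafana image and internal paths are the defaults at the time of writing):

docker run -d -p 3000:3000 \
  -v "$(pwd)/grafana.ini:/etc/grafana/grafana.ini" \
  -v "$(pwd)/ldap.toml:/etc/grafana/ldap.toml" \
  grafana/grafana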
With all those changes in place, you need to restart the Grafana server for the configuration to be applied. After that, you can log in using LDAP credentials, as you can see in the picture below, and see all the data retrieved from the LDAP server:
Using the default admin user, you can also try a new feature to test the LDAP configuration using the LDAP option shown in the picture below:
Then you can search for a user and see how this user will be handled by the Grafana server:
Check which attributes will be mapped from the LDAP server to the Grafana server
Also check the user’s activity status and the role assigned to this user
You can see one sample in the picture below:
I hope this article helps you level up the security settings of your Grafana installations by integrating with an LDAP server. If you want more information about this and similar topics, take a look at the links below.
Development security is one of the big topics in today’s development practice. All the improvements we gained by following DevOps practices have also generated many issues and concerns from the security perspective.
The explosion of components that security teams need to deal with, container approaches, and polyglot environments gave us many benefits from the development and operational perspectives. Still, it made the security side of things more complex.
This is why there have been many movements around the “shift left” approach, including security as part of the DevOps process and creating the new term DevSecOps, which is becoming the new normal.
DevOps tech: Shifting left on security | Google Cloud
So, today I would like to bring you a set of tools I have just discovered that are built to make your life easier from the development security perspective, because developers also need to be part of this rather than leaving all the responsibility to a different team.
This set of tools is named Anchore Toolbox; the tools are open source and free to use, as you can see on the official webpage (https://anchore.com/opensource/).
So, what can Anchore provide us? At the moment, we are talking about two different applications: Syft and Grype.
Syft
GitHub – anchore/syft: CLI tool and library for generating a Software Bill of Materials from container images and filesystems
Syft is a CLI tool and Go library for generating a Software Bill of Materials (SBOM) from container images and filesystems. Installation is as easy as executing the following command:
curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin
And after doing that, we need to type syft to see all the options at our disposal:
Syft help menu with all the options available
So, in our case, I will use it to generate a bill of materials from an existing Docker image, bitnami/kafka, to show how this works. I need to type the following command:
syft bitnami/kafka
After a few seconds to load and analyze the image, I get as output the list of every package installed in this image along with its version, as shown below. One great thing is that it shows not only the operating system packages installed via apk or apt but also other components, such as Java libraries, so we get a complete bill of materials for this container image.
[0019] WARN unexpectedly empty matches for archive '/opt/bitnami/kafka/libs/kafka_2.12-2.7.0-javadoc.jar'
[0019] WARN unexpectedly empty matches for archive '/opt/bitnami/kafka/libs/scala-java8-compat_2.12-0.9.1.jar'
[0019] WARN unexpectedly empty matches for archive '/opt/bitnami/kafka/libs/kafka_2.12-2.7.0-test-sources.jar'
[0019] WARN unexpectedly empty matches for archive '/opt/bitnami/kafka/libs/jackson-module-scala_2.12-2.10.5.jar'
[0019] WARN unexpectedly empty matches for archive '/opt/bitnami/kafka/libs/kafka-streams-scala_2.12-2.7.0.jar'
[0019] WARN unexpectedly empty matches for archive '/opt/bitnami/kafka/libs/kafka_2.12-2.7.0-test.jar'
[0019] WARN unexpectedly empty matches for archive '/opt/bitnami/kafka/libs/scala-collection-compat_2.12-2.2.0.jar'
[0020] WARN unexpectedly empty matches for archive '/opt/bitnami/kafka/libs/kafka_2.12-2.7.0.jar'
[0020] WARN unexpectedly empty matches for archive '/opt/bitnami/kafka/libs/kafka_2.12-2.7.0-sources.jar'
[0020] WARN unexpectedly empty matches for archive '/opt/bitnami/kafka/libs/scala-logging_2.12-3.9.2.jar'
NAME                                                VERSION            TYPE
                                                                       java-archive
acl                                                 2.2.53-4           deb
activation                                          1.1.1              java-archive
adduser                                             3.118              deb
aopalliance-repackaged                              2.6.1              java-archive
apt                                                 1.8.2.2            deb
argparse4j                                          0.7.0              java-archive
audience-annotations                                0.5.0              java-archive
base-files                                          10.3+deb10u8       deb
base-passwd                                         3.5.46             deb
bash                                                5.0-4              deb
bsdutils                                            1:2.33.1-0.1       deb
ca-certificates                                     20200601~deb10u2   deb
com.fasterxml.jackson.module.jackson.module.scala                      java-archive
commons-cli                                         1.4                java-archive
commons-lang3                                       3.8.1              java-archive
...
Grype
GitHub – anchore/grype: A vulnerability scanner for container images and filesystems
Grype is a vulnerability scanner for container images and filesystems. It is the natural next step: it inspects the image’s components and checks whether any of them has a known vulnerability.
Installing this component is, again, as easy as typing the following command:
curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin
After doing that, we can type grype to see the help menu with all the options at our disposal:
Grype help menu with all the options available
Grype works in the following way. The first thing it does is load the vulnerability DB, which the different packages will be checked against. After that, it follows the same pattern as syft: it generates the bill of materials and looks up each component in the vulnerability database. If there is a match, it reports the ID of the vulnerability, its severity, and, if it has been fixed in a later version, the version where the vulnerability was fixed.
Here you can see the output for the same bitnami/kafka image, with all the vulnerabilities detected.
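The invocation follows the same pattern as syft’s, with the image reference as the only argument:

grype bitnami/kafka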
[0018] ERROR matcher failed for pkg=Pkg(type=java-archive, name=, version=): matcher failed to fetch by CPE pkg='': product name is required
[0018] ERROR matcher failed for pkg=Pkg(type=java-archive, name=, version=): matcher failed to fetch by CPE pkg='': product name is required
[0018] ERROR matcher failed for pkg=Pkg(type=java-archive, name=, version=): matcher failed to fetch by CPE pkg='': product name is required
NAME            INSTALLED          FIXED-IN   VULNERABILITY    SEVERITY
apt             1.8.2.2                       CVE-2011-3374    Negligible
bash            5.0-4                         CVE-2019-18276   Negligible
commons-lang3   3.8.1                         CVE-2013-1907    Medium
commons-lang3   3.8.1                         CVE-2013-1908    Medium
coreutils       8.30-3                        CVE-2016-2781    Low
coreutils       8.30-3                        CVE-2017-18018   Negligible
curl            7.64.0-4+deb10u1              CVE-2020-8169    Medium
...
Summary
These simple CLI tools help a lot in the ongoing effort to keep our software current and free of known vulnerabilities, improving our development security. Also, since they are CLI apps that can also run in containers, it is effortless to include them in your CI/CD pipeline so that vulnerabilities can be checked in an automated way.
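For example, a pipeline step can gate the build on the scan result: grype exposes a --fail-on option that makes it exit with a non-zero code when vulnerabilities at or above a given severity are found (check grype --help on your version, as flags may change):

grype bitnami/kafka --fail-on high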
They also provide plugins for the most used CI/CD systems, such as Jenkins, CloudBees, CircleCI, GitHub Actions, Bitbucket, Azure DevOps, and so on.
Learn how you can include the Harbor registry in your DevSecOps toolset to increase the security and manageability of your container-based platform.
The transition to a more agile development process has increased the number of deployments exponentially. That situation has made it quite complex to keep pace: we need to make sure we are not just deploying code into production more often to deliver the capabilities the business requires, but also doing it securely and safely.
That need is leading toward the DevSecOps idea: including security as part of the DevOps culture and practices to ensure safety from the beginning of development and across all the standard steps from the developer machine to the production environment.
In addition to that, because of the container paradigm, we have a more polyglot approach, with different kinds of components running on our platform using different base images, packages, libraries, and so on. We need to make sure they are still secure to use, and we need tools to govern that in a natural way. That duty is where components like Harbor help us.
Harbor is a CNCF project, at the incubation stage at the moment of writing this article, that provides several capabilities for managing container images from a project perspective. It offers a project-oriented approach with its own Docker registry and also a ChartMuseum, if we’d like to use Helm charts as part of our project development. But it includes security features too, and those are the ones we are going to cover in this article:
Vulnerability scanning: it allows you to scan all the Docker images registered in the different repositories to check whether they have vulnerabilities. It also provides automation, so that every time we push a new image it is scanned automatically. It also enables policies to prevent pulling any image with vulnerabilities and lets us set the level of vulnerabilities (low, medium, high, or critical) that we’d like to tolerate. By default, it comes with Clair as the scanner, but you can plug in others as well.
Signed images: the Harbor registry can deploy Notary as one of its components to sign images during the push process, making sure no modifications are made to an image afterward.
Tag immutability and retention rules: the Harbor registry also provides the option to define tag immutability and retention rules, preventing any attempt to replace images with others using the same tag.
The Harbor registry is based on Docker, so you can run it locally using docker and docker-compose following the procedure available on its official web page. It also supports being installed on top of your Kubernetes platform using the available Helm chart and operator.
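As a rough sketch of the Kubernetes route (“my-harbor” is a placeholder release name; the chart exposes many values worth reviewing before a real install):

helm repo add harbor https://helm.goharbor.io
helm repo update
helm install my-harbor harbor/harbor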
Once the tool is installed, we have access to the web UI portal, and we are able to create a project that contains repositories.
Project List inside the Harbor Portal UI
As part of the project configuration, we can define the security policies that we’d like to apply to each project. That means that different projects can have different security profiles.
Security settings inside a Project in Harbor Portal UI
And once we push a new image to a repository that belongs to that project, we’re going to see the following details:
In this case, I’ve pushed a TIBCO BusinessWorks Container Edition application that doesn’t contain any vulnerabilities, and the view shows exactly that, along with where this was checked.
Also, in the details view, we can check additional information, such as whether the image has been signed, or run the check again.
Image details inside Harbor Portal UI
Summary
So, these are just a few of the features Harbor provides from a security perspective. Harbor is much more than that, so we will probably cover more of its features in further articles. I hope that, based on what you read today, you’d like to give it a chance and start introducing it into your DevSecOps toolset.
OAuth 2.0 is a protocol that allows users to authorize third parties to access their information without sharing their credentials. It usually relies on an additional system acting as an identity provider, which the user authenticates against; once authenticated, you are given a secure piece of information carrying the user’s privileges, and you can use that piece to authenticate your requests for some period.
This article is part of my comprehensive TIBCO Integration Platform Guide where you can find more patterns and best practices for TIBCO integration platforms.
OAuth v2.0 Authentication Flow Sample
OAuth 2.0 also defines a number of grant flows to adapt to different authentication needs. These authorization grants are the following ones:
Client Credentials
Authorization Code
Resource Owner Password Credentials
Implicit
The decision to choose one flow or another depends on the needs of the invocation and, of course, it is ultimately each customer’s own decision, but as a general guideline, the approach shown in the Auth0 documentation is the usual recommendation:
Decision Graph to choose the Authorization Grant to use
These grants rely on JSON Web Tokens to transmit information between the different parties.
JWT
JSON Web Token is an industry standard for token generation defined in RFC 7519. It defines a secure way to transmit information between parties as a JSON object.
It is composed of three components (header, payload, and signature) and can be signed with a symmetric algorithm (HMAC) or with a public/private key pair using RSA or ECDSA.
JWT Composition Sample
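In textual form, a token is simply those three segments, Base64URL-encoded and joined by dots; for example (payload and signature omitted here, as they vary per token):

eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.<payload>.<signature>

The first segment decodes to {"alg":"RS256","typ":"JWT"}, declaring the signing algorithm used.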
Use Case Scenario
So, OAuth 2.0 and JWT are the usual way to authenticate requests in the microservices world, so it is important to have this standard supported in your framework, and it is perfectly covered in Flogo Enterprise, as we’re going to see in this test.
AWS Cognito Setup
We’re going to use AWS Cognito as the authorization server, as it is quite easy to set up and is one of the main actors in this kind of authentication. In Amazon’s own words:
Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. Amazon Cognito scales to millions of users and supports sign-in with social identity providers, such as Facebook, Google, and Amazon, and enterprise identity providers via SAML 2.0.
In our case, we’re going to do a pretty straightforward setup; the steps are shown below:
Create a User Pool named “Flogo”
Create an App Client with a generated secret named “TestApplication”
Create a Resource Server named “Test” with identifier “http://flogo.test1” and with a scope named “echo”
Resource Server Configuration
Set the App Client settings as shown in the image below, with the following details:
Cognito User Pool selected as an enabled identity provider
Client credentials used as Allowed OAuth Flows
http://flogo.test1/echo selected as an enabled scope
App Client Setting Configuration
And that’s all the configuration needed at the Cognito level, so now let’s go back to the Flogo Enterprise Web UI.
Flogo Set-up
Now, we are going to create a REST service. We’ll skip the steps for creating a REST service with Flogo, but you can take a look at the detailed steps in the post below:
Building your First Flogo Application
We’re going to create an echo service hosted at localhost:9999/hello/ that receives a path parameter after hello, which is the name we’d like to greet, and we’re going to establish the following restrictions:
We’re going to check the presence of a JWT Token
We’re going to validate the JWT Token
We’re going to check that the JWT includes the http://flogo.test1/echo scope
If everything is OK, we’re going to execute the service; if some prerequisite is not met, we’re going to return a 401 Unauthorized error.
We’re going to create two flows:
A main flow that holds the REST service
A subflow that performs all the JWT validations
The main flow is quite straightforward: it receives the request, executes the subflow, and, depending on its output, executes the service or not:
So, all the important logic is inside the other flow, which does all the JWT validation; its structure is shown below:
We’re going to use the JWT activity hosted in GitHub available at the link shown below:
flogo-components/activity/jwt at master · ayh20/flogo-components
NOTE: If you don’t remember how to install a Flogo Enterprise extension take a look at the link below:
Installing Extensions in Flogo Enterprise
After that, the configuration is quite easy, as the activity lets you choose the action you want to perform on the token. In our case it is “Verify”, and we provide the token, the algorithm (in our case “RS256”), and the public key we’re going to use to validate the token’s signature:
Test
Now, we’re going to launch the Flogo Enterprise application from the generated binary:
And now, if we call the endpoint without providing any token, we get the expected 401 response:
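With curl, that check looks like this (“world” is just a sample value for the path parameter):

curl -i http://localhost:9999/hello/world

and the response comes back with a 401 Unauthorized status, as described above.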
So, to get the access token, the first thing we need to do is send a request to the AWS Cognito endpoint (https://flogotest.auth.eu-west-2.amazoncognito.com/oauth2/token) using our app credentials:
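A sketch of that request using the client_credentials grant; the client ID and secret are the ones generated for the “TestApplication” app client, shown here as placeholders:

curl -X POST "https://flogotest.auth.eu-west-2.amazoncognito.com/oauth2/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -u "<client_id>:<client_secret>" \
  -d "grant_type=client_credentials&scope=http://flogo.test1/echo"

The JSON response carries the JWT in its access_token field.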
This token carries all the info about the client and its permissions; you can inspect it on the jwt.io webpage:
And to finally test it, we only need to add the token to the first request we tried, as we can see in the picture below:
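In curl terms, with ACCESS_TOKEN holding the access_token value returned by Cognito (here I assume the token travels in the standard Authorization: Bearer header):

curl -i http://localhost:9999/hello/world \
  -H "Authorization: Bearer $ACCESS_TOKEN"

This time, all the JWT checks pass, and the echo service responds normally.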
Resources
GitHub – alexandrev/flogo-jwt-sample-medium: JWT Support in Flogo Enterprise Post Resource