Resolving Kubernetes Ingress Issues: Limitations and Gateway Insights

Introduction

Ingresses have been, since the early versions of Kubernetes, the most common way to expose applications to the outside world. Although their initial design was simple and elegant, the success of Kubernetes and the growing complexity of use cases have turned Ingress into a problematic piece: limited, inconsistent across vendors, and difficult to govern in enterprise environments.

In this article, we analyze why Ingresses have become a constant source of friction, how different Ingress Controllers have influenced this situation, and why more and more organizations are considering alternatives like Gateway API.

What Ingresses are and why they were designed this way

The Ingress ecosystem revolves around two main resources:

🏷️ IngressClass

Defines which controller will manage the associated Ingresses. Its scope is cluster-wide, so it is usually managed by the platform team.

🌐 Ingress

It is the resource that developers use to expose a service. It allows defining routes, domains, TLS certificates, and little more.

Its specification is minimal by design, which allowed for rapid adoption, but also laid the foundation for current problems.
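To make the split concrete, here is a minimal sketch of the two resources working together (names and the controller value are illustrative):

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: corporate-ingress
spec:
  # The controller value depends on the Ingress Controller you install
  controller: k8s.io/ingress-nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: corporate-ingress
  tls:
  - hosts:
    - app.example.com
    secretName: my-app-tls
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80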

The problem: a standard too simple for complex needs

As Kubernetes became an enterprise standard, users wanted to replicate advanced configurations of traditional proxies: rewrites, timeouts, custom headers, CORS, etc.
But Ingress did not provide native support for all this.

Vendors reacted… and chaos was born.

Annotations vs CRDs: two incompatible paths

Different Ingress Controllers have taken very different paths to add advanced capabilities:

📝 Annotations (NGINX, HAProxy…)

Advantages:

  • Flexible and easy to use
  • Directly in the Ingress resource

Disadvantages:

  • Hundreds of proprietary annotations
  • Fragmented documentation
  • Non-portable configurations between vendors
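To illustrate the annotation approach, this is roughly what it looks like with ingress-nginx (a sketch; service and class names are illustrative, and the annotations shown are specific to ingress-nginx and silently ignored by any other controller):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    # ingress-nginx only: rewrite the matched path to / before proxying
    nginx.ingress.kubernetes.io/rewrite-target: /
    # ingress-nginx only: per-Ingress read timeout, in seconds
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80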

📦 Custom CRDs (Traefik, Kong…)

Advantages:

  • More structured and powerful
  • Better validation and control

Disadvantages:

  • Adds new non-standard objects
  • Requires installation and management
  • Less interoperability
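To illustrate the CRD approach, this is roughly how Traefik models the same kind of behavior with its own objects (a sketch; the apiVersion varies between Traefik releases, and all names are illustrative):

apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: strip-api-prefix
spec:
  # Remove the /api prefix before forwarding to the backend
  stripPrefix:
    prefixes:
      - /api
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: api-route
spec:
  entryPoints:
    - web
  routes:
  - match: Host(`api.example.com`) && PathPrefix(`/api`)
    kind: Rule
    middlewares:
    - name: strip-api-prefix
    services:
    - name: api-service
      port: 80

The capability is comparable, but the objects themselves are vendor-specific, which is exactly the portability problem described above.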

The result?
Infrastructures deeply coupled to a single vendor, which complicates migrations, audits, and automation.

The complexity for development teams

The design of Ingress implies two very different responsibilities:

  • Platform: defines IngressClass
  • Application: defines Ingress

But in reality, developers end up making decisions that should be the responsibility of the platform team:

  • Certificates
  • Security policies
  • Rewrite rules
  • CORS
  • Timeouts
  • Corporate naming practices

This causes:

  • Inconsistent configurations
  • Bottlenecks in reviews
  • Constant dependency between teams
  • Lack of effective standardization

In large companies, where security and governance are critical, this is especially problematic.

NGINX Ingress: the decommissioning that reignited the debate

The recent decommissioning of the NGINX Ingress Controller has highlighted the fragility of the ecosystem:

  • Thousands of clusters depend on it
  • Multiple projects use its annotations
  • Migrating involves rewriting entire configurations

This has reignited the conversation about the need for a real standard… and that is where Gateway API comes in.

Gateway API: a promising alternative (but not perfect)

Gateway API was born to solve many of the limitations of Ingress:

  • Clear separation of responsibilities (infrastructure vs application)
  • Standardized extensibility
  • More types of routes (HTTPRoute, TCPRoute…)
  • Greater expressiveness without relying on proprietary annotations
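As a sketch of how this looks in practice (the gatewayClassName depends on the implementation you install, and all names are illustrative), the platform team owns the Gateway while the application team owns the HTTPRoute:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra
spec:
  gatewayClassName: example-gateway-class
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    tls:
      mode: Terminate
      certificateRefs:
      - name: wildcard-cert
    allowedRoutes:
      namespaces:
        from: All
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app
  namespace: team-a
spec:
  parentRefs:
  - name: shared-gateway
    namespace: infra
  hostnames:
  - app.example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: my-app
      port: 80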

But it also brings challenges:

  • Requires gradual adoption
  • Not all vendors implement the same feature set
  • Migration is not trivial

Even so, it is shaping up to be the future of traffic management in Kubernetes.

Conclusion

Ingresses have been fundamental to the success of Kubernetes, but their own simplicity has led them to become a bottleneck. The lack of interoperability, differences between vendors, and complex governance in enterprise environments make it clear that it is time to adopt more mature models.

Gateway API is not perfect, but it moves in the right direction.
Organizations that want future stability should start planning their transition.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Kubernetes Ingress on OpenShift: Routes Explained and When to Use Them

Introduction
OpenShift, Red Hat’s Kubernetes platform, has its own way of exposing services to external clients. In vanilla Kubernetes, you would typically use an Ingress resource along with an ingress controller to route external traffic to services. OpenShift, however, introduced the concept of a Route and an integrated Router (built on HAProxy) early on, before Kubernetes Ingress even existed. Today, OpenShift supports both Routes and standard Ingress objects, which can sometimes lead to confusion about when to use each and how they relate.

This article explores how OpenShift handles Kubernetes Ingress resources, how they translate to Routes, the limitations of this approach, and guidance on when to use Ingress versus Routes.

OpenShift Routes and the Router: A Quick Overview


OpenShift Routes are OpenShift-specific resources designed to expose services externally. They are served by the OpenShift Router, which is an HAProxy-based proxy running inside the cluster. Routes support advanced features such as:

  • Weighted backends for traffic splitting
  • Sticky sessions (session affinity)
  • Multiple TLS termination modes (edge, passthrough, re-encrypt)
  • Wildcard subdomains
  • Custom certificates and SNI
  • Path-based routing

Because Routes are OpenShift-native, the Router understands these features natively and can be configured accordingly. This tight integration enables powerful and flexible routing capabilities tailored to OpenShift environments.
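For example, a Route can split traffic between two versions of a service directly in its spec, something a plain Ingress cannot express (a sketch; service names and weights are illustrative):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: canary-route
spec:
  host: app.example.com
  to:
    kind: Service
    name: app-v1
    weight: 90
  alternateBackends:
  - kind: Service
    name: app-v2
    weight: 10
  tls:
    termination: edge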

Using Kubernetes Ingress in OpenShift (Default Behavior)


Starting with OpenShift Container Platform (OCP) 3.10, Kubernetes Ingress resources are supported. When you create an Ingress, OpenShift automatically translates it into an equivalent Route behind the scenes. This means you can use standard Kubernetes Ingress manifests, and OpenShift will handle exposing your services externally by creating Routes accordingly.

Example: Kubernetes Ingress and Resulting Route

Here is a simple Ingress manifest:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test-service
            port:
              number: 80

OpenShift will create a Route similar to:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example-route
spec:
  host: www.example.com
  path: /testpath
  to:
    kind: Service
    name: test-service
    weight: 100
  port:
    targetPort: 80

This automatic translation simplifies migration and supports basic use cases without requiring Route-specific manifests.

Tuning Behavior with Annotations (Ingress ➝ Route)

When you use Ingress on OpenShift, only OpenShift-aware annotations are honored during the Ingress ➝ Route translation. Controller-specific annotations for other ingress controllers (e.g., nginx.ingress.kubernetes.io/*) are ignored by the OpenShift Router. The following annotations are commonly used and supported by the OpenShift router to tweak the generated Route:

  • TLS termination: route.openshift.io/termination (edge · reencrypt · passthrough) sets Route spec.tls.termination to the chosen mode.
  • HTTP→HTTPS redirect (edge): route.openshift.io/insecureEdgeTerminationPolicy (Redirect · Allow · None) controls spec.tls.insecureEdgeTerminationPolicy (commonly Redirect).
  • Backend load balancing: haproxy.router.openshift.io/balance (roundrobin · leastconn · source) sets the HAProxy balancing algorithm for the Route.
  • Per-route timeout: haproxy.router.openshift.io/timeout (a duration like 60s or 5m) configures the HAProxy timeout for requests on that Route.
  • HSTS header: haproxy.router.openshift.io/hsts_header (e.g. max-age=31536000;includeSubDomains;preload) injects the HSTS header on responses (edge/re-encrypt).

Note: Advanced features like weighted backends/canary or wildcard hosts are not expressible via standard Ingress. Use a Route directly for those.

Example: Ingress with OpenShift router annotations

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress-https
  annotations:
    route.openshift.io/termination: edge
    route.openshift.io/insecureEdgeTerminationPolicy: Redirect
    haproxy.router.openshift.io/balance: leastconn
    haproxy.router.openshift.io/timeout: 60s
    haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-service
            port:
              number: 80

This Ingress will be realized as a Route with edge TLS and an automatic HTTP→HTTPS redirect, using least connections balancing and a 60s route timeout. The HSTS header will be added by the router on HTTPS responses.

Limitations of Using Ingress to Generate Routes
While convenient, using Ingress to generate Routes has limitations:

  • Missing advanced features: Weighted backends (traffic splitting) and sticky sessions rely on Route-specific fields and annotations, and cannot be expressed through the standard Ingress spec.
  • TLS passthrough and re-encrypt modes: These cannot be declared in the standard Ingress spec itself; they require the OpenShift-specific route.openshift.io/termination annotation or a Route created directly.
  • Ingress without host: An Ingress without a hostname will not create a Route; Routes require a host.
  • Wildcard hosts: Wildcard hosts (e.g., *.example.com) are only supported via Routes, not Ingress.
  • Annotation compatibility: Some OpenShift Route annotations do not have equivalents in Ingress, leading to configuration gaps.
  • Protocol support: Ingress supports only HTTP/HTTPS protocols, while Routes can handle non-HTTP protocols with passthrough TLS.
  • Config drift risk: Because Routes created from Ingress are managed by OpenShift, manual edits to the generated Route may be overwritten or cause inconsistencies.

These limitations mean that for advanced routing configurations or OpenShift-specific features, using Routes directly is preferable.

When to Use Ingress vs. When to Use Routes
Choosing between Ingress and Routes depends on your requirements:

  • Use Ingress if:
      • You want portability across Kubernetes platforms.
      • You have existing Ingress manifests and want to minimize changes.
      • Your application uses only basic HTTP or HTTPS routing.
      • You prefer platform-neutral manifests for CI/CD pipelines.
  • Use Routes if:
      • You need advanced routing features like weighted backends, sticky sessions, or multiple TLS termination modes.
      • Your deployment is OpenShift-specific and can leverage OpenShift-native features.
      • You require stability and full support for OpenShift routing capabilities.
      • You need to expose non-HTTP protocols or use TLS passthrough/re-encrypt modes.
      • You want to use wildcard hosts or custom annotations not supported by Ingress.

In many cases, teams use a combination: Ingress for portability and Routes for advanced or OpenShift-specific needs.

Conclusion


On OpenShift, Kubernetes Ingress resources are automatically converted into Routes, enabling basic external service exposure with minimal effort. This allows users to leverage existing Kubernetes manifests and maintain portability. However, for advanced routing scenarios and to fully utilize OpenShift’s powerful Router features, using Routes directly is recommended.

Both Ingress and Routes coexist seamlessly on OpenShift, allowing you to choose the right tool for your application’s requirements.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Istio Proxy DNS Explained: How DNS Capture Improves Service Mesh Traffic Control

Istio is a popular open-source service mesh that provides a range of powerful features for managing and securing microservices-based architectures. We have talked a lot about its capabilities and components, but today we will focus on how Istio can help with the DNS resolution mechanism.

As you already know, in a typical Istio deployment each service is accompanied by a sidecar proxy, Envoy, which intercepts and manages the traffic between services. The Proxy DNS capability of Istio leverages this proxy to handle DNS resolution requests more intelligently and efficiently.

Traditionally, when a service within a microservices architecture needs to communicate with another service, it relies on DNS resolution to discover the IP address of the target service. However, traditional DNS resolution can be challenging to manage in complex and dynamic environments, such as those found in Kubernetes clusters. This is where the Proxy DNS capability of Istio comes into play.

Istio Proxy DNS Capabilities

With Proxy DNS, Istio intercepts and controls DNS resolution requests from services and performs the resolution on their behalf. Instead of relying on external DNS servers, the sidecar proxies handle the DNS resolution within the service mesh. This enables Istio to provide several valuable benefits:

  • Service discovery and load balancing: Istio’s Proxy DNS allows for more advanced service discovery mechanisms. It can dynamically discover services and their corresponding IP addresses within the mesh and perform load balancing across instances of a particular service. This eliminates the need for individual services to manage DNS resolution and load balancing.
  • Security and observability: Istio gains visibility into the traffic between services by handling DNS resolution within the mesh. It can apply security policies, such as access control and traffic encryption, at the DNS level. Additionally, Istio can collect DNS-related telemetry data for monitoring and observability, providing insights into service-to-service communication patterns.
  • Traffic management and control: Proxy DNS enables Istio to implement advanced traffic management features, such as routing rules and fault injection, at the DNS resolution level. This allows for sophisticated traffic control mechanisms within the service mesh, enabling A/B testing, canary deployments, circuit breaking, and other traffic management strategies.

Istio Proxy DNS Use-Cases

There are situations where you cannot, or don't want to, rely on normal DNS resolution. Why is that? To start with, DNS is a great protocol but it lacks some capabilities, such as location awareness: if the same DNS name is assigned to three IPs, a resolver will hand them out in round-robin fashion, with no notion of which one is closest.

Or perhaps you have several IPs and want to block some of them for a specific service. These are the kinds of things you can do with Istio Proxy DNS.
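A typical building block for these scenarios is a ServiceEntry describing an external service, so the sidecar proxy can resolve it (and, with auto-allocation enabled, assign it a virtual IP) inside the mesh. A sketch, with an illustrative host name:

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-partner-api
spec:
  hosts:
  - api.external-partner.com
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: tls
    protocol: TLS
  resolution: DNS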

Istio Proxy DNS Enablement

You need to know that Istio Proxy DNS is not enabled by default, so you must enable it explicitly if you want to use it. The good news is that you can enable it at different levels, from the full mesh down to a single pod, so you can choose what works best in each case.

For example, if we want to enable it at the pod level, we add the following annotation so the configuration is injected into the Istio proxy:

proxy.istio.io/config: |
  proxyMetadata:
    # Enable basic DNS proxying
    ISTIO_META_DNS_CAPTURE: "true"
    # Enable automatic address allocation, optional
    ISTIO_META_DNS_AUTO_ALLOCATE: "true"
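For reference, this is how that annotation might sit in a workload's pod template (a sketch; the Deployment and image names are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        proxy.istio.io/config: |
          proxyMetadata:
            ISTIO_META_DNS_CAPTURE: "true"
            ISTIO_META_DNS_AUTO_ALLOCATE: "true"
    spec:
      containers:
      - name: my-app
        image: my-app:latest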

The same configuration can also be applied at mesh level as part of the operator installation, as described in the official Istio documentation.
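For mesh-wide enablement, the same proxyMetadata entries go under meshConfig.defaultConfig in the operator resource, following the pattern shown in the Istio DNS proxying documentation (a sketch):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: mesh-dns-proxying
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        # Enable basic DNS proxying for every sidecar in the mesh
        ISTIO_META_DNS_CAPTURE: "true"
        # Enable automatic address allocation, optional
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"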

Conclusion

In summary, the Proxy DNS capability of Istio enhances the DNS resolution mechanism within the service mesh environment, providing advanced service discovery, load balancing, security, observability, and traffic management features. Istio centralizes and controls DNS resolution by leveraging the sidecar proxies, simplifying the management and optimization of service-to-service communication in complex microservices architectures.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.

Enable Access Logs on OpenShift Default Routes (HAProxy Ingress Debugging)

Get some insight when a route is not working as expected, or your consumers are unable to reach the service.

We all know that OpenShift is an outstanding Kubernetes distribution and one of the most widely used, especially for private-cloud deployments. Building on the solid reputation of Red Hat Enterprise Linux, Red Hat has made OpenShift a solid product that is becoming almost a standard for many enterprises.

It provides many extensions on top of vanilla Kubernetes, bundling open-source industry standards such as Prometheus, Thanos, and Grafana for metrics monitoring, or the ELK stack for log aggregation, as well as its own extensions such as OpenShift Routes.

OpenShift Routes were the original solution, created before the Ingress concept existed in the Kubernetes standard. Today OpenShift also implements that pattern to stay compatible. Routing is backed by HAProxy, one of the best-known reverse proxies in the open-source community.

One of the tricky parts is knowing how to debug when one of your routes is not working as expected. Creating routes is so easy that anyone can do it in a few clicks, and if everything works as expected, that's awesome.

But if it doesn't, the problems start, because by default you don't get any logging about what's happening. That's what we are going to solve here.

First, let's talk a little more about how this is configured. Currently (as of OpenShift 4.8), the default router is implemented, as mentioned, with HAProxy, so if you are using another ingress technology such as Istio or NGINX, this article is not for you (but don't forget to leave a comment if a similar article would interest you, so I can add it to the backlog 🙂).

From an implementation perspective, the router is managed through the Operator Framework: the ingress is deployed as an Operator, available in the openshift-ingress-operator namespace.

[Image: ingress-operator pods in the OpenShift cluster]

So, as this is an operator, several Custom Resource Definitions (CRDs) are installed to work with it; the most interesting one for this article is the IngressController CRD.

[Image: IngressController instances in the OpenShift cluster]

By default, you will only see one instance, named default. It holds the configuration of the ingress that is deployed, so this is where we need to add extra configuration to also get access logs.

[Image: the default IngressController YAML]

The snippet we need to add is the one shown below, under the spec parameter that starts the definition of the IngressController itself:

  logging:
    access:
      destination:
        type: Container
      httpLogFormat: >-
        log_source="haproxy-default" log_type="http" c_ip="%ci" c_port="%cp"
        req_date="%tr" fe_name_transport="%ft" be_name="%b" server_name="%s"
        res_time="%TR" tot_wait_q="%Tw" Tc="%Tc" Tr="%Tr" Ta="%Ta"
        status_code="%ST" bytes_read="%B" bytes_uploaded="%U"
        captrd_req_cookie="%CC" captrd_res_cookie="%CS" term_state="%tsc"
        actconn="%ac" feconn="%fc" beconn="%bc" srv_conn="%sc" retries="%rc"
        srv_queue="%sq" backend_queue="%bq" captrd_req_headers="%hr"
        captrd_res_headers="%hs" http_request="%r"
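To make the placement explicit, the relevant part of the default IngressController would end up looking roughly like this (a trimmed sketch showing only the fields related to access logging):

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  logging:
    access:
      destination:
        type: Container
      # httpLogFormat is optional; omit it to use the router's default format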

This will cause an additional container, named logs, to be deployed in the router pods in the openshift-ingress namespace, following the sidecar pattern.

[Image: router pods in the openshift-ingress namespace]

This container will print the logs of the requests reaching the ingress component, so next time your consumer is not able to call your service, you will be able to see the incoming requests with all their metadata and know at least what is going wrong:

[Image: OpenShift route access logs]
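If you prefer the command line over the console, you can tail that sidecar directly (assuming the default IngressController, whose router deployment is typically named router-default):

oc -n openshift-ingress logs deployment/router-default -c logs -f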

As you can see, simple and easy! If you no longer need it, you can remove the configuration and save it; a new version will be rolled out and everything will go back to normal.

📚 Want to dive deeper into Kubernetes? This article is part of our comprehensive Kubernetes Architecture Patterns guide, where you’ll find all fundamental and advanced concepts explained step by step.