The Kubernetes Gateway API is no longer a future concept—it’s the present standard for traffic management. With the deprecation of Ingress NGINX’s stable APIs signaling a definitive shift, platform teams and architects are now faced with a critical decision: which Gateway API provider to adopt. The official implementations page lists numerous options, but the real-world picture is one of fragmented support, varying stability, and significant gaps that can derail multi-cluster strategies.
In this evaluation, we move beyond marketing checklists to analyze the practical state of Gateway API support across major cloud providers, ingress controllers, and service meshes. We’ll examine which versions are truly production-ready, where the interoperability pitfalls lie, and what you must account for before standardizing across your infrastructure.
The Gateway API Maturity Spectrum: From Experimental to Standard
Not all Gateway API resources are created equal. The API’s layered maturity model means provider support is inherently uneven: resources graduate from the Experimental release channel to the Standard channel, and individual features are classified as Core, Extended, or Implementation-specific. An implementation might fully support the stable Gateway and HTTPRoute resources while offering only partial or experimental backing for GRPCRoute or TCPRoute.
This creates a fundamental challenge for architects: designing for the lowest common denominator or accepting provider-specific constraints. The decision hinges on accurately mapping your traffic management requirements (HTTP, TLS termination, gRPC, TCP/UDP load balancing) against what each provider actually delivers in a stable form.
Core API Support: The Foundation
Most providers now support the v1 (GA) versions of the foundational resources:
- GatewayClass & Gateway: Nearly universal support for v1. These are the control plane resources for provisioning and configuring load balancers.
- HTTPRoute: Universal support for v1. This is the workhorse for HTTP/HTTPS traffic routing and is considered the most stable.
However, support for other route types reveals the fragmentation:
- GRPCRoute: Often in beta or experimental stages. Critical for modern microservices architectures but not yet universally reliable.
- TCPRoute & UDPRoute: Patchy support. Some providers implement them as beta, others ignore them entirely, forcing fallbacks to provider-specific annotations or custom resources.
- TLSRoute: Frequently tied to specific certificate management integrations (e.g., cert-manager).
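As a concrete reference point, the stable core of the API looks like this: a minimal Gateway plus HTTPRoute pair using only v1 fields. The GatewayClass name, namespaces, hostname, and backend Service below are illustrative placeholders, not tied to any one provider.

```yaml
# Minimal v1 Gateway + HTTPRoute. "example-gc", namespaces, hostname,
# and the backend Service are placeholders for your environment.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: web-gateway
  namespace: infra
spec:
  gatewayClassName: example-gc   # supplied by your chosen implementation
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: Selector
          selector:
            matchLabels:
              gateway-access: "true"
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
  namespace: apps                # namespace must carry the gateway-access label
spec:
  parentRefs:
    - name: web-gateway
      namespace: infra
  hostnames:
    - "app.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: app-svc
          port: 8080
```

Everything here sits in the Standard channel at v1, which is the safest surface to build portable configuration against.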
Major Provider Deep Dive: Implementation Realities
AWS Elastic Kubernetes Service (EKS)
AWS offers an official Gateway API controller for EKS. Its support is pragmatic but currently limited:
- Supported Resources: GatewayClass, Gateway, HTTPRoute, and GRPCRoute (all v1beta1 as of early 2024). Note the use of v1beta1 for GRPCRoute, indicating it’s not yet at GA stability.
- Underlying Infrastructure: Maps directly to AWS Application Load Balancer (ALB) and Network Load Balancer (NLB). This is a strength (managed AWS services) and a constraint (you inherit ALB/NLB feature limits).
- Critical Gap: No support for TCPRoute or UDPRoute. If your workload requires raw TCP/UDP load balancing, you must use the legacy Kubernetes Service type LoadBalancer or a different ingress controller alongside the Gateway API controller, creating a disjointed management model.
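For the TCP/UDP gap, the fallback in practice is a classic Service of type LoadBalancer managed outside the Gateway API. A sketch, assuming the AWS Load Balancer Controller and its standard NLB annotations (the workload names are illustrative):

```yaml
# Fallback for raw TCP on EKS: a Service of type LoadBalancer provisions
# an NLB, but this load balancer lives outside your Gateway API model.
apiVersion: v1
kind: Service
metadata:
  name: redis-tcp
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
spec:
  type: LoadBalancer
  selector:
    app: redis          # illustrative workload selector
  ports:
    - protocol: TCP
      port: 6379
      targetPort: 6379
```

The cost is two parallel traffic-management models: HTTP/gRPC routes declared as Gateway API resources, and L4 endpoints declared as annotated Services.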
Google Kubernetes Engine (GKE) & Azure Kubernetes Service (AKS)
Both Google and Azure have integrated Gateway API support directly into their managed Kubernetes offerings, often with a focus on their global load-balancing infrastructures.
- GKE: Offers the GKE Gateway controller. It supports v1 resources and can provision Google Cloud Global External Load Balancers. Its integration with Google’s certificate management and CDN is a key advantage. However, advanced routing features may require GCP-specific backend configs.
- AKS: Gateway API support is delivered through Application Gateway for Containers (the successor to the Application Gateway Ingress Controller, AGIC), backed by Azure’s Application Gateway infrastructure. Support for newer route types like GRPCRoute has historically lagged behind other providers.
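On GKE, binding a Gateway to Google’s infrastructure is a matter of choosing one of the managed GatewayClasses; a minimal sketch (the resource name is illustrative):

```yaml
# A GKE Gateway using one of Google's pre-installed GatewayClasses.
# "gke-l7-global-external-managed" provisions a global external
# Application Load Balancer.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: external-http
spec:
  gatewayClassName: gke-l7-global-external-managed
  listeners:
    - name: http
      protocol: HTTP
      port: 80
```

The convenience is real, but the GatewayClass name itself encodes a Google-specific product choice, which is exactly the portability trade-off described above.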
The pattern here is clear: cloud providers implement the Gateway API as a facade over their existing, proprietary load-balancing products. This ensures stability and performance but can limit portability and advanced cross-provider features.
NGINX & Kong Ingress Controller
These third-party, cluster-based controllers offer a different value proposition: consistency across any Kubernetes distribution, including on-premises.
- NGINX: With its stable Ingress APIs deprecated in favor of the Gateway API, NGINX’s Gateway API implementation (NGINX Gateway Fabric) is now the primary path forward. It generally has excellent support for the full range of Standard and Experimental resources, as it’s not constrained by a cloud vendor’s underlying service. This makes it a strong choice for hybrid or multi-cloud deployments where feature parity is crucial.
- Kong Ingress Controller: Kong has been an early and comprehensive supporter of the Gateway API, often implementing features quickly. It leverages Kong Gateway’s extensive plugin ecosystem, which can be a major draw but also introduces vendor lock-in.
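Kong’s plugin ecosystem attaches to Gateway API resources through Kong-specific CRDs and annotations, which illustrates both the draw and the lock-in. A sketch with illustrative names:

```yaml
# A Kong rate-limiting plugin bound to an HTTPRoute. The KongPlugin CRD
# and the konghq.com/plugins annotation are Kong-specific extensions.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-5rpm
plugin: rate-limiting
config:
  minute: 5
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route
  annotations:
    konghq.com/plugins: rate-limit-5rpm   # ties this route to Kong's ecosystem
spec:
  parentRefs:
    - name: kong-gateway
  rules:
    - backendRefs:
        - name: api-svc
          port: 80
```

The HTTPRoute itself stays portable; the annotation and the plugin CRD would have to be rebuilt on any other controller.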
Critical Gaps for Enterprise Architects
Beyond checking resource support boxes, several deeper gaps can impact production deployments, especially in complex environments.
1. Multi-Cluster & Hybrid Environment Support
The Gateway API specification includes concepts like ReferenceGrant for cross-namespace and future cross-cluster routing. In practice, very few providers have robust, production-ready multi-cluster stories. Most implementations assume a single cluster. If your architecture spans multiple clusters (for isolation, geography, or failure domains), you will likely need to:
- Manage separate Gateway resources per cluster.
- Use an external global load balancer (such as a cloud DNS/GSLB) to distribute traffic across cluster-specific gateways.

This negates some of the API’s promise of a unified, abstracted configuration.
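What does exist today is ReferenceGrant, which authorizes cross-namespace references within a single cluster; nothing in the Standard channel yet spans clusters. A minimal sketch with illustrative namespaces:

```yaml
# ReferenceGrant lives in the target namespace and explicitly allows
# routes from another namespace to reference Services here. Note it is
# still v1beta1, and it only works within one cluster.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-routes-to-backends
  namespace: backends            # namespace that owns the target Services
spec:
  from:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      namespace: apps            # routes in "apps" may reference Services here
  to:
    - group: ""                  # core API group (Service)
      kind: Service
```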
2. Policy Attachment and Extension Consistency
Gateway API is designed to be extended through policy attachment (e.g., for rate limiting, WAF rules, authentication). There is no standard for how these policies are implemented. One provider might use a custom RateLimitPolicy CRD, while another might rely on annotations or a separate policy engine. This creates massive configuration drift and vendor lock-in, breaking the portability goal.
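To make the drift concrete, here is the general shape of a policy-attachment resource. The targetRef pattern is shared across implementations, but the API group, kind, and spec fields below are entirely hypothetical and vary by vendor:

```yaml
# HYPOTHETICAL policy CRD illustrating the policy-attachment pattern.
# "policy.example.vendor.io" and every field under "limits" are made up;
# each provider defines its own equivalents.
apiVersion: policy.example.vendor.io/v1alpha1
kind: RateLimitPolicy
metadata:
  name: login-rate-limit
spec:
  targetRef:                     # the attachment point the pattern defines
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: login-route
  limits:
    requestsPerSecond: 10        # vendor-specific; not portable
```

Only the targetRef stanza would survive a provider migration; everything else must be re-expressed in the new vendor’s policy dialect.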
3. Observability and Debugging Interfaces
While the API defines status fields, the richness of operational data—detailed error logs, granular metrics tied to API resources, distributed tracing integration—varies wildly. Some providers expose deep integration with their monitoring stack; others offer minimal visibility. You must verify that the provider’s observability model meets your SRE team’s needs.
Evaluation Framework: Questions for Your Team
Before selecting a provider, work through this technical checklist:
- Route Requirements: Do we need stable support for HTTP only, or also gRPC, TCP, UDP? Is beta support acceptable for non-HTTP routes?
- Infrastructure Model: Do we want a cloud-managed load balancer (simpler, less control) or a cluster-based controller (more portable, more operational overhead)?
- Multi-Cluster Future: Is our architecture single-cluster today but likely to expand? Does the provider have a credible roadmap for multi-cluster Gateway API?
- Policy Needs: What advanced policies (auth, WAF, rate limiting) are required? How does the provider implement them? Can we live with vendor-specific policy CRDs?
- Observe & Debug: What logging, metrics, and tracing are exposed for Gateway API resources? Do they integrate with our existing observability platform?
- Upgrade Path: What is the provider’s track record for supporting new Gateway API releases? How painful are version upgrades?
Strategic Recommendations
Based on the current landscape, here are pragmatic paths forward:
- For Single-Cloud Deployments: Start with your cloud provider’s native controller (AWS, GKE, AKS). It’s the path of least resistance and best integration with other cloud services (IAM, certificates, monitoring). Just be acutely aware of its specific limitations regarding unsupported route types.
- For Hybrid/Multi-Cloud or On-Premises: Standardize on a portable, cluster-based controller such as NGINX’s Gateway API implementation or Kong. The consistency across environments will save significant operational complexity, even if it means forgoing some cloud-native integrations.
- For Greenfield Projects: Design your applications and configurations against the stable v1 resources (Gateway, HTTPRoute) only. Treat any use of beta/experimental resources as a known risk that may require refactoring later.
- Always Have an Exit Plan: Isolate Gateway API configuration YAMLs from provider-specific policies and annotations. This modularity will make migration less painful when the next generation of providers emerges or when you need to switch.
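One way to enforce that isolation is a base/overlay split (for example with Kustomize): the portable route definition stays free of vendor annotations, which live only in an environment-specific patch. A sketch with illustrative names:

```yaml
# base/httproute.yaml -- pure Gateway API, portable across providers
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
    - name: shared-gateway
  rules:
    - backendRefs:
        - name: app-svc
          port: 8080
---
# overlay/patch.yaml -- provider-specific additions live only here,
# merged over the base at deploy time
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
  annotations:
    konghq.com/plugins: rate-limit-5rpm   # example vendor annotation
```

Switching providers then means replacing one overlay rather than untangling vendor details from every route.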
The Gateway API’s evolution is a net positive for the Kubernetes ecosystem, offering a far more expressive model than the original Ingress. However, in 2026, the provider landscape is still maturing. Support is broad but not deep, and critical gaps in multi-cluster management and policy portability remain. The successful architect will choose a provider not based on a feature checklist, but based on how well its specific constraints and capabilities align with their organization’s immediate traffic patterns and long-term platform strategy. The era of a universal, write-once-run-anywhere Gateway API configuration is not yet here—but with careful, informed provider selection, you can build a robust foundation for it.
