After NGINX Ingress Controller: Alternatives and Migration Guide

If you manage Kubernetes clusters in production, the last 18 months have been uncomfortable. Two of the most widely deployed NGINX-based Ingress Controllers have faced critical security vulnerabilities, deprecation announcements, and shifting maintenance responsibilities — all while the Kubernetes project accelerates its push toward a new traffic management standard. This is not a drill. Teams running ingress-nginx or the F5/NGINX Ingress Controller need a clear picture of what changed, what it means for their clusters, and what their realistic options are going forward.

First, Clear the Confusion: There Are Two NGINX Ingress Controllers

One of the most persistent sources of confusion in the Kubernetes networking space is that there are two completely different projects both called “NGINX Ingress Controller,” maintained by different organizations, with different architectures and different licensing.

ingress-nginx (kubernetes/ingress-nginx)

This is the community-maintained controller under the Kubernetes project umbrella, hosted at github.com/kubernetes/ingress-nginx. It uses the open-source NGINX as its data plane, configured via Lua scripting and dynamically generated nginx.conf files. This is the controller most teams end up with when they follow the official Kubernetes documentation or install from the Helm chart referenced in the ingress guide. It is free, open-source, and until recently was considered the default choice.

NGINX Ingress Controller (nginxinc/kubernetes-ingress)

This is the commercial and open-source controller maintained by F5/NGINX, hosted at github.com/nginxinc/kubernetes-ingress. It also supports NGINX Plus (the commercial version with enhanced features like active health checks, JWT authentication, and advanced load balancing). The architecture is different — it uses native NGINX APIs rather than the Lua-heavy approach — and it targets enterprise customers looking for support contracts and advanced capabilities.

These two controllers are not interchangeable. Configuration annotations differ, Helm chart values differ, and behavior under edge cases differs substantially. Understanding which one your cluster runs is the necessary starting point for any decision about migration.

# Check which NGINX IC you are actually running
kubectl get pods -n ingress-nginx -o jsonpath='{.items[*].spec.containers[*].image}'

# Community controller image looks like:
# registry.k8s.io/ingress-nginx/controller:v1.x.x

# F5/NGINX controller image looks like:
# nginx/nginx-ingress:x.x.x  or  private-registry.nginx.com/nginx-ic/nginx-plus-ingress:x.x.x

What Actually Happened: A Timeline of Disruption

The ingress-nginx CVEs (2025)

In March 2025, security researchers disclosed a set of critical vulnerabilities in ingress-nginx under the collective name IngressNightmare (CVE-2025-1097, CVE-2025-1098, CVE-2025-1974, CVE-2025-24514). The most severe of these, rated CVSS 9.8, allowed unauthenticated remote code execution via the ingress-nginx admission webhook. An attacker with network access to the admission controller could craft a malicious Ingress object to inject arbitrary NGINX configuration, ultimately achieving code execution in the controller pod — which in many clusters runs with elevated permissions and access to service account tokens across namespaces.

The vulnerabilities affected the vast majority of ingress-nginx deployments in the wild. Wiz Research, which discovered and disclosed the issues, estimated that approximately 43% of cloud environments were exposed. Patches were released in versions 1.11.5 and 1.12.1, but the incident forced uncomfortable questions about the controller’s security posture and the architecture decisions (particularly the admission webhook design) that made it possible.

Maintenance Concerns in ingress-nginx

Beyond the CVEs, the ingress-nginx project has faced ongoing concerns about maintainer bandwidth. The project is maintained by a small group of volunteers and relies heavily on community contributions. Issue response times slowed, pull requests aged, and the pace of feature development declined relative to alternatives. For a component as critical as the cluster ingress layer, this created legitimate concern about long-term sustainability without corporate backing or broader contributor growth.

F5/NGINX Deprecation Announcement

On the commercial side, F5/NGINX announced in early 2025 that the nginxinc/kubernetes-ingress controller — particularly its open-source tier — would undergo significant changes. F5 signaled a strategic shift toward NGINX Gateway Fabric, their implementation of the Kubernetes Gateway API specification. The message was clear: investment in the Ingress-based controller would be reduced, and customers were encouraged to plan migrations toward Gateway API-native solutions.

For teams running NGINX Plus-based ingress with support contracts, this was a significant business concern. The product they had licensed and standardized on was being steered toward end-of-life on the Ingress API, even if exact timelines remained somewhat ambiguous in the initial announcements.

Real Impact on Production Clusters

The practical consequences depend heavily on which controller you run and how your clusters are configured. Here is an honest assessment:

Immediate Security Risk

If you run ingress-nginx and have not patched to 1.11.5+ or 1.12.1+, your admission webhook is a critical attack surface. Patching is non-negotiable and should have happened already. The admission webhook can be disabled if you are not using it for validation (many teams are not), which significantly reduces the attack surface while you plan a longer-term migration.

# Check your current ingress-nginx version
kubectl get deployment ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.spec.template.spec.containers[0].image}'

# Verify admission webhook is configured
kubectl get validatingwebhookconfigurations | grep ingress

# If you need to disable the webhook temporarily (reduces but does not eliminate risk):
kubectl delete validatingwebhookconfiguration ingress-nginx-admission

# If installed via Helm, also set controller.admissionWebhooks.enabled=false,
# otherwise the next helm upgrade will recreate the webhook configuration

Operational Uncertainty

Even after patching, the underlying questions remain. Teams are now asking: should we invest in hardening and tuning ingress-nginx knowing it may not be the strategic direction? Should we migrate now, while it is our choice, rather than later, when it may be forced? NGINX IC customers, meanwhile, are evaluating whether their licensing costs justify continued investment in a product being steered toward deprecation.

Configuration Migration Complexity

The real cost of migration is in the annotation-heavy configurations that accumulate over time. Teams that have built complex routing logic using nginx.ingress.kubernetes.io/* annotations — custom headers, rate limiting, auth snippets, rewrite rules, canary traffic splitting — face significant rework when switching controllers. This is the primary reason many teams are reluctant to move despite clear signals that a transition is coming.
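The cost is easiest to see in a concrete example. The sketch below (hostnames, service names, and values are hypothetical) shows a typical accumulation of ingress-nginx annotations; each one must be re-expressed in the target controller's own model, whether that is a CRD, a middleware object, or a Gateway API filter:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: legacy-app
  annotations:
    # Regex rewrite: becomes a middleware/filter elsewhere
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    # Rate limiting: no portable equivalent across controllers
    nginx.ingress.kubernetes.io/limit-rps: "10"
    # External auth: each controller has its own auth mechanism
    nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/verify"
    # Raw NGINX config: the highest-risk item, requires manual translation
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Request-Id: $req_id";
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: my-api-service
            port:
              number: 8080
```

The snippet annotation is the worst case: it is raw NGINX configuration with no equivalent concept in non-NGINX controllers.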

The Alternatives: An Honest Evaluation

There is no shortage of Ingress controller options. The question is which alternatives are mature enough for production workloads at scale, and what trade-offs each brings.

Traefik

Traefik Proxy (with Traefik Hub as its commercial extension) has emerged as the most popular alternative for teams leaving ingress-nginx. It supports the standard Kubernetes Ingress API for drop-in compatibility, its own IngressRoute CRDs for advanced features, and the Kubernetes Gateway API. It is written in Go, has strong TLS automation via Let's Encrypt, and offers excellent observability with built-in metrics and a real-time dashboard.

Trade-offs: Traefik’s configuration model is different enough from NGINX that complex routing logic requires rethinking rather than translating. Performance under very high connection counts is generally good but NGINX has a longer track record in extreme-scale deployments. The commercial offering (Traefik Hub) adds API gateway capabilities but introduces vendor dependency.
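To illustrate the shift in configuration model, here is a sketch of a route in Traefik's IngressRoute CRD (resource names are hypothetical; field names follow the Traefik CRD documentation): matcher expressions and referenced Middleware objects replace the annotation-per-behavior approach.

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: my-app
spec:
  entryPoints:
    - websecure            # Traefik's default HTTPS entry point
  routes:
    - match: Host(`app.example.com`) && PathPrefix(`/api`)
      kind: Rule
      services:
        - name: my-api-service
          port: 8080
      middlewares:
        - name: api-ratelimit   # behavior attached by reference, not annotation
---
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: api-ratelimit
spec:
  rateLimit:
    average: 10    # requests per second, on average
```

The middleware-by-reference design is tidier than annotations but means a migration is a remodel, not a find-and-replace.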

Envoy Gateway

Envoy Gateway is now a CNCF project and implements the Kubernetes Gateway API natively using Envoy as its data plane. This is arguably the most strategically aligned option for teams that want to bet on the future of Kubernetes networking. Envoy is battle-tested (it powers Istio, Contour, and large-scale service meshes at companies like Lyft and Google), and the Gateway API implementation is comprehensive and actively developed.

Trade-offs: Envoy Gateway is relatively young as a standalone project. Teams unfamiliar with Envoy will face a steeper learning curve for debugging and custom configuration. The operational model differs significantly from NGINX-based controllers. However, for greenfield deployments or teams willing to invest in the transition, this is a strong forward-looking choice.

Cilium Gateway API

If your cluster already runs Cilium as the CNI, enabling Gateway API support is a natural evolution. Cilium’s Gateway API implementation leverages eBPF for high-performance packet processing, avoiding the overhead of userspace proxy hops entirely. It is deeply integrated with Cilium’s network policy model and observability stack (Hubble).

Trade-offs: This option is only relevant if you are already committed to Cilium as your CNI, or are willing to make that switch simultaneously. Migrating both the CNI and the ingress layer at the same time is a significant operational risk. For Cilium shops, however, this consolidates complexity and provides excellent performance and observability.
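For Cilium shops, enabling Gateway API support is typically a Helm values change rather than a new controller deployment, since the implementation ships with the CNI. A minimal values fragment (flag names follow the Cilium docs but vary by version, so verify against the release you run):

```yaml
# values.yaml fragment for the cilium Helm chart
gatewayAPI:
  enabled: true
# Cilium's Gateway API support requires kube-proxy replacement mode
kubeProxyReplacement: true
```

The Gateway API CRDs must be installed in the cluster before Cilium will reconcile Gateway resources.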

HAProxy Ingress

HAProxy Ingress Controller is maintained by the HAProxy Technologies team and has a strong reputation for raw performance and precise traffic control. It supports both Ingress and Gateway API and has a long track record in high-throughput production environments. For teams with existing HAProxy expertise, it provides a familiar mental model for load balancing configuration.

Trade-offs: Smaller community than Traefik or NGINX. Less ecosystem tooling and fewer tutorials. Best suited for teams that specifically want HAProxy’s capabilities (fine-grained connection management, advanced health checking, TCP/HTTP mode flexibility) rather than as a default choice.

Kong Ingress Controller

Kong bridges the gap between an Ingress controller and a full API gateway. It supports Ingress and Gateway API resources alongside its own Kong-native plugin system for authentication, rate limiting, transformation, and observability. For teams that need API gateway capabilities rather than pure L7 routing, Kong provides a unified platform.

Trade-offs: Kong adds operational complexity. Running Kong requires either a PostgreSQL database (DB-mode) or careful management of declarative configuration (DB-less mode). The plugin ecosystem is powerful but introduces additional configuration surface. For teams that just need ingress routing, Kong may be more than necessary. For teams building API platforms, it is worth the overhead.
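Kong's plugin model attaches policy as CRDs referenced from the Ingress. A minimal sketch (names are hypothetical; config fields follow the Kong plugin docs):

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: per-minute-limit
plugin: rate-limiting
config:
  minute: 60       # allow 60 requests per minute
  policy: local    # count per Kong node, no shared datastore needed
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    konghq.com/plugins: per-minute-limit   # attach the plugin by name
spec:
  ingressClassName: kong
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-api-service
            port:
              number: 8080
```

This is the trade-off in miniature: more moving parts than plain ingress routing, but policy becomes a reusable, declarative object.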

Istio Gateway

Istio’s ingress gateway (which now also implements the Kubernetes Gateway API) provides entry-point traffic management as part of a full service mesh. If your organization is planning or running Istio for east-west traffic, using Istio’s gateway for north-south traffic creates a unified data plane (Envoy) and consistent observability across all service communication.

Trade-offs: Istio is a serious operational commitment. The control plane overhead, the learning curve, and the impact on pod scheduling and sidecar management are significant. Choosing Istio purely for ingress replacement is like buying a race car because you needed a vehicle with good brakes. Consider this path only if service mesh capabilities are on your roadmap.

NGINX Gateway Fabric (F5’s Gateway API implementation)

F5/NGINX is building NGINX Gateway Fabric as their strategic forward path — an NGINX-based implementation of the Kubernetes Gateway API. For teams heavily invested in NGINX and wanting to stay in that ecosystem while moving to Gateway API, this provides a migration path within familiar territory. It is still maturing but represents where F5 is putting its development resources.

Comparison Matrix

| Controller            | Ingress API     | Gateway API  | Maturity   | Best For                              | Complexity      |
|-----------------------|-----------------|--------------|------------|---------------------------------------|-----------------|
| ingress-nginx         | Yes             | Partial      | High       | Existing deployments, familiar config | Low             |
| Traefik               | Yes             | Yes          | High       | General purpose, rapid migration      | Low-Medium      |
| Envoy Gateway         | No              | Yes (native) | Medium     | Greenfield, future-aligned            | Medium          |
| Cilium Gateway        | Yes             | Yes          | Medium     | Cilium CNI clusters                   | Low (if Cilium) |
| HAProxy Ingress       | Yes             | Yes          | High       | High-throughput, HAProxy expertise    | Medium          |
| Kong                  | Yes             | Yes          | High       | API gateway requirements              | High            |
| Istio Gateway         | Via Gateway API | Yes          | High       | Service mesh adopters                 | Very High       |
| NGINX Gateway Fabric  | No              | Yes (native) | Low-Medium | NGINX shops moving to Gateway API     | Medium          |

Gateway API: The Strategic Direction You Cannot Ignore

The Kubernetes Gateway API is not simply “Ingress v2.” It is a fundamentally richer traffic management model designed to address the limitations that drove teams to annotation-based workarounds for the past several years. Understanding it is essential regardless of which controller you choose, because the ecosystem is clearly converging on it.

The core resource hierarchy consists of GatewayClass (defines a type of gateway, created by infrastructure providers), Gateway (a specific instance of a listener configuration, typically managed by platform teams), and HTTPRoute, TCPRoute, GRPCRoute, and other route resources (managed by application teams). This separation of concerns maps cleanly onto organizational roles — infrastructure teams control the gateway, application teams control their routing rules.

# Example Gateway API resources replacing an ingress-nginx Ingress
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: production-gateway
  namespace: infra
spec:
  gatewayClassName: nginx  # or envoy, traefik, cilium, etc.
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    tls:
      mode: Terminate
      certificateRefs:
      - name: wildcard-tls
    allowedRoutes:
      namespaces:
        from: All
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app
  namespace: my-app-namespace
spec:
  parentRefs:
  - name: production-gateway
    namespace: infra
  hostnames:
  - "app.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    backendRefs:
    - name: my-api-service
      port: 8080
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: my-frontend-service
      port: 3000

Gateway API reached v1.0 (GA for HTTPRoute and Gateway) in October 2023, and v1.1 followed in 2024 with GRPCRoute graduation and expanded features. The project has broad support across controllers (Traefik, Envoy Gateway, Cilium, NGINX Gateway Fabric, Kong, Istio, and others all implement it). The Ingress API is not being removed from Kubernetes, but new feature development is effectively frozen — Gateway API is where capabilities like traffic weighting, header manipulation, request mirroring, and backend protocol configuration are being built.
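Traffic weighting is a good example of the difference: in Gateway API it is a first-class field on backendRefs rather than a controller-specific canary annotation. A sketch of a 90/10 split (service names are hypothetical):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app-canary
  namespace: my-app-namespace
spec:
  parentRefs:
  - name: production-gateway
    namespace: infra
  hostnames:
  - "app.example.com"
  rules:
  - backendRefs:
    - name: my-app-stable
      port: 8080
      weight: 90   # 90% of traffic to the stable backend
    - name: my-app-canary
      port: 8080
      weight: 10   # 10% to the canary
```

Because weighting lives in the spec rather than in annotations, it behaves the same under any conformant Gateway API implementation.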

Decision Framework: Stay, Migrate, or Evaluate?

There is no universal right answer. The following framework helps teams make a context-appropriate decision rather than following hype or panic.

Stay on ingress-nginx if:

  • You have patched to 1.11.5+ or 1.12.1+ and have disabled or hardened the admission webhook
  • Your cluster is stable, heavily annotation-dependent, and migration cost outweighs risk
  • You have internal NGINX expertise and can take ownership of monitoring the project’s maintenance health
  • Your organization has a short-term horizon (decommissioning or major platform change within 12-18 months)

Migrate now if:

  • You are running the F5/NGINX IC with a support contract that is being deprecated
  • Your cluster has moderate annotation complexity and you have engineering cycles available
  • You are planning a major Kubernetes version upgrade or cluster rebuild — do it at the same time
  • Your security team has flagged the CVE history as unacceptable for your risk profile
  • You are building a new cluster or platform team and want to standardize on Gateway API from the start

Evaluate before committing if:

  • Your workloads have complex traffic requirements (WebSockets, gRPC, canary deployments, header-based routing) that differ significantly across controllers
  • You are considering Gateway API but the specific controllers in your environment have not graduated their Gateway API implementations yet
  • You have multi-cluster or multi-tenant requirements that change the analysis
  • You need to assess total cost including commercial support, tooling changes, and team retraining

Migration Checklist

For teams that have decided to migrate, the following sequence reduces risk and ensures nothing critical is missed:

Phase 1: Inventory and Assessment

  • Enumerate all Ingress resources across all namespaces and document their annotations
  • Identify annotations with no direct equivalent in your target controller
  • Map TLS certificate sources (cert-manager, Secrets, external providers) and confirm compatibility
  • Document any custom NGINX configuration snippets (nginx.ingress.kubernetes.io/configuration-snippet, server-snippet) — these are high-risk items that require manual translation
  • Inventory any rate limiting, authentication, or WAF configurations layered on the controller
# Enumerate all ingress resources and their annotations across the cluster
kubectl get ingress -A -o json | jq -r '
  .items[] |
  {
    namespace: .metadata.namespace,
    name: .metadata.name,
    annotations: (.metadata.annotations // {} | keys)
  }
'

Phase 2: Target Controller Validation

  • Deploy target controller to a non-production cluster with identical Ingress/HTTPRoute resources
  • Validate TLS termination, redirect behavior, and timeout configurations
  • Run load tests to confirm performance characteristics match expectations
  • Validate observability — metrics, logs, and traces integrate with your existing stack
  • Test failure scenarios: backend unavailability, certificate expiry, controller pod restart

Phase 3: Staged Production Migration

  • Deploy new controller to production alongside existing controller (different IngressClass)
  • Migrate low-risk, low-traffic Ingress resources first by updating their ingressClassName
  • Use DNS-based canary switching (weighted routing at the DNS level) rather than flipping every Ingress resource to the new class at once
  • Monitor error rates and latency for 24-48 hours after each batch migration
  • Migrate critical services during low-traffic windows with rollback plan documented
  • Decommission old controller only after all resources are migrated and validated
# Migrate individual Ingress to new controller by changing ingressClassName
kubectl patch ingress my-app -n my-namespace \
  --type='json' \
  -p='[{"op": "replace", "path": "/spec/ingressClassName", "value": "traefik"}]'

# Or if migrating to Gateway API, create equivalent HTTPRoute first,
# test it, then remove the old Ingress resource
kubectl apply -f my-app-httproute.yaml
# Validate, then:
kubectl delete ingress my-app -n my-namespace

Phase 4: Gateway API Adoption (Optional but Recommended)

  • Install Gateway API CRDs if not already present (kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.1.0/standard-install.yaml)
  • Define GatewayClass resources matching your chosen controller
  • Migrate Ingress resources to HTTPRoute progressively, starting with simpler configurations
  • Update CI/CD pipelines and Helm charts to generate HTTPRoute instead of Ingress resources for new services
  • Establish a policy: new services use Gateway API; legacy services migrate on their next significant update
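The GatewayClass step is a one-time declaration binding your Gateway resources to the chosen controller. A sketch (the controllerName strings below are the ones documented by each project; verify against the version you deploy):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: traefik
spec:
  # Traefik's documented controller name; for Envoy Gateway this would be
  # gateway.envoyproxy.io/gatewayclass-controller
  controllerName: traefik.io/gateway-controller
```

Application teams never touch this resource; they reference it indirectly through the Gateway's gatewayClassName field.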

Recommendation

For most platform engineering teams reading this in 2025, the pragmatic recommendation is as follows:

Short term (next 30 days): Patch ingress-nginx to the latest release if you are still on it. Assess and harden or disable the admission webhook. This is not optional.

Medium term (3-6 months): Evaluate Traefik or Envoy Gateway against your specific workload requirements. Traefik is the lower-friction migration for teams coming from ingress-nginx on the Ingress API. Envoy Gateway is the stronger strategic choice if you are willing to commit to Gateway API fully. Either way, run a parallel deployment in a non-production environment and measure the delta in operational overhead.

Long term (6-18 months): Plan migration to Gateway API resources regardless of which data plane you choose. The Ingress API will not disappear overnight, but feature parity with Gateway API capabilities will never arrive. Teams that standardize on Gateway API now build the institutional knowledge that will be valuable as the ecosystem continues to evolve.

If you are running F5/NGINX IC under a support contract: engage your F5 account team now to get a clear timeline on the deprecation path and evaluate NGINX Gateway Fabric as a within-ecosystem migration before looking at alternatives. The question is not whether to migrate but when and to what.

Avoid the temptation to treat this as a purely technical decision. The switch of an ingress controller touches CI/CD pipelines, monitoring dashboards, runbooks, on-call playbooks, and engineering team knowledge. Factor in the total transition cost, not just the YAML changes.

Frequently Asked Questions

Is the Kubernetes Ingress API being deprecated or removed?

No. The networking.k8s.io/v1 Ingress API is not deprecated and there are no current plans to remove it from Kubernetes. It will continue to work. What is happening is that the Kubernetes SIG Network has frozen new feature development on the Ingress API and is directing all new traffic management capabilities to Gateway API. In practical terms, if you need a capability that Ingress does not currently provide, you will not get it through Ingress. You will need Gateway API. Existing Ingress resources will continue to function for the foreseeable future.

Can I run two Ingress controllers simultaneously during migration?

Yes, and this is the recommended approach for production migrations. Kubernetes supports multiple IngressClass resources in a cluster, each backed by a different controller. Ingress resources select their controller via the spec.ingressClassName field (or the legacy kubernetes.io/ingress.class annotation). You can run ingress-nginx and Traefik side-by-side, migrating individual Ingress resources by updating their ingressClassName. Once migration is complete and validated, decommission the old controller. Just ensure that only one IngressClass is marked as the default at a time; multiple defaults cause conflicts.
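Side-by-side operation boils down to two IngressClass objects, with at most one carrying the default annotation (controller strings below are the ones each project documents):

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  annotations:
    # Only one IngressClass in the cluster should carry this annotation
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: traefik
spec:
  controller: traefik.io/ingress-controller
```

Migration then proceeds one Ingress at a time by switching its ingressClassName from nginx to traefik.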

What happens to cert-manager if I switch controllers?

cert-manager is independent of your Ingress controller and will continue to work regardless of which controller you use. The HTTP-01 challenge solver in cert-manager creates temporary Ingress resources to complete ACME challenges — these will use whichever IngressClass you configure in your Issuer or ClusterIssuer. If you migrate to Gateway API, cert-manager has added Gateway API support (HTTPRoute-based HTTP-01 challenges) starting from version 1.14. DNS-01 challenges are entirely unaffected by controller choice. Update your Issuer configuration to reference the new IngressClass during migration.
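In practice, the solver's ingress class is the one field to update. A sketch of a ClusterIssuer pointed at the new controller (email and secret names are hypothetical; the ingressClassName solver field requires cert-manager v1.12+, while earlier versions use the class field):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
    - http01:
        ingress:
          # Temporary challenge Ingresses will be claimed by the new controller
          ingressClassName: traefik
```

DNS-01 issuers need no change at all when switching controllers.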

How severe is the performance difference between ingress-nginx and alternatives?

For the vast majority of production workloads, the performance difference between mature controllers (ingress-nginx, Traefik, HAProxy, Envoy) is not the deciding factor. All of them can handle tens of thousands of requests per second on reasonable hardware, and the bottleneck is typically the backend services, not the ingress layer. The notable exception is Cilium with eBPF-based forwarding, which eliminates userspace proxy overhead entirely and can show measurable latency reduction at high percentiles for latency-sensitive workloads. If you are running at a scale where ingress controller throughput is actually the constraint, you already have the engineering resources to benchmark your specific workload profile against candidate controllers before committing.

Should we just move everything to a cloud provider’s managed load balancer and skip the in-cluster controller?

This is a legitimate option for teams on managed Kubernetes (EKS, GKE, AKS). Cloud-native load balancers (AWS ALB via AWS Load Balancer Controller, GKE Gateway, Azure Application Gateway Ingress Controller) eliminate the operational burden of managing an in-cluster controller and integrate deeply with cloud IAM, WAF, and observability services. The trade-offs are cost (cloud LBs charge per rule and per hour), vendor lock-in, and reduced portability. For purely cloud-native workloads with no multi-cloud or on-premises requirements, cloud-managed load balancers are worth serious consideration and sidestep the ingress-nginx problem entirely. For hybrid or multi-cluster environments, in-cluster controllers maintain an advantage in consistency and portability.