Summary of the top pain points we've heard from customers performing ingress-nginx migrations and how Pomerium can help.
Top pain points when migrating from ingress-nginx (the community NGINX ingress controller)
Controller-specific config creep
The core Ingress spec is small, so real-world behavior ends up living in controller-specific annotations, snippets, or CRDs. If you switch controllers, you are not “migrating an Ingress.” You’re translating a pile of controller dialect.
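A concrete sketch of the dialect problem, using real ingress-nginx annotation names on a hypothetical app (the host, service, and snippet contents are invented):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: billing                     # hypothetical app
  annotations:
    # Real ingress-nginx annotation names; none of this is portable as-is.
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Env: prod";
spec:
  ingressClassName: nginx
  rules:
    - host: billing.example.com
      http:
        paths:
          - path: /api(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: billing
                port:
                  number: 8080
```

The `spec` is portable; the three annotations are where the actual behavior lives, and each one needs a semantic (not textual) translation on any other controller.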
Not the same feature, even when it has the same name
“Canary,” “TLS passthrough,” “rewrites,” “auth,” “timeouts,” “buffering,” and “rate limits” exist across controllers, but the semantics and knobs differ. The migration risk is less “missing features” and more “same label, different behavior.” That’s how you get a clean deploy and a messy incident.
Inventory and blast-radius are hard to see
NGINX usage patterns tend to be distributed across lots of Ingress objects via annotations and snippets. Before you move, you need an accurate map of what’s in use, where, and why. Otherwise you only discover dependencies during the cutover. Kubernetes calls this “observability.” Everyone else calls it “surprises.”
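One way to build that map is to dump every Ingress and tally which controller-specific annotations are actually in use. A sketch, assuming `kubectl` access to the cluster and `jq` installed:

```shell
# Frequency table of ingress-nginx annotations across all namespaces:
# which knobs are in use, and how widely.
kubectl get ingress --all-namespaces -o json \
  | jq -r '.items[].metadata.annotations // {} | keys[]
      | select(startswith("nginx.ingress.kubernetes.io/"))' \
  | sort | uniq -c | sort -rn
```

Anything in the snippet family (`configuration-snippet`, `server-snippet`) deserves a manual read, since that is where the riskiest dialect hides.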
Cutover mechanics: load balancers, DNS, and rollback
Changing controllers often means new Services, new load balancers/IPs, DNS updates, and a plan for parallel-run plus rollback. You can do it safely, but “safely” usually means “temporarily more expensive and more complicated.”
Standards help, but extensions are still reality
Gateway API is a real step forward for portability, though it is notably verbose. And plenty of teams still rely on capabilities that live behind implementation-specific policies, are newly introduced, or require extensions. So you end up standardizing the easy 80 percent and still translating the sharp edges.
Pomerium Ingress is most valuable when your ingress pain is driven by access control and identity, not micro-tuning NGINX proxy internals.
Instead of embedding auth behavior in controller-specific snippets or external-auth wiring, you attach authorization policy to the route. Policies can express "who can access what" in a way that's reviewable and consistent across services.
Why it helps migrations: you stop carrying fragile snippet logic from controller to controller and move auth decisions into a supported, versioned policy layer.
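A sketch of what route-attached policy looks like with Pomerium's ingress controller. The `ingress.pomerium.com/policy` annotation carries Pomerium Policy Language; the app name, host, and domain here are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-tool               # hypothetical app
  annotations:
    # Pomerium Policy Language attached directly to the route:
    # allow any authenticated user from this identity-provider domain.
    ingress.pomerium.com/policy: |
      - allow:
          and:
            - domain:
                is: example.com
spec:
  ingressClassName: pomerium
  rules:
    - host: tool.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: internal-tool
                port:
                  number: 8080
```

The policy is diffable YAML next to the route it governs, which is what makes "who can access what" reviewable in the same change that touches routing.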
Apps often need user identity context. Pomerium passes identity headers in a documented, consistent way, so you're not relying on ad-hoc configurations that can drift over time.
Why it helps migrations: fewer custom patches, fewer one-off conventions, less tribal knowledge.
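Enabling this is a single documented annotation rather than a snippet; per Pomerium's docs it forwards identity headers (e.g. `X-Pomerium-Claim-Email`) and a signed `X-Pomerium-Jwt-Assertion` to the upstream. A fragment, with the app name invented:

```yaml
metadata:
  name: internal-tool               # hypothetical app
  annotations:
    # Forward the authenticated user's identity to the upstream app.
    ingress.pomerium.com/pass_identity_headers: "true"
```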
Like most ingress controllers, Pomerium Ingress integrates with cert-manager out of the box (including automatic HTTP-01 challenge handling), supports mutual TLS to upstreams, and routes traffic directly to Endpoints by default rather than through kube-proxy; it also requires TLS for every route it serves. Where Pomerium Ingress differs is that it does all that plus authenticates and authorizes every single request, so security policies can be managed centrally, applied consistently across all instances, and scoped per team.
Why it helps migrations: you stop reimplementing TLS and cert provisioning differently in each controller, get centralized AuthN and AuthZ, and get encrypted upstream communication without bespoke annotation gymnastics.
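With cert-manager installed, provisioning follows the standard Ingress shape rather than controller-specific wiring. A sketch, assuming a ClusterIssuer named `letsencrypt-prod` exists and the app is hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: admin-ui                    # hypothetical app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # assumed issuer name
spec:
  ingressClassName: pomerium
  tls:
    - hosts:
        - admin.example.com
      secretName: admin-ui-tls      # cert-manager populates this Secret
  rules:
    - host: admin.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: admin-ui
                port:
                  number: 8080
```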
Because auth and routing intent live with the Ingress resources and policy definitions, it's easier to review "what changed" in GitOps workflows and easier to answer "who can access this endpoint" without spelunking through snippets. Pomerium also exposes per-Ingress Prometheus metrics and posts events to both the Pomerium CRD and individual Ingress objects, so access patterns are observable, not inferred.
Why it helps migrations: discovery becomes policy-driven instead of archaeology-driven.
You can run Pomerium alongside an existing ingress controller using distinct IngressClasses and migrate service-by-service. That keeps blast radius under control and makes rollback boring, which is the best kind.
Why it helps migrations: you can adopt it where it adds the most value (protected apps, internal tools, admin UIs) without rewriting everything at once.
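Coexistence comes down to `spec.ingressClassName` per route, so migration can happen one Ingress at a time. A sketch with hypothetical apps:

```yaml
# Existing route stays on ingress-nginx...
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: legacy-app                  # hypothetical
spec:
  ingressClassName: nginx
  rules:
    - host: legacy.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: legacy-app
                port:
                  number: 8080
---
# ...while the protected admin UI moves behind Pomerium.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: admin-ui                    # hypothetical
spec:
  ingressClassName: pomerium
  rules:
    - host: admin.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: admin-ui
                port:
                  number: 8080
```

Rollback is flipping `ingressClassName` back (and reverting DNS if the hostname moved), which is about as boring as rollback gets.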
If your heavy usage is low-level NGINX tuning (buffering minutiae, bespoke rewrite gymnastics, highly custom L7 behaviors), Pomerium isn't trying to be "NGINX but with different labels." It shines when the problem is "how do we gate and identify access cleanly across apps," not "how do we tweak proxy buffers like we're tuning a race car."
You can read more about Pomerium's Kubernetes ingress controller in our docs: https://www.pomerium.com/docs/deploy/k8s/ingress