Kubernetes Ingress, or just Ingress, is an API object that manages external access to services within a Kubernetes (K8s) cluster. This functionality is common (and important) in application infrastructures that handle external traffic for multiple services, which requires intelligent load balancing, scalability, and access to the cluster's private network.

Ingress helps bridge this gap by securely exposing containerized services to the outside world. It also consolidates numerous services behind one reachable IP address. This simplicity (versus the one-load-balancer-per-service approach) brings cost savings, easier observability, and stronger security by reducing an infrastructure's footprint and adding extra features, such as a web application firewall (WAF).

How does Kubernetes Ingress work?

When external clients send HTTP traffic over the network (often in a load-balanced environment) to a Kubernetes-based application, they can only connect to containerized services in limited ways. This degree of separation is similar to having a house full of people ready to communicate, but without an exterior door or access road available to reach them.

Ingress and ingress controllers change this by acting as a proxy inside the cluster. Ingress controllers map requests to the correct backend services by exposing ports and protocols, primarily by assigning externally accessible URLs to running services. Meanwhile, a load balancer can enable routing for applications that use other protocols, such as raw TCP or gRPC.
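As a rough sketch, here's what such a mapping looks like as a standard Kubernetes Ingress resource. The names (`example-ingress`, `app-service`, `app.example.com`) are placeholders, and the `ingressClassName` value depends on which ingress controller you've installed:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: haproxy   # matches the installed ingress controller's class
  rules:
    - host: app.example.com   # externally accessible URL for the service
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service  # internal Service receiving the traffic
                port:
                  number: 80
```

With this resource applied, requests arriving with the Host header `app.example.com` are routed to `app-service` inside the cluster.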


How Kubernetes Ingress works with HAProxy, similar to other Ingress setups.

While Ingress and Kubernetes as a whole are pretty complex, here's a summary of the steps involved:

  1. A client makes a web request to your service. However, because the service is running inside Kubernetes, and therefore does not have a public IP address, the client connects to a public IP address assigned to your Kubernetes cluster as a whole. This might be assigned to a cloud load balancer in front of the cluster.

  2. The load balancer forwards the request to one of your Kubernetes cluster's nodes ("node" being the term for a Linux server in the cluster). That node relays it to the ingress controller pod. However, because nodes act as a pool of CPU and memory resources, pods can land on any one of them. The ingress controller pod might be running on a different node, and so the request might need to hop from one node to another.

  3. Once the request lands on the node that's running the ingress controller pod, the ingress controller takes over. It checks where the request is headed by inspecting the Host header and URL path, both of which you can reference in Ingress routing rules. After finding which application handles requests for the given host or path, it sends the request there.

  4. A pod running the service receives the request. Again, this pod might be running on a different node, requiring another hop. If you were thinking that the request needed to go through a Kubernetes Service object first, you'd usually be right, but not in this case. Here, the ingress controller has already discovered the addresses of the application pods themselves and routes requests to them directly.

  5. The application pod handles the request and then returns a response. The response bubbles up through the layers in reverse order: from the application pod to the ingress controller pod, back to the original node, through the cloud load balancer, and finally back to the client.

Ingress needs some important pieces to work properly. You need to deploy the ingress controller and then define an Ingress resource (with optional hosts, paths, and HTTP routing rules). If you create an Ingress setup that leverages virtual hosting, for example, you can tell your load balancer to route requests based on Host headers.
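For instance, name-based virtual hosting can be expressed in a single Ingress resource that routes two hostnames to two different services. The hostnames and service names below are illustrative placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: virtual-host-ingress
spec:
  ingressClassName: haproxy
  rules:
    - host: blog.example.com      # requests for this Host header...
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: blog-service   # ...go to the blog backend
                port:
                  number: 80
    - host: shop.example.com      # while this Host header...
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: shop-service   # ...goes to the shop backend
                port:
                  number: 80
```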

Ingress also reduces the number of cloud load balancers you need through consolidation: hosting multiple services behind a common IP address, often called "fan out," reduces complexity.
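A simple fan-out sketch, using one hostname and path-based rules to reach two backends behind a single IP address (again, the names and ports are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout-ingress
spec:
  ingressClassName: haproxy
  rules:
    - host: example.com
      http:
        paths:
          - path: /api            # example.com/api -> the API backend
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 8080
          - path: /static         # example.com/static -> the static-content backend
            pathType: Prefix
            backend:
              service:
                name: static-service
                port:
                  number: 80
```

Both services share one external IP address and one load balancer, rather than each requiring its own.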

Does HAProxy support Kubernetes Ingress?

Yes! HAProxy offers a comprehensive Kubernetes solution for organizations with different needs. HAProxy Enterprise Kubernetes Ingress Controller fulfills part of this, providing stable, Kubernetes API-compatible ingress while cutting costs. It's ideal for organizations running K8s in a public cloud that need additional security or high-availability features.

Alternatively, HAProxy Enterprise customers can leverage HAProxy Fusion Control Plane to harness intelligent external load balancing, multi-cluster routing, and service discovery within K8s environments. 

To learn more about HAProxy and Kubernetes, check out our Kubernetes solution page or our webinar on External Load Balancing and Multi-Cluster Routing for Kubernetes.