HAProxy Kubernetes Ingress Controller
Overview
An ingress controller implements traffic routing in your Kubernetes cluster by interpreting Ingress rules. In this section, you will learn its benefits and how it works.
About storing data
When you deploy HAProxy Enterprise Kubernetes Ingress Controller, HAProxy Technologies, LLC. does not store or process any of your customers' data related to traffic flowing through your load balancers.
What is Ingress?
Before jumping into describing Kubernetes Ingress, let’s take a step back and revisit what we mean by ingress. A general definition of ingress means going in, or entering. In networking, ingress refers to traffic entering a network, for example HTTP requests traveling into your corporate network towards a web server. Traffic exiting, which is to say responses leaving your network and traveling outwards towards a user, is called egress.
In the context of Kubernetes, ingress means web traffic entering your Kubernetes cluster, destined for one pod or another. The Kubernetes API provides a way for you to control how ingress traffic routes to the appropriate pod through a rule-based syntax written in YAML. The YAML files define resources that have an apiVersion set to networking.k8s.io/v1 and a kind attribute set to Ingress.
Here’s an example of a YAML-based Ingress resource:
example-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: haproxy
  rules:
    - host: "example.com"
      http:
        paths:
          - path: /example
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 8080
In this example, we've defined a rule that tells Kubernetes to forward requests for example.com/example to the pods grouped under the service named example-service. That service listens at port 8080 (the service's cluster IP address is discovered dynamically). HTTP routing rules defined on the Ingress resource provide externally reachable URLs to clients outside the Kubernetes cluster, and they can also configure extra functionality such as rate limiting, custom HTTP headers, path rewriting, CORS, and SSL.
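For instance, TLS termination can be enabled by adding a tls section to the same Ingress resource. The sketch below is illustrative only; the hostname and the secret name example-com-tls are assumptions, and the referenced secret must already exist in the cluster as a Kubernetes TLS secret.
example-ingress-tls.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: haproxy
  tls:
    # Hypothetical secret holding the certificate and key for example.com
    - hosts:
        - "example.com"
      secretName: example-com-tls
  rules:
    - host: "example.com"
      http:
        paths:
          - path: /example
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 8080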
What is a Kubernetes ingress controller?
While the Kubernetes API provides Ingress resources that allow you to define routing rules, the rules do not take effect until you have created an ingress controller to implement them.
HAProxy Kubernetes Ingress Controller implements the routing rules defined in the Kubernetes Ingress resources. It adds and removes routes in its underlying HAProxy load balancer configuration when it detects that pods have been added or removed from the cluster. Unlike a traditional load balancer, the ingress controller runs as a pod inside the cluster.
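The controller advertises which Ingress resources it is responsible for through an IngressClass resource, which the ingressClassName field in the earlier example refers to. A minimal sketch is shown below; the controller identifier is an assumption based on common convention, so check your controller's documentation for the exact value it registers.
haproxy-ingressclass.yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: haproxy
spec:
  # Assumed controller identifier; verify against your controller's docs.
  controller: haproxy.org/ingress-controller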
The controller part of its name indicates that it implements a control loop. A control loop is a never-ending operation that continuously monitors the state of something and adjusts its settings based on the current state of the thing being monitored. For example, when you stand up, your brain continuously makes micro adjustments to your posture to keep you upright while external forces try to knock you over (strong wind, gravity, cats running underfoot). Similarly, an ingress controller continuously monitors Kubernetes for changes to pods so that it can adjust its load balancing registry. That tight integration with Kubernetes makes it superior to simply deploying HAProxy as a container into your Kubernetes cluster.
What are the benefits of an ingress controller?
Flexibility
A Kubernetes ingress controller simplifies the configuration required to make cluster services accessible to clients. Although an ingress controller is not strictly necessary to configure such access, it offers a more feature-rich and flexible paradigm than alternatives such as deploying a cloud load balancer for every Kubernetes service, an approach that lacks the configurability of HAProxy.
Cost savings
Rather than exposing each of your services directly to the outside world via NodePort or LoadBalancer service types, the ingress controller acts as a single, unified gateway for all of your services, which reduces the number of TCP ports and DNS domain names you need. This has cost savings benefits when operating in the cloud because you don’t need to allocate a cloud load balancer for every service. A single ingress controller can serve as the gateway to many applications running in your cluster.
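For example, a service that would otherwise need its own LoadBalancer or NodePort can remain a plain ClusterIP service and be reached only through the ingress controller. The following is a minimal sketch; the service and selector names are assumptions chosen to match the earlier Ingress example.
example-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  # ClusterIP (the default) keeps the service internal to the cluster;
  # the ingress controller is the only component exposed to outside traffic.
  type: ClusterIP
  selector:
    app: example-app
  ports:
    - port: 8080
      targetPort: 8080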
Separation of concerns
Ingress resources also allow cluster administrators to delegate responsibility for defining HTTP routes to the AppDev teams delivering applications. Each team can enable or disable the load balancing features they need on a case-by-case basis, without involving the cluster administrator.
Teams within your organization can be responsible for configuring their own Ingress routing rules, while cluster administrators own the implementation of the ingress controller that interprets and applies those rules within the Kubernetes network. Administrators can also deploy multiple ingress controllers to handle ingress rules from different departments and teams. For example, one ingress controller could watch for internal-facing services while another watches for and implements routing for public, internet-facing services. Both would run within the same Kubernetes cluster.
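As a rough sketch of that pattern, two Ingress resources could target different controllers by naming different ingress classes. The class names haproxy-internal and haproxy-public, as well as the hostnames and service names, are hypothetical; each class would need a matching IngressClass and controller deployment.
split-ingress-classes.yaml
# Hypothetical Ingress handled by an internal-facing controller
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-tools
spec:
  ingressClassName: haproxy-internal
  rules:
    - host: "tools.internal.example.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tools-service
                port:
                  number: 8080
---
# Hypothetical Ingress handled by a public, internet-facing controller
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: public-site
spec:
  ingressClassName: haproxy-public
  rules:
    - host: "www.example.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: site-service
                port:
                  number: 8080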
External resources
- Learn more about Ingress Controllers in general
- Learn how to use Ingress objects to define routes
- Read HAProxy Technologies blog posts about Kubernetes
- View the GitHub code repository for the HAProxy Kubernetes Ingress Controller source code
- Learn what sets apart the HAProxy Enterprise Kubernetes Ingress Controller