We’re excited to introduce HAProxy Enterprise Kubernetes Ingress Controller 3.0, packed with powerful new features that bring greater control, performance, and observability to managing Kubernetes environments.
This release delivers TCP custom resource definitions (CRDs) to improve mapping, structuring, and validation for TCP services within HAProxy Enterprise Kubernetes Ingress Controller. It also includes new Runtime optimizations, improved Prometheus metrics, enhanced backend customization, and updated certificate handling to reduce reloads.
Additionally, we’ve aligned the version numbering with HAProxy Enterprise, jumping from version 1.11 to version 3.0. We hope this clarifies the link between HAProxy Enterprise Kubernetes Ingress Controller and its baseline version of HAProxy Enterprise moving forward.
Let’s dive deeper into HAProxy Enterprise Kubernetes Ingress Controller 3.0.
New to HAProxy Enterprise Kubernetes Ingress Controller?
HAProxy Enterprise Kubernetes Ingress Controller is built to supercharge your Kubernetes environment by adding advanced TCP and HTTP routing that connects clients outside your Kubernetes cluster with containers inside. Built upon HAProxy Enterprise, it adds an important layer of security via the integrated Web Application Firewall. HAProxy Enterprise Kubernetes Ingress Controller is backed by our authoritative expert technical support.
Lifecycle of versions
To enhance transparency about supported versions, we’ve introduced an End-of-Life table that outlines which versions are supported in parallel.
Additionally, we’ve published a list of tested Kubernetes versions. The supported versions include Kubernetes 1.32, released in December 2024. While HAProxy Enterprise Kubernetes Ingress Controller is expected to work with versions beyond those listed, only tested versions are explicitly documented.
Ready to upgrade?
When you are ready to start the upgrade procedure, go to the upgrade instructions for HAProxy Enterprise Kubernetes Ingress Controller.
Updating certificates through the Runtime API
HAProxy Enterprise Kubernetes Ingress Controller now uses HAProxy's Runtime API to update certificates without requiring a reload. Previously, certificate updates required a full HAProxy Enterprise reload; the new approach streamlines the process and reduces resource usage.
Parallelization in writing maps
Both HAProxy Enterprise and the file system can handle writing maps in parallel. With version 3.0, HAProxy Enterprise Kubernetes Ingress Controller parallelizes writing maps both to HAProxy and to the file system. To maintain I/O efficiency and reduce latency, a maximum of 10 maps can be written in parallel.
Support thread pinning on frontend/status/healthz
You can pin threads using the following new arguments for HAProxy Enterprise Kubernetes Ingress Controller:
http-bind-thread
https-bind-thread
healthz-bind-thread
stats-bind-thread
These arguments offer advanced optimization for specific use cases.
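For instance, these flags could be passed as container arguments in the controller's Deployment. The following is a minimal sketch: the image reference and thread values are illustrative assumptions (the values follow HAProxy's thread numbering), so consult the reference documentation for the exact syntax.

containers:
  - name: kubernetes-ingress
    image: hapee-ingress:3.0            # placeholder image reference
    args:
      - --http-bind-thread=1-4          # pin the HTTP frontend to threads 1-4
      - --https-bind-thread=1-4         # pin the HTTPS frontend to threads 1-4
      - --healthz-bind-thread=5         # dedicate a thread to health checks
      - --stats-bind-thread=5           # dedicate a thread to the stats listener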
Runtime improvements
When calculating the number of server slots to add to a backend after detecting a scaling event, HAProxy Enterprise Kubernetes Ingress Controller now ensures that there are always at least scale-server-slots empty servers. This slightly different approach produces fewer reloads of HAProxy Enterprise.
To further reduce the number of reloads, you can use a new annotation named haproxy.com/deployment on your Service definition to link a Deployment resource to the service. This connects the service to a single deployment so that the number of desired replicas can be extracted and used directly as the required number of server slots.
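As a sketch, the annotation could be applied like this; we're assuming here that the annotation value is the name of the Deployment backing the Service:

apiVersion: v1
kind: Service
metadata:
  name: http-echo
  annotations:
    haproxy.com/deployment: http-echo   # assumed value: the linked Deployment's name
spec:
  selector:
    app: http-echo
  ports:
    - port: 8443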
Additionally, a new and more efficient way of doing backend updates uses fewer connections to the HAProxy Enterprise Runtime API.
Prometheus metrics
We added two new counters to the list of Prometheus metrics:
haproxy_reloads_total
haproxy_runtime_socket_connections_total
These counters start at zero when the container starts and do not reset.
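Since both metrics are monotonically increasing counters, rate-style queries are the natural way to consume them. As an illustrative sketch (the rule group, alert name, and threshold are arbitrary assumptions), a Prometheus alerting rule could flag unusually frequent reloads:

groups:
  - name: haproxy-ingress                                 # hypothetical rule group
    rules:
      - alert: FrequentHAProxyReloads                     # hypothetical alert name
        expr: increase(haproxy_reloads_total[15m]) > 10   # arbitrary threshold
        annotations:
          summary: HAProxy Enterprise is reloading unusually often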
Logging
The logs now show additional messages about changes to the content of map files. Also, the number of repeating messages has been reduced in certain scenarios (for example, for the same service in the same ingress).
Custom resource definitions: Backend CRD
To further allow customization of backends, the Backend CRD now has options to add ACLs and http-request options to the backend.
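As a rough sketch of what this enables, a Backend CR might combine an ACL with an http-request rule. The field names below (acl_list, http_request_rule_list) follow the client-native backend models but are assumptions; verify them against the installed CRD schema.

apiVersion: ingress.v1.haproxy.org/v1
kind: Backend
metadata:
  name: backend-with-acl                # hypothetical resource name
spec:
  acl_list:                             # assumed field name for ACLs
    - acl_name: is_internal
      criterion: src
      value: 10.0.0.0/8
  http_request_rule_list:               # assumed field name for http-request rules
    - type: deny
      cond: unless
      cond_test: is_internal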
Custom resource definitions: TCP
Until now, mapping for TCP services was available through a custom ConfigMap using the --configmap-tcp-services flag. While this worked as expected, there were a few limitations we needed to address.
For example, ConfigMap alone doesn't have a standardized structure or validation. Therefore, keeping a larger list of services tidy can be challenging. Additionally, only some HAProxy options (such as service, port, and SSL/TLS offloading) were available for those types of services.
The tcps.ingress.v1.haproxy.org definition, conversely, lets us define and use more HAProxy options than we could with the ConfigMap.
Installing and getting to know TCP CRDs
If you're using Helm, the TCP services definition will be installed automatically. Otherwise, it's available as a raw YAML file via GitHub.
TCP Custom Resources (CRs) are namespaced, and you can deploy several of them in a shared namespace.
You can filter TCP Custom Resources managed by the ingress controller using the ingress.class annotation, which behaves the same way it does on an Ingress object.
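For instance, a TCP CR can carry the annotation in its metadata, and a controller will only process the resource when the class matches. This is a brief sketch; the class value haproxy is an assumption matching common defaults.

apiVersion: ingress.v1.haproxy.org/v1
kind: TCP
metadata:
  name: tcp-1
  annotations:
    ingress.class: haproxy   # only controllers configured with this class manage the resource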
A TCP CR contains a list of TCP service definitions. Each service definition has:
a name
a frontend section containing two permitted components:
any setting from the client-native frontend model
a list of binds coupled with any settings from the client-native bind models
a service definition that's a Kubernetes upstream Service/Port (the K8s Service and the deployed TCP CR must be in the same namespace).
Here's a simple example of a TCP service:
apiVersion: ingress.v1.haproxy.org/v1
kind: TCP
metadata:
  name: tcp-1
spec:
  - name: tcp-http-echo-8443
    frontend:
      name: http-echo-445
      tcplog: true
      binds:
        - name: mytcpapp
          port: 20000
    service:
      name: http-echo
      port: 8443
How do we configure service and backend options? You can use the Backend Custom Resource (and reference it in the Ingress Controller ConfigMap, Ingress, or the Service) in conjunction with the TCP CR.
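For example, a Backend CR could be referenced from the Kubernetes Service that the TCP CR points to. The annotation key below is an assumption for illustration; check the documentation for the exact key your version expects.

apiVersion: v1
kind: Service
metadata:
  name: http-echo
  annotations:
    haproxy.com/cr-backend: my-backend   # assumed annotation referencing a Backend CR
spec:
  ports:
    - port: 8443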
Mitigating TCP collisions
TCP services are tricky because they allow unwanted naming and configuration duplications. These overlaps can cause transmission delays and other performance degradations while impacting reliability.
Luckily, HAProxy Enterprise can detect and manage two types of collisions:
Collisions on frontend names
Collisions on bind addresses and ports
If several TCP services across namespaces collide in one of these ways, HAProxy Enterprise applies only the resource created first (the one with the oldest CreationTimestamp of the custom resource) and generates a message in the log.
SSL/TLS in a TCP custom resource
Here's a quick example of a TCP service with SSL/TLS enabled:
apiVersion: ingress.v1.haproxy.org/v1
kind: TCP
metadata:
  name: tcp-1
spec:
  - name: tcp-http-echo-8443
    frontend:
      name: fe-http-echo-8443
      tcplog: true
      log_format: "%{+Q}o %t %s"
      binds:
        - name: v4
          ssl: true
          ssl_certificate: tcp-test-cert
          port: 2000
        - name: v4v6
          address: "::"
          port: 2000
          v4v6: true
    service:
      name: "http-echo"
      port: 8443
Keep in mind that ssl_certificate can be the following:
The name of a Kubernetes Secret (in the same namespace as the TCP CR) containing the certificate and key
A folder or filename on the pod's local filesystem, mounted as a Secret volume
For example, you can mount an SSL/TLS Secret in the Ingress Controller Pod on a volume and reference the volume mount path in ssl_certificate. Alternatively, without changing the Pod (or deployment manifest), you can use a Secret name within the ssl_certificate configuration; the certificate and key will then be written to the Pod's filesystem at the /etc/haproxy/certs/tcp path.
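To make the two forms concrete, here's a sketch showing both bind variants side by side; the Secret name and mount path are illustrative assumptions:

binds:
  - name: via-secret
    ssl: true
    ssl_certificate: tcp-test-cert      # a Kubernetes Secret in the TCP CR's namespace
    port: 2000
  - name: via-volume
    ssl: true
    ssl_certificate: /mnt/tls/tls.pem   # assumed mount path of a Secret volume in the pod
    port: 2001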
The TCP ConfigMap and TCP Custom Resources aren't compatible. If you use both (a TCP CR and the TCP ConfigMap with a TCP service on the same address/port), the resulting configuration is unpredictable. Please ensure you deploy TCP Custom Resources and your TCP ConfigMap services using unique addresses and ports.
Additional changes
To allow more precise debugging and testing, we added the nano editor to the container image. Using nano, configuration changes can be tested temporarily (to see their effects) before applying them permanently.
The Unix socket is now used when mixing SSL passthrough and offloading, which allows better performance than the previous implementation.
Conclusion
HAProxy Enterprise Kubernetes Ingress Controller 3.0 represents our commitment to delivering a flexible and efficient platform for managing ingress traffic. With the introduction of TCP CRDs, improved Runtime efficiency, streamlined certificate updates, and expanded customization options, this release provides powerful tools to meet diverse Kubernetes use cases.
To learn more about HAProxy Enterprise Kubernetes Ingress Controller, follow our blog and browse our documentation. To see how HAProxy Technologies also provides external load balancing and multi-cluster routing alongside our ingress controller, check out our Kubernetes solutions and our webinar.
Subscribe to our blog. Get the latest release updates, tutorials, and deep-dives from HAProxy experts.