We’re proud to announce the release of version 1.6 of the HAProxy Kubernetes Ingress Controller. This version provides the ability to add raw configuration snippets to HAProxy frontends, allows for ACL/Map files to be managed through a ConfigMap, and enables complex routing decisions to be made based on anything found within the request headers or metadata. It also unlocks the ability to supply a secondary HAProxy configuration that can be used for loading additional sections that are not directly managed by the Ingress Controller.
This release was guided by the active discussions of our community members in GitHub and Slack.
Config Snippets for Frontends
The ingress controller exposes much of HAProxy's functionality through annotations on the Service, Ingress, and ConfigMap resources. However, given the vast scope of features and flexibility within HAProxy, not every option is accessible yet. Backend and global config snippets, which were introduced in version 1.5, allow you to write raw HAProxy configuration directives to access advanced features in the underlying HAProxy engine.
In this version, the frontend-config-snippet annotation has been added for inserting directives into the frontends managed by the controller. There are two such frontends: one for HTTP and the other for HTTPS. Initially, this was not planned, in order to avoid scenarios where the controller-managed portion of the HAProxy configuration conflicts with the user-managed configuration. However, it was decided that advanced and experienced users would benefit from having this access.
That being said, it is safer to use the backend-config-snippet annotation in most cases, especially since most frontend configuration directives can also be used in a backend. The exceptions are:
bind lines that listen on additional addresses beyond the default ones;
use_backend rules that enforce a different routing strategy than the one generated by the controller (although this can be handled via a dedicated service annotation named route-acl, described below);
some frontend-only options such as unique-id-format and option clitcpka.
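As an illustration, a frontend config snippet would typically be set in the controller's ConfigMap, since frontends are shared rather than tied to a single Service. The following is a minimal sketch, not taken from the release itself: the ConfigMap name and namespace are assumptions based on a typical install, and the two directives simply capture the User-Agent header and define a unique-id format.

apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-kubernetes-ingress   # assumed: the ConfigMap your controller watches
  namespace: default
data:
  frontend-config-snippet: |
    # log the User-Agent header (http-request capture is a frontend-only directive)
    http-request capture req.hdr(User-Agent) len 128
    # tag each request with a unique ID
    unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid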
ACL/Map Files via ConfigMap
HAProxy ACL and Map files are powerful features that allow you to match anything found within the request and response headers or metadata and make routing decisions based on that data. They are powered by Elastic Binary Trees, which means they are extremely performant and can easily handle millions of entries. This release allows you to import ACL and Map pattern files through a ConfigMap and then reference them in annotations.
First, consider how it's done without a pattern file. The most common example is filtering access depending on the client's source IP address. The example below shows how you can use the backend-config-snippet annotation to define a filtering rule. Notice that you must list each IP and IP range directly within the expression:
backend-config-snippet: |
  http-request deny if !{ src 127.0.0.1 10.0.0.0/8 1.2.3.4/24 }
Another use-case would be to set the value of a header named Exp-Date to an expiration date based on a key in the client’s request:
backend-config-snippet: |
  http-request set-header Exp-Date 1619481600 if { hdr(Key) Katotosh6Rae }
  http-request set-header Exp-Date 1619568000 if { hdr(Key) laeP2oweu0ri }
  http-request set-header Exp-Date 1619654400 if { hdr(Key) Xaib0ovao3ae }
The longer such a list of patterns grows, the more difficult it becomes to maintain in this format. In version 1.6 of the ingress controller, you can move these types of lists into external files, which you load into a ConfigMap. You then pass the controller argument --configmap-patternfiles to provide the name of the ConfigMap holding the pattern files.
In the example below, we store two pattern files in a ConfigMap resource:
$ cat /tmp/ips.acl
127.0.0.1
10.0.0.0/8
1.2.3.4/24

$ cat /tmp/keys.map
Katotosh6Rae 1619481600
laeP2oweu0ri 1619568000
Xaib0ovao3ae 1619654400

$ kubectl create -n default configmap staging-patterns \
    --from-file=/tmp/ips.acl \
    --from-file=/tmp/keys.map
configmap/staging-patterns created
The resulting ConfigMap will be:
apiVersion: v1
kind: ConfigMap
metadata:
  name: staging-patterns
  namespace: default
data:
  ips.acl: |
    127.0.0.1
    10.0.0.0/8
    1.2.3.4/24
  keys.map: |
    Katotosh6Rae 1619481600
    laeP2oweu0ri 1619568000
    Xaib0ovao3ae 1619654400
When installing the ingress controller, you would specify the argument --configmap-patternfiles=default/staging-patterns. With Helm, it looks like this:
$ helm install kubernetes-ingress haproxytech/kubernetes-ingress \
    --set-string "controller.extraArgs={--configmap-patternfiles=default/staging-patterns}"
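If you install from raw Kubernetes manifests rather than Helm, the equivalent is to append the flag to the controller container's arguments in its Deployment. This is only a sketch: the surrounding fields and the --configmap value shown here are assumptions based on a typical installation.

containers:
  - name: haproxy-ingress
    image: haproxytech/kubernetes-ingress
    args:
      # keep your existing controller arguments; only the last flag is added
      - --configmap=default/haproxy-kubernetes-ingress
      - --configmap-patternfiles=default/staging-patterns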
The previous examples then change to reference the pattern files from the patterns directory. The ACL example passes the file to the -f flag, while the map example hands the file path directly to the map converter:
backend-config-snippet: |
  http-request deny if !{ src -f patterns/ips.acl }
backend-config-snippet: |
  http-request set-header Exp-Date %[hdr(Key),map(patterns/keys.map)]
The ingress controller will monitor changes to the ConfigMap and update pattern files accordingly.
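For example, adding a new range to the deny list is just a matter of editing the ConfigMap and re-applying it (for instance with kubectl apply); the controller then regenerates the pattern file. In the sketch below, the added line, 192.168.0.0/16, is purely illustrative.

apiVersion: v1
kind: ConfigMap
metadata:
  name: staging-patterns
  namespace: default
data:
  ips.acl: |
    127.0.0.1
    10.0.0.0/8
    1.2.3.4/24
    192.168.0.0/16
  keys.map: |
    Katotosh6Rae 1619481600
    laeP2oweu0ri 1619568000
    Xaib0ovao3ae 1619654400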
Custom Routing
While the ingress controller implements traffic routing to your services, it gets its routing rules from Ingress resources. Following the Kubernetes specification, the only routing parameters available on an Ingress resource are the requested host and URL path, which you may find constraining.
Did You Know? The new Gateway API specification addresses this by expanding the routing decision to also match content in HTTP request headers. Work to support this is already underway.
For users who wish to go beyond routing by host or URL path, this release provides a dedicated service annotation named route-acl. The annotation is set on a specific Kubernetes Service (not on an Ingress or other resource) and supplies an HAProxy ACL expression that routes matching traffic to that service.
This makes it easy to set up a canary deployment with the HAProxy Kubernetes Ingress Controller, as explained in this implementation guide.
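As an illustration, a Service carrying the route-acl annotation might look like the following. The Service name, selector, ports, and the header used in the expression are all hypothetical; the annotation value is a raw HAProxy ACL expression, as described above.

apiVersion: v1
kind: Service
metadata:
  name: app-canary                       # hypothetical canary Service
  annotations:
    # route any request carrying the header "X-Canary: always" to this Service
    route-acl: req.hdr(X-Canary) -m str always
spec:
  selector:
    app: app-canary
  ports:
    - name: http
      port: 80
      targetPort: 8080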
Secondary HAProxy Config File
The ingress controller now supports loading a secondary HAProxy configuration file where you can define additional sections such as resolvers, cache, and ring.
The main configuration file, haproxy.cfg, which is generated by the ingress controller, reflects the state of pods and services within your Kubernetes cluster. The secondary configuration file is loaded alongside it, but remains completely under your control.
There are two main reasons to use the secondary configuration:
Configure anything not supported by Ingress Controller annotations;
Provide a stepping stone for migrating a legacy HAProxy config into one compatible with the HAProxy Kubernetes Ingress Controller.
In the following example, we define a secondary config file in order to support runtime DNS resolution in HAProxy by creating a resolvers section named mydns.
First, create a file named haproxy-aux.cfg and add a resolvers section to it, as shown in the following example configuration:
resolvers mydns
  nameserver local 127.0.0.1:53
  nameserver google 8.8.8.8:53
The config file defines a resolvers section that declares two nameservers: a local one and Google's public DNS. More tuning parameters are described in the documentation.
Next, load the file into a ConfigMap:
$ kubectl create configmap haproxy-aux-cfg \
    --from-file ./haproxy-aux.cfg
configmap/haproxy-aux-cfg created
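For reference, the resulting ConfigMap looks roughly like this. The tuning parameters after the nameservers (resolve_retries, the timeouts, and hold valid) are optional additions shown purely for illustration; omit them or adjust them to your environment.

apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-aux-cfg
data:
  haproxy-aux.cfg: |
    resolvers mydns
      nameserver local 127.0.0.1:53
      nameserver google 8.8.8.8:53
      resolve_retries 3
      timeout resolve 1s
      timeout retry   1s
      hold valid      10s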
Then mount the ConfigMap as a volume in the ingress controller pod by editing the pod's YAML installation manifest to add volumeMounts and volumes. The haproxy-aux.cfg file is expected to be in the /etc/haproxy directory:
containers:
  - name: haproxy-ingress
    image: haproxytech/kubernetes-ingress:latest
    volumeMounts:
      - name: haproxy-cfg-vol
        mountPath: /etc/haproxy/haproxy-aux.cfg
volumes:
  - name: haproxy-cfg-vol
    configMap:
      name: haproxy-aux-cfg
Our official Helm Chart supports mounting extra volumes in the Ingress controller pod too.
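With the chart, the same mount can be expressed through values. The following is only a sketch, assuming the chart exposes controller.extraVolumes and controller.extraVolumeMounts; check the chart's values.yaml for the exact keys in your version.

controller:
  extraVolumes:
    - name: haproxy-cfg-vol
      configMap:
        name: haproxy-aux-cfg
  extraVolumeMounts:
    - name: haproxy-cfg-vol
      mountPath: /etc/haproxy/haproxy-aux.cfg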
Each time that you update the ConfigMap, Kubernetes will automatically update the mounted volume.
The resolvers section can then be referenced via the backend-config-snippet annotation:
backend-config-snippet: default-server init-addr none resolvers mydns
This sets the default behavior for resolving the IP addresses of backend servers:
They start in a down state, without any valid IP address (init-addr none);
They resolve their addresses using the resolvers section named mydns.
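For context, here is roughly where such an annotation lives: a minimal sketch using a hypothetical Service whose backend should resolve server addresses at runtime.

apiVersion: v1
kind: Service
metadata:
  name: external-api                     # hypothetical Service name
  annotations:
    backend-config-snippet: |
      default-server init-addr none resolvers mydns
spec:
  ports:
    - name: http
      port: 80
      targetPort: 8080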
TLS Client Certificate Authentication
Since version 1.5, you can enable mTLS authentication between the ingress controller and the services it’s proxying traffic to by setting the server-ca and server-crt annotations. This release adds that same feature on the client side too.
This is enabled via the client-ca annotation, which takes the path of a Kubernetes secret containing a Certificate Authority (CA) certificate. HAProxy will use the provided CA to accept connections from clients with trusted TLS certificates only.
In addition to mTLS authentication, you can also log information from client certificates, use that information to send the client to a different backend service, or populate a header. In the example below, the request-set-header annotation adds a header named client-cn to the HTTP request before sending it to the backend service. The header will hold the client certificate’s common name (CN) value fetched with the ssl_c_s_dn(CN) fetch method.
request-set-header: client-cn %[ssl_c_s_dn(CN)]
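Putting the two annotations together, here is a sketch of how this could look in the controller's ConfigMap. The ConfigMap and Secret names are assumptions; client-ca references the Secret holding the CA certificate used to verify clients.

apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-kubernetes-ingress       # assumed: the ConfigMap your controller watches
  namespace: default
data:
  # namespace/name of the Secret holding the CA certificate (assumed names)
  client-ca: default/client-ca-secret
  # forward the client certificate's CN to backend services
  request-set-header: client-cn %[ssl_c_s_dn(CN)]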
Contributions
We’d like to thank the code contributors who helped make this version possible:
Moemen Mhedhbi: FEATURE, REORG, CLEANUP, BUG FIX, DOC, TEST, BUILD, OPTIMIZATION
Ivan Matmati: FEATURE, BUG FIX, TEST, DOC
Marko Juraga: BUILD
Zlatko Bratkovic: BUILD, CLEANUP, FEATURE
Aleksandr Dubinsky: BUG FIX
Marin Rukavina: DOC
Nick Ramirez: DOC
Frank Villaro-Dixon: BUG FIX
Conclusion
Version 1.6 of the HAProxy Kubernetes Ingress Controller gives you more access to the features present in the underlying HAProxy engine. You can now define raw configuration snippets for frontends and/or add a secondary configuration file—options that reinforce why using an HAProxy-powered ingress controller is a smart choice: to use the core capabilities of the world’s fastest and most widely used software load balancer.
This release also unlocks the ability to use Map and ACL files, define custom routing rules, and enable certificate authentication on the client side. These features strengthen the flexibility and security of your ingress solution.
Interested in learning more about the HAProxy Kubernetes Ingress Controller? Subscribe to our blog!