Reviewing Every New Feature in HAProxy 3.1
Mon, 03 Feb 2025 13:13:00 +0000
https://www.haproxy.com/blog/reviewing-every-new-feature-in-haproxy-3-1

HAProxy 3.1 makes significant gains in performance and usability, with better capabilities for troubleshooting. In this blog post, we list all of the new features and changes.

All these improvements (and more) will be incorporated into HAProxy Enterprise 3.1, releasing Spring 2025.

Watch our webinar, HAProxy 3.1: Feature Roundup, to hear our experts examine the new features and updates and to join the live Q&A.

Log profiles

The way that HAProxy emits its logs is more flexible now with the introduction of log profiles, which let you assign names to your log formats. By defining log formats with names, you can choose the one best suited for each log server and even emit logs to multiple servers at the same time, each with its own format.

In the example below, we define a log profile named syslog that uses the syslog format and another profile named json that uses JSON. For syslog, we set the log-tag directive inside the profile to change the syslog header's tag field, giving the syslog server a hint about how to process the message. Notice that we also get to choose when to emit the log message. We're emitting it on the close event, when HAProxy has finalized the request-response transaction and has access to all of the data:
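The embedded configuration file referenced below isn't reproduced in this feed. As a hedged sketch, a log-profile section along these lines could look like the following; the format strings and tag value are illustrative assumptions, not the original file's contents:

log-profile syslog
    log-tag "haproxy"
    # One message per transaction, emitted once all data is available
    on close format "%ci:%cp [%tr] %ft %b/%s %ST %B"

log-profile json
    # Named fields serialized as JSON via the +json log-format option
    on close format "%{+json}o %(client)ci %(status)ST %(bytes)B"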

[Embedded example: blog20250109-01.cfg]

Our frontend uses both log profiles. By setting the profile argument on each log line, the frontend will send syslog to one log server and JSON to another:
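The referenced file isn't shown here; a minimal sketch of such a frontend, assuming the log server addresses, might be:

frontend www
    bind :80
    log 192.168.1.50:514 profile syslog local0
    log 192.168.1.60:514 profile json local0
    default_backend servers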

[Embedded example: blog20250109-02.cfg]

By default, HAProxy emits a log message when the close event fires, but you can emit messages on other events, too. By tweaking the syslog profile to include more on lines, we can log a message at each step of HAProxy's processing:

[Embedded example: blog20250109-03.cfg]

To enable these extra messages, set the log-steps directive to all or to a comma-separated list of steps:
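A minimal sketch, assuming log-steps is set in the frontend alongside its log lines:

frontend www
    log-steps all
    log 192.168.1.50:514 profile syslog local0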

[Embedded example: blog20250109-04.cfg]

Log profiles present plenty of opportunities:

  • Create a log profile for emitting timing information to see how long HAProxy took to handle a request.

  • Create another log profile containing every bit of information you can squeeze out of the load balancer to aid debugging.

  • Switch the log format just by changing the profile argument on the log line.

  • Reuse profiles across multiple frontends.

  • Decide whether you want to emit messages for every step defined in a profile or for only some of them by setting the log-steps directive. 

do-log action

With the new do-log action, you can emit custom log messages throughout the processing of a request or response, allowing you to add debug statements that help you troubleshoot issues. Add the do-log action at various points of your configuration. In the example below, we set a variable named req.log_msg just before invoking a do-log directive:
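The referenced file isn't reproduced in this feed; a minimal sketch of that idea, with an assumed variable value:

frontend www
    bind :80
    # Set a message, then emit it via the http-req step of the log profile
    http-request set-var(req.log_msg) str(checking-redirects)
    http-request do-log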

[Embedded example: blog20250109-05.cfg]

Update your syslog log-profile section (see the section on log profiles) so that it includes the line on http-req, which defines the log format to use whenever http-request do-log is called. Notice that this log format prints the value of the variable req.log_msg:
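A hedged sketch of such a profile; the format strings are assumptions:

log-profile syslog
    # Used whenever http-request do-log fires
    on http-req format "%ci [%tr] do-log: %[var(req.log_msg)]"
    on close format "%ci:%cp [%tr] %ft %b/%s %ST %B"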

[Embedded example: blog20250109-06.cfg]

Your log will show the custom log message:

[Embedded output: blog20250109-07.txt]

The do-log action works with other directives too. Each matches up with a step in the log-profile section:

  • http-response do-log matches the step http-res.

  • http-after-response do-log matches the step http-after-res.

  • quic-initial do-log matches the step quic-init.

  • tcp-request connection do-log matches the step tcp-req-conn.

  • tcp-request session do-log matches the step tcp-req-sess.

  • tcp-request content do-log matches the step tcp-req-cont.

  • tcp-response content do-log matches the step tcp-res-cont.

set-retries action

The tcp-request content and http-request directives have a new action named set-retries that dynamically changes the number of times HAProxy will try to connect to a backend server if it fails to connect initially. Because HAProxy supports layer 7 retries via the retry-on directive, this new action also lets you retry on several other failure conditions.

In the example below, we use the set-retries action to change the number of retries from 3 to 10 when there's only one server up. In other words, when all the other servers are down and we've only got one server left, we make more connection attempts.
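The referenced file isn't shown here; a sketch of this idea, with assumed backend and server names:

backend be_app
    retries 3
    retry-on conn-failure
    # When only one server remains up, allow more connection attempts
    http-request set-retries 10 if { nbsrv(be_app) eq 1 }
    server web1 192.168.1.10:80 check
    server web2 192.168.1.11:80 check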

[Embedded example: blog20250109-08.cfg]

quic-initial directive

The new quic-initial directive, which you can add to frontend, listen, and named defaults sections, gives you a way to deny QUIC (Quick UDP Internet Connections) packets early in the pipeline to waste no resources on unwanted traffic. You have several options, including: 

  • reject, which closes the connection before the TLS handshake and sends a CONNECTION_REFUSED error code to the client.

  • dgram-drop, which silently ignores the reception of a QUIC initial packet, preventing a QUIC connection in the first place.

  • send-retry, which sends a Retry packet to the client.

  • accept, which allows the packet to continue.

Here's an example that rejects the initial QUIC packet from all source IP addresses, essentially disabling QUIC on this frontend:
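A minimal sketch, assuming the certificate path:

frontend web
    bind :443 ssl crt /etc/haproxy/certs/site.pem
    bind quic4@:443 ssl crt /etc/haproxy/certs/site.pem alpn h3
    # Refuse all QUIC initial packets before the TLS handshake
    quic-initial reject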

[Embedded example: blog20250109-09.cfg]

You can test it with an HTTP/3-enabled curl command. Below, the client's connection is rejected:

[Embedded example: blog20250109-10.sh]

After failing to connect via HTTP/3 over QUIC, the client (browser) will typically fall back to using HTTP/2 over TCP. So, if you want to block the client completely, you need to add additional rules that block the TCP traffic.

Server initial state

Add the new init-state argument to a server or server-template directive to control how quickly each server can return to handling traffic after restarting, coming out of maintenance mode, or being added through service discovery. The default setting, up, optimistically marks the server as ready to receive traffic immediately, but the server will be marked as down if it fails its initial health check. Available options include:

  • up - up immediately, but it will be marked as down if it fails the initial health check.

  • fully-up - up immediately, but it will be marked as down if it fails all of its health checks.

  • down - down initially and unable to receive traffic until it has passed the initial health check.

  • fully-down - down initially and unable to receive traffic until it has passed all of its health checks.

In the example below, we use fully-down so that the server remains unavailable after coming out of maintenance mode until it has passed all ten of its health checks. In this case, the health checks happen five seconds apart.
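The referenced file isn't reproduced here; a sketch of such a server line based on the description (names and address assumed; rise 10 gives the ten checks, inter 5s the five-second spacing):

backend be_app
    server web1 192.168.1.10:80 check inter 5s rise 10 init-state fully-down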

[Embedded example: blog20250109-11.cfg]

Use the Runtime API's set server command to put servers into and out of maintenance mode:
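For instance (the socket path and backend/server names are assumptions):

$ echo "set server be_app/web1 state maint" | socat stdio /var/run/haproxy.sock
$ echo "set server be_app/web1 state ready" | socat stdio /var/run/haproxy.sock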

[Embedded example: blog20250109-12.sh]

SPOE

The Stream Processing Offloading Engine (SPOE) filter forwards streaming load balancer data to an external program. It enables you to implement custom functions at the proxy layer using any programming language to extend HAProxy.

What's new? A multiplexer-based implementation that allows idle connection sharing between threads, plus load balancing, queueing, and stickiness per request instead of per connection. This greatly improves reliability: the engine is no longer applet-based and is better aligned with the other proven mux-based mechanisms. This mux-based implementation allows for management of SPOP (Stream Processing Offload Protocol) through a new backend mode called spop. It also adds flexibility to SPOE, optimizes traffic distribution among servers, improves performance, and will ultimately make the entire system more reliable, as future changes to the SPOE engine will only affect pieces specific to SPOE.

In a configuration file, specify the mode for your backend as spop. This mode is now mandatory and automatically set for backends referenced by SPOEs. Configuring your backend in this way means that you are no longer required to use a separate configuration file for SPOE.
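A minimal sketch, with an assumed name and address:

backend spoe-agents
    mode spop
    server agent1 192.168.1.20:12345 check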

When an SPOE is used on a stream, a separate stream is created to handle the communication with the external program. The main stream is now the "parent" stream of this newly created "child" stream, which allows you to retrieve variables from it and perform some processing in the child stream based on the properties of the parent stream.

The following SPOE parameters were removed in this version and are silently ignored when present in the SPOE configuration: 

  • maxconnrate

  • maxerrrate 

  • max-waiting-frames 

  • timeout hello

  • timeout idle

Variables for SPOA child streams

You can now pass variables from the main stream that's processing a request to the child stream of a Stream Processing Offload Agent (SPOA). Passing data like the source IP address to the agent was never a problem; that's already supported. What was missing was the ability to pass variables to the backend containing the agent servers. That prevented users from configuring session stickiness for agent servers or selecting a server based on a variable.

In the example below, we try to choose an agent server based on a URL parameter named target_server. The variable req.target_server gets its value from the URL parameter. Then, we check the value in the backend to choose which server to use. However, this method fails because the agents backend can't access the variables from the frontend. The agents backend is running in a child stream, not the stream that's processing the request, so it can't access the variables.

[Embedded example: blog20250109-13.cfg]

But in this version of HAProxy, you can solve this by prefixing the variable scope with the letter p, for parent stream. Here, req becomes preq:
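The referenced file isn't shown here; one possible shape of the working approach, assuming server-selection rules apply in spop backends as they do elsewhere (names and addresses are assumptions):

frontend www
    bind :80
    http-request set-var(req.target_server) url_param(target_server)
    default_backend be_app

backend spoe-agents
    mode spop
    # preq reads the req-scoped variable set by the parent stream
    use-server agent1 if { var(preq.target_server) -m str agent1 }
    use-server agent2 if { var(preq.target_server) -m str agent2 }
    server agent1 192.168.1.20:12345 check
    server agent2 192.168.1.21:12345 check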

[Embedded example: blog20250109-14.cfg]

This works for these scopes: psess, ptxn, preq, and pres. Use this feature for session stickiness based on the client's source IP or other scenarios that require reading variables set by the parent stream.

TCP log supports CLF

HAProxy 3.1 updates the option tcplog directive to allow an optional argument: clf. When enabled, HAProxy sends the same information as the non-CLF option, but in the standardized CLF (Common Log Format) that CLF-aware log servers can parse.
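For example:

defaults
    mode tcp
    option tcplog clf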

It's equivalent to the following log-format definition:

[Embedded example: blog20250109-15.cfg]

Send a host header with option httpchk

As of version 2.2, you can send HTTP health checks to backend servers like this:

[Embedded example: blog20250109-16.cfg]

Before version 2.2, the syntax for performing HTTP health checks was this:

[Embedded example: blog20250109-17.cfg]

If you prefer the traditional way, this version of HAProxy allows you to pass a host header to backend servers without having to specify carriage return and newline characters, and you don’t have to escape spaces with backslashes. Just add it as the last parameter on the option httpchk line, like this:
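The referenced file isn't reproduced here; a sketch with an assumed host value, following the description that the host is the last parameter:

backend be_app
    option httpchk GET /health HTTP/1.1 www.example.com
    server web1 192.168.1.10:80 check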

[Embedded example: blog20250109-18.cfg]

Size unit suffixes

Many size-related directives now correctly support unit suffixes. For example, a ring buffer size set to 10g will now be understood as 10737418240 bytes, instead of being incorrectly interpreted as 10 bytes.

New address family: abnsz

To become compatible with other software that supports Linux abstract namespaces, this version of HAProxy adds a new address family, abnsz, which stands for zero-terminated abstract namespace. This lets HAProxy interconnect with software that determines the length of the namespace's name by the length of the string, terminated by a null byte. In contrast, the abns address family, which continues to exist, expects the name to always be 108 characters long, padded with trailing null bytes.

The syntax when using abnsz is the same as with abns:
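The referenced file isn't shown here; a hedged sketch (the namespace names are assumptions):

frontend internal
    bind abnsz@mynamespace

backend relay
    server upstream abnsz@othernamespace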

[Embedded example: blog20250109-19.cfg]

New address family: mptcp

MultiPath Transmission Control Protocol (MPTCP) is an extension of TCP and is described in RFC 8684. MPTCP, according to its RFC, "enables a transport connection to operate across multiple paths simultaneously". MPTCP improves resource utilization, increases throughput, and responds quicker to failures. MPTCP addresses can be explicitly specified using the following prefixes: mptcp@, mptcp4@, and mptcp6@.

  • If you declare mptcp@<address>[:port1[-port2]] in your configuration file, the IP address is considered as an IPv4 or IPv6 address depending on its syntax. 

  • If you declare mptcp4@<address>[:port1[-port2]] in your configuration file, the IP address will always be considered as an IPv4 address.

  • If you declare mptcp6@<address>[:port1[-port2]] in your configuration file, the IP address will always be considered as an IPv6 address.

With all three MPTCP prefixes, the socket type and transport method are forced to "stream". Depending on the statement using the MPTCP address, a port or a port range must be specified.
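For example, to accept MPTCP connections on port 8080, forcing IPv4:

frontend www
    bind mptcp4@:8080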

New sample fetches

HAProxy 3.1 adds new sample fetch methods related to SSL/TLS client certificates:

  • ssl_c_san - Returns a string of comma-separated Subject Alt Name fields contained in the client certificate.

  • ssl_fc_sigalgs_bin - Returns the content of the signatures_algorithms (13) TLS extension presented during the Client Hello.

  • ssl_fc_supported_versions_bin - Returns the content of the supported_versions (43) TLS extension presented during the Client Hello.

New converters

This version introduces new converters. Converters transform the output from a fetch method.

  • date - Converts an HTTP date string to a UNIX timestamp.

  • rfc7239_nn - Converts an IPv4 or IPv6 address to a compliant address that you can use in the from field of a Forwarded header. The nn here stands for node name. You can use this converter to build a custom Forwarded header.

  • rfc7239_np - Converts an integer into a compliant port that you can use in the from field of a Forwarded header. The np here stands for node port. You can use this converter to build a custom Forwarded header.

HAProxy Runtime API

This version of HAProxy updates the Runtime API with new commands and options.

debug counters

A new Runtime API command debug counters shows all internal counters placed in the code. Primarily aimed at developers, these debug counters provide insight for analyzing glitch counters and counters placed in the code using the new COUNT_IF() macro. Developers can use this macro during development to place arbitrary event counters anywhere in the code and check the counters' values at runtime using the Runtime API. For example, glitch counters can provide useful information when they are increasing even though no request is instantiated or no log is produced.

While diagnosing a problem, you might be asked by a developer to run the command debug counters show or debug counters all to list all available counters. The counters are listed along with their count, type, location in the code (file name and line number), function name, the condition that triggered the counter, and any associated description. Here is an example for debug counters all:

[Embedded example: blog20250109-20.sh]

Please note that the format and contents of this output may change across versions and should only be used when requested during a debugging session.

dump ssl cert

The new dump ssl cert command for the Runtime API displays an SSL certificate directly in PEM format. This is useful for saving a certificate that was updated over the CLI but does not yet exist on the filesystem. You can also dump a transaction by prefixing the filename with an asterisk. This command is restricted and can only be issued on sockets configured for level admin.

The syntax for the command is: 
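For instance (the socket path and certificate name are assumptions):

$ echo "dump ssl cert /etc/haproxy/certs/site.pem" | socat stdio /var/run/haproxy.sock
$ echo "dump ssl cert *site.pem" | socat stdio /var/run/haproxy.sock   # dump the pending transaction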

[Embedded example: blog20250109-21.sh]

echo

The echo command with syntax echo <text> will print what's contained in <text> to the console output; it's useful for writing comments in between multiple commands. For example:

[Embedded example: blog20250109-22.sh]

show dev

This version improves the show dev Runtime API command by printing more information about arguments provided on the command line, as well as the Linux capabilities set at process start and the current capabilities (the ability to preserve capabilities was introduced in version 2.9 and improved in version 3.0). This information is crucial for engineers troubleshooting the product.

To view this development and debug information, issue the show dev command:

[Embedded example: blog20250109-23.sh]

You can see in the output that the command-line arguments and capabilities are present:

[Embedded output: blog20250109-24.txt]

Note that the format and contents of this output may change across versions; it is most useful for providing current system status to developers who are diagnosing issues.

show env

The command show env dumps environment variables known to the process, and you can specify which environment variable you would like to see as well:
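For instance (the socket path is an assumption; HAPROXY_CFGFILES is one variable HAProxy sets):

$ echo "show env" | socat stdio /var/run/haproxy.sock
$ echo "show env HAPROXY_CFGFILES" | socat stdio /var/run/haproxy.sock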

[Embedded example: blog20250109-25.sh]

Here's an example output:

[Embedded output: blog20250109-26.txt]

show sess

The new show-uri option for the show sess command dumps a list of active streams to the console and displays each transaction's URI, if available and captured during request analysis.

show quic

The show quic command produces more information about the internal state of the congestion control algorithm and other dynamic metrics (such as window size, bytes in flight, and counters).

show info

The show info command will now report the current and total number of streams. It can help quickly detect if a slowdown is caused on the client side or the server side and facilitate the export of activity metrics. Here's an example output that shows the new CurrStreams and CumStreams:

[Embedded output: blog20250109-27.txt]

Troubleshooting

This release includes a number of troubleshooting and debugging improvements in order to reduce the number of round trips between developers and users and to provide better insights for debugging. The aim is to minimize impact to the user while also being able to gather crucial logs, traces, and core dumps. Improvements here include new log fetches, counters, and converters, improved log messages in certain areas, improved verbosity and options for several Runtime API commands, the new traces section, and improvements to the thread watchdog. 

Traces

Starting in version 3.1, traces get a dedicated configuration section named traces, providing a better user experience compared to previous versions. Traces report more information than before, too.

Traces let you see events as they happen inside the load balancer during the processing of a request. They're useful for debugging, especially since you can enable them on a deployed HAProxy instance. Traces were introduced in version 2.1, but at that time you had to configure them through the Runtime API. In version 2.7, you could configure traces from the HAProxy configuration file, but the feature was marked as experimental. The new traces section, which is not experimental, offers better separation from other process-level settings and a more straightforward syntax. Use traces cautiously, as they can impact performance.

To become familiar with them, read through the Runtime API documentation on traces. Then, try out the new syntax in the configuration file. In the following configuration example, we trace HTTP/2 requests:

[Embedded example: blog20250109-28.cfg]

We restarted HAProxy and used the journalctl command to follow the output of this trace:

[Embedded example: blog20250109-29.sh]

The output shows the events happening inside the load balancer:

[Embedded output: blog20250109-30.txt]

You can list multiple trace statements in a traces section to trace various requests simultaneously. Also new to traces is the ability to specify an additional source to follow along with the one you are tracing; this is useful for tracing backend requests while also tracing their associated frontend connections, for example.

Major improvements to the underlying muxes' debugging and troubleshooting information make all of this possible. Thanks to these improvements, traces for H1, H2, and H3/QUIC now expose much more internal information, making it easier to piece together a request's entire path through the system, which was not possible previously.

when() converter

Consider a case where you want to log some information or pass data to a converter only when certain conditions are met. The new when() converter enables exactly that: it passes data, such as debugging information, only when a condition is met, such as an error condition.

Along with the when() converter, there are several new fetches that produce data related to debugging and troubleshooting. The first are the debug string fetches: fs.debug_str for a frontend stream and bs.debug_str for a backend stream. These two fetches return debugging information from the lower layers of the stream and connection. The next set are the entity fetches, last_entity and waiting_entity. The former returns the ID of the last entity that was evaluated during stream analysis, and the latter returns the ID of the entity that was waiting to continue its processing when an error or timeout occurred. In this context, entity refers to a rule or filter.

You can use these fetches on their own to always print this debug information, which may be too verbose to log on every request, or you can combine them with the when() converter to log this information only when an error condition occurs, avoiding flooded logs.

For the debug string fetches, you can provide the when() converter with a condition that tells HAProxy to log the debug information only when there is an error. The when() converter is flexible in terms of the conditions you are able to provide to it, and you can prefix a condition with ! to negate it. You can also specify an ACL to evaluate. The available conditions are listed here:

  • error: returns true when an error was encountered during stream processing

  • forwarded: returns true when the request was forwarded to a backend

  • normal: returns true when no error occurred

  • processed: returns true when the request was either forwarded to a backend server or processed by an applet

  • stopping: returns true if the process is currently stopping

  • acl: returns true when the ACL condition evaluates to true. Use this condition like so, specifying the ACL condition and ACL name separated by a comma: when(acl,<acl_name>).

Note that if the condition evaluates to false, then the fetch or converter associated with it will not be called. This may be useful in cases where you want to customize when certain items are logged or you want to call a converter only when some condition is met.

For example, to log upon error in a frontend, add a log format statement like this to your frontend, using the condition normal prefixed with ! to negate it:
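The referenced file isn't reproduced here; a sketch of such a statement (the surrounding format tokens are assumptions):

frontend www
    log-format "%ci:%cp [%tr] %ft %ST dbg=%[fs.debug_str,when(!normal)]"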

[Embedded example: blog20250109-31.cfg]

That is to say, "log the frontend debug string only when the results of the expression are not normal." When this condition is met, HAProxy will log a message that contains the content of the debug string:

[Embedded output: blog20250109-32.txt]

You can do the same for a backend, replacing fs.debug_str with bs.debug_str.

As for the last_entity and waiting_entity fetches, you can use them with when() to log the ID of the last entity or the waiting entity only when an error condition is met. In this case, set the condition for when() to error so that the entity ID is logged only when there is an error. Add a log format line as follows, specifying which entity's ID, last or waiting, to log:
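A sketch of such a line (surrounding tokens are assumptions):

frontend www
    log-format "%ci %ST last=%[last_entity,when(error)] waiting=%[waiting_entity,when(error)]"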

[Embedded example: blog20250109-33.cfg]

If the condition for logging is not met, a dash "-" is logged in the message instead.

fc/bc_err fetches

As of version 2.5, you can use the sample fetches fc_err for frontends and bc_err for backends to help determine the cause of an error on the current connection. In this release, these fetches have been enhanced to include connection-level errors that occur during data transfers. This is useful for detecting network misconfigurations at the OS level, for example, incorrect firewall rules, resource limits of the TCP stack, or a bug in the kernel, as would be indicated by an error such as ERESET or ECONNRESET.

You can use the intermediary fetches fc_err_name and bc_err_name to get the short name of the error instead of just the error code (as would be returned from fc_err or bc_err) or the long error message returned by fc_err_str or bc_err_str. As with the fc_err and bc_err sample fetches, use the intermediary fetches prefixed with fc_* for frontends and bc_* for backends.

Post_mortem structure for core dumps

The system may produce a core dump on a fatal error or when the watchdog, which detects deadlocks, fires. While crucial to diagnosing issues, these files are sometimes truncated or missing information vital to analysis. This release introduces an internal post_mortem structure, included in core dumps, which contains pointers to the most important internal structures. This structure, present in all core dumps, allows developers to more easily navigate the process's memory, reducing analysis time, and spares the user from changing their settings to produce different debug output. Additionally, more hints have been added to the crash output to help in decoding the core dump. To view this debugging information without producing a core dump, use the improved show dev command.

Improved thread dump

In previous versions, sometimes stderr outputs of the thread backtraces in core dumps would be missing, or only the last one was present due to the reuse of the same output buffer for each thread. Core dumps now include backtraces for all threads, as each thread's backtrace is now dumped in its own buffer. Also present in core dumps as of this version are the output messages for each thread, which assists developers in determining the causes of issues even when debug symbols are not present. 

Watchdog and stuck threads

This version includes improvements to HAProxy's watchdog, which detects deadlocks and kills runaway processes. The watchdog now checks for stuck threads more often, by default every 100ms, and emits warnings with a stuck thread's backtrace before killing it. It will stop the thread if, after the first warning, the thread makes no progress for one second. Since warnings fire every 100ms, you should see ten warnings about a stuck thread before the watchdog kills it.

Note that you can adjust the time delay after which HAProxy will emit a warning for a stuck thread using the global debugging directive warn-blocked-traffic-after. We do not advise that you change this value, but changing it may be necessary during a debugging session. 

Also note that you may see this warning behavior when you are doing computationally heavy operations, such as Lua parsing loops in sample fetches or when using map_reg or map_regm.

An issue regarding the show threads Runtime API command that caused it to take action on threads sooner than expected has also been remedied. 

GDB core inspection scripts

This release includes GDB (GNU debugger) scripts that are useful for inspecting core dumps. You can find them here: /haproxy/haproxy/tree/v3.1.0/dev/gdb

Memory profiling

This version enhances the accuracy of the memory profiler by improving the tracking of the association between memory allocations and releases and by intercepting more calls such as strdup() as well as non-portable calls such as strndup() and memalign(). This improvement in accuracy applies to the per-DSO (dynamic shared object) summary as well, and should fix some rare occurrences where it incorrectly appeared that there was more memory free than allocated. New to this version, a summary is provided per external dependency, which can help to determine if a particular library is leaking memory and where.

Logged server status

In this version, HAProxy now logs the correct server status after an L7 retry occurs. Previously it reported only the first code that triggered the retry.

Short timeouts

Under high load, unexpected behavior may arise due to extremely short timeouts. Given that the default unit for timeouts is milliseconds, it is not so obvious that the timeout value you specify may be too small if you do not also specify the unit. HAProxy will now emit a warning for a timeout value less than 100ms if you do not provide a unit with the timeout value. The warning will suggest how to configure the directive to avoid the warning, typically by appending "s" if you are specifying a value in seconds or "ms" for milliseconds.
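For example, this hypothetical line would previously be read as 5 milliseconds; appending a unit avoids both the surprise and the new warning:

defaults
    # "timeout connect 5" (no unit) would be read as 5ms and now triggers a warning
    timeout connect 5s
    timeout client 30s
    timeout server 30s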

File descriptor limits

A new global directive, fd-hard-limit, sets the maximum number of file descriptors the process can use. By default, it is set to 1048576 (roughly one million, the long-standing default for most operating systems). This directive remedies an issue caused by newer operating system defaults that allow a process up to one billion file descriptors, which resulted in either slow boot times or out-of-memory failures. HAProxy uses the value of this directive to set the maximum number of file descriptors and to determine a reasonable limit based on the available resources (for example, RAM size). If you require a custom maximum number of file descriptors, use this global directive as follows:
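For example, to set the long-standing default explicitly:

global
    fd-hard-limit 1048576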

[Embedded example: blog20250109-34.cfg]

Time jumping

To remedy an issue some users have been facing with incorrect rate counters as a result of time jumps (a sudden, significant jump forward or backward in the system time), HAProxy will now use the precise monotonic clock as the main clock source whenever the operating system supports it. Previous versions included measures to detect and correct these jumps, but a few hard-to-detect cases remained; the precise monotonic clock helps to better detect small time jumps and provides a finer time resolution.

Log small H2/QUIC anomalies

HAProxy 3.0 introduced the ability to track protocol glitches: requests that are valid from a protocol perspective but have the potential to pose problems anyway. This version enables the HTTP/2 and QUIC multiplexers to count small anomalies that could force a connection to close. You can capture and examine this information in the logs, which can help identify how suspicious a request is.

Performance

HAProxy 3.1 improved performance in the following ways.

H2 

The H2 mux is significantly more performant in this version. This was accomplished by optimizing the H2 mux to wake up only when there are requests ready to process, saving CPU cycles and using 30% fewer instructions on average when downloading. POST upload performance has increased by up to 24x with default settings, and the mux now also avoids head-of-line blocking when downloading from H2 servers.

Two new global directives, tune.h2.be.rxbuf and tune.h2.fe.rxbuf, allow for further tuning of this behavior. Specify a buffer size in bytes using tune.h2.fe.rxbuf for incoming connections and tune.h2.be.rxbuf for outgoing connections. For both uploads and downloads, one buffer is granted to each stream, and 7/8 of the unused buffers are shared between streams that are uploading or downloading, which is the mechanism that significantly improves performance.
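A sketch with assumed buffer sizes:

global
    tune.h2.fe.rxbuf 256k
    tune.h2.be.rxbuf 256k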

QUIC

New to this version are two global directives for tuning QUIC performance. The first, tune.quic.cc.cubic.min-losses, takes a number that defines how many packets must be missed before the Cubic congestion control algorithm determines that a loss has occurred. This setting allows the algorithm to be slightly more tolerant of false losses, though you should exercise caution when changing the value from its default of 1. A value of 2 may show some performance improvement, but we recommend it only for analysis, not for extended periods, and you should avoid values larger than 2.

As for tune.quic.frontend.default-max-window-size, you can use this global directive to define the default maximum window size for the congestion controller of a single QUIC connection, by specifying an integer value between 10k and 4g, with a suffix of "k", "m" or "g".
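A sketch combining both tunables (the values are assumptions; see the caveats above about min-losses):

global
    # Tolerate one extra missed packet before declaring a loss (for analysis only)
    tune.quic.cc.cubic.min-losses 2
    tune.quic.frontend.default-max-window-size 2m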

This version also improves the efficiency of the QUIC buffer allocator. Using this tunable, you can vary the amount of memory required per connection, thus reducing overallocation.

The QUIC transmission path has also been significantly improved in this version: it now adapts to the current send window size and uses Generic Send Offload to let the kernel send multiple packets in a single system call. This offloads processing from HAProxy and the kernel onto the hardware, which is especially meaningful on virtual machines, where system calls can be expensive.

Process priorities

To help improve performance for large configurations that consume a lot of CPU on reload, this version adds two new global configuration directives: tune.renice.startup and tune.renice.runtime. They take a value between -20 and 19 to apply a scheduling priority to configuration parsing; a lower value gives the parsing a higher scheduling priority. These values correspond to the scheduling priority values accepted by the setpriority() Linux system call. Once parsing is complete, the priority returns to its previous value, or to the value of tune.renice.runtime if it is also present in the configuration. See the Linux manual page on scheduling priority (sched(7)) for more information.
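A sketch with assumed nice values:

global
    # Parse the configuration at a low priority, then return to normal afterward
    tune.renice.startup 15
    tune.renice.runtime 0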

TCP logs

TCP logs saw a 56% performance gain in this version thanks to the implementation of the line-by-line parser in the TCP log forwarder. Regarding log servers, the ring sending mechanism is improved in this version: the load is better balanced across available threads, with new server connections assigned to the least-loaded threads. You can now use the max-reuse directive for TCP connections served by rings; the sink TCP connection processors will not reuse a server connection more times than the indicated maximum. Connections to the servers are then forcefully removed and re-created, which helps distribute the load across available threads, increasing performance. When using this directive, make sure that connections are not closed more than a couple of times per second.

Pattern cache

In previous versions, some users may have seen intense CPU usage by the pattern LRU cache when performing lookups with low cardinality. To remedy this, the cache is now skipped for maps or expressions whose patterns have low cardinality: fewer than 5 for regular expressions, fewer than 20 for others. Depending on your setup, you could see savings of 5-15% CPU in these cases.

Config checking

As of this version, configured servers for backends are now properly indexed, which saves time in detecting duplicate servers. As such, the startup time for a configuration with a large number of servers could see a reduction of up to a factor of 4.

Variables

Variables have been moved from a list to a tree, resulting in a 67% global performance gain for a configuration including 100 variables.

Expressions

We saw a performance gain of, on average, 7% for arithmetic and string expressions by removing trivial casts between samples and converters of the same types.

Lua

The Lua function core.set_map() is now twice as fast, thanks to avoiding duplicate lookups.

QUIC buffer

Small frames for the QUIC buffer handling now use small buffers. This improves both the memory and CPU usage, as the buffers are now more appropriately sized and do not require realignment.

QUIC will now always send a NEW_TOKEN frame to new clients for reuse in the next connection. This permits validated clients to reconnect without going through the address validation process again, which improves network performance when a listener is under attack or when dealing with a lossy network.

File descriptors

This version includes a performance gain regarding smoother reloads for large systems, that is, systems requiring a large number of file descriptors and a large number of threads. This gain is due to how file descriptors are handled on boot, shortening initialization time from 1.6s to 10ms for a setup with 2M configured file descriptors. 

Master-worker

HAProxy's master-worker mode was heavily reworked in this version to improve stability and maintainability. Its previous architecture model proved difficult in maintaining forward compatibility for seamless upgrades; the rework aims to remedy this problem. Per the new model, the master process does nothing after starting until it confirms the worker is ready, and it no longer re-executes itself to read the configuration, which greatly reduces the number of potential race conditions. The configuration is now buffered once for both the master and worker and as such will be identical for both. As such, environment variables shared by both will be more consistent, and the worker will be isolated from variables applicable to the master only. This all improves the separation between the processes. An additional improvement is that this rework will reduce file descriptor leaks across the processes as they are now better separated. All of this to say: you should not notice anything as a result of this change except for improved reliability.

HAProxy test suite

An additional milestone regarding reliability that is worth a mention is that the regtests, that is, HAProxy's test suite, have now exceeded 5000 expect rules, spread over 216 files. These tests are set to strict evaluation, which means that, when run, any warning will produce an error. Know that reliability is a top priority, and these tests are executed on 20-30 platform combinations on every push, and are run locally by developers on each commit. This ensures that HAProxy continues to shine in regards to reliability and robustness.

Deprecation

The program section is deprecated in HAProxy 3.1 and will no longer be supported starting with HAProxy 3.3. To replace it, we suggest using process managers such as systemd, SysVinit, Supervisord, or Docker s6-overlay. The program section also behaves differently in HAProxy 3.1: during a reload, the master process will start a configured program, but a worker process will execute the rest of the program instead, and a program can execute even if the worker process has a faulty configuration at reload.

The configuration options accept-invalid-http-request and accept-invalid-http-response are deprecated. Instead, use accept-unsafe-violations-in-http-request and accept-unsafe-violations-in-http-response, which enable or disable relaxed parsing of HTTP requests and responses, respectively.
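For example, replacing the deprecated options in a defaults section:

defaults
    option accept-unsafe-violations-in-http-request
    option accept-unsafe-violations-in-http-response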

Duplicate names in the various families of proxies (for example, frontend, listen, backend, defaults, and log-forward sections) and among servers are now detected and reported with a deprecation warning stating that duplicate names will not be supported in HAProxy 3.3. Address these deprecation warnings as they appear, before upgrading to HAProxy 3.3. Doing so will result in faster configuration parsing, better visibility in logs since there are no duplicate names, and ultimately a more reliable configuration.

The legacy C-based mailers are deprecated and will be removed in HAProxy 3.3. Set up mailers using Lua mailers instead.

Breaking changes

Visit /haproxy/wiki/wiki/Breaking-changes to see the latest on upcoming breaking changes in HAProxy and the releases they are planned for. This list helps users upgrading from older versions of HAProxy to newer versions.

Conclusion

HAProxy 3.1 was made possible through the work of contributors who pour immense effort into open-source projects like this one. This work includes participating in discussions, bug reporting, testing, documenting, providing help, writing code, reviewing code, and hosting packages.

While it's impossible to include every contributor's name here, you are all invaluable members of the HAProxy community. Thank you for contributing!


Announcing HAProxy Kubernetes Ingress Controller 3.1
Tue, 28 Jan 2025 12:38:00 +0000
https://www.haproxy.com/blog/announcing-haproxy-kubernetes-ingress-controller-31

We’re excited to announce the release of HAProxy Kubernetes Ingress Controller 3.1!

This release introduces expanded support for TCP custom resource definitions (CRDs), runtime improvements, and parallelization when writing maps.

Version compatibility with HAProxy 

As announced with the previous version, HAProxy Kubernetes Ingress Controller's version number now matches the version of HAProxy it uses. HAProxy Kubernetes Ingress Controller 3.1 is built with HAProxy version 3.1.

Lifecycle of versions

To enhance transparency about supported versions, we’ve introduced an End-of-Life table that outlines which versions are supported in parallel.

Additionally, we’ve published a list of tested Kubernetes versions. Among the supported versions is Kubernetes 1.32, released in December 2024. While HAProxy Kubernetes Ingress Controller is expected to work with versions beyond those listed, only tested versions are explicitly documented.

Ready to Upgrade?

When you are ready to start the upgrade procedure, go to the upgrade instructions for HAProxy Kubernetes Ingress Controller.

Updating certificates through the Runtime API

In this release, HAProxy Kubernetes Ingress Controller now uses HAProxy's Runtime API to update certificates without requiring a reload. Previously, certificate updates required an HAProxy reload, but this new approach streamlines the process and reduces resource use. 

Parallelization in writing maps

Both HAProxy and the file system can handle writing maps in parallel. With version 3.1, HAProxy Kubernetes Ingress Controller parallelizes writing maps both to HAProxy and to the file system. To maintain I/O efficiency and reduce latency, a maximum of 10 maps can be written in parallel.

ingress.class annotation in TCP custom resource

TCP Custom Resources managed by HAProxy Kubernetes Ingress Controller can now be filtered using the ingress.class annotation, aligning behavior with an Ingress object.

Breaking Change

If you’re upgrading from version 3.0 to 3.1, take note of the following regarding the ingress.class annotation:

  • For TCP CRs deployed with HAProxy Kubernetes Ingress Controller versions ≤ 3.0, if the ingress controller has an ingress.class flag, you must set the same value for the ingress.class annotation in the TCP CR.

  • If the annotation is not set, the corresponding backends and frontends in the HAProxy configuration will be deleted, except if the controller empty-ingress-class flag is set (the same behavior as the Ingress object).

Support thread pinning on http/https/stats/healthz

You can pin threads using the following new arguments for HAProxy Kubernetes Ingress Controller:

  • http-bind-thread

  • https-bind-thread

  • healthz-bind-thread

  • stats-bind-thread

These arguments offer advanced optimization for specific use cases.

Contributions

HAProxy Kubernetes Ingress Controller's development thrives on community feedback and feature input. We’d like to thank the code contributors who helped make this version possible!

  • Ivan Matmati: FEATURE, BUG, TEST

  • Hélène Durand: FEATURE, BUG, TEST

  • Dinko Korunić: FEATURE, BUILD, OPTIM

  • Nicholas Ramirez: DOC

  • Daniel Skrba: DOC

  • Andjelko Iharos: DOC

  • Olivier Doucet: FEATURE

  • Xuefeng Chen: FEATURE

  • Will Weber: BUG

  • Ali Afsharzadeh: BUILD

  • Zlatko Bratković: BUILD, FEATURE, DOC, CLEANUP

Conclusion 

HAProxy Kubernetes Ingress Controller 3.1 introduces features that enhance flexibility and efficiency for managing ingress traffic. With expanded support for TCP CRDs, enhanced certificate updates through the Runtime API, and improved parallelization when writing maps, this release empowers users to handle more complex Kubernetes environments. 

To learn more about HAProxy Kubernetes Ingress Controller, follow our blog and browse our Ingress Controller documentation. If you want to see how HAProxy Technologies also provides external load balancing and multi-cluster routing alongside our ingress controller, check out our Kubernetes solutions and our webinar.

Announcing HAProxy Enterprise Kubernetes Ingress Controller 3.0
Wed, 22 Jan 2025 01:56:00 +0000
https://www.haproxy.com/blog/announcing-haproxy-enterprise-kubernetes-ingress-controller-3-0

We’re excited to introduce HAProxy Enterprise Kubernetes Ingress Controller 3.0, packed with powerful new features that bring greater control, performance, and observability to managing Kubernetes environments.

This release delivers TCP custom resource definitions (CRDs) to improve mapping, structuring, and validation for TCP services within HAProxy Enterprise Kubernetes Ingress Controller. It also includes new runtime optimizations, improved Prometheus metrics, enhanced backend customization, and updated certificate handling to reduce reloads.

Additionally, we’ve aligned the version numbering with HAProxy Enterprise, jumping from version 1.11 to version 3.0. We hope this clarifies the link between HAProxy Enterprise Kubernetes Ingress Controller and its baseline version of HAProxy Enterprise moving forward.

Let’s dive deeper into HAProxy Enterprise Kubernetes Ingress Controller 3.0.

New to HAProxy Enterprise Kubernetes Ingress Controller?

HAProxy Enterprise Kubernetes Ingress Controller is built to supercharge your Kubernetes environment by adding advanced TCP and HTTP routing that connects clients outside your Kubernetes cluster with containers inside. Built upon HAProxy Enterprise, this adds an important layer of security via the integrated Web Application Firewall. HAProxy Enterprise Kubernetes Ingress Controller is backed by our authoritative expert technical support.

Lifecycle of versions

To enhance transparency about supported versions, we’ve introduced an End-of-Life table that outlines which versions are supported in parallel.

Additionally, we’ve published a list of tested Kubernetes versions. Among the supported versions is Kubernetes 1.32, released in December 2024. While HAProxy Enterprise Kubernetes Ingress Controller is expected to work with versions beyond those listed, only tested versions are explicitly documented.

Ready to upgrade?

When you are ready to start the upgrade procedure, go to the upgrade instructions for HAProxy Enterprise Kubernetes Ingress Controller.

Updating certificates through the Runtime API

In this release, HAProxy Enterprise Kubernetes Ingress Controller now uses HAProxy's Runtime API to update certificates without requiring a reload. Previously, certificate updates required an HAProxy Enterprise reload, but this new approach streamlines the process and reduces resource usage. 

Parallelization in writing maps

Both HAProxy Enterprise and the file system can handle writing maps in parallel. With version 3.0, HAProxy Enterprise Kubernetes Ingress Controller parallelizes writing maps both to HAProxy Enterprise and to the file system. To maintain I/O efficiency and reduce latency, a maximum of 10 maps can be written in parallel.

Support thread pinning on http/https/stats/healthz

You can pin threads using the following new arguments for HAProxy Enterprise Kubernetes Ingress Controller:

  • http-bind-thread

  • https-bind-thread

  • healthz-bind-thread

  • stats-bind-thread

These arguments offer advanced optimization for specific use cases.

Runtime improvements

When calculating the number of server slots to add to a backend after detecting a scaling event, HAProxy Enterprise Kubernetes Ingress Controller now ensures that we always have at least scale-server-slots number of empty servers. This is a slightly different approach, but it will produce slightly fewer reloads of HAProxy Enterprise. 

To further reduce the number of reloads, you can use a new annotation named haproxy.com/deployment on your Service definition to link a Deployment resource to the service. This will connect the service to a single deployment so that the number of desired replicas can be extracted and directly used as the required number of server slots.
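A hedged sketch of such a Service, assuming the annotation value is the linked Deployment's name:

apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    haproxy.com/deployment: my-app   # assumed: name of the linked Deployment
spec:
  selector:
    app: my-app
  ports:
    - name: http
      port: 80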

Additionally, a new and more efficient way of doing backend updates uses fewer connections to the HAProxy Enterprise Runtime API.

Prometheus metrics

We added two new counters to the list of Prometheus metrics:

  • haproxy_reloads_total

  • haproxy_runtime_socket_connections_total

These counters start when the container is started and do not reset.

Logging

The logs now show additional messages about changes to the content of map files. Also, the number of repeating messages has been reduced in certain scenarios (for example, for the same service in the same ingress).

Custom resource definitions: Backend CRD

To further allow customization of backends, the Backend CRD now has options to add ACLs and http-request options to the backend.

Custom resource definitions: TCP

Until now, mapping for TCP services was available through a custom ConfigMap using the --configmap-tcp-services flag. While this worked as expected, there were a few limitations we needed to address. 

For example, ConfigMap alone doesn't have a standardized structure or validation. Therefore, keeping a larger list of services tidy can be challenging. Additionally, only some HAProxy options (such as service, port, and SSL/TLS offloading) were available for those types of services. 

The tcps.ingress.v1.haproxy.com definition, conversely, lets us define and use more HAProxy options than we could with ConfigMap.

Installing and getting to know TCP CRDs

If you're using Helm, the TCP services definition will be installed automatically. Otherwise, it's available as a raw YAML file via GitHub.

TCP Custom Resources (CRs) are namespaced, and you can deploy several of them in a shared namespace. 

You can filter TCP Custom Resources managed by the ingress controller using the ingress.class annotation. It behaves the same way as an Ingress object.

A TCP CR contains a list of TCP service definitions. Each service definition has:

  • a name

  • a frontend section containing two permitted components:

    • any setting from client-native frontend model 

    • a list of binds coupled with any settings from client-native bind models 

  • a service definition that's a Kubernetes upstream Service/Port (the K8s Service and the deployed TCP CR must be in the same namespace).

Here's a simple example of a TCP service:

[Embedded example: blog20250122-01.yml]

How do we configure service and backend options? You can use the Backend Custom Resource (and reference it in the Ingress Controller ConfigMap, Ingress, or the Service) in conjunction with the TCP CR.

Mitigating TCP collisions

TCP services are tricky since they allow for unwanted naming and configuration duplications. This overlap can cause transmission delays and other performance degradations while impacting reliability. 

Luckily, HAProxy Enterprise can detect and manage two types of collisions:

  • Collisions on frontend names

  • Collisions on bind addresses and ports

If several TCP services across all namespaces encounter these collisions, HAProxy Enterprise will only apply the one that was created first based on the older CreationTimestamp of the custom resource. This will generate a message in the log.

SSL/TLS in a TCP custom resource

Here's a quick example of a TCP service with SSL/TLS enabled:

[Embedded example: blog20250122-02.yml]

Keep in mind that ssl_certificate can be the following:

  • The name of a Kubernetes Secret (in the same namespace as the TCP CR) containing the certificate and key

  • A folder or filename on the pod's local filesystem, which was mounted as a Secret Volume

For example, you can mount an SSL/TLS Secret in the Ingress Controller Pod on a volume and reference the volume mount path in ssl_certificate. Without changing the Pod (or deployment manifest), you can instead use a Secret name within the ssl_certificate configuration. As a result, the certificate and key will be written in the Pod's filesystem at the /etc/haproxy/certs/tcp path.

Additional changes
  • To allow more precise debugging and testing, we added the nano editor to the container image. Using the nano editor, configuration changes can be tested temporarily (to see their effects) before applying them permanently.

  • The Unix socket is now used when mixing SSL passthrough and offloading. This will allow better performance compared to the previous implementation.

Conclusion 

HAProxy Enterprise Kubernetes Ingress Controller 3.0 represents our commitment to delivering a flexible and efficient platform for managing ingress traffic. With the introduction of TCP CRDs, improved Runtime efficiency, streamlined certificate updates, and expanded customization options, this release provides powerful tools to meet diverse Kubernetes use cases.

To learn more about HAProxy Enterprise Kubernetes Ingress Controller, follow our blog and browse our documentation. To see how HAProxy Technologies also provides external load balancing and multi-cluster routing alongside our ingress controller, check out our Kubernetes solutions and our webinar.

January 2025 – Multiple rsync CVEs impacting memory and file handling in Linux virtual images
Wed, 22 Jan 2025 01:00:00 +0000
https://www.haproxy.com/blog/january-2025-multiple-rsync-cves-impacting-memory-and-file-handling-in-linux-virtual-images

The latest versions of HAProxy Fusion fix multiple rsync vulnerabilities related to memory handling and file management in HAProxy Fusion’s Linux-based virtual images. Specifically, attackers can take advantage of weaknesses in rsync checksum mechanisms and symbolic link verification processes.

These five CVEs only affect components within HAProxy Fusion binaries. We'll cover each in greater detail before sharing remediation steps. 

If you are using HAProxy Fusion virtual images, you should upgrade to the fixed version as soon as possible. There are no workarounds available.

If you are using HAProxy Fusion installation packages, you should upgrade your rsync packages to the latest version following the usual procedure for your operating system.

High-impact CVEs

CVE-2024-12085

This CVE exposes a flaw within the rsync daemon which could be triggered when rsync compares file checksums. This allows an attacker to manipulate the checksum length (s2length) to cause a comparison between a checksum and uninitialized memory, and leak one byte of uninitialized stack data at a time.

This impacts all rsync versions.

CVE-2024-12086

This CVE exposes a flaw within rsync that could allow a server to enumerate the contents of an arbitrary file from the client's machine. This issue occurs as files are copied from client to server. During this process, the rsync server will send checksums of local data to the client for comparison to determine what data needs to be sent back. By sending carefully constructed checksum values for arbitrary files, an attacker may be able to reconstruct file data byte-by-byte based on responses from the client.

This impacts all rsync versions.

CVE-2024-12087

This CVE exposes a path traversal vulnerability within rsync stemming from behavior enabled by the --inc-recursive option. This is enabled by default for many client options and can be enabled by the server even if not explicitly enabled by the client. 

When using the --inc-recursive option, a lack of proper symlink verification coupled with deduplication checks occurring on a per-file-list basis could allow a server to write files outside of the client's intended destination directory. A malicious server could write malicious files to arbitrary locations named after valid client directories and paths.

This impacts all rsync versions.

CVE-2024-12088

This CVE exposes a verification flaw within rsync. When using the --safe-links option, rsync fails to properly verify if a symbolic link destination contains another symbolic link within it. This results in a path traversal vulnerability, which may lead to arbitrary file write outside the intended directory.

This impacts all rsync versions.

CVE-2024-12747

This CVE exposes a flaw within rsync. This vulnerability arises from a race condition during rsync's handling of symbolic links. By default, rsync skips symbolic links upon encountering them. If an attacker replaces a regular file with a symbolic link at a precise time, it is possible to bypass the default behavior and traverse symbolic links. Depending on the privileges of the rsync process, an attacker could leak sensitive information — potentially leading to privilege escalation.

This impacts all rsync versions.

Affected versions and remediation

HAProxy Technologies released new versions of HAProxy Fusion virtual images on Tuesday, 21 January 2025. You can identify the fixed versions by the release date 20250121 or later.

These releases patch the vulnerabilities described in CVE-2024-12085, CVE-2024-12086, CVE-2024-12087, CVE-2024-12088, and CVE-2024-12747 (CVSSv3 scores ranging from 5.6 to 7.5). 

Users should immediately upgrade to these fixed HAProxy Fusion virtual images by following our HAProxy Fusion upgrade instructions.

Support

If you are a customer and have questions about upgrading to the latest version, please get in touch with the HAProxy support team.

]]> January 2025 – Multiple rsync CVEs impacting memory and file handling in Linux virtual images appeared first on HAProxy Technologies.]]>
<![CDATA[Lasting Impressions and Technical Tidbits From AWS re:Invent 2024]]> https://www.haproxy.com/blog/lasting-impressions-and-technical-tidbits-from-aws-reinvent-2024 Fri, 13 Dec 2024 07:58:00 +0000 https://www.haproxy.com/blog/lasting-impressions-and-technical-tidbits-from-aws-reinvent-2024 ]]> AWS re:Invent 2024 has officially wrapped up, but not everything that happens in Vegas stays in Vegas. We're still gushing over the reception our booth team received from attendees—and we were excited to see even more organizations going all-in on application delivery and security. Naturally, AWS integration was a constant backdrop throughout the week as we showcased the HAProxy platform. 

As always, our interactions with booth visitors—both new and familiar—defined the team's AWS re:Invent experience. The constant buzz generated by tens of thousands of attendees was incredibly energizing. However, re:Invent was also a strong reminder that the tech world remains smaller (and more human) than expected, despite the conference's staggering scale. 

Here are some key takeaways from our five days spent alongside AWS and application delivery enthusiasts.

Getting cozy with our booth visitors

]]> ]]> AWS re:Invent means a lot to us. It's an opportunity to check the pulse of the tech community, connect with attendees, and showcase our recent product development wins to a diverse audience. Our booth was absolutely swamped with visitors (a problem we're happy to have!) all hoping to see the latest and greatest updates from HAProxy Technologies. 

We'd like to give two big shoutouts coming out of AWS re:Invent. First, a massive thank you to our booth visitors! Seeing old and new faces was fantastic, and your patience as we worked our way through the crowd was truly appreciated.

]]> ]]> And to whoever anonymously screamed, "I love HAProxy!" while passing by: we love you too! 

Attendees showed just how amazing the tech community is—and proved that a hunger for whiteboard sessions and hands-on demos isn't easily sated.

]]> ]]> Second, a huge thank you to our booth team, which tirelessly answered as many questions as humanly possible. The team went above and beyond to make each visitor feel like an HAProxy community member. Together, they navigated the crowd with relative ease, demonstrating impressive load-balancing aptitude.

What were attendees talking about?

AWS re:Invent rightfully draws plenty of questions about the AWS platform and how integrated solutions from other vendors help users conquer challenges. Many visitors also asked how we measure up against competitors such as F5 Networks and VMware/Avi Networks. Having the chance to explore our unique advantages and how far we've come was invaluable. 

We also often heard the following questions: 

  • What does HAProxy do?

  • How do you compare to AWS Application Load Balancer (ALB) and NGINX? 

  • Do you have alternatives available to replace [insert infrastructure component]?

  • How do I win that LEGO R2D2?

Across the entire conference, plenty of chatter centered on longstanding AWS topics such as Auto Scaling support and Amazon Route 53. Since most attendees rely on AWS to power some portion of their infrastructure, understanding how HAProxy fits into that equation is critical. For example, HAProxy Fusion's latest updates allowed us to showcase our products much more easily and comprehensively than before. 

Today's trending topics such as Kubernetes, application security, API gateways, AI, and automation were also front and center. We heard and facilitated conversations covering the entire tech stack, from databases to support for the IoT's Message Queuing Telemetry Transport (MQTT) protocol. Name any topic related to application delivery and security—we likely heard rumblings about it. 

Lastly, we appreciated the glowing feedback (and the multiple verbal “thumbs up” comments) that we received from HAProxy users! We're incredibly flattered that you took the time to share your thoughts following some deeply technical (and lengthy) conversations.

HAProxy hits the AWS stage

We ourselves dove into security quite heavily—not only at the booth, but also on stage. HAProxy Technologies' Jakub Suchy, Director of Solutions Engineering, gave a thought-provoking lightning talk covering no-compromise security in AWS with HAProxy:

]]> Thanks so much for your engagement! Being able to give a talk at re:Invent is always a great privilege, and we look forward to presenting the next big development in secure application delivery.

Come see us next year!

AWS re:Invent 2024 was incredible. Our chats helped us better understand the evolving needs of the AWS community. We were thrilled to share our expertise as G2-recognized leaders in the categories of Load Balancing, API Management, Web Application Firewall (WAF), and DDoS Protection (with the awards to prove it!). Your feedback remains consistent with our G2 performance, and we're committed to keeping it that way. 

We'll continue to monitor the changing AWS landscape and bolster our platform support. We look forward to seeing you again at AWS re:Invent 2025, also in Las Vegas, from December 1st to December 6th. Stay tuned as more details become available!

Last but not least, we're thrilled to keep planning for HAProxyConf 2025 next summer in San Francisco. HAProxyConf celebrates the thriving community that's helped make HAProxy One the world’s fastest application delivery and security platform. Over two-plus days, expert speakers will share best practices and real-world use cases highlighting HAProxy's next-gen approach to high-performance application delivery and security. Check out the HAProxyConf official website to learn more and stay updated. And please, don't forget to answer our call for papers!

Want to learn more about HAProxy and AWS?

To dive a little deeper into our AWS support, check out these helpful resources:

]]> Lasting Impressions and Technical Tidbits From AWS re:Invent 2024 appeared first on HAProxy Technologies.]]>
<![CDATA[Announcing HAProxy ALOHA 16.5]]> https://www.haproxy.com/blog/announcing-haproxy-aloha-16-5 Wed, 27 Nov 2024 00:21:00 +0000 https://www.haproxy.com/blog/announcing-haproxy-aloha-16-5 ]]> HAProxy ALOHA 16.5 is now available, and we’re delighted to share that this release includes one of the cornerstone security features announced earlier this year—the new Bot Management Module. HAProxy ALOHA customers will also benefit from the new Network Management CLI, secure Wireguard VPN synchronization between appliances, updated root filesystem packages, and the features announced in open source HAProxy 3.0.

New to HAProxy ALOHA?

HAProxy ALOHA provides high-performance load balancing for TCP, UDP, QUIC, and HTTP-based applications; SSL processing; PacketShield DDoS protection; bot management; and a next-generation WAF. HAProxy ALOHA combines the performance, reliability, and flexibility of our open-source core (HAProxy – the most widely used software load balancer) with a convenient hardware or virtual appliance, an intuitive GUI, and world-class support. HAProxy ALOHA benefits from next-generation security layers powered by threat intelligence from HAProxy Edge and enhanced by machine learning.

What’s new?

HAProxy ALOHA 16.5 includes exclusive new features plus many of the features from the community version of HAProxy 3.0. For the full list of features, read the release notes for HAProxy ALOHA 16.5.

New in HAProxy ALOHA 16.5 are the following important features:

  • The new HAProxy Enterprise Bot Management Module provides fast, reliable, and flexible identification and categorization of bots attempting to access websites or applications, with 100% local processing for low latency and no external dependencies.

  • The new Network Management CLI (netctl) allows customers to automate the management of network interfaces directly from the appliance itself. It operates as an abstraction layer that allows users to configure the network stack of the HAProxy ALOHA load balancer using a simple command-line tool.

  • The new Wireguard VPN feature empowers customers to securely synchronize configurations between HAProxy ALOHA servers across the Internet or internal networks, making it easier to maintain consistency and manage configurations across appliances through an encrypted UDP tunnel that ensures data is protected when traveling between servers.

  • Updates to the root filesystem packages, including libraries, binaries, scripts, and all embedded components improve stability, security, and functionality.

We announced the release of HAProxy 3.0 in May 2024, which included improved simplicity, reliability, security, and flexibility. Many of the features from HAProxy 3.0 are now available in HAProxy ALOHA 16.5.

Some of the biggest community features include the crt-store section for certificate management, an enhanced HTTP/2 stack, persistent stats across reloads, and machine-readable (JSON and CBOR) logs.

We outline every community feature in detail in "Reviewing Every New Feature in HAProxy 3.0".

Ready to upgrade?

To start the upgrade procedure, visit the installation instructions for HAProxy ALOHA 16.5.

New bot management makes identifying bots and categorizing your traffic a breeze

Our customers have implemented some impressive bot management strategies using HAProxy ALOHA’s tools for traffic profiling, tracking, and filtering. Now, it’s even easier to use HAProxy ALOHA as a powerful alternative to a separate bot management solution. The new Bot Management Module provides fast, reliable, and flexible bot identification and categorization with low latency and deep integration with HAProxy ALOHA’s multi-layered security controls. 

Why bot management?

From DoS attacks to content scraping, the risks from bot traffic are growing yearly. Failure to identify and block malicious bots could result in downtime, data theft, fraud, and more, affecting an organization’s reputation and revenue. Additionally, bot traffic can significantly increase resource use, which increases operational costs and could affect application performance for legitimate human users. 

To combat the rising risks, we wanted to make effective bot management more accessible and more powerful. In HAProxy ALOHA 16.5, customers now have access to the new Bot Management Module, a new weapon in their arsenal against malicious bots.

What can you do with the new Bot Management Module?

HAProxy ALOHA’s new Bot Management Module works out-of-the-box to identify traffic accurately, categorizing it as human, suspicious, bot, verified crawler (search engines), or verified bot/tool/app (non-browser). 

You can combine accurate bot identification with the other powerful layers in the security suite (including the next-generation HAProxy Enterprise WAF) to create customizable, high-performance, low-latency bot management and rate limiting strategies—from simple to advanced.

Why should you use the new Bot Management Module?

Three reasons:

  • Fast performance eliminates latency and ensures rapid bot identification and enforcement of bot management policies even under heavy load (e.g., DoS attack). 

  • Reliable bot management with a simple architecture reduces complexity and keeps your data local and secure.

  • Flexible and customizable bot management shares intelligence with other powerful security layers for smarter, more holistic decision-making and enforcement. 

For most users, we expect the simple answer to be: why wouldn’t you use it? 🙂 You can enable it in moments, and since it’s built into the firmware of HAProxy ALOHA—the plug-and-play hardware or virtual load balancer—it works quickly and efficiently even under heavy load. 

But the real question for many customers is: why use this instead of one of the market-leading bot management solutions? 

Unfortunately, bot management solutions often come with significant compromises (not even counting the extra cost).

  • Latency: solutions that pass requests through an additional layer, sometimes in a different network location, add latency (in addition to the often-quoted processing time) that affects the user experience.

  • Complexity: solutions that require a constant or frequent connection to the vendor’s cloud (for example, for automatic updates to the detection algorithm) introduce complexity and an additional point of failure, putting reliability and data privacy at risk. 

  • Lack of integration: solutions without deep integration with other security layers, such as with the WAF and anomaly detection layers, make decisions with incomplete information and do not give users the flexibility to enhance and customize their bot management strategy.

HAProxy ALOHA’s new Bot Management Module uses reputational signals and scoring based on HAProxy Technologies’ security expertise, data science, and large real-world datasets to identify traffic accurately. Our data science team uses the threat intelligence data provided by HAProxy Edge to train our security models with machine learning, resulting in extremely accurate and efficient detection algorithms for bots and other threats – without relying on static lists and regex-based attack signatures.

Importantly, all the detection, processing, and enforcement is local to the appliance. It does not add additional layers to the request path and does not require an external connection. This minimizes latency, maximizes reliability, and gives you the flexibility to deploy anywhere you like—such as in air-gapped environments.

With deep integration with HAProxy ALOHA’s multi-layered security, you can customize your organization’s bot management to meet your unique needs and traffic profile. You can customize your enforcement policies with options including blocking, tarpitting, challenging, and rate limiting.

But how good is it at identifying bots? While this is hard to test in a benchmark scenario, in real-world deployments with early adopters on HAProxy Enterprise, the Bot Management Module helped a top eCommerce website handling 300,000 requests per second identify heavy amounts of suspicious traffic and avoid crippling outages. As much as 20% of traffic was identified as anomalous, which their previous system had accepted without raising any security concerns.

Now that the HAProxy Enterprise Bot Management Module has come to HAProxy ALOHA, our appliance customers can benefit from its fast, reliable, and flexible bot management capabilities to protect their business and reputation and reduce the resource cost of serving requests from unwanted bots.

New Network Management CLI puts more power in your hands

Previously, administrators could only configure the HAProxy ALOHA network stack using:

  • traditional command-line operations such as ip route and ip rule within the Linux command line—an effective approach that can be complex and require advanced networking knowledge; or

  • the HAProxy ALOHA Services tab—a simpler and more widely used method for our appliance customers.

While these approaches are effective, they focus exclusively on appliance-specific networking configurations. With modern environments increasingly blending hardware, software, cloud, on-premises systems, containers, and virtual machines, it became imperative that we introduce a more open and powerful networking alternative.

With the release of HAProxy ALOHA 16.5, we’ve introduced the new Network Management CLI (netctl), a first-of-its-kind feature in the HAProxy product stack. The Network Management CLI redefines how users configure their networks, harnessing the power of the Network API for an easier and more consistent approach directly from the appliance.

What are the benefits of the Network Management CLI?

Netctl isn’t just a simple command-line utility—it's an interface that interacts directly with the Linux Network API, giving users access to a powerful networking suite. It centralizes and abstracts networking commands, like ip route, ip address, and ip link, into a single interface. 

The Network Management CLI offers familiar and intuitive functionality for users accustomed to the Network Manager on Linux distributions. With netctl, you can program and manage the network environment directly from your HAProxy ALOHA appliance, making previously complex tasks, like creating link aggregations, defining VLANs, or managing IP routing, more accessible.

With the new Network Management CLI, HAProxy ALOHA users can:

  • Eliminate complexity by abstracting complex network tasks into simple CLI calls, enabling users to easily configure advanced setups such as link aggregation, virtual local area network (VLAN) over bridges, and virtual router redundancy protocol (VRRP) over VLANs.

  • Gain greater flexibility by providing a unified way to manage network settings without needing to switch between multiple tools or rely on extensive, manual command sequences.

  • Save time and avoid mistakes by streamlining the network setup process, reducing the manual effort required to implement complex setups, and minimizing the risk of human error.

The Network Management CLI demonstrates HAProxy Technologies’ commitment to providing its users with more extensive tooling than competing offerings. It further enhances HAProxy ALOHA’s plug-and-play capabilities with a feature that now handles network configuration.

Enhanced reliability with updated network configuration scripts

In HAProxy ALOHA 16.5, we’ve also updated the network-scripts and config.rc to better support the Network API—which will manage the network stack of HAProxy ALOHA. This brings users more benefits beyond the Network Management CLI, including improved reliability and more efficient configuration.

With the updated network configuration scripts, users will benefit from:

  • Seamless rollback support by reverting to the previous configuration versions in the case of errors, ensuring continuity without requiring an appliance restart.

  • Streamlined VRRP configuration by automatically managing VRRP settings on interfaces, reducing complexity and minimizing misconfiguration.

  • Improved interface management by resolving issues such as deleting virtual interfaces.

New Wireguard VPN secures synchronization between appliances

In distributed environments, synchronizing configurations between appliances over a network can risk exposing sensitive data to potential security threats.

In HAProxy ALOHA 16.5, we’ve introduced Wireguard VPN, a powerful new feature that secures the way HAProxy ALOHA appliances in the same or different data centers communicate over a network.

Why Wireguard VPN?

When HAProxy ALOHA appliances operate in different data centers, synchronizing configuration can pose a risk if the appliances are not interconnected with a dedicated, private connection. While it’s possible to synchronize changes over the internet, this approach could lead to data being intercepted during transmission.

In HAProxy ALOHA 16.5, Wireguard VPN addresses this by providing a fully encrypted UDP tunnel of communication, ensuring that configuration data remains private and secure. Even in scenarios where data centers are interconnected, Wireguard VPN offers HAProxy ALOHA customers enhanced protection by encrypting all configuration data transmitted between the two appliances. This new secure tunnel ensures that bad actors monitoring your network cannot discover sensitive information about your HAProxy ALOHA deployment.

Enhanced stability and security with root filesystem updates

In HAProxy ALOHA 16.5, the root filesystem packages, including libraries, binaries, scripts, and all embedded components, have been updated to the latest versions. This update pulls in upstream maintenance for all the embedded open source projects, along with security and functional fixes.

By updating the root filesystem, HAProxy ALOHA provides users with a more robust and reliable user experience.

Upgrade to HAProxy ALOHA 16.5

When you are ready to upgrade to HAProxy ALOHA 16.5, follow the link below.

  • Product: HAProxy ALOHA 16.5

  • Release Notes

  • Install Instructions: Installation of HAProxy ALOHA 16.5

  • Free Trial: HAProxy ALOHA Free Trial


]]> Announcing HAProxy ALOHA 16.5 appeared first on HAProxy Technologies.]]>
<![CDATA[Announcing HAProxy 3.1]]> https://www.haproxy.com/blog/announcing-haproxy-3-1 Tue, 26 Nov 2024 00:15:00 +0000 https://www.haproxy.com/blog/announcing-haproxy-3-1 ]]> Back in the spotlight, HAProxy 3.1 builds on its history of innovation, delivering improvements that ensure it remains the go-to open source load balancer! As usual, all the features announced in this release will be incorporated into the next enterprise release (HAProxy Enterprise 3.1).

HAProxy 3.1 brings improvements to observability, reliability, performance, and flexibility. So everything we love about HAProxy is now even better! Continual refinement of the things that matter most to HAProxy users is what helps HAProxy remain the G2 category leader in API management, container networking, DDoS protection, web application firewall (WAF), and load balancing.

In this blog post, we'll cover the changes in a short and digestible format. For a deeper dive into what’s new in version 3.1, subscribe to our blog to make sure you don’t miss part 2 (coming soon)!

Watch our webinar HAProxy 3.1: Feature Roundup and listen to our experts as we examine new features and updates and participate in the live Q&A. 

New to HAProxy?

HAProxy is the world’s fastest and most widely used software load balancer. It provides high availability, load balancing, and best-in-class SSL processing for TCP, QUIC, and HTTP-based applications.

HAProxy is the open source core that powers HAProxy One, the world’s fastest application delivery and security platform. The platform consists of a flexible data plane (HAProxy Enterprise and HAProxy ALOHA) for TCP, UDP, QUIC and HTTP traffic; a scalable control plane (HAProxy Fusion); and a secure edge network (HAProxy Edge).

HAProxy is trusted by leading companies and cloud providers to simplify, scale, and secure modern applications, APIs, and AI services in any environment.

How to get HAProxy 3.1

You can install HAProxy version 3.1 in any of the following ways:

  • Install the Linux packages for Ubuntu / Debian.

  • Run it as a Docker container. View the Docker installation instructions.

  • Compile it from source. View the compilation instructions.

]]> ]]> Major changes

First, let's cover the most important changes in HAProxy 3.1. These changes substantially modify how things were done in previous versions or introduce entirely new capabilities.

Log profiles

HAProxy logs have always been known as a treasure trove of information, offering extreme flexibility in customizing the log format. While this approach has served engineers well for years, evolving needs from DevOps and SecOps teams have pushed this feature to the next level. With HAProxy 3.1, you can now use log profiles, a new configuration section designed to define the log format used at different stages of a transaction. These stages include key steps like accept, request, connect, response, close, error, or even any.

Each log profile is linked to a specific log destination server, which brings several advantages:

  • Tailored log formats per destination: No need to rely on post-processing at the syslog server before forwarding log entries again.

  • Logging at multiple stages: Capturing logs at various steps of the transaction simplifies troubleshooting and provides deeper insight into what’s happening.

Additionally, you can pair log profiles with the new do-log action, which lets you generate even more detailed logs as traffic flows through HAProxy. This gives you even greater control and visibility over your infrastructure.
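
To make this concrete, here is a minimal sketch of a profile wired into a frontend. The step names, format strings, and the syslog address are placeholders of ours, and the exact argument order on the log line should be checked against the 3.1 configuration manual:

    log-profile prof-syslog
        on accept format "%ci accepted"
        on close format "%ci %ST %Ta"

    frontend web
        bind :80
        # emit a message at both the accept and close steps
        log-steps accept,close
        # route this frontend's logs through the profile above
        log 192.0.2.10:514 profile prof-syslog local0
        default_backend app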

Traces

Traces provide detailed debug messages about how different subsystems are running. They give you a way to dive deeply into diagnosing problems, offering valuable insights when dealing with complex issues. While traces have been part of HAProxy for a while, they were considered experimental and primarily used by developers. With HAProxy 3.1, traces are now officially supported (GA) and much easier to use—though they remain a tool designed for advanced debugging scenarios.

You can enable traces for various subsystems, including h1, h2, h3, quic, qmux, fcgi, spop, peers, check, and more. Traces now have their own dedicated configuration section and can even be controlled using the HAProxy Runtime API.

We’ll be publishing a dedicated blog post soon to walk you through everything you can do with traces, so stay tuned—it’s a game-changer for troubleshooting!
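
In the meantime, here is a hedged sketch of what the new dedicated section might look like, assuming its statements mirror the long-standing trace commands of the Runtime API (verify the exact syntax against the 3.1 documentation):

    traces
        # send HTTP/2 trace messages to stderr at the most verbose level
        trace h2 sink stderr
        trace h2 level developer
        trace h2 start now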

Rework of SPOE

Stream Processing Offloading Engine (SPOE) enables administrators, DevOps, and SecOps teams to implement custom functions at the proxy layer using any programming language. However, as HAProxy’s codebase has evolved, maintaining the original SPOE implementation became a bit more complex.

With HAProxy 3.1, SPOE has been updated to fully support HAProxy’s modern architecture, allowing greater efficiency in building and managing custom functions. It’s now implemented as a “mux”, which allows for fine-grained management of SPOP (the SPOE Protocol) through a new backend mode called mode spop. This update brings several benefits:

  • Support for load-balancing algorithms: You can now apply any load-balancing strategy to SPOP backends, optimizing traffic distribution.

  • Connection sharing between threads: Idle connections can be shared, improving efficiency on the server side and response times on the agent side.

Rest assured, backward compatibility has been a priority. If you’ve built SPOA (Agents) in previous versions of HAProxy, they’ll continue to work just fine with HAProxy 3.1.
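
In practice, an SPOP backend now looks much like any other backend. A minimal sketch, with agent addresses assumed:

    backend spoe-agents
        mode spop
        # any load-balancing algorithm can now be applied to agents
        balance roundrobin
        server agent1 192.0.2.20:12345
        server agent2 192.0.2.21:12345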

HTTP/2 Performance

In the HTTP/2 protocol, each stream has a window size: this is the maximum volume of data that can be transferred before requiring an acknowledgement. By increasing this window size dynamically during a stream’s lifetime, performance can be significantly improved.

With HAProxy 3.1, this process is now automatic. HAProxy adjusts the per-stream window size for optimal efficiency, using dedicated buffers for each stream alongside a shared buffer pool. This enhancement delivers a dramatic boost in performance:

  • POST uploads are now up to 20x faster.

  • Head-of-line blocking is reduced when downloading from HTTP/2 servers, improving responsiveness.

For even more control, a couple of new tuning parameters have been introduced: tune.h2.fe.rxbuf for frontends and tune.h2.be.rxbuf for backends, allowing you to further extend the default settings to match your specific needs.
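
Both parameters belong in the global section. A hedged example, where the sizes are purely illustrative rather than recommendations:

    global
        # enlarge the HTTP/2 receive buffers on the frontend and
        # backend sides; 1m is an arbitrary example value
        tune.h2.fe.rxbuf 1m
        tune.h2.be.rxbuf 1m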

New Master/Worker Model

HAProxy 3.1 introduces significant improvements to the master/worker model, making the separation of roles cleaner and more efficient.

In previous versions, the master process was responsible for parsing the configuration and then undoing it before handing it over to the workers. This approach occasionally led to issues like inconsistencies during reloads or file descriptor leaks.

Now, the master’s role is limited to starting the worker processes. The workers handle configuration parsing and perform their tasks independently, resulting in more consistent operation across reloads.

Noteworthy changes

Beyond the major changes, HAProxy 3.1 includes several changes that simplify the configuration, improve performance, or extend existing functionality.

  • New quic-initial: Introduces actions to execute during the QUIC handshake, similar to tcp-request actions for TCP. The currently supported actions are reject, accept, dgram-drop (for a silent drop), and send-retry (to force a retry, for example when in 0-RTT). These actions help prevent abuse or enforce source-based filtering so that the client cannot even engage in a handshake (see the sketch after this list).

  • New set-retries action: Available in tcp-request and http-request rules, this action allows HAProxy to dynamically change the number of desired retries at runtime. This is particularly useful for adapting HAProxy’s behavior to specific parts of an application, or for learning the desired number of retries from client-side information (also shown in the sketch after this list).

  • New when(condition) converter: Evaluates the condition, and if true passes the input sample as-is; otherwise it returns nothing. This allows emitting extra debugging information or triggering some actions only when some conditions are met. This prevents filling the logs with unnecessary information and can be combined with the bs.debug_str and fs.debug_str fetches to help developers better understand a problem.

  • New bs.debug_str / fs.debug_str fetches: Report useful debugging information from HAProxy’s internal layers. Use these for debugging purposes. You can trigger them with the when() converter above.

  • New last_entity / waiting_entity fetches: Indicate which waiting operation was interrupted by a timeout or an error. For example, they may help detect problems with Lua scripts, SPOA, or subsystems like compression, or even detect that a full body was not received. They can also report the last rule that was evaluated by HAProxy because of an accept, redirect, or deny, for example.

  • QUIC pacing: Smooths packet distribution over time to avoid overflowing buffers on the path to client applications (e.g., browsers). While it incurs higher CPU usage, it can improve performance by 10–20x on lossy networks or with slow clients. This feature is currently experimental.

  • New bbr congestion algorithm for QUIC: bbr (Bottleneck Bandwidth and Round-trip propagation time) uses network measurements, including packet loss, to model the path and determine optimal data transfer speeds. This allows higher throughput and lower queueing than other congestion algorithms on lossy networks or weak clients. bbr relies on the pacing mechanism and as such is also currently experimental.

  • Option httpchk now supports a Host header: This is one of the oldest checks in HAProxy and it now officially supports a Host header, eliminating the need for a workaround with fake strings in the httpchk line.

  • New server init-state: Allows a server to remain down (rather than up) at startup or when leaving maintenance, until its first health check succeeds.
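
As promised above, here is a hedged sketch combining the new quic-initial and set-retries keywords. The file paths, conditions, and addresses are placeholders, and the exact rule syntax should be checked against the 3.1 documentation:

    frontend quic-in
        bind quic4@:443 ssl crt /etc/haproxy/certs/site.pem alpn h3
        # silently drop initial datagrams from denylisted sources
        # before any handshake work is done (placeholder ACL file)
        quic-initial dgram-drop if { src -f /etc/haproxy/denylist.lst }
        default_backend app

    backend app
        retries 3
        # raise the retry budget for an idempotent part of the application
        http-request set-retries 6 if { path_beg /static/ }
        server app1 192.0.2.30:8080 check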

Deprecated features

Several features are now marked as deprecated and will trigger warnings unless you set the expose-deprecated-directives global option. Unless noted otherwise, these features will be removed in version 3.3. Most of the deprecated features have a replacement.

  • The program section is marked as deprecated in this release and will be removed in HAProxy 3.3. 

  • The opentracing filter will be marked as deprecated in HAProxy 3.3 and will be removed in HAProxy 3.5.

  • Another notable change blocks duplicate names between the various proxy families (frontend/listen/backend/defaults/log-forward, etc.) and between servers. Duplicates are now properly detected and report a deprecation warning in 3.1, indicating breakage in 3.3.

  • The legacy C-based mailers are also deprecated in 3.1 and will be removed in 3.3. The customizable Lua mailers introduced in HAProxy 2.8 will then be the only way to set up mailers.

Breaking changes

HAProxy 3.1 did not introduce any breaking changes. However, to improve communication about future changes, the R&D team created a Wiki page. This page provides a summary of upcoming breaking changes and the releases in which they are planned, helping you stay informed and prepared: https://github.com/haproxy/wiki/wiki/Breaking-changes.

Conclusion

As always, none of this would be possible without the amazing HAProxy community. Your feedback, suggestions, code commits, testing, and documentation benefits millions of users worldwide as they use HAProxy to master their application traffic. To everyone who contributes – thank you.

Subscribe to our blog below and stay tuned for further deep dives on the latest updates from HAProxy 3.1. And in case you missed it, catch up with the new features we announced earlier this month in HAProxy Enterprise 3.0.

Ready to upgrade to HAProxy 3.1? Here’s how to get started.

Last but not least, we're thrilled to kick off HAProxyConf 2025 next summer in San Francisco. HAProxyConf celebrates the thriving community that's helped make HAProxy One the world's fastest application delivery and security platform. Over 2+ days, expert speakers will share best practices and real-world use cases that highlight HAProxy's next-gen approach to high-performance application delivery and security. Check out the HAProxyConf official website to learn more, stay updated, and answer our call for papers!

]]> Announcing HAProxy 3.1 appeared first on HAProxy Technologies.]]>
<![CDATA[KubeCon NA 2024: Service Discovery, Security, and AI—Oh My!]]> https://www.haproxy.com/blog/kubecon-na-2024-service-discovery-security-and-ai-oh-my Thu, 21 Nov 2024 09:31:00 +0000 https://www.haproxy.com/blog/kubecon-na-2024-service-discovery-security-and-ai-oh-my ]]> Though KubeCon North America 2024 has officially come to a close, the CNCF's flagship event has left us buzzing with residual excitement. After all, waving goodbye to a crowd of 9,200+ attendees is never easy—especially with Salt Lake City's snow-capped mountains towering impressively in the background. 

However, it was our conversations with HAProxy booth visitors that truly stole the show. From the casual hello to the deeply technical dive, attendees eagerly shared their Kubernetes experiences and infrastructure challenges. Exploring these obstacles and corresponding tech trends was a tremendously rewarding experience. Our booth team lives for those "aha" moments where solutions meet user problems head-on and emerge triumphant. 

Here's what we've learned from our four days alongside DevOps professionals, engineers, architects, and fellow K8s enthusiasts.

We've seen how Kubernetes (and open source software) constantly changes at HAProxy—as do the trending topics around it. Unsurprisingly, KubeCon North America 2024 demonstrated that such tech trends are as ephemeral as K8s pods. Our conversations with booth visitors repeatedly touched on some key topics:

AI and ML

We noticed an accelerated shift in interest this year towards AI/ML, which isn't shocking since 58% of organizations are actively experimenting with large language models (LLMs). And since today's experiments will likely be tomorrow's deployments, organizations are rightfully weighing load balancing options for AI/ML workloads. New ML training models and uses are emerging each day, and vendors like us have taken note. In fact, roughly 50% of KubeCon booths incorporated AI messaging in some way, shape, or form! 

We fielded multiple related questions and observed an uptick in Kubernetes adoption specifically to support these applications. AI/ML is an exciting frontier we're eagerly exploring, which is why HAProxy One offers AI/API gateway support. Because our multi-cluster routing support is so capable, we're well positioned to support these bleeding-edge technologies as they mature.

Application security

Unsurprisingly, security remains a hot-button issue for the vast majority of organizations. KubeCon also demonstrated that while web application firewalls (WAFs) and bot management are critical, new and novel approaches to DDoS mitigation have garnered heavy interest. Customization and performance have become differentiators as users demand more from their security suites.

While K8s can do plenty on its own, security features such as HAProxy’s stick tables and HAProxy Enterprise's Global Profiling Engine (GPE) provide highly-effective supplemental protection against application-layer attacks. You can even deploy a stick table purely for bot-labeled requests to further reduce false positives. 
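
Because stick tables are a core community feature, a generic rate limiting sketch is easy to show; the table size and threshold below are illustrative:

    frontend web
        bind :443 ssl crt /etc/haproxy/certs/site.pem
        # track each client IP and compute its HTTP request rate
        stick-table type ip size 100k expire 30s store http_req_rate(10s)
        http-request track-sc0 src
        # reject clients exceeding 20 requests per 10 seconds
        http-request deny deny_status 429 if { sc_http_req_rate(0) gt 20 }
        default_backend app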

The CAPTCHA module provides another layer of actionable protection. The module supports reCAPTCHA v2, reCAPTCHA v3, reCAPTCHA Enterprise, hCaptcha, Friendly Captcha (frCaptcha), and Turnstile. We chatted with many attendees who were eager to fill their existing security gaps with these features.

Service discovery

Service discovery has always been integral to uncovering active Kubernetes services and pushing configuration changes on the fly. While the concept isn't new, software vendors offer slightly unique flavors of service discovery with varied levels of automation and performance. 

We chatted with many booth visitors who enthusiastically discussed their discoverability needs while expressing a desire for improved scalability for their K8s services. Our team showcased our high-performance service discovery (introduced in HAProxy Fusion 1.2 and improved in HAProxy Fusion 1.3) to great fanfare in Salt Lake City—fueling our fire to further improve the feature. 

With the power to automatically generate over 100,000 lines of HAProxy configuration in seconds, HAProxy Fusion service discovery is a perfect match for large-scale deployments. We're thrilled to continually improve upon this key Kubernetes capability and showcase our progress at the next KubeCon. Don't miss it!

Answering your K8s questions

]]> ]]> KubeCon was a massive convergence of ideas and curiosities, all flowing like a firehose without an off valve. We quickly learned that vendor lock-in remains a primary concern for plenty of organizations, and simplicity is a guiding principle for many grappling with Kubernetes complexities. Attendees also perked up for some exciting new development avenues, such as egress gateways and TLS-based SNI allowlisting using access control lists (which one of our customers is already doing successfully). 

Naturally, attention soon turned to us and our development roadmap. Visitors kept us occupied with numerous questions about our platform and vision behind the evolving HAProxy One platform. Here are responses to some common questions our booth team received:

Does HAProxy offer Ingress functionality?

Yes! HAProxy's comprehensive Kubernetes solution includes Kubernetes Ingress support for organizations requiring simple setup, low resource use, high performance, and cost efficiency. 

Ingress control exists alongside our unique approach to intelligent external load balancing, multi-cluster routing, and blue-green deployments. Deploy HAProxy Enterprise Kubernetes Ingress Controller independently or together with other components in our K8s solution, according to your load balancing needs. 

These features are fulfilled by different products within HAProxy One, so we'd love to chat and determine what best fits your needs.

With Ingress' development "frozen," is Gateway API the way forward?

We're continually evaluating Gateway API support in HAProxy One and plan to bring Gateway API to HAProxy Fusion Control Plane. We want to support as many customer use cases (and preferences) as possible. We also anticipate that organizations will increasingly migrate away from Ingress to an alternative solution. 

Our Kubernetes solution enables external load balancing and multi-cluster routing without the added complexities of Gateway API. HAProxy enables you to route traffic directly from your external HAProxy Enterprise nodes to your Kubernetes pods without having to use Ingress or Gateway API at all. 

There's no management overhead with vendor-specific policies, nor a need to install additional custom resource definitions (CRDs) unless that helps your use case. Using HAProxy Fusion to automate direct-to-pod load balancing also eliminates a network hop normally associated with querying Gateway API. This reduces latency for massive-scale K8s applications while removing a potential point of failure. 

We're truly excited to see how HAProxy Enterprise and HAProxy Fusion service discovery can help our customers' applications perform better at massive scale. This approach is future-proof and a great next step for users should Ingress reach end-of-life.

Thanks for engaging with us

As always, your questions excited us, challenged us, and have even inspired us to redefine what's possible with Kubernetes and HAProxy One. It was great catching up with fresh and familiar faces alike. We never get tired of taking visitors through live demos or deep whiteboard sessions—blending plenty of technical knowledge with a little artistic flair. 

The HAProxy community also deserves a gigantic shoutout for its willingness to share valuable feedback. KubeCon left us (happily!) drowning in a tidal wave of G2 reviews, which help us improve HAProxy One and prioritize popular feature requests. Please, keep those opinions coming and your voices loud!

Come see us next year!

KubeCon North America 2024 blew us away. Our conversations have helped us better understand the evolving needs of the K8s community and better position ourselves as a leader in container networking. 

It's now time to flip the page to KubeCon Europe 2025 and KubeCon North America 2025. We can't wait to unveil some exciting new developments and see how the Kubernetes landscape changes. We'll also be at AWS re:Invent 2024 in Las Vegas, from December 2nd to December 6th. Come see us at Booth 571!

Last but not least, we're thrilled to kick off HAProxyConf 2025 next summer in San Francisco. HAProxyConf celebrates the thriving community that's helped make HAProxy One the world's fastest application delivery and security platform. Over 2+ days, expert speakers will share best practices and real-world use cases that highlight HAProxy's next-gen approach to high-performance application delivery and security. Check out the HAProxyConf official website to learn more, stay updated, and answer our call for papers! 

Want to learn more about HAProxy and Kubernetes?

To dive a little deeper into our Kubernetes solutions and story, check out these helpful resources: 

Our products and HAProxy One—the world’s fastest application delivery and security platform—are always evolving. Stay tuned for important updates and development milestones! Thank you for another fantastic KubeCon.

]]> KubeCon NA 2024: Service Discovery, Security, and AI—Oh My! appeared first on HAProxy Technologies.]]>
<![CDATA[Announcing HAProxy Enterprise 3.0]]> https://www.haproxy.com/blog/announcing-haproxy-enterprise-3-0 Thu, 14 Nov 2024 00:00:00 +0000 https://www.haproxy.com/blog/announcing-haproxy-enterprise-3-0 ]]> HAProxy Enterprise 3.0 is now available. This release extends HAProxy Enterprise’s legendary performance and flexibility and builds upon its cornerstone features. The HAProxy Enterprise WAF is even more powerful, the Global Profiling Engine is more dynamic and performant, UDP load balancing is simpler and more observable, HTTPS performance is improved, and we have added new CAPTCHA and SAML single sign-on modules.

New to HAProxy Enterprise?

HAProxy Enterprise provides high-performance load balancing, can serve as an API gateway, and provides Kubernetes routing and ingress, TLS offloading, bot management, global rate limiting, and a next-generation WAF. HAProxy Enterprise combines the performance, reliability, and flexibility of our open-source core (HAProxy – the most widely used software load balancer) with ultra-low-latency security layers and world-class support. HAProxy Enterprise benefits from full-lifecycle management, monitoring, and automation (provided by HAProxy Fusion), and next-generation security layers powered by threat intelligence from HAProxy Edge and enhanced by machine learning.

To learn more, contact our sales team for a demonstration or request a free trial.

What’s new?

HAProxy Enterprise 3.0 includes new enterprise features plus all the features from the community version of HAProxy 3.0. For the full list of features, read the release notes for HAProxy Enterprise 3.0.

HAProxy Fusion will support HAProxy Enterprise 3.0 soon.

New in HAProxy Enterprise 3.0 are the following important features:

  • Strengthened HAProxy Enterprise WAF robustness and security precision. The next-generation HAProxy Enterprise WAF powered by our Intelligent WAF Engine is now even better at detecting disguised threats with new features such as base64 decoding and the ability to process requests without Content-Type.

  • A more dynamic and performant Global Profiling Engine. The Global Profiling Engine has been upgraded with dynamic peer support, enabling load balancers to connect to it without explicitly being added to the GPE configuration file. The ability to learn peers dynamically results in a lower memory footprint due to peer and session reuse.

  • Improved HTTPS performance and reliability. We’ve improved HTTPS performance by redistributing OpenSSL 1.1.1 and making it the default.

  • New logging capabilities and simplified configuration with the HAProxy Enterprise UDP Module. The HAProxy Enterprise UDP Module now provides logging capabilities for enhanced observability, along with simplified configuration with support for the default-server directive.

  • A new CAPTCHA module. The new CAPTCHA module supports more providers and enables simpler configuration management. The supported modes include reCAPTCHA v2, reCAPTCHA v3, reCAPTCHA Enterprise, hCaptcha, Friendly Captcha (frCaptcha), and Turnstile.

  • A new SAML module. The new SAML single sign-on module is now embedded in HAProxy Enterprise as a native module and is easier to configure.

We announced the release of HAProxy 3.0 in May 2024, which included improved simplicity, reliability, security, and flexibility. The features from HAProxy 3.0 are now available in HAProxy Enterprise 3.0.

Some of the biggest community features include:

  • crt-store feature. Separates certificate storage from frontend use, simplifying and scaling SSL/TLS certificate management.

  • Enhanced HTTP/2 stack. Adds the option to limit and track glitchy HTTP/2 connections. HAProxy’s ability to handle the HTTP/2 CONTINUATION Flood demonstrates its resilience with this type of connection.

  • Persistent stats after reloads. Stats are preserved using the Runtime API command dump stats-file and the stats-file directive, provided proxy objects have assigned GUIDs.

  • Machine-readable logs. Supports JSON and CBOR formats for easier log management and system interoperability.

  • Improved stick table performance. Lock contention reduced by sharing data across smaller, individual tables with separate locks.

  • Differentiated Services field support. Allows classification and traffic prioritization by setting the DS field on both frontend and backend connections via set-fc-tos and set-bc-tos actions.

  • Virtual ACL and map files. Enables in-memory ACL and map file representations using the virt@ prefix, avoiding filesystem searches.

We outline every community feature in detail in “Reviewing Every New Feature in HAProxy 3.0”.

Ready to upgrade?

When you are ready to start the upgrade procedure, go to the upgrade instructions for HAProxy Enterprise.

]]> ]]> Delivering greater robustness and precision with HAProxy Enterprise WAF

In the last release, we introduced the next-generation HAProxy Enterprise WAF, powered by the Intelligent WAF Engine. This unique engine delivers exceptional accuracy, zero-day threat detection, ultra-low latency, and simple management. Now, in HAProxy Enterprise 3.0, we’ve further enhanced its robustness and security precision.

With the addition of new features, the Intelligent WAF Engine is even more capable of detecting obfuscated threats. These updates strengthen the already powerful HAProxy Enterprise WAF, providing enhanced security against sophisticated attacks and improved accuracy in identifying disguised attacks.

We’ve previously discussed the incredible accuracy of the HAProxy Enterprise WAF, which achieved a true-positive rate of 99.61%, comfortably beating the category average. With the release of HAProxy Enterprise 3.0, the true-positive rate has climbed to 99.84%, tested using open source WAF benchmark data. False negatives, which were already virtually eliminated, are now approaching zero.

Additionally, the HAProxy Enterprise WAF continues to deliver a robust true-negative rate of 97.124%, resulting in a balanced accuracy of 98.48%. With the false-positive rate remaining low at 2.876%, these metrics underscore the consistent and reliable performance of the HAProxy Enterprise WAF.

What’s new in the HAProxy Enterprise WAF?

The new capabilities of the HAProxy Enterprise WAF include:

  • Support for base64 decoding to better identify threats that use base64 encoding to obfuscate malicious payloads.

  • The ability to parse requests without a Content-Type to inspect malformed requests and minimize false positives.

  • Support for atomic ruleset updates through the Runtime API, eliminating the need for external tools and reducing complexity and the likelihood of error-prone updates.

  • Prometheus exporter metrics that make monitoring more efficient, including the total number of HTTP requests processed and blocked by an HAProxy Enterprise WAF instance.

With HAProxy Enterprise 3.0, the HAProxy Enterprise WAF delivers superior detection of deceptive threats and reliability, surpassing other vendor solutions that struggle with complex, evasive attacks.

For organizations seeking a solid web application firewall, HAProxy Enterprise WAF offers a robust defense that enhances your infrastructure’s security.

]]> ]]> Upgraded Global Profiling Engine brings enhanced scalability and performance

The Global Profiling Engine helps customers maintain a unified view of client activity across an HAProxy Enterprise cluster. By collecting and analyzing stick table data from all nodes, the Global Profiling Engine offers real-time insight into current and historical client behavior. This data is then shared across the load balancers, enabling informed decision-making such as rate limiting to manage traffic effectively.

In HAProxy Enterprise 3.0, we upgraded the Global Profiling Engine, which now offers dynamic peer support and a much lower memory footprint. This upgrade brings enhanced scalability and improved performance to clients.

What is dynamic peer support in the Global Profiling Engine?

With dynamic peers, load balancers can now connect to the Global Profiling Engine without explicitly being added to the configuration. This means that when new nodes are added or removed from a cluster, they can seamlessly connect or disconnect to the Global Profiling Engine, with all data and configuration automatically shared between them.

Dynamic peer support ensures that each node in a cluster can instantly synchronize data about client behavior and traffic patterns, without the need for administrators to manually configure and manage peer support. This makes it easier to enforce rate limiting policies at a global level and enables customers to make real-time, informed decisions as their system scales, offering cluster-wide data tracking and aggregation—now more dynamic and efficient than ever.

Dynamic peer support also brings customers better memory management due to peer reuse and session reuse. Using the same resources multiple times minimizes memory allocation, resulting in a much lower memory footprint.

Ultimately, the upgraded Global Profiling Engine is a more resource-efficient and scalable solution—and we hope customers take advantage of its dynamic capabilities.

Enhanced TLS performance with OpenSSL optimization

HAProxy Enterprise allows customers to encrypt traffic between the load balancer, clients, and backend servers using TLS.

With the release of HAProxy Enterprise 3.0, TLS performance has been optimized by switching from OpenSSL 3.X to OpenSSL 1.1.1 as the default for relevant operating systems. While this may be a notable change for some customers, the OpenSSL optimization will ultimately bring better performance and reliability for their systems.

]]> ]]> Simplified and more observable UDP load balancing

Customers love the HAProxy Enterprise UDP Module because it delivers fast, reliable UDP proxying and load balancing. By unifying UDP, TCP, and HTTP load balancing under a single solution, HAProxy Enterprise simplifies infrastructure management and eliminates the need for multiple products from other vendors.

Now with the release of HAProxy Enterprise 3.0, there’s more to love about the UDP module. When load balancing UDP traffic, customers now have access to logging capabilities for enhanced observability, along with support for the default-server directive, making configuration easier than before.

Basic logging can be enabled by specifying the log keyword and its arguments in the udp-lb section. Currently, the log output format contains the source and destination addresses, bytes received and sent, the instance name, and the server—and we plan to expand capabilities further in the future.

To learn more about configuring logging for UDP load balancing, visit our documentation.

Previously, configuring UDP load balancing in HAProxy Enterprise required manually specifying each server, which took extra time and effort, especially when managing a large number of servers. But now, with the default-server directive, customers can specify these settings once and apply them uniformly across multiple servers. The end result is a more streamlined and simpler configuration process. 

Our documentation outlines how to take advantage of this new directive to improve your workflow.
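
Putting the two together, a hedged sketch of a udp-lb section might look like the following. The log target, addresses, and the default-server option are placeholders of ours; consult the documentation for the options this section actually accepts:

    udp-lb dns
        # new in 3.0: emit logs for UDP traffic (target is a placeholder)
        log 192.0.2.10:514 local0
        # new in 3.0: apply shared settings to every server below;
        # the option shown is illustrative only
        default-server source 10.0.0.100
        dgram-bind 0.0.0.0:53
        balance roundrobin
        server dns1 10.0.0.11:53
        server dns2 10.0.0.12:53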

This enhancement, along with logging capabilities, further strengthens the HAProxy Enterprise UDP Module, which already delivers best-in-class UDP performance compared to other software load balancers. With these updates, customers gain not only a highly performant and scalable UDP proxying and load balancing solution but also one that offers enhanced observability and simplified configuration management.

New CAPTCHA and SAML modules

HAProxy Enterprise 3.0 brings two new native modules to customers:

  • The CAPTCHA Module

  • The SAML Module

Both of these modules, while having different functions, simplify HAProxy Enterprise configuration for customers.

New CAPTCHA Module

This release introduces a new CAPTCHA module that simplifies configuration while extending support to more CAPTCHA providers, including Google reCAPTCHA Enterprise, the most widely used.

Some of the supported modes include:

  • reCAPTCHA v2

  • reCAPTCHA v3

  • reCAPTCHA Enterprise

  • hCaptcha

  • Friendly Captcha (frCaptcha)

  • Turnstile

Similar to the previous implementation, the new CAPTCHA module presents a challenge page to clients to determine if the user is a human. The only difference this time is that the new CAPTCHA module is now embedded in HAProxy Enterprise as a native module. This results in a module that supports more CAPTCHA providers beyond Google reCAPTCHA, can easily integrate with other providers not listed above, and is much simpler to configure.

The previous reCAPTCHA module required customers to maintain an extra configuration file in addition to changes in hapee-lb.cfg. With the new CAPTCHA module, all settings live in a single new section of hapee-lb.cfg, as sketched below, making it a much simpler, streamlined process to verify that clients are human.
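
As a purely conceptual sketch of consolidating everything into one section (the section and directive names below are invented for illustration; the real syntax is in the HAProxy Enterprise documentation):

    # Hypothetical directive names, for illustration only
    captcha my-captcha
        provider recaptcha-v3
        site-key YOUR_SITE_KEY
        secret-key YOUR_SECRET_KEY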

In HAProxy Enterprise 3.0, implementing a CAPTCHA solution is simpler than ever, making it easier to integrate CAPTCHA verification into your HAProxy Enterprise setup without compromising security.

New SAML Module

This release also includes a new Security Assertion Markup Language (SAML) module, which provides single sign-on to any web application behind HAProxy Enterprise.

Previously, SAML was supported through an SPOE agent, but with HAProxy Enterprise 3.0 the SAML module runs natively inside HAProxy Enterprise, greatly simplifying configuration. Customers no longer have to maintain a separate SPOA configuration and can instead merge the settings into hapee-lb.cfg.

Upgrade to HAProxy Enterprise 3.0

When you are ready to upgrade to HAProxy Enterprise 3.0, follow the link below.

Product | Release Notes | Install Instructions
HAProxy Enterprise 3.0 | Release Notes | Installation of HAProxy Enterprise 3.0

Try HAProxy Enterprise 3.0

The world’s leading companies and cloud providers trust HAProxy Technologies to protect their applications and APIs. High-performing teams delivering mission-critical applications and APIs need the most secure, reliable, and efficient application delivery engine available. HAProxy Enterprise’s no-compromise approach to secure application delivery empowers organizations to deliver next-level enterprise scale and innovation.

There has never been a better time to start using HAProxy Enterprise. Request a free trial of HAProxy Enterprise and see for yourself.

The post Announcing HAProxy Enterprise 3.0 appeared first on HAProxy Technologies.
Nearly 90% of our AI Crawler Traffic is From TikTok Parent Bytedance – Lessons Learned
https://www.haproxy.com/blog/nearly-90-of-our-ai-crawler-traffic-is-from-tiktok-parent-bytedance-lessons-learned
Thu, 31 Oct 2024

This month, Fortune.com reported that TikTok’s web scraper — known as Bytespider — is aggressively sucking up content to fuel generative AI models. We noticed the same thing when looking at bot management analytics produced by HAProxy Edge — our global network that we ourselves use to serve traffic for haproxy.com. Some of the numbers we are seeing are fairly shocking, so let’s review the traffic and where it originates.

Our own measurements, collected by HAProxy Edge and filtered to traffic for haproxy.com, show a few interesting figures:

  • Nearly 1% of our total traffic comes from AI crawlers

  • Close to 90% of that traffic is from Bytespider, by Bytedance (the parent company of TikTok)

While Bytespider is currently the most prevalent AI crawler, making Bytedance the top source, we have previously observed others (such as ClaudeBot) taking the top spot. AI crawler activity, like all traffic, changes over time.

What does AI traffic mean for us – and you?

While we are primarily a technology company, we also consider ourselves to be a content company; we invest in original, human-authored content — such as documentation or blogs that provide helpful information to our users and wider audience.

Content-scraping bots existed long before LLMs started crawling the web for generative AI applications, and they have usually been considered undesirable visitors on content-heavy websites. Many businesses would not consent to the scraping and possible re-use of their content, in full or in part, by a third party. 

However, AI crawlers used by LLMs come with unique risks and opportunities.

  1. On one hand, an LLM might re-use the original content in full, with some modification, or remixed with other content at the level of an LLM token (roughly the level of a single word). It is unlikely that a user will know where the original content came from. In cases where an LLM “hallucinates”, a user might receive inaccurate information, for example when requesting code or configuration instructions.

  2. On the other hand, with many users turning to AI chatbots as an alternative to traditional search engines, this is becoming an important channel for discovery and awareness. Businesses might want their brand or product information to be supplied by chatbots in response to user queries. For example, if a user asks for a list of relevant products, a business might want their product to be included in the list, along with features and benefits.

While we don’t limit AI crawlers on our website right now, we will have to decide whether to continue allowing them. Other businesses running content-heavy public websites will likely face the same decision: protect the value of their content, or allow the dissemination of information about their brand and products via these new channels.

What can you do to protect your content from AI crawlers?

If bots and the risk of content replication pose a threat to your business, you need a strategy to mitigate this risk and a technology solution that enables you to implement it.

A common method of disallowing bots is to use the robots.txt file on your website domain. However, some AI crawlers (including Bytespider) don’t identify themselves transparently; they pretend to be real users and ignore the instructions in robots.txt. It is for this reason that we — like the Fortune.com article — describe the crawling as “aggressive”. It is not only a matter of scale but also the way it is being done.
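
For crawlers that do honor it, a robots.txt along these lines opts your site out of the major AI scrapers (these User-agent tokens are the ones the vendors publish; as noted, Bytespider may ignore the file entirely):

    User-agent: Bytespider
    Disallow: /

    User-agent: GPTBot
    Disallow: /

    User-agent: ClaudeBot
    Disallow: /

    User-agent: PerplexityBot
    Disallow: /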

Therefore, any technical solution for managing AI crawlers and scrapers must be capable of accurately identifying such bots, even when they are designed to be hard to distinguish from humans.

HAProxy Enterprise customers already benefit from the HAProxy Enterprise Bot Management Module, announced in version 2.9. This technology combines a simple and efficient method for identifying and classifying bots with HAProxy’s legendary flexibility, to support a range of bot management strategies — such as blocking, rate limiting, or challenging via CAPTCHA. 
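
As a simplified illustration of the blocking strategy in plain HAProxy configuration (this only catches crawlers that identify themselves in the User-Agent header; the Bot Management Module's classification goes well beyond this, and the frontend/backend names here are placeholders):

    frontend www
        bind :80
        # Match User-Agent values containing a known AI crawler token
        acl ai_crawler req.fhdr(User-Agent) -m sub -i bytespider gptbot claudebot perplexitybot
        # Refuse identified crawlers outright
        http-request deny deny_status 403 if ai_crawler
        default_backend app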

Our guide, How to Reliably Block AI Crawlers Using HAProxy Enterprise, shows you how to identify and block these bots (either individually or as a category) using a few lines of configuration on HAProxy Enterprise. Other providers, such as our friends at Cloudflare, recently provided a similar solution.

Where does our data come from, and how do we use it to improve bot management?

Our traffic statistics from HAProxy Edge show that the scale of AI crawler traffic is significant and growing fast. Let’s talk about where our data comes from and how we use it.

HAProxy Edge is a globally distributed application delivery network (ADN) that provides fully managed application services, accelerated content delivery, and a secure partition between external traffic and your network.

By analyzing the traffic connecting to websites and applications hosted on HAProxy Edge (which includes haproxy.com), we can build a picture of global traffic trends. We can also filter these traffic metrics to show AI crawlers. Our bot management technology rapidly identifies and classifies bots (and humans), including known AI crawlers such as:

  • Bytespider (TikTok)

  • OpenAI search bot and ChatGPT variants

  • PerplexityBot

  • Google AI crawler

  • ClaudeBot

  • Others

Our data science team uses the threat intelligence data provided by HAProxy Edge to train our security models using machine learning, resulting in extremely accurate and efficient detection algorithms for bots and other threats – without relying on static lists and regex-based attack signatures. We use these algorithms to power the security layers in HAProxy Edge itself, as well as in HAProxy Enterprise and HAProxy Fusion. This includes the HAProxy Enterprise WAF (powered by the Intelligent WAF Engine) and the HAProxy Enterprise Bot Management Module.

For businesses looking for fully managed application services, HAProxy Edge provides bot management and other security features, backed by HAProxy Technologies’ authority on all aspects of the load balancing and traffic control stack. Contact us if you’d like a demo or a trial.

The post Nearly 90% of our AI Crawler Traffic is From TikTok Parent Bytedance – Lessons Learned appeared first on HAProxy Technologies.