Service reliability
Overload protection
Maximum connection limits and queues are server overload protections that reduce the impact of traffic spikes and increase throughput. By configuring maximum connection limits and queues at the load balancer layer, you can control the traffic volume being sent to servers.
Maximum connection limits
Maximum connection limits manage the number of connections a load balancer or server will receive; these are set in the global, frontend, and backend sections of your configuration.
Global maximum connections
Set a process-wide maximum number of connections available to a load balancer with the maxconn directive in the global section. A limit on global maximum connections stops the load balancer from accepting too many connections at once, which protects servers from denial-of-service attacks and from running out of memory.
haproxy

global
  maxconn 60000
In this example, the load balancer will accept up to 60,000 connections. Once that limit has been reached, the load balancer stops accepting new connections; the operating system holds them in the kernel's socket queue instead. The queued connections wait until a connection slot becomes available in the load balancer.
If the global maxconn directive is not set, it defaults to the value reported by the Linux command ulimit -n, which is usually 1024 and is too low for most use cases.
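As a quick sanity check before sizing maxconn, you can inspect the file-descriptor limit that a process started from your shell would inherit (a sketch; the reported value varies by system and user):

```shell
# Print the per-process open-file-descriptor limit for this shell.
# When global maxconn is unset, HAProxy derives its default from
# this limit, which is why the fallback is often around 1024.
ulimit -n
```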
Frontend maximum connections
Set a maximum number of connections available to frontend proxies with the maxconn directive in a frontend section. Frontend maximum connections prevent a frontend proxy from taking all of the available connection slots for itself.
haproxy

frontend website
  maxconn 20000
  bind :80
  default_backend web_servers

frontend database
  maxconn 20000
  bind :3306
  default_backend database_servers

frontend api
  maxconn 20000
  bind :8080
  default_backend api_servers
Once the maxconn directive limit has been reached here, the load balancer puts new connections into the queue instead. The queued connections wait until a connection slot becomes available.
When the maxconn value is set to 0 in a frontend section, which is the default value, the global maxconn value is used instead.
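For example, a frontend that sets no maxconn of its own falls back to the process-wide limit. In this sketch (the stats frontend name, port, and backend are hypothetical), the website frontend is capped at 20,000 connections while the stats frontend inherits the global limit of 60,000:

```haproxy
global
  maxconn 60000

frontend website
  maxconn 20000
  bind :80
  default_backend web_servers

# No maxconn here, so this frontend uses the global value (60000)
frontend stats
  bind :8404
  default_backend stats_servers
```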
With a frontend and backend pair, the load balancer balances the allotted frontend maximum connections across its associated backend servers.
haproxy

frontend website
  maxconn 20000
  bind :80
  default_backend web_servers

backend web_servers
  balance roundrobin
  server s1 192.168.0.10:80
  server s2 192.168.0.11:80
  server s3 192.168.0.12:80
Server maximum connections
Set a maximum number of connections available to each server with the maxconn directive in a backend section. Server maximum connections prevent a server from using all of its memory and CPU by limiting the number of connections it can serve.
haproxy

backend web_servers
  balance roundrobin
  server s1 192.168.0.10:80 maxconn 30
  server s2 192.168.0.11:80 maxconn 40
  server s3 192.168.0.12:80 maxconn 50
Once the maxconn directive limit has been reached here, the load balancer puts new connections into a queue. The queued connections wait until a connection slot becomes available on the server.
A server maxconn has a default value of 0, which means an unlimited number of connections can be made to it. You can use a default-server directive to set a default parameter that applies to all server lines within the same section.
haproxy

backend web_servers
  balance roundrobin
  default-server maxconn 30
  server s1 192.168.0.10:80
  server s2 192.168.0.11:80
  server s3 192.168.0.12:80
Queues
Queues help manage traffic when servers are busy, prioritizing specified connections and preventing sudden traffic spikes from overwhelming your servers. The following queues in the backend section are enabled only if you've set server maximum connections.
Connection timeout queue
Set the maximum amount of time a connection will wait in a queue for a free connection slot with the timeout queue directive in a backend section. A connection timeout queue protects servers from overload during times of high traffic.
haproxy

backend web_servers
  balance roundrobin
  timeout queue 30s
  server s1 192.168.0.10:80 maxconn 30
  server s2 192.168.0.11:80 maxconn 40
  server s3 192.168.0.12:80 maxconn 50
In this example, a client will wait up to 30 seconds for a connection slot, after which the load balancer returns a 503 Service Unavailable response.
The timeout value is in milliseconds (ms) by default, but you can use another unit by suffixing the number with it. If the timeout queue directive is unspecified, the backend's timeout connect value is used instead.
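As a sketch of the unit suffixes, these two settings are equivalent, since milliseconds are the default unit:

```haproxy
backend web_servers
  # 30s and 30000 (milliseconds, the default unit) mean the same thing
  timeout queue 30s
  # timeout queue 30000
```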
HTTP request priority queue
Available since
- HAProxy 1.9
- HAProxy Enterprise 1.9r1
You can prioritize HTTP requests that are waiting in the queue. The http-request set-priority-class directive in a backend section gives specified connections higher or lower priority in the queue; it effectively lets certain requests be queued and processed before others.
haproxy

backend web_servers
  balance roundrobin
  acl is_checkout path_beg /checkout/
  http-request set-priority-class int(1) if is_checkout
  http-request set-priority-class int(2) if !is_checkout
  timeout queue 30s
  server s1 192.168.0.10:80 maxconn 30
  server s2 192.168.0.11:80 maxconn 30
  server s3 192.168.0.12:80 maxconn 30
In this example:
- An ACL named is_checkout checks whether the requested URL path begins with /checkout/.
- The first http-request set-priority-class line prioritizes HTTP requests whose path begins with /checkout/ by assigning them a priority class of "1".
- The second http-request set-priority-class line assigns HTTP requests whose path does not begin with /checkout/ a priority class of "2".
- The lower the priority class number, the higher the request's priority. In this case, a requested URL path beginning with /checkout/ will be queued and processed before any priority class "2" requests that don't begin with /checkout/.
The http-request set-priority-class directive takes an integer value between -2047 and 2047. Prioritizing connections ensures that critical operations receive timely processing, enhancing overall performance and reliability even when the queue is full.