HAProxy config tutorials

Traffic policing

Traffic policing allows you to limit the rate and number of requests flowing to your backend servers. These measures help ensure that users get the desired quality of service, and they can even mitigate malicious traffic such as DDoS attacks.

In practice, traffic policing involves denying requests when request rates or counts exceed specified thresholds.

Queue connections to servers

It’s possible to use connection queueing to achieve the desired level of fairness without resorting to rate limiting. With connection queueing, the proxy stores excess connections until the servers free up to handle them. The load balancer is designed to hold a large number of connections without a sharp increase in memory or CPU usage.

Queueing is disabled by default. To enable it:

  1. Add the maxconn argument to your server directives to specify the maximum number of concurrent connections the load balancer will establish with each server.

    In the following example, up to 30 connections will be established to each server. Once all servers reach their maximum number of connections, new connections queue up in the load balancer:

    haproxy
    backend servers
    server s1 192.168.30.10:80 check maxconn 30
    server s2 192.168.31.10:80 check maxconn 30
    server s3 192.168.32.10:80 check maxconn 30

    With this configuration, at most 90 connections can be active at a time. New connections will be queued on the proxy until an active connection closes.

  2. To define how long clients can remain in the queue, add the timeout queue directive:

    haproxy
    backend servers
    timeout queue 10s
    server s1 192.168.30.10:80 check maxconn 30
    server s2 192.168.31.10:80 check maxconn 30
    server s3 192.168.32.10:80 check maxconn 30

    If a connection request still cannot be dispatched within the timeout period, the client receives a 503 Service Unavailable error. This error response is generally more desirable than allowing servers to become overwhelmed. From the client’s perspective, it’s better to receive a timely error that can be handled programmatically than to wait an extended amount of time and possibly cause errors that are more difficult to resolve.
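The capacity arithmetic in the example above (3 servers with maxconn 30 each, overflow queued on the proxy) can be sketched with a toy model. This is an illustration of the bookkeeping, not how HAProxy is implemented internally:

```python
SERVERS = 3
MAXCONN = 30
CAPACITY = SERVERS * MAXCONN  # at most 90 concurrent server connections

def dispatch(incoming, active):
    """Return (sent_to_servers, queued_on_proxy) for a burst of new connections."""
    free_slots = max(0, CAPACITY - active)
    sent = min(incoming, free_slots)
    queued = incoming - sent
    return sent, queued

print(dispatch(100, 0))   # 90 connections dispatched, 10 queued on the proxy
print(dispatch(10, 85))   # only 5 free slots remain, so 5 queue
```

Queued connections wait until an active connection closes, or receive a 503 once timeout queue elapses.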

Limit HTTP requests per day

Limitations

This feature requires the HAProxy Runtime API, which is not available with HAProxy ALOHA.

A fixed window request limit restricts the number of requests that a client can send during some fixed period of time, such as a calendar day.

In this example, we configure a limit of 1000 HTTP requests during a calendar day. The http_req_cnt counter is used to count requests during the day, and we use the Runtime API to clear all records at midnight every night.

  1. In the frontend, add a stick table that stores the HTTP request count.

    haproxy
    frontend website
    bind :80
    stick-table type ipv6 size 100k expire 24h store http_req_cnt
    default_backend servers
  2. Add an http-request track directive to store the client’s IP address with their request count in the stick table.

    haproxy
    frontend website
    bind :80
    stick-table type ipv6 size 100k expire 24h store http_req_cnt
    http-request track-sc0 src
    default_backend servers
  3. Add an http-request deny directive to deny requests for clients that exceed the limit.

    haproxy
    frontend website
    bind :80
    stick-table type ipv6 size 100k expire 24h store http_req_cnt
    http-request track-sc0 src
    http-request deny deny_status 429 if { sc_http_req_cnt(0) gt 1000 }
    default_backend servers

This configuration denies every request after the first 1000, but the restriction must be reset at midnight. Resetting the counter requires the Runtime API.

To reset the counter manually with the Runtime API:

  1. Enable the Runtime API.

  2. Install the socat utility and use it to invoke the clear table Runtime API command to clear all records from the stick table:

    nix
    echo "clear table website" |\
    sudo socat stdio unix-connect:/var/run/hapee-2.9/hapee-lb.sock

    To reset the counter automatically, you could set up a daily cron job.

  3. To clear a single record as a one-off, include the client’s IP address:

    nix
    echo "clear table website key 192.168.50.10" |\
    sudo socat stdio unix-connect:/var/run/hapee-2.9/hapee-lb.sock
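To automate the nightly reset mentioned in step 2, a cron entry along these lines could work. The file path is an assumption, and the socket path is taken from the example above; adjust both for your installation:

```nix
# /etc/cron.d/haproxy-clear-table (assumed path): at midnight every day,
# clear all records from the "website" stick table so counting starts over.
0 0 * * * root echo "clear table website" | socat stdio unix-connect:/var/run/hapee-2.9/hapee-lb.sock
```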

Rate limit HTTP requests

You can limit the number of HTTP requests a user can make within a period of time. When this period of time immediately follows each request, this limit is called a sliding window rate limit.

Follow these steps to create a sliding window limit that allows a client to issue no more than 20 requests in a 10-second window.

  1. Add a stick-table directive to the frontend. The table stores and aggregates each client’s HTTP request rate, where each client is tracked by IP address.

    haproxy
    frontend website
    bind :80
    stick-table type ipv6 size 100k expire 30s store http_req_rate(10s)
    default_backend servers

    To conserve space, the stick table is limited to the 100,000 most recent entries. Also, entries expire and are removed if they are inactive for 30 seconds.

  2. Add an http-request track directive to store the client’s IP address with their request rate in the stick table. Counters for the entry begin incrementing as soon as the record is added.

    haproxy
    frontend website
    bind :80
    stick-table type ipv6 size 100k expire 30s store http_req_rate(10s)
    http-request track-sc0 src
    default_backend servers
  3. Add an http-request deny directive to deny requests for clients that exceed the limit.

    haproxy
    frontend website
    bind :80
    stick-table type ipv6 size 100k expire 30s store http_req_rate(10s)
    http-request track-sc0 src
    http-request deny deny_status 429 if { sc_http_req_rate(0) gt 20 }
    default_backend servers

    On the http-request deny line, the if expression checks whether the client’s current request rate exceeds the allowed number of requests, in this case 20. If so, the current request is denied with a 429 Too Many Requests response. Once the count of requests during the preceding 10 seconds falls back below 20, requests are accepted again.

You can adjust any part of this example to suit your needs.

  • To change the test interval, change the time specified in the http_req_rate fetch in the stick-table directive.
  • To change the number of allowable requests in the interval, change the gt test value specified in the http-request deny directive.
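Conceptually, the sliding-window behavior can be modeled outside HAProxy with a short sketch. HAProxy’s http_req_rate uses an internal approximation rather than storing per-request timestamps, but the observable effect is similar:

```python
from collections import deque

class SlidingWindowLimiter:
    """Simplified model of a per-client sliding-window request limit."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.events = deque()  # timestamps of accepted requests

    def allow(self, now):
        # discard requests that have fallen out of the window
        while self.events and now - self.events[0] >= self.window:
            self.events.popleft()
        if len(self.events) >= self.limit:
            return False  # HAProxy would deny this request with a 429
        self.events.append(now)
        return True

limiter = SlidingWindowLimiter(limit=20, window_seconds=10)
results = [limiter.allow(0.1 * i) for i in range(25)]
# the first 20 requests pass; the remaining 5, all inside the same
# 10-second window, are denied
```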

Rate limit HTTP requests by URL path

You can assign distinct rate limits to individual URLs of your web application. This type of configuration can be useful when different pages require different amounts of processing time, and thus can handle different numbers of concurrent users. This configuration uses a map file to associate different rate limits with different URLs in your web application.

  1. On the load balancer, create a file called rates.map.

  2. In the file, list the URL paths and rate thresholds, for example:

    haproxy
    /urla 10
    /urlb 20
    /urlc 30
  3. Update the frontend configuration to include the stick-table and http-request track directives shown below:

    haproxy
    frontend website
    bind :80
    stick-table type binary len 20 size 100k expire 10s store http_req_rate(10s)
    http-request track-sc0 base32+src
    http-request set-var(req.rate_limit) path,map_beg(/rates.map,20)
    http-request set-var(req.request_rate) base32+src,table_http_req_rate()
    acl rate_abuse var(req.rate_limit),sub(req.request_rate) lt 0
    http-request deny deny_status 429 if rate_abuse
    default_backend servers

    In this example:

    • The stick table has a key of binary to match the tracked value generated by the http-request track-sc0 base32+src directive, which is a hash of the HTTP Host header, the URL path, and the client’s source IP address. This key allows the load balancer to differentiate request rates across all different web pages.
    • The http-request set-var(req.rate_limit) directive retrieves the rate limit threshold from the rates.map file. This directive finds the request rate threshold in the rates.map file for the current URL path being requested. If the URL is not in the map file, a default value of 20 is used. The resulting threshold value is stored in the variable req.rate_limit.
    • The http-request set-var(req.request_rate) directive records the client’s request rate.
    • The ACL named rate_abuse is set to true if the client’s request rate is greater than the rate limit threshold.
    • If the threshold is exceeded, the http-request deny directive denies the request.
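The map lookup with a default can be illustrated with a small model. This is a sketch of the prefix matching that map_beg performs, not HAProxy’s actual implementation, and it may differ in detail for overlapping map keys:

```python
def lookup_rate(path, rates, default):
    """Simplified model of the map_beg converter: return the value for a
    map entry that the path begins with, or the default when none match."""
    matches = [prefix for prefix in rates if path.startswith(prefix)]
    if not matches:
        return default
    # prefer the longest matching prefix for determinism in this sketch
    return rates[max(matches, key=len)]

# entries from the rates.map example above, with the configured default of 20
rates = {"/urla": 10, "/urlb": 20, "/urlc": 30}
print(lookup_rate("/urla/page1", rates, 20))  # 10
print(lookup_rate("/unlisted", rates, 20))    # 20 (default)
```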

Rate limit HTTP requests by URL parameter

As an alternative to rate limiting by URL path, you can configure request rate limiting by URL parameter. This approach can be useful if your clients include an API token in the URL to identify themselves. This configuration is based on a sliding window rate limit configuration.

In the following example, the client is expected to include a token with their requests, as follows:

text
http://yourwebsite.com/api/v1/does_a_thing?token=abcd1234

For this example, the configuration applies a limit of 1000 requests per 24-hour period, and it also requires that the user supply a token as shown above.

  1. In the frontend, add a stick-table directive with a type of string that stores the HTTP request rate. The sliding window size in this example is 24 hours:

    haproxy
    frontend website
    bind :80
    stick-table type string size 100k expire 24h store http_req_rate(24h)
    acl has_token url_param(token) -m found
    acl exceeds_limit url_param(token),table_http_req_rate() gt 1000
    http-request track-sc0 url_param(token) unless exceeds_limit
    http-request deny deny_status 429 if !has_token or exceeds_limit

    In this example:

    • The ACL named has_token indicates if the desired token is included in the URL.
    • The ACL named exceeds_limit finds the current request count for the last 24 hours and compares it to the request rate limit threshold, 1000.
    • The http-request track directive stores the value of the URL parameter named token as the key in the table. The unless exceeds_limit clause serves an important purpose. It prevents the counter from continuing to increment once the client has exceeded the limit. The clause also allows the entry to expire so that the client is not permanently blocked.
    • The http-request deny directive denies the request if the token is missing or if the limit is exceeded.

There is an important reason why this configuration uses the http_req_rate(24h) counter instead of the http_req_cnt counter combined with an expire parameter of 24h. The former is a sliding window over the last 24 hours. The latter begins counting when the user sends their first request and increments until the entry expires. However, unless you clear the table every 24 hours via the Runtime API, an http_req_cnt block could remain in effect indefinitely while the client stays active, because the expiration timer is reset whenever the record is touched.
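The distinction can be illustrated with a sketch of a touch-refreshed fixed counter. This is a simplified model of the expire-on-touch behavior, not HAProxy internals:

```python
def fixed_count_with_touch_expiry(request_times, expire):
    """Model of http_req_cnt with an expire timer that resets on every touch:
    the counter only resets after a gap of at least `expire` with no requests."""
    count, last = 0, None
    for t in request_times:
        if last is not None and t - last >= expire:
            count = 0  # the entry expired and was recreated
        count += 1
        last = t
    return count

# A client sending one request per hour for three days never lets the entry
# expire (expire = 24h), so the count keeps growing past any daily limit:
print(fixed_count_with_touch_expiry(range(0, 72), 24))   # 72
# A 25-hour pause lets the entry expire, so counting starts over:
print(fixed_count_with_touch_expiry([0, 25], 24))        # 1
```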

Other response policies

So far, our examples have used http-request deny to drop HTTP requests that exceed the rate limit. However, there are other policies you can implement.

Deny the request

You can deny a client’s HTTP request, returning a 403 Forbidden error by default. Use the http-request deny directive in your frontend section. In the example below, we deny the client’s request if they’ve made more than 20 requests within the last minute:

haproxy
frontend website
bind :80
# use a stick table to track request rates
stick-table type ip size 100k expire 2m store http_req_rate(1m)
http-request track-sc0 src
acl too_many_requests sc_http_req_rate(0) gt 20
http-request deny if too_many_requests
default_backend webservers

You can also change the response code by setting the deny_status argument. Below, we return a 429 Too Many Requests error:

haproxy
http-request deny deny_status 429 if too_many_requests

Tarpit the request

You can tarpit a client’s HTTP request, which stalls the request for a period of time before returning an error response. This is often used to deter a malicious bot army, since it ties up bots so that they cannot immediately retry their requests.

In the example below, we use http-request tarpit to impede the client if they exceed a rate limit. Use timeout tarpit to set how long the load balancer waits before returning an error response:

haproxy
frontend www
bind :80
# use a stick table to track request rates
stick-table type ip size 100k expire 2m store http_req_rate(1m)
http-request track-sc0 src
acl too_many_requests sc_http_req_rate(0) gt 20
timeout tarpit 10s
http-request tarpit deny_status 429 if too_many_requests
default_backend webservers

Reject the connection

You can reject a TCP connection or HTTP request by using one of the following directives in a frontend section:

  • http-request reject: Closes the connection without a response after a session has been created and the HTTP parser has been initialized. Use this if you need to evaluate the request’s Layer 7 attributes (HTTP headers, cookies, URL).
  • tcp-request content reject: Closes the connection without a response once a session has been created, but before the HTTP parser has been initialized. These requests still show in your logs.
  • tcp-request connection reject: Closes the connection without a response at the earliest point, before a session has been created. These requests do not show in your logs.

A reject closes the connection immediately without sending a response. The client’s browser will display the error message The connection was reset. Use tcp-request content reject or tcp-request connection reject to drop a connection you know you don’t want to service, for example connections from known malicious IP addresses.

In the following example, we reject requests originating from IP addresses we wish to block:

haproxy
frontend www
bind :80
acl blocked_ip src -f /etc/hapee-2.9/blocklist.acl
tcp-request connection reject if blocked_ip
default_backend webservers

Silently drop the connection

You can silently drop a client’s HTTP request, which disconnects immediately without notifying the client that the connection has been closed. This means that the load balancer frees any resources used for this connection. Clients will typically need to time out before they can release their end of the connection. Beware that silently dropping will affect any stateful firewalls or proxies between the load balancer and the client, since they will often hold onto the connection, unaware that it has been disconnected.

In the example below, we use http-request silent-drop to silently drop clients that access a restricted file:

haproxy
frontend www
bind :80
http-request silent-drop if { path_end /restricted.txt }
default_backend webservers
