HAProxy is a high-performance load balancer that provides advanced defense capabilities for detecting and protecting against malicious bot traffic to your website. Combining its unique ACL, map, and stick table systems with its powerful configuration language allows you to track and mitigate the full spectrum of today’s bot threats. Read on to learn how.
Read our blog post Application-Layer DDoS Attack Protection with HAProxy to learn why HAProxy is a key line of defense against DDoS used by many of the world’s top enterprises. For more about rate limiting in general, read our blog post Four Examples of HAProxy Rate Limiting.
It is estimated that bots make up nearly half the traffic on the Internet. When we say bot, we’re talking about a computer program that automates a mundane task. Typical bot activities include crawling websites for indexing, such as how Googlebot finds and catalogs your web pages. Or, you might sign up for services that watch for cheap airline tickets or aggregate price lists to show you the best deal. These types of bots are generally seen as beneficial.
Unfortunately, many bots are used for malicious purposes. Their goals include web scraping, spamming, request flooding, brute forcing, and vulnerability scanning. For example, bots may scrape your price lists so that competitors can consistently undercut you or build a competitive solution using your data. Or they may try to locate forums and comment sections where they can post spam. At other times, they're scanning your site looking for security weaknesses.
HAProxy has best-in-class defense capabilities for detecting and protecting against many types of unwanted bot traffic. Its unique ACL, map, and stick table systems, as well as its flexible configuration language, are the building blocks that allow you to identify any type of bot behavior and neutralize it. Furthermore, HAProxy is well known for maintaining its high performance and efficiency while performing these complex tasks. For those reasons, companies like StackExchange have used HAProxy as a key component in their security strategy.
In this blog post, you’ll learn how to create an HAProxy configuration for bot protection. As you’ll see, bots typically exhibit unique behavior and catching them is a matter of recognizing the patterns. You’ll also learn how to whitelist good bots.
HAProxy Load Balancer
To create an HAProxy configuration for bot protection, you’ll first need to install HAProxy and place it in front of your application servers. All traffic is going to be routed through it so that client patterns can be identified. Then, proper thresholds can be determined and response policies can be implemented.
In this blog post, we’ll look at how many unique pages a client is visiting within a period of time and determine whether this behavior is normal or not. If it crosses the predetermined threshold, we’ll take action at the edge before it gets any further. We’ll also go beyond that and see how to detect and block bots that try to brute-force your login screen and bots that scan for vulnerabilities.
Bot Protection Strategy
Bots can be spotted because they exhibit non-human behavior. Let’s look at a specific behavior: web scraping. In that case, bots often browse a lot of unique pages very quickly in order to find the content or types of pages they’re looking for. A visitor that’s requesting dozens of unique pages per second is probably not human.
Our strategy is to set up the HAProxy load balancer to observe the number of requests each client is making. Then, we’ll check how many of those requests are for pages that the client is visiting for the first time. Remember, web scraping bots want to scan through many pages in a short time. If the rate at which they’re requesting new pages is above a threshold, we’ll flag that user and either deny their requests or route them to a different backend.
You’ll want to avoid blocking good bots like Googlebot though. So, you’ll see how to define whitelists that permit certain IP addresses through.
Detecting Web Scraping
Stick tables store and increment counters associated with clients as they make requests to your website. If you'd like an in-depth introduction, check out our blog post Introduction to HAProxy Stick Tables. To configure one, add a backend section to your HAProxy configuration file and then add a stick-table directive to it. Each backend can only have a single stick-table definition. We're going to define two stick tables, as shown:
backend per_ip_and_url_rates
  stick-table type binary len 8 size 1m expire 24h store http_req_rate(24h)

backend per_ip_rates
  stick-table type ip size 1m expire 24h store gpc0,gpc0_rate(30s)
The first table, which is defined within your per_ip_and_url_rates backend, will track the number of times that a client has requested the current webpage during the last 24 hours. Clients are tracked by a unique key. In this case, the key is a combination of the client's IP address and a hash of the path they're requesting. Notice how the stick table's type is binary so that the key can be this combination of data.
The second table, which is within a backend labeled per_ip_rates, stores a general-purpose counter called gpc0. You can increment a general-purpose counter when a custom-defined event occurs. We're going to increment it whenever a client visits a page for the first time within the past 24 hours.

The gpc0_rate counter is going to tell us how fast the client is visiting new pages. The idea is that bots will visit more pages in less time than a normal user would. We've arbitrarily set the rating period to thirty seconds. Most of the time, bots are going to be fast. For example, the popular Scrapy bot is able to crawl about 3,000 pages per minute. On the other hand, bots can be configured to crawl your site at the same pace as a normal user would. Just keep in mind that you may want to change the rating period from thirty seconds to something longer, like 24 hours (24h), depending on how many pages a normal user is likely to look at within that amount of time.
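If you do choose a longer window, only the rate period on the stick table changes. Here's a minimal sketch, assuming you want to measure first-time page visits over a full day instead of thirty seconds (remember to adjust the threshold in the next section to match):

backend per_ip_rates
  stick-table type ip size 1m expire 24h store gpc0,gpc0_rate(24h)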
Next, add a frontend section for receiving requests:
frontend fe_main
  bind :80
  # track client's source IP in per_ip_rates stick table
  http-request track-sc0 src table per_ip_rates
  # track client's source IP + URL accessed in
  # per_ip_and_url_rates stick table
  http-request track-sc1 url32+src table per_ip_and_url_rates unless { path_end .css .js .png .jpeg .gif }
  # Increment general-purpose counter in per_ip_rates if client
  # is visiting page for the first time
  http-request sc-inc-gpc0(0) if { sc_http_req_rate(1) eq 1 }
  default_backend web_servers
The line http-request track-sc1 adds the client to the stick-table storage. It uses a combination of their IP address and the page they're visiting as the key, which you get with the built-in fetch method url32+src. A fetch method collects information about the current request.
Web pages these days pull in a lot of supporting files: JavaScript files, CSS stylesheets, images. By adding an unless statement to the end of your http-request track-sc1 line, you can exclude those file types from the count of new page requests. So, in this example, it won't track requests for JavaScript, CSS, PNG, JPEG and GIF files.
The http-request track-sc1 line automatically updates any counters associated with the stick table, including the http_req_rate counter. So, in this case, the HTTP request count for the page goes up by one. When the count is exactly one for a given source IP address and page, it means the current user is visiting the page for the first time. When that happens, the conditional statement if { sc_http_req_rate(1) eq 1 } on the last line becomes true and the directive http-request sc-inc-gpc0(0) increments the gpc0 counter in our second stick table.
Now that you're incrementing a general-purpose counter each time a client, identified by IP address, visits a new page, you're also getting the rate at which that client is visiting new pages via the gpc0_rate(30s) counter. How many unique page visits over thirty seconds is too many? Tools like Google Analytics can help you here with its Pages / Session metric. Let's say that 15 first-time page requests over that time constitute bot-like behavior. You'll define that threshold in the upcoming section.
Setting a Threshold
Now that you're tracking data, it's time to set a threshold that will separate the bots from the humans. Bots will request pages much faster, over a shorter time. Your first option is to block the request outright. Add an http-request deny directive to your frontend section:
frontend fe_main
  bind :80
  http-request track-sc0 src table per_ip_rates
  http-request track-sc1 url32+src table per_ip_and_url_rates unless { path_end .css .js .png .jpeg .gif }
  # Set the threshold to 15 within the time period
  acl exceeds_limit sc_gpc0_rate(0) gt 15
  # Increase the new-page count if this is the first time
  # they've accessed this page, unless they've already
  # exceeded the limit
  http-request sc-inc-gpc0(0) if { sc_http_req_rate(1) eq 1 } !exceeds_limit
  # Deny requests if over the limit
  http-request deny if exceeds_limit
  default_backend web_servers
With this, any user who requests more than 15 unique pages within the last thirty seconds will get a 403 Forbidden response. Optionally, you can use deny_status to pass an alternate code such as 429 Too Many Requests. Note that the user will only be banned for the duration of the rating period, or thirty seconds in this case, after which the rate resets to zero. That's because we've added !exceeds_limit to the end of the http-request sc-inc-gpc0(0) line so that if the user keeps accessing new pages within the time period, it won't keep incrementing the counter.
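For example, here's a minimal variation of the deny rule that returns 429 instead of the default 403:

# Respond with 429 Too Many Requests instead of 403 Forbidden
http-request deny deny_status 429 if exceeds_limit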
To go even further, you could use a general-purpose tag (gpt0) to tag suspected bots so that they can be denied from then on, even after their new-page request rate has dropped. This ban will last until their entry in the stick table expires, or 24 hours in this case. The expiration of records is set with the expire parameter on the stick-table. Start by adding gpt0 to the list of counters stored by the per_ip_rates stick table:
backend per_ip_rates
  stick-table type ip size 1m expire 24h store gpc0,gpc0_rate(30s),gpt0
Then, add http-request sc-set-gpt0(0) to your frontend to set the tag to 1, using the same condition as before. We'll also add a line that denies all clients that have this flag set:
http-request sc-set-gpt0(0) 1 if exceeds_limit
http-request deny if { sc_get_gpt0(0) eq 1 }
Alternatively, you can send any tagged IP addresses to a special backend by using the use_backend directive, as shown:
http-request sc-set-gpt0(0) 1 if exceeds_limit
use_backend be_bot_jail if { sc_get_gpt0(0) eq 1 }
This backend could, for example, serve up a cached version of your site or have server directives with a lower maxconn limit to ensure that bots can't swamp your server resources. In other words, you could allow bot traffic, but give it less priority.
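Here's a minimal sketch of what such a backend might look like; the server name, address, and maxconn value are placeholders to adapt to your environment:

backend be_bot_jail
  # A low maxconn queues suspected bots instead of letting them
  # consume connection slots on your main server pool
  server web1 192.168.0.10:80 maxconn 10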
Observing the data collection
You can use the Runtime API to see the data as it comes in. If you haven’t used it before, check out our blog post Dynamic Configuration with the HAProxy Runtime API to learn about the variety of commands available. In a nutshell, the Runtime API listens on a UNIX socket and you can send queries to it using either socat or netcat.
The show table [table name] command returns the entries that have been saved to a stick table. After setting up your HAProxy configuration and then making a few requests to your website, take a look at the contents of the per_ip_and_url_rates stick table, like so:
$ echo "show table per_ip_and_url_rates" | socat stdio /var/run/hapee-1.8/hapee-lb.sock | |
# table: per_ip_and_url_rates, type: binary, size:1048576, used:2 | |
0x10ab92c: key=203E97AA7F000001000000000000000000000000 use=0 exp=557590 http_req_rate(86400000)=1 | |
0x10afd7c: key=3CBC49B17F000001000000000000000000000000 use=0 exp=596584 http_req_rate(86400000)=5 |
I've made one request to /foo and five requests to /bar, all from a source IP of 127.0.0.1. Although the key is in binary format, you can see that the first four bytes are different. Each key is a hash of the path I was requesting combined with my IP address, so it's easy to see that I've requested different pages. The http_req_rate counter tells you how many times I've accessed these pages.
You can key off of IPv6 addresses with this configuration as well, by using the same url32+src fetch method.
Use the Runtime API to inspect the per_ip_rates table too. You’ll see the gpc0 and gpc0_rate values:
# table: per_ip_rates, type: ip, size:1048576, used:1
0x10ab878: key=127.0.0.1 use=0 exp=594039 gpc0=2 gpc0_rate(30000)=2
Here, the two requests for unique pages over the past 24 hours are reported as gpc0=2. The number of those that happened during the last thirty seconds was also two, as indicated by the gpc0_rate(30000) value.
If you’re operating more than one instance of HAProxy, combining the counters that each collects will be crucial to getting an accurate picture of user activity. HAProxy Enterprise provides Real-Time Cluster-Wide Tracking with a feature called the Stick Table Aggregator that does just that. This feature shares stick table data between instances using the peers protocol, adds the values together, and then returns the combined results back to each instance of HAProxy. In this way, you can detect patterns using a fuller set of data. Here’s a representation of how multiple peers can be synced:
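If you're running open-source HAProxy, the underlying peers protocol is still available, although it only replicates stick table entries between instances rather than summing them. Here's a minimal sketch with hypothetical peer names and addresses; each peer name must match that server's hostname or its -L startup option:

peers mypeers
  peer lb1 192.168.0.11:10000
  peer lb2 192.168.0.12:10000

backend per_ip_rates
  stick-table type ip size 1m expire 24h peers mypeers store gpc0,gpc0_rate(30s),gpt0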
Verifying Real Users
The risk in rate limiting is accidentally locking legitimate users out of your site. HAProxy Enterprise has the reCAPTCHA module that’s used to present a Google reCAPTCHA v2 challenge page. That way, your visitors can solve a puzzle and access the site if they’re ever flagged. In the next example, we use the reCAPTCHA Lua module so that visitors aren’t denied outright with no way to get back in.
http-request use-service lua.request_recaptcha unless { lua.verify_solved_captcha "ok" } { sc_get_gpt0(0) eq 1 }
Now, once an IP address is flagged as a bot, the client will receive reCAPTCHA challenges until they solve one, at which point they can go back to browsing normally.
HAProxy Enterprise has another great feature: the Antibot module. When a client behaves suspiciously by requesting too many unique pages, HAProxy will send them a JavaScript challenge. Many bots aren’t able to parse JavaScript at all, so this will stop them dead in their tracks. The nice thing about this is that it isn’t disruptive to normal users, so the customer experience remains good.
Beyond Scrapers
So far, we’ve talked about detecting and blocking clients that access a large number of unique pages very quickly. This method is especially useful against scrapers, but similar rules can also be applied to detecting bots attempting to brute-force logins and scan for vulnerabilities. It requires only a few modifications.
Brute-force bots
Bots attempting to brute force a login page have a couple of unique characteristics: They make POST requests and they hit the same URL (a login URL), repeatedly testing numerous username and password combinations. In the last section, we were tracking HTTP request rates for a given URL on a per-IP basis with the following line:
http-request track-sc1 url32+src table per_ip_and_url_rates unless { path_end .css .js .png .jpeg .gif }
We've been using http-request sc-inc-gpc0(0) to increment a general-purpose counter, gpc0, on the per_ip_rates stick table when the client is visiting a page for the first time.
http-request sc-inc-gpc0(0) if { sc_http_req_rate(1) eq 1 } !exceeds_limit
You can use this same technique to block repeated hits on the same URL. The reasoning is that a bot targeting a login page will send an anomalous number of POST requests to that page, so you will want to watch for POST requests only.
First, because the per_ip_and_url_rates stick table is watching over a period of 24 hours and is collecting both GET and POST requests, let's make a third stick table to detect brute-force activity. Add the following stick-table definition:
backend per_ip_and_url_bruteforce
  stick-table type binary len 8 size 1m expire 10m store http_req_rate(3m)
Then add an http-request track-sc2 and an http-request deny line to the frontend:
http-request track-sc2 base32+src table per_ip_and_url_bruteforce if METH_POST { path /login }
http-request deny if { sc_http_req_rate(2) gt 10 }
You now have a stick table and rules that will detect repeated POST requests to the /login URL, as would be seen when an attacker attempts to find valid logins. Note how the ACL { path /login } restricts this to a specific URL. It is optional: omit it to rate limit every path that clients POST to, as shown below. Read our post Introduction to HAProxy ACLs for more information about defining custom rules using ACLs.
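For example, here's a minimal sketch of that broader variant, which tracks and limits POST requests to any path:

http-request track-sc2 base32+src table per_ip_and_url_bruteforce if METH_POST
http-request deny if { sc_http_req_rate(2) gt 10 }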
In addition to denying the request, you can also use any of the responses discussed in the Verifying Real Users section above in order to give valid users who happen to get caught in this net another chance.
Vulnerability scanners
Vulnerability scanners are a threat you face as soon as you expose your site or application to the Internet. Generic vulnerability scanners will probe your site for many different paths, trying to determine whether you are running any known vulnerable, third-party applications.
Many site owners, appropriately, turn to a Web Application Firewall for such threats, such as the WAF that HAProxy Enterprise provides as a native module. However, many security experts agree that it’s beneficial to have multiple layers of protection. By using a combination of stick tables and ACLs, you’re able to detect vulnerability scanners before they are passed through to the WAF.
When a bot scans your site, it will typically try to access paths that don't exist within your application, such as /phpmyadmin and /wp-admin. Because the backend will respond with 404s to these requests, HAProxy can detect this behavior using the http_err_rate fetch, which tracks the rate of requests from the client that resulted in a 4xx response code from the backend.
These vulnerability scanners usually make their requests pretty quickly. However, since high rates of 404s are fairly uncommon, you can add the http_err_rate counter to your existing per_ip_rates table:
backend per_ip_rates
  stick-table type ip size 1m expire 24h store gpc0,gpc0_rate(30s),http_err_rate(5m)
Now, with that additional counter and the http-request track-sc0 already in place, you have the 4xx rate for clients and can view it via the Runtime API. Block them simply by adding the following line:
http-request deny if { sc_http_err_rate(0) gt 10 }
You can also use the gpc0 counter that we are using for the scrapers to block them for a longer period of time:
http-request sc-inc-gpc0(0) if { sc_http_err_rate(0) eq 1 } !exceeds_limit
Now the same limits that apply to scrapers will apply to vulnerability scanners, blocking them quickly before they succeed in finding vulnerabilities.
Alternatively, you can shadowban these clients and send their requests to a honeypot backend, which will not give the attacker any reason to believe that they have been blocked. Therefore, they will not attempt to evade the block. To do this, add the following in place of the http-request deny above. Be sure to define the backend be_honeypot:
use_backend be_honeypot if { sc_http_err_rate(0) gt 10 }
Related Article: Security Threats to Websites
Whitelisting Good Bots
Although our strategy is very effective at detecting and blocking bad bots, it will also catch Googlebot, BingBot, and other friendly search crawlers with equal ease. You will want to welcome these bots, not banish them.
The first step to fixing this is to decide which bots you want to allow so that they don't get blocked. You'll build a list of good-bot IP addresses, which you will need to update on a regular basis. The process takes some time, but it is worth the effort! Google provides a helpful tutorial. Follow these steps:
1. Make a list of strings found in the User-Agent headers of good bots (e.g. GoogleBot).
2. Grep for those strings in your access logs and extract the source IP addresses.
3. Run a reverse DNS query to verify that each IP really belongs to a good bot. There are plenty of bad bots masquerading as good ones.
4. Check the forward DNS of the record you got in step 3 to ensure that it maps back to the bot's IP; otherwise, an attacker could host fake reverse DNS records to confuse you.
5. Use whois to extract the IP range from the whois listing so that you cover a larger number of IPs. Most companies are good about keeping their search bots and proxies within their own IP ranges.
6. Export this list of IPs to a file with one IP or CIDR netmask per line (e.g. 192.168.0.0/24), as in the sample file below.
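As an illustration, a whitelist.acl file is simply one entry per line. These are documentation-range placeholders rather than real crawler addresses:

192.0.2.0/24
198.51.100.0/24
203.0.113.42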
Now that you have a file containing the IP addresses of good bots, you will want to apply it to HAProxy so that these bots aren't affected by your blocking rules. Save the file as whitelist.acl and then change the http-request track-sc1 line to the following:
http-request track-sc1 url32+src table per_ip_and_url_rates unless { path_end .css .js .png .jpeg .gif } || { src -f /etc/hapee-1.8/whitelist.acl }
Now, search engines won’t get their page views counted as scraping. If you have multiple files, such as another for whitelisting admin users, you can order them like this:
unless { src -f /etc/hapee-1.8/whitelist.acl -f /etc/hapee-1.8/admins.acl }
When using whitelist files, it’s a good idea to ensure that they are distributed to all of your HAProxy servers and that each server is updated during runtime. An easy way to accomplish this is to purchase HAProxy Enterprise and use its lb-update module. This lets you host your ACL files at a URL and have each load balancer fetch updates at a defined interval. In this way, all instances are kept in sync from a central location.
Identifying Bots By Their Location
When it comes to identifying bots, using geolocation data to place different clients into categories can be a big help. You might decide to set a different rate limit for China, for example, if you were able to tell which clients originated from there.
In this section, you’ll see how to read geolocation databases with HAProxy. This can be done with either HAProxy Enterprise or HAProxy Community, although in different ways.
Geolocation with HAProxy Enterprise
HAProxy Enterprise provides modules that will read MaxMind and Digital Element geolocation databases natively. You can also read them with HAProxy Community, but you must first convert them to map files and then load the maps into HAProxy.
Let’s see how to do this with MaxMind using HAProxy Enterprise.
MaxMind
First, load the database by adding the following directives to the global section of your configuration:
module-load hapee-lb-maxmind.so
maxmind-load COUNTRY /etc/hapee-1.8/geolocation/GeoLite2-Country.mmdb
maxmind-cache-size 10000
Within your frontend, use http-request set-header to add a new HTTP header to all requests, which captures the client's country:
http-request set-header x-geoip-country %[src,maxmind-lookup(COUNTRY,country,iso_code)]
Now, requests to the backend will include a new header that looks like this:
x-geoip-country: US
You can also add the line maxmind-update url https://example.com/maxmind.mmdb to have HAProxy automatically update the database from a URL during runtime.
Digital Element
If you're using Digital Element for geolocation, you can do the same as with MaxMind by adding the following to the global section of your configuration:
module-load hapee-lb-netacuity.so
netacuity-load 26 /etc/hapee-1.8/geolocation/netacuity/
netacuity-cache-size 10000
Then, inside of your frontend, add an http-request set-header line:
http-request set-header x-geoip-country %[src,netacuity-lookup-ipv4("pulse-two-letter-country")]
This adds a header to all requests, which contains the client’s country:
x-geoip-country: US
To have HAProxy automatically update the Digital Element database during runtime, add netacuity-update url https://example.com/netacuity_db to your global section.
Read the next section if you're using HAProxy Community; otherwise, skip ahead to the Using the Location Information section.
Geolocation with HAProxy Community
If you're using HAProxy Community, you'll first want to convert the geolocation database to map files. In the following example, we'll convert the MaxMind City database into HAProxy map files.
First, make a file named read_city_map.py with the following contents:
import sys

ip_blocks_file = sys.argv[1]
city_locations_file = sys.argv[2]

# First load the city locations into memory, as we will be using them a lot
city_locations = {}
city_locations_handle = open(city_locations_file, 'r')
for city_location_line in city_locations_handle.readlines():
    city_location_parts = city_location_line.split(",")
    if len(city_location_parts) < 13:
        continue
    if not city_location_parts[0].isdigit():
        continue
    location_id = city_location_parts[0]
    locale_code = city_location_parts[1]
    continent_code = city_location_parts[2]
    continent_name = city_location_parts[3]
    country_iso_code = city_location_parts[4]
    country_name = city_location_parts[5]
    subdivision_1_iso_code = city_location_parts[6]
    subdivision_1_name = city_location_parts[7]
    subdivision_2_iso_code = city_location_parts[8]
    subdivision_2_name = city_location_parts[9]
    city_name = city_location_parts[10]
    metro_code = city_location_parts[11]
    time_zone = city_location_parts[12]
    city_locations[location_id] = [country_iso_code, city_name]

# Next build the country_iso_code and city_name files with this data
# Open map file handles
country_iso_code_file = open('country_iso_code.map', 'w')
city_name_file = open('city_name.map', 'w')
gps_map_file = open('gps.map', 'w')

# Process the lines of the ip block file
ip_blocks_handle = open(ip_blocks_file, 'r')
for ip_block_line in ip_blocks_handle.readlines():
    ip_block_line_parts = ip_block_line.split(',')
    if len(ip_block_line_parts) < 9:
        continue
    network = ip_block_line_parts[0]
    geoname_id = ip_block_line_parts[1]
    # Per the docs, "registered" is where the IP is registered, rather than used
    registered_country_geoname_id = ip_block_line_parts[2]
    # "represented" only applies to military bases/etc. and is their country
    represented_country_geoname_id = ip_block_line_parts[3]
    is_anonymous_proxy = ip_block_line_parts[4]
    is_satellite_provider = ip_block_line_parts[5]
    postal_code = ip_block_line_parts[6]
    latitude = ip_block_line_parts[7]
    longitude = ip_block_line_parts[8].rstrip()  # Last column gets a newline appended to it
    if geoname_id not in city_locations:
        continue
    # Write the country map line
    country_iso_code_file.write(network + ' ' + city_locations[geoname_id][0] + '\n')
    # Write the city map line
    city_name_file.write(network + ' ' + city_locations[geoname_id][1].strip('"') + '\n')
    # Write the GPS map line
    gps_map_file.write(network + ' ' + longitude + ", " + latitude + '\n')

country_iso_code_file.close()
city_name_file.close()
gps_map_file.close()
Next, download the MaxMind City database in CSV format (with minor modifications, this script will also work with the Country-only database). Either the free GeoLite2 City or the paid City database CSV files will produce the same output. Then, extract the zip file.
When you run this script with the Blocks CSV as the first argument and the Locations CSV as the second argument, it will produce the files country_iso_code.map, city_name.map, and gps.map:
python read_city_map.py GeoLite2-City-CSV_20181127/GeoLite2-City-Blocks-IPv4.csv GeoLite2-City-CSV_20181127/GeoLite2-City-Locations-en.csv
Use http-request set-header to add an HTTP header, as we did in the previous Enterprise examples:
http-request set-header x-geoip-country %[src,map(/etc/hapee-1.8/country_iso_code.map)]
Once again, we end up with a header that contains the client's country:
x-geoip-country: US
We’ll use it in the next section.
Using the location information
Whether you used HAProxy Enterprise or HAProxy Community to get the geolocation information, you can now use it to make decisions. For example, you could route clients that trigger too many errors to a special, honeypot backend. With geolocation data, the threshold that you use might be higher or lower for some countries.
use_backend be_honeypot if { sc_http_err_rate(0) gt 5 } { req.hdr(x-geoip-country) CN }
Since this information is stored in an HTTP header, your backend server will also have access to it, which gives you the ability to take further action from there. We won’t get into it here, but HAProxy also supports device detection and other types of application intelligence databases.
Conclusion
In this blog post, you learned how to identify and ban bad bots from your website by using the powerful configuration language within the HAProxy load balancer. Placing this type of bot management in front of your servers will protect you from these crawlers as they attempt content scraping, brute forcing and mining for security vulnerabilities.
HAProxy Enterprise gives you several options in how you deal with these threats, allowing you to block them, send them to a dedicated backend, or present a challenge to them. Need help constructing an HAProxy configuration for bot management and protection that accommodates your unique environment? Contact us to learn more or sign up for a free trial. HAProxy Technologies’ expert support team has many decades of experience mitigating many types of bot threats. They can help provide an approach tailored to your needs.
Are you using HAProxy for your bot defense? Let us know in the comment section below! Want to stay up to date as we publish similar topics? Subscribe to this blog and follow us on Twitter!