Today’s microservices-powered architectures require the ability to make frequent application delivery changes in an automated and reliable way. One of HAProxy’s top microservices users performs 20,000 backend server updates per day per physical machine, with thousands of machines in its fleet. HAProxy is able to fully address such requirements through its extensive Runtime API, which can be leveraged to dynamically scale backend servers up and down at runtime.
We recently published a blog post on HAProxy’s hitless reloads and upgrades which described a new, completely safe reload mechanism in HAProxy. However, reloads can still cause other types of issues and are best kept to a minimum; for example, frequent reloads combined with long-running connections can result in a temporary accumulation of old processes or an increase in load.
To minimize reloads, we can use HAProxy’s Runtime API which can be accessed over a TCP or Unix socket. The API provides “live” access to numerous HAProxy features, including changing backend server addresses, ports, weights, and states; enabling and disabling backends; updating maps, ACLs, and TLS ticket keys; providing session lists, recent error logs, and other debugging information. By taking advantage of the backend-related runtime functionality, we can dynamically scale backend servers without triggering any reloads.
In this blog post, we will be using the Runtime API to implement dynamic scaling. Consul will be used for service discovery. The general approach described here is reusable and could be adapted to any service discovery tool or microservices orchestration system you might be using.
The HAProxy Runtime API – Introduction
The HAProxy Runtime API is exposed over a TCP or Unix socket.
The Runtime API is fully described in the HAPEE Management Guide, section 9.3, and can be enabled with the following global HAProxy configuration:
global
    ...
    stats socket /var/run/hapee-lb.sock mode 666 level admin
    stats socket ipv4@127.0.0.1:9999 level admin
    stats timeout 2m
    ...
Once the Runtime API is enabled, it can be accessed manually by using the command “socat”:
$ echo "<command>" | socat stdio /var/run/hapee-lb.sock
$ echo "<command>" | socat stdio tcp4-connect:127.0.0.1:9999
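The same commands can also be sent from any programming language. Here is a minimal Python 3 sketch (the socket paths match the configuration above; the helper name is our own):

```python
import socket

def runtime_api(target, command):
    """Send one Runtime API command and return the raw text response.

    'target' is either a Unix socket path (str) or a (host, port) tuple,
    matching the two 'stats socket' lines shown above.
    """
    family = socket.AF_UNIX if isinstance(target, str) else socket.AF_INET
    s = socket.socket(family, socket.SOCK_STREAM)
    s.settimeout(10)
    s.connect(target)
    s.sendall(command.encode() + b"\n")
    chunks = []
    while True:
        # HAProxy closes the connection when the response is complete
        buf = s.recv(4096)
        if not buf:
            break
        chunks.append(buf)
    s.close()
    return b"".join(chunks).decode()

# Example (requires a running HAProxy with the socket configured):
# print(runtime_api("/var/run/hapee-lb.sock", "show info"))
```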
The Runtime API exposes a number of useful features and options, all executable in runtime without reloading the service.
The output of the “help” command displays the main groups of functions available. Here is the output from HAProxy version 1.8-dev2. If your HAProxy is missing any particular command, please ensure that you are using one of the newer releases – the Runtime API has evolved over time.
$ echo "help" | socat stdio /var/run/hapee-lb.sock
help : this message
prompt : toggle interactive mode with prompt
quit : disconnect
show errors : report last request and response errors for each proxy
clear counters : clear max statistics counters (add 'all' for all counters)
show info : report information about the running process
show stat : report counters for each proxy and server
show schema json : report schema used for stats
disable agent : disable agent checks (use 'set server' instead)
disable health : disable health checks (use 'set server' instead)
disable server : disable a server for maintenance (use 'set server' instead)
enable agent : enable agent checks (use 'set server' instead)
enable health : enable health checks (use 'set server' instead)
enable server : enable a disabled server (use 'set server' instead)
set maxconn server : change a server's maxconn setting
set server : change a server's state, weight or address
get weight : report a server's current weight
set weight : change a server's weight (deprecated)
show sess [id] : report the list of current sessions or dump this session
shutdown session : kill a specific session
shutdown sessions server : kill sessions on a server
clear table : remove an entry from a table
set table [id] : update or create a table entry's data
show table [id]: report table usage stats or dump this table's contents
disable frontend : temporarily disable specific frontend
enable frontend : re-enable specific frontend
set maxconn frontend : change a frontend's maxconn setting
show servers state [id]: dump volatile server information (for backend <id>)
show backend : list backends in the current running config
shutdown frontend : stop a specific frontend
set dynamic-cookie-key backend : change a backend secret key for dynamic cookies
enable dynamic-cookie backend : enable dynamic cookies on a specific backend
disable dynamic-cookie backend : disable dynamic cookies on a specific backend
show stat resolvers [id]: dumps counters from all resolvers section and
associated name servers
set maxconn global : change the per-process maxconn setting
set rate-limit : change a rate limiting value
set timeout : change a timeout setting
show env [var] : dump environment variables known to the process
show cli sockets : dump list of cli sockets
add acl : add acl entry
clear acl : clear the content of this acl
del acl : delete acl entry
get acl : report the patterns matching a sample for an ACL
show acl [id] : report available acls or dump an acl's contents
add map : add map entry
clear map : clear the content of this map
del map : delete map entry
get map : report the keys and values matching a sample for a map
set map : modify map entry
show map [id] : report available maps or dump a map's contents
show pools : report information about the memory pools usage
To see the complete Runtime API documentation, please refer to the HAPEE Management Guide, section 9.3.
Dynamic Scaling
For our dynamic scaling example, we are going to use two machines, ‘virtdeb1’ and ‘virtdeb2’, both running Debian 8. They are each running Apache (on port 8080) and Consul (on port 8500).
Consul
There is a convenient guide for configuring Consul available, but to give you a quick representation of the configuration, here is the Consul state as reported by Consul in our example setup, after it has been configured:
$ ./consul members
Node Address Status Type Build Protocol DC
virtdeb2 192.168.122.66:8301 alive server 0.8.3 2 dc1
virtdeb1 192.168.122.185:8301 alive server 0.8.3 2 dc1
The web service is also configured in Consul via the following JSON file in the services directory:
{
  "service": {
    "name": "my-cluster",
    "port": 8080,
    "tags": ["web"],
    "check": {
      "script": "wget localhost:8080 > /dev/null 2>&1",
      "interval": "30s"
    }
  }
}
Please note that we are not running the health checks in the above example very frequently, as those are best handled by HAProxy. Depending on your environment, you might not want health checks in Consul at all, but in our example, they are used to conveniently and automatically remove a server from the list once it stops responding.
Now we can run wget -qO- localhost:8500/v1/catalog/service/my-cluster and see a JSON response that has both nodes listed in it.
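The response is a JSON array with one entry per node. Abridged and illustrative here (extra fields trimmed; values assume the setup above, with the web service registered on Apache’s port 8080):

```json
[
  {
    "Node": "virtdeb1",
    "Address": "192.168.122.185",
    "ServiceName": "my-cluster",
    "ServicePort": 8080
  },
  {
    "Node": "virtdeb2",
    "Address": "192.168.122.66",
    "ServiceName": "my-cluster",
    "ServicePort": 8080
  }
]
```

The "Address" and "ServicePort" fields are the ones our update script will use later.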
HAProxy
For HAProxy, we are going to start with a simple, but complete configuration file:
global
    maxconn 10000
    log 127.0.0.1 local2 info
    pidfile /var/run/hapee-1.7/hapee-lb.pid
    user daemon
    group daemon
    stats socket /var/run/hapee-lb.sock mode 666 level admin
    daemon

defaults
    mode http
    log global
    option httplog
    timeout connect 10s
    timeout client 300s
    timeout server 300s

frontend fe_main
    bind *:80
    default_backend be_template

backend be_template
    balance roundrobin
    option httpchk HEAD /
We will be adding additional configuration lines to it as we progress with our example setup.
Configuration Updates
There are two general methods for updating the HAProxy configuration:
Updating the configuration dynamically using the HAProxy Runtime API (completely avoiding a reload), optionally saving the current state to the runtime state file
Building the HAProxy configuration file from a template and invoking a reload (+ using the recently added hitless reloads capability)
Please note that you don’t have to choose one of these approaches – you could combine them. In this blog post, we are going to show how to configure the Runtime API approach (plus using the state file), as well as how to configure a hybrid approach using the Runtime API (with no state file) and with a configuration file built from a template. The hybrid approach could be optimal for your environment when you want to implement dynamic scaling, but also keep the running configuration and the contents of the configuration file identical.
Runtime API
The following configuration file additions will help us configure HAProxy for effective use of the Runtime API:
First, we could use the server templates feature (which was recently added to the HAProxy Enterprise Edition (get a free trial) and to the development branch of HAProxy Community Edition) to quickly define template/placeholder slots for up to n backend servers:
server-template websrv 1-100 192.168.122.1:8080 check disabled
This configuration is equivalent to writing out server websrvX 192.168.122.1:8080 check disabled 100 times, with X replaced by a number incrementing from 1 to 100, inclusive. The servers are added in the disabled state, and it is expected that your server template range (“1-100”) will be larger than the number of servers you currently have, to allow for runtime/dynamic scaling to up to n configured backend server slots.
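To make the expansion concrete, here is a small Python sketch (illustrative only, not HAProxy’s actual implementation) of what the directive unrolls to:

```python
def expand_server_template(prefix, first, last, address, options):
    """Mimic, illustratively, how server-template unrolls into server lines."""
    return ["server %s%d %s %s" % (prefix, i, address, options)
            for i in range(first, last + 1)]

# server-template websrv 1-100 192.168.122.1:8080 check disabled
lines = expand_server_template("websrv", 1, 100, "192.168.122.1:8080", "check disabled")
# lines[0] is "server websrv1 192.168.122.1:8080 check disabled"
```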
After the configuration is in place, we could manually invoke the Runtime API to configure or update the servers. For example, we could update the IP address and port of ‘websrv1’, and change it from a ‘disabled’ to a ‘ready’ state:
echo "set server be_template/websrv1 addr 192.168.122.42 port 8080" | socat stdio /var/run/hapee-lb.sock
echo "set server be_template/websrv1 state ready" | socat stdio /var/run/hapee-lb.sock
In addition to using the server-template directive, we are also going to ensure that any runtime changes are written out to the state file, and that the state file is loaded on reload/restart:
global
    server-state-file /usr/local/haproxy/haproxy.state

defaults
    load-server-state-from-file global
Please note that the state file is not updated automatically. To save state, after making changes and before invoking an HAProxy reload or restart, you would run echo "show servers state" | socat stdio /var/run/hapee-lb.sock > /usr/local/haproxy/haproxy.state
The above configuration will allow us to automatically persist most of the settings that can be changed through the Runtime API. However, please note that at the moment it will not persist port numbers set at runtime. The ability to save ports to the state file is coming soon; if your environment depends on varying port numbers and you want to implement this right now, you can simply switch to using a configuration file template, as explained below.
Another possible reason for preferring a configuration file template over the “server-template” and “server-state-file” directives would be to minimize “drift” between the literal contents of the configuration file and the configuration which is loaded and running in your HAProxy processes.
Configuration file from a template
Templates can be written for any templating engine, but in this case, we are going to use jinja2 for Python. Instead of adding the “server-template” line mentioned above to your HAProxy configuration file, you would copy the whole configuration file to a new template file (named, e.g., “haproxy.tmpl”). Then, in the template, you would replace the “server-template” line (or the list of backend servers) with the following:
{{ backends_be_template }}
This template would then be evaluated by a script to replace the placeholder with the actual list of backend servers and ports, and to produce the final configuration file. The list of servers/ports might come from your orchestration system or any other source. An example of such a script is included in the next section.
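As a minimal sketch of the evaluation step (using plain string replacement for brevity; the full script in the next section uses jinja2 instead):

```python
def render_template(template_text, backend_lines):
    """Swap the placeholder for the generated server lines."""
    return template_text.replace("{{ backends_be_template }}", backend_lines)

template = "backend be_template\n    balance roundrobin\n{{ backends_be_template }}"
cfg = render_template(template, "    server websrv1 192.168.122.185:8080 check\n")
```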
Using a hybrid approach
In a hybrid approach, we would want to use the Runtime API to apply changes to a running HAProxy instance immediately, but we would also want to keep the configuration file in sync using the template-based approach. So, a helper script should do the following:
Get a list of backends from the orchestration system
Compare the list with the backends active in HAProxy
Adjust the backend addresses and ports using the Runtime API
Rebuild the HAProxy configuration with server lines and write it to the disk
Without further ado, here is an example Python script that implements the above steps.
#!/usr/bin/python
import requests
import socket
import sys
from jinja2 import Template

Consul_api_server = "http://localhost:8500"
Consul_service = "my-cluster"
Haproxy_servers = ["/var/run/hapee-lb.sock"]
Backend_name = "be_template"

# The following are only used for building the configuration from a template
Haproxy_template_file = "/home/haproxy/haproxy/haproxy.tmpl"
Haproxy_config_file = "/home/haproxy/haproxy/haproxy.cfg"
Haproxy_spare_slots = 4
Backend_base_name = "websrv"

def send_haproxy_command(haproxy_server, command):
    # A leading "/" means a Unix socket path; otherwise expect a (host, port) tuple
    if haproxy_server[0] == "/":
        haproxy_sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    else:
        haproxy_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    haproxy_sock.settimeout(10)
    try:
        haproxy_sock.connect(haproxy_server)
        haproxy_sock.send(command)
        retval = ""
        while True:
            buf = haproxy_sock.recv(16)
            if buf:
                retval += buf
            else:
                break
    except:
        retval = ""
    finally:
        haproxy_sock.close()
    return retval

def build_config_from_template(backends):
    backend_block = ""
    i = 0
    for backend in backends:
        i += 1
        backend_block += "    server %s%d %s:%s cookie %s%d check\n" % (Backend_base_name, i, backend[0], backend[1], Backend_base_name, i)
    for disabled_slot in range(0, Haproxy_spare_slots):
        i += 1
        backend_block += "    server %s%d 0.0.0.0:80 cookie %s%d check disabled\n" % (Backend_base_name, i, Backend_base_name, i)
    try:
        haproxy_template_fh = open(Haproxy_template_file, 'r')
        haproxy_template = Template(haproxy_template_fh.read())
        haproxy_template_fh.close()
    except:
        print("Failed to read HAProxy config template")
        return
    template_values = {}
    template_values["backends_%s" % (Backend_name)] = backend_block
    try:
        haproxy_config_fh = open(Haproxy_config_file, 'w')
        haproxy_config_fh.write(haproxy_template.render(template_values))
        haproxy_config_fh.close()
    except:
        print("Failed to write HAProxy config file")

if __name__ == "__main__":
    # First, get the servers we need to add
    try:
        consul_json = requests.get("%s/v1/catalog/service/%s" % (Consul_api_server, Consul_service))
        consul_json.raise_for_status()
        consul_service = consul_json.json()
    except:
        print("Failed to get backend list from Consul.")
        sys.exit(1)

    backend_servers = []
    for server in consul_service:
        backend_servers.append([server['Address'], server['ServicePort']])

    if len(backend_servers) < 1:
        print("Consul didn't return any servers.")
        sys.exit(2)

    # Now update each HAProxy server with the backends in question
    for haproxy_server in Haproxy_servers:
        haproxy_slots = send_haproxy_command(haproxy_server, "show stat\n")
        if not haproxy_slots:
            print("Failed to get current backend list from HAProxy socket.")
            sys.exit(3)
        haproxy_slots = haproxy_slots.split('\n')
        haproxy_active_backends = {}
        haproxy_inactive_backends = []
        for backend in haproxy_slots:
            backend_values = backend.split(",")
            if len(backend_values) > 80 and backend_values[0] == Backend_name:
                server_name = backend_values[1]
                if server_name == "BACKEND":
                    continue
                server_state = backend_values[17]
                server_addr = backend_values[73]
                if server_state == "MAINT":
                    # Any server in MAINT is assumed to be unconfigured and free to use
                    # (to stop a server for your own work, use 'DRAIN' so the script skips it)
                    haproxy_inactive_backends.append(server_name)
                else:
                    haproxy_active_backends[server_addr] = server_name
        haproxy_slots = len(haproxy_active_backends) + len(haproxy_inactive_backends)
        for backend in backend_servers:
            if "%s:%s" % (backend[0], backend[1]) in haproxy_active_backends:
                # Ignore backends that are already set
                del haproxy_active_backends["%s:%s" % (backend[0], backend[1])]
            else:
                if len(haproxy_inactive_backends) > 0:
                    backend_to_use = haproxy_inactive_backends.pop(0)
                    send_haproxy_command(haproxy_server, "set server %s/%s addr %s port %s\n" % (Backend_name, backend_to_use, backend[0], backend[1]))
                    send_haproxy_command(haproxy_server, "set server %s/%s state ready\n" % (Backend_name, backend_to_use))
                else:
                    print("WARNING: Not enough backend slots in backend")
        # Any active backends left over are no longer in Consul; put them in maintenance
        for remaining_server in haproxy_active_backends:
            send_haproxy_command(haproxy_server, "set server %s/%s state maint\n" % (Backend_name, haproxy_active_backends[remaining_server]))

    # Finally, rebuild the HAProxy configuration for restarts/reloads
    build_config_from_template(backend_servers)
For the script to work, you would need the “requests” and “jinja2” packages installed for Python; using pip or your operating system’s package manager should accomplish the task.
Running the script
The script could be run on the machine running HAProxy, invoked from cron.
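For example, a crontab entry running the script once a minute might look like this (the script path here is illustrative and matches the handler path used in the Consul “watches” example below):

```
* * * * * /usr/bin/python /usr/local/haproxy/update_haproxy_from_consul.py
```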
The script could also be run on a separate server on which Consul is running. In that case, you would need a mechanism for transferring the finished template to the HAProxy server, and you would need to modify the “Haproxy_servers” variable to point to a remote IP and port rather than a Unix socket, to be able to access the Runtime API. (A specific example of modifying the script to connect to a remote machine is included near the bottom of this post.)
Finally, instead of using cron for periodically invoking the script, you could use the “watches” section in Consul:
{
  "watches": [
    {
      "type": "service",
      "service": "my-cluster",
      "handler": "/usr/bin/python /usr/local/haproxy/update_haproxy_from_consul.py"
    }
  ]
}
With a “watches” section, whenever a server is added to or removed from the service, Consul automatically runs the script.
Modifying / improving the script
The script could be modified and improved in any way you would prefer.
If the option “server-template” suits your needs and you do not need to use a configuration file template, you might wish to edit the script and remove the jinja2 invocations. This would be done by commenting out the line “from jinja2 import Template” near the top of the script, and by replacing the last line of the script (“build_config_from_template(…)”) with a command that saves the state to the state file.
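For instance, the final build_config_from_template(backend_servers) call could be replaced with something along these lines (a sketch; the helper name is our own, and the state-file path matches the server-state-file directive configured earlier):

```python
def save_state_file(state_text, path):
    """Persist 'show servers state' output so it is picked up on reload/restart."""
    with open(path, "w") as fh:
        fh.write(state_text)

# In the script's main block, instead of build_config_from_template(...):
# state = send_haproxy_command(haproxy_server, "show servers state\n")
# save_state_file(state, "/usr/local/haproxy/haproxy.state")
```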
Other common modifications you might want to implement in the script would be to support multiple HAProxy processes or to support HAProxy instances on machines other than the one running the script. Both are briefly explained below:
Multiple Processes
If your HAProxy configuration has the ‘nbproc’ value set to more than ‘1’, you might notice a warning about the stats sockets not being bound to a specific process. To get around that, create one socket for each process in the global section:
nbproc 3
stats socket /var/run/hapee-lb-1.sock mode 660 level admin process 1
stats socket /var/run/hapee-lb-2.sock mode 660 level admin process 2
stats socket /var/run/hapee-lb-3.sock mode 660 level admin process 3
Then add each socket to the “Haproxy_servers” variable near the top of the script:
Haproxy_servers=["/var/run/hapee-lb-1.sock", "/var/run/hapee-lb-2.sock", "/var/run/hapee-lb-3.sock"]
Now all three processes will be updated with the same backends.
Script running on another server
As mentioned, the Runtime API is also accessible over TCP, so you can enable the TCP-based API and update the HAProxy configurations on other machines using the same script.
stats socket 192.168.122.185:8181 level admin
If you are using nbproc > 1, the configuration would look like this:
stats socket 192.168.122.185:8181 level admin process 1
stats socket 192.168.122.185:8182 level admin process 2
stats socket 192.168.122.185:8183 level admin process 3
Then, in the script you would specify IPs and ports instead of (or in addition to) Unix sockets:
Haproxy_servers=[("192.168.122.185",8181),("192.168.122.185",8182),("192.168.122.185",8183)]
The above configuration tips can be used with any other Runtime API functionality as well.
Conclusion
By using the HAProxy Runtime API we can dynamically scale backend servers and generally update the “live” configuration without requiring a reload.
We can conveniently use the new server-template feature to configure up to n backend server slots.
If you would like to use server-template without waiting for the stable release of HAProxy 1.8, please see our HAProxy Enterprise Edition – Trial Version.
The general approach shown in this blog post is reusable and could be adapted to any service discovery tool or microservices orchestration system you might be using. Contact HAProxy Technologies if you would like us to provide you with expert advice on how to best integrate the solution into your existing infrastructure.
Stay tuned for more blog posts on using microservices with HAProxy, and happy scaling!