# Ansible
Ansible is a configuration management solution, implemented primarily in Python, that you can use to manage your load balancer deployment. Unlike most configuration management software, Ansible offers an ad-hoc mode in which tasks can be run manually. In many ways, this is similar to running commands via shell scripts or manually via SSH.
This guide shows how to run ad-hoc Ansible commands to control the load balancer, as well as how to organize more complex tasks into Ansible playbooks: YAML-formatted task definitions executed with the `ansible-playbook` command.
## Install Ansible
1. Install the latest version of Ansible onto your workstation. This will be the control node from which you manage your load balancer nodes. Note that Ansible does not support Windows as a control node.
2. Install Ansible Lint onto your workstation. It identifies syntax and spacing errors and provides style recommendations and deprecation warnings.
3. Install `socat` on your load balancers so that Ansible can invoke Runtime API commands:

   ```nix
   # Debian/Ubuntu
   sudo apt-get install socat
   # RHEL/CentOS
   sudo yum install socat
   # SUSE
   sudo zypper install socat
   # FreeBSD
   sudo pkg install socat
   ```
4. Ansible uses SSH to communicate with the remote Linux servers, so it expects that you have examined and accepted each remote server's SSH host key. Connect via SSH to the machines where the load balancer is installed and accept the host key:

   ```nix
   ssh lb1.example.com
   ```

   ```text
   The authenticity of host 'lb1.example.com (100.200.1.6)' can't be established.
   ECDSA key fingerprint is SHA256:dzUE7CyUTeE98A5WKUT8DyNwvNqFO3CcJtRQFvsa4xk.
   Are you sure you want to continue connecting (yes/no)? yes
   ```
5. Update your Ansible inventory file, `/etc/ansible/hosts`, by adding your load balancer servers:

   ```ini
   [loadbalancers]
   lb1.example.com
   lb2.example.com
   ```
6. Install Python and `pip`, the Python package manager, onto the load balancer servers. Ansible requires this on all nodes that it manages.
7. After Python is installed, run the following Ansible `ping` command to verify that everything is working:

   ```nix
   ansible loadbalancers -u root -m ping
   ```

   ```text
   loadbalancers | SUCCESS => {
       "changed": false,
       "ping": "pong"
   }
   ```

   The `-u` flag defines the remote user for the SSH connection and `-m` tells Ansible which module to use. `ping` is one of the pre-installed modules.
8. Many of the Ansible commands you'll use require the Runtime API. To enable it, see this guide.
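The ad-hoc examples in this guide assume the Runtime API is exposed as a Unix socket at `/var/run/hapee-2.9/hapee-lb.sock`. As a rough sketch, a `global` section along these lines would expose it; the exact path, ownership, and service name depend on your installation:

```text
# /etc/hapee-2.9/hapee-lb.cfg (fragment; assumed paths)
global
    # admin level permits state-changing commands such as "disable server"
    stats socket /var/run/hapee-2.9/hapee-lb.sock mode 660 level admin
```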
## Ansible ad-hoc command usage
Unlike most configuration management engines, Ansible lets you issue on-the-fly commands to reconfigure the state of multiple machines. It shows you the state of the infrastructure and allows you to perform runtime changes across multiple machines easily.
The most basic command, which we introduced in the last section, uses the `ping` module to tell you whether a machine is alive:

```nix
ansible loadbalancers -u root -m ping
```
You can run shell commands on the remote machines by specifying the `-a` argument. Below, we invoke the `netstat` command on the remote load balancer servers:

```nix
ansible loadbalancers -u root -a "netstat -tlpn"
```
To use shell features like pipes and output redirection, use the `shell` module:

```nix
ansible loadbalancers -u root -m shell -a "netstat -tlpn | grep :80"
```
Note that if you run as a non-root user, some commands require sudo privileges to work properly. In that case, pass the `--become` flag (short form: `-b`) when executing commands. Below, we specify `--become` to ensure that the `socat` package is installed on the system:
```nix
ansible loadbalancers --become --ask-become-pass \
  -m apt -a "name=socat state=present update_cache=yes"
```
The `--ask-become-pass` argument prompts you to enter your sudo password. Note that for `--ask-become-pass` to work correctly, all machines must have the same password for the sudo user.
Rather than prompting for the sudo password, you can enable passwordless sudo. To do so, add the `NOPASSWD:` directive to your user or group using the `visudo` command:
```nix
sudo visudo
```
Then edit the file as shown below and save it:

```nix
# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) NOPASSWD: ALL
```
Then you can use `--become` without `--ask-become-pass`.
Next, you can check whether the load balancer service is running on a node. The `--check` argument works with most modules and shows what changes would be made without actually making them. In this case, we only want to see whether the load balancer is active, not start it otherwise.
```nix
ansible loadbalancers -u root -m service -a "name=hapee-2.9-lb state=started" --check
```

```text
haproxy-ams | SUCCESS => {
    "changed": false,
    "name": "hapee-2.9-lb",
    "state": "started",
    ...
```
To perform a hitless reload, use `state=reloaded` and omit the `--check` parameter:

```nix
ansible loadbalancers -u root -m service -a "name=hapee-2.9-lb state=reloaded"
```
We can combine the above commands with the `copy` module to sync an arbitrary configuration to multiple hosts and then reload the load balancer to apply the changes:

```nix
ansible loadbalancers -u root -m copy -a "src=/home/user/hapee-lb.cfg dest=/etc/hapee-2.9/hapee-lb.cfg"
```

```nix
ansible loadbalancers -u root -m service -a "name=hapee-2.9-lb state=reloaded"
```
Doing this for more than a few commands is tedious, which is why you might define an Ansible playbook instead. In a pinch, though, ad-hoc commands are quite useful.

Ad-hoc commands can also interact directly with the HAProxy Runtime API. For example, to disable a specific server in a specific backend:
```nix
ansible loadbalancers -u root -m shell -a "echo 'disable server bk_www/www-01-server' | socat stdio unix-connect:/var/run/hapee-2.9/hapee-lb.sock"
```
Or, to show various debugging statistics:

- `show stat`

  ```nix
  ansible loadbalancers -u root -m shell -a "echo 'show stat' | socat stdio unix-connect:/var/run/hapee-2.9/hapee-lb.sock"
  ```

- `show info`

  ```nix
  ansible loadbalancers -u root -m shell -a "echo 'show info' | socat stdio unix-connect:/var/run/hapee-2.9/hapee-lb.sock"
  ```

- `show fd`

  ```nix
  ansible loadbalancers -u root -m shell -a "echo 'show fd' | socat stdio unix-connect:/var/run/hapee-2.9/hapee-lb.sock"
  ```

- `show activity`

  ```nix
  ansible loadbalancers -u root -m shell -a "echo 'show activity' | socat stdio unix-connect:/var/run/hapee-2.9/hapee-lb.sock"
  ```

- `show pools`

  ```nix
  ansible loadbalancers -u root -m shell -a "echo 'show pools' | socat stdio unix-connect:/var/run/hapee-2.9/hapee-lb.sock"
  ```
See the Runtime API documentation for more examples.
Ad-hoc commands are also useful for prototyping the commands that will become part of a larger playbook, so get comfortable running them before you begin writing complex playbooks.
## Write your first Ansible playbook
In the last section, we transferred a load balancer configuration to multiple hosts using the `ansible` command. The commands used were the following:
```nix
ansible loadbalancers -u root -m copy -a "src=/home/user/hapee-lb.cfg dest=/etc/hapee-2.9/hapee-lb.cfg"
```

```nix
ansible loadbalancers -u root -m service -a "name=hapee-2.9-lb state=reloaded"
```
In this section, we will show how to adapt these ad-hoc commands into a playbook that can achieve the same result. The equivalent playbook would be the following:
```yaml
- hosts: loadbalancers
  remote_user: root
  tasks:
    - name: Copy load balancer configuration
      copy:
        src: "/home/user/hapee-lb.cfg"
        dest: "/etc/hapee-2.9/hapee-lb.cfg"
        owner: root
        group: hapee
        mode: 0644
    - name: Check if there are no errors in configuration file
      command: "/opt/hapee-2.9/sbin/hapee-lb -c -f /etc/hapee-2.9/hapee-lb.cfg"
      register: hapee_check
    - name: Reload load balancer if the check passed
      service:
        name: hapee-2.9-lb
        state: reloaded
      when: hapee_check is success and not ansible_check_mode
```
**Multiple configuration files**

If you have multiple configuration files in your application, be sure the `hapee-lb -c` command checks them all in the correct order.
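As a sketch, assuming the binary accepts a repeated `-f` flag in load order (as upstream HAProxy does), the check task could list each file explicitly. The second file, `extra.cfg`, is a hypothetical example, not part of the original setup:

```yaml
# Hypothetical: validate several configuration files together, in load order
- name: Check all configuration files
  command: "/opt/hapee-2.9/sbin/hapee-lb -c -f /etc/hapee-2.9/hapee-lb.cfg -f /etc/hapee-2.9/extra.cfg"
  register: hapee_check
```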
**About the configuration checking command**

- In version 2.8 and earlier, the command indicates a valid configuration by printing `Configuration file is valid` in addition to setting the zero return status.
- In version 2.9 and later, the command sets the zero return status for a valid configuration but does not display a message. To display the message, include the `-V` option on the command line.
You might notice a few things in the above YAML snippet when moving from ad-hoc commands to a playbook:
- Under `tasks`, each separate command needs a name; this name is displayed during playbook execution.
- One addition not present in the ad-hoc commands is the `register` keyword, which stores the success/fail result of the remote command:

  ```yaml
  - name: Check if there are no errors in configuration file
    command: "/opt/hapee-2.9/sbin/hapee-lb -c -f /etc/hapee-2.9/hapee-lb.cfg"
    register: hapee_check
  ```

  Then, the `when` line in the next block verifies that the configuration syntax check executed correctly; the block executes only upon success. If the check did not pass, we skip reloading the load balancer. It also checks, via `ansible_check_mode`, that Ansible is not running in dry-run mode.

  ```yaml
  - name: Reload load balancer if the check passed
    service:
      name: hapee-2.9-lb
      state: reloaded
    when: hapee_check is success and not ansible_check_mode
  ```
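A common Ansible idiom for this copy-then-reload pattern is a handler, which runs once at the end of the play and only if the configuration file actually changed. This is a minimal sketch of that alternative, not part of the original playbook (it omits the syntax-check task for brevity):

```yaml
- hosts: loadbalancers
  remote_user: root
  tasks:
    - name: Copy load balancer configuration
      copy:
        src: "/home/user/hapee-lb.cfg"
        dest: "/etc/hapee-2.9/hapee-lb.cfg"
      # Triggers the handler only when the file content changes
      notify: Reload load balancer
  handlers:
    - name: Reload load balancer
      service:
        name: hapee-2.9-lb
        state: reloaded
```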
## Ansible Lint
Save the above file as `first-haproxy-deploy.yml` and check its syntax with `ansible-lint`:
```nix
ansible-lint first-haproxy-deploy.yml
```

```text
[301] Commands should not change things if nothing needs doing
first-playbook.yml:14
Task/Handler: Check if there are no errors in configuration file
```
You can ignore the `[301]` warning in this case; we are only interested in making sure there are no obvious errors, the most common being missing spaces in the YAML file.
The linter is useful, but it generally only catches Ansible syntax errors. Therefore, it is recommended to run playbooks with the `--check` flag to catch some Ansible runtime errors as well:
```nix
ansible-playbook first-haproxy-deploy.yml --check
```
This step only validates that there are no obvious errors in the playbook itself, not in the load balancer configuration file.
Finally, to run the playbook, execute:

```nix
ansible-playbook first-haproxy-deploy.yml
```
## Jinja templates
You can utilize Jinja templates in both the playbook YAML file itself and in configuration files it manages.
To use Jinja templates to manage an external file, use the `template` module. In the snippet below, the `tasks` block now includes a `template` block:
```yaml
- hosts: loadbalancers
  remote_user: root
  tasks:
    - name: Sync main HAPEE configuration
      template:
        src: ../templates/hapee-lb.cfg.j2
        dest: /etc/hapee-2.9/hapee-lb.cfg
        owner: root
        group: hapee
        mode: 0664
    - name: Check if there are no errors in configuration file
      command: "/opt/hapee-2.9/sbin/hapee-lb -c -f /etc/hapee-2.9/hapee-lb.cfg"
      register: hapee_check
    - name: Reload HAPEE if the check passed
      service:
        name: hapee-2.9-lb
        state: reloaded
      when: hapee_check is success and not ansible_check_mode
```
To start using Jinja templates, it is enough to rename a file to have a `.j2` extension. Then you can optionally introduce Jinja templating patterns into the file.
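For illustration, a hypothetical fragment of `hapee-lb.cfg.j2` might render one `server` line per host in a `webservers` inventory group. The group name, backend name, and port are assumptions for this sketch, not part of the original configuration:

```text
# hapee-lb.cfg.j2 (hypothetical fragment)
backend bk_www
    balance roundrobin
{% for host in groups['webservers'] %}
    server {{ host }} {{ host }}:80 check
{% endfor %}
```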
To use Jinja templates inside your playbooks directly, you can look at the following example playbook:
```yaml
---
# updates ACLs on remote load balancer nodes via the Runtime API
- hosts: loadbalancers
  remote_user: ubuntu
  become: true
  become_user: root
  become_method: sudo
  tasks:
    - name: update ACL file
      shell: "echo '{{ acl_action }} acl {{ acl_path }} {{ acl_ip_address }}' | socat stdio unix-connect:/var/run/hapee-2.9/hapee-lb.sock"
    - name: check ACL file for presence/absence of the IP address
      shell: "echo 'show acl {{ acl_path }}' | socat stdio unix-connect:/var/run/hapee-2.9/hapee-lb.sock | grep {{ acl_ip_address }}"
      register: socket_show
      failed_when: "socket_show.rc != 0 and socket_show.rc != 1"
    - debug: var=socket_show.stdout_lines
```
Jinja patterns are enclosed in double curly brackets, `{{ }}`; the variables inside the brackets are expanded and used as strings. Variables are set when you run the `ansible-playbook` command, as shown below:
```nix
ansible-playbook update-acl-playbook.yml \
  -e "acl_action=add acl_path=/etc/hapee-2.9/ip-whitelist.acl acl_ip_address=10.10.10.10"
```
External variables are defined via the `-e` flag. However, Jinja templates can also use other variables, such as host variables, group variables, Ansible facts, or custom variables set via the `register` keyword. See more examples in the Ansible Templating (Jinja2) guide.
## Ansible inventory file configuration
In our previous inventory file, we defined two load balancer nodes that Ansible manages:

```ini
[loadbalancers]
lb1.example.com
lb2.example.com
```
You can create hierarchies of load balancer groups, for example to update all load balancers, only the load balancers in Europe, or only the load balancers in the Americas. One node can appear in multiple groups. Host entries can be any valid DNS name or IP address; as a suffix, the port on which SSH is listening can also be specified. A more complex inventory file could look similar to this:
```ini
[loadbalancers:children]
loadbalancers-europe
loadbalancers-americas

[loadbalancers-europe]
lb01.eu.example.com
lb02.eu.example.com

[loadbalancers-americas]
lb01.us.example.com
lb02.ca.example.com
lb02.mx.example.com

[loadbalancers-staging]
# Using IP addresses with SSH ports
10.10.12.10:2222
10.10.12.11:2223
```
The `:children` keyword creates a new group that inherits the nodes of the listed groups. Set the group when calling an ad-hoc command. Below, we check whether the European load balancers are up:

```nix
ansible loadbalancers-europe -u root -m ping
```
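As an aside, per-host connection settings such as the SSH port or remote user can also be set with standard inventory variables on the host line, which avoids the `host:port` suffix. The hosts below are hypothetical examples:

```ini
[loadbalancers-staging]
# Hypothetical hosts with inline connection variables
lb01.stage.example.com ansible_port=2222 ansible_user=root
lb02.stage.example.com ansible_port=2223 ansible_user=root
```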