Enterprise modules

Route health injection

The Route Health Injection (RHI) module monitors your load balancer’s connectivity to backend servers and can remove the entire load balancer from duty if it suddenly loses contact with those servers. The idea is that if a load balancer can’t reach the servers, and you’re running an active/active load balancer pair, then you can deactivate the problematic load balancer and route all traffic to the other, healthy load balancer.

The RHI module is meant to work with IP routing protocols, such as BGP and OSPF, configured for Equal-cost multi-path (ECMP) routing. ECMP enables the network to route traffic to a destination over multiple paths, allowing you to relay IP packets to both of your load balancers in parallel. The ability to detect problems and route traffic away from unhealthy load balancers is important for making ECMP resilient.

Concepts

This section describes the concepts behind route health injection combined with ECMP.

How ECMP routing works

Your router attempts to send packets to their destination using the most efficient network path. When two network paths have identical costs, meaning they are equally good, the router can load balance traffic across both paths, provided that it supports ECMP. With ECMP, you can configure the router to see both of your load balancers as different, but equal, routes to the destination IP address. The router can then send traffic to both load balancers in parallel, achieving high availability.

How do the two load balancers present themselves to the router as being routes of equal costs? They present routes to the same IP address as passing through themselves and inject those routes into the router’s routing table. In essence, they become gateways for reaching the same IP address.

A router on the 192.168.0.0/24 network sees:

Destination address   Route via
192.168.1.10          Load balancer 1 at 192.168.0.101, or
                      Load balancer 2 at 192.168.0.102
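On a Linux-based router, an equivalent ECMP route could be sketched like this (a hypothetical illustration using the addresses from the example above; your router's actual configuration syntax will differ, and in practice RHI adds and withdraws these routes dynamically over BGP or OSPF rather than through static configuration):

```shell
# Hypothetical static ECMP route on a Linux router:
# one destination, two equal-weight next hops (the two load balancers)
sudo ip route add 192.168.1.10/32 \
    nexthop via 192.168.0.101 weight 1 \
    nexthop via 192.168.0.102 weight 1
```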

The RHI module shares routes with the router. It does this by installing the open-source BIRD Internet Routing Daemon onto the load balancer. BIRD shares routes by using either the BGP or OSPF protocol.

While to the router it looks as though the load balancers are simply the next hop toward the destination, the load balancers are actually the final hop. They own the destination IP address. Having both load balancers bound to the same IP address could cause conflicts on the network, though. Two ways to solve this problem that we’ll cover are:

  • Add a second address to the load balancer’s loopback network interface and disable ARP so that this address isn’t advertised on the network. That way, only the load balancer sees this address and essentially sends the traffic to itself.
  • Intercept traffic destined for a non-locally bound address by configuring transparent proxying.

How Route Health Injection works

The RHI module adds this load balancer as a route in BIRD’s configuration using a custom routing table named volatile. BIRD then broadcasts these routes to peer routers using the BGP or OSPF protocol. If either a frontend or a backend is down, then the RHI module removes the route from the volatile table, notifying BIRD to stop advertising this load balancer as a route on the network, diverting the flow of traffic to the other load balancer in the active/active cluster. You can configure ECMP on your router to load balance traffic to both load balancers via the advertised routes.

Prerequisites

Ensure that you’ve met the following prerequisites:

  • Your router has enabled ECMP. Consult your router’s documentation for details.

Install and configure RHI

In this section, you will learn how to set up route health injection.

Install the RHI module

To install the RHI module, perform these steps on each load balancer:

  1. Install the RHI module using your package manager:

    On Debian/Ubuntu:

    nix
    sudo apt-get install hapee-extras-rhi

    On RHEL:

    nix
    sudo yum install hapee-extras-rhi

    On SUSE:

    nix
    sudo zypper install hapee-extras-rhi

    On FreeBSD:

    nix
    sudo pkg install hapee-extras-rhi

    This installs the hapee-extras-route package too, which is our version of the BIRD Internet Routing Daemon. The daemon is stored as /opt/hapee-extras/sbin/hapee-route.

  2. Create a socket for the Runtime API.

    The RHI module needs to connect to the Runtime API to collect information about the health of your frontends and backends. Add a new stats socket line to the global section of your HAProxy Enterprise configuration. This exposes the Runtime API as the socket /var/run/hapee-extras/hapee-lb.sock:

    hapee-lb.cfg
    haproxy
    global
    stats socket /var/run/hapee-extras/hapee-lb.sock user hapee-lb group hapee mode 660

    Use the correct socket path

    Your load balancer configuration likely contains a stats socket line in the global section already. If that line doesn’t use the file path shown here, add a second stats socket line with this path; otherwise, the RHI module can’t connect to the Runtime API.
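To confirm that the socket is reachable after reloading the load balancer, you could query it from the command line (this assumes the socat package is installed; it is not part of the RHI module):

```shell
# Query the Runtime API over the Unix socket; prints process information
echo "show info" | sudo socat stdio /var/run/hapee-extras/hapee-lb.sock
```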

    Optional: Change the socket path

    The RHI module expects the Runtime API socket to be /var/run/hapee-extras/hapee-lb.sock. However, you can change the path that the RHI module expects by setting the variable HAPEE_LB_SOCKET in the following file:

    • On Debian/Ubuntu: /etc/default/hapee-extras-rhi
    • On RHEL: /etc/sysconfig/hapee-extras-rhi

    For example:

    hapee-extras-rhi
    text
    HAPEE_LB_SOCKET="/var/run/hapee-3.0/hapee-lb.sock"

    Still, you must ensure that this same path is configured in your HAProxy Enterprise configuration file:

    hapee-lb.cfg
    haproxy
    global
    stats socket /var/run/hapee-3.0/hapee-lb.sock user hapee-lb group hapee mode 660

Configure route health injection

To configure the RHI module to share routes with your router:

  1. Edit the file /etc/hapee-extras/hapee-rhi.cfg. The default configuration contains an example:

    hapee-rhi.cfg
    text
    # Inject the 10.200.200.200/32 address into the route daemon if
    # all the backends "be_static" and "be_app" are up.
    10.200.200.200/32 = all(b:be_static,b:be_app)

    This file contains a list of routes that the RHI module will add to BIRD’s volatile table, but only if the given rule returns true. A rule checks the status of one or more frontends or backends to see if they are up or down. A backend is treated as down if all servers fail their health checks or if you manually disable the servers. A frontend is down if you disable it manually.

    In this example, the rule uses the all function to announce the 10.200.200.200/32 IP address only when both the be_static and be_app backends are up and running. When the condition is false, the IP address is removed from the list of advertised routes.

    Let’s look at another example. The following line uses the any function to advertise the IP if either the be_app or be_app2 backends are up and running:

    hapee-rhi.cfg
    text
    192.168.1.10/32 = any(b:be_app,b:be_app2)
  2. After making changes to the hapee-rhi.cfg file, save the file. Then enable and restart the service:

    nix
    sudo systemctl enable hapee-extras-rhi
    sudo systemctl restart hapee-extras-rhi
  3. To configure BIRD for BGP or OSPF, which are used to advertise routes to peer routers, edit the file /etc/hapee-extras/hapee-route.cfg.

    • Add a section for either BGP or OSPF, depending on which protocol you intend to use for advertising routes to peers.

      Use BGP

      An example BGP configuration section:

      hapee-route.cfg
      text
      protocol bgp r1 {
      local 192.168.0.101 as 65001;
      neighbor 192.168.0.1 as 65001;
      graceful restart on;
      import none;
      # advertise the IP route
      export where proto = "vol1";
      }

      In this example:

      • The local directive refers to the IP address assigned to this load balancer’s network interface and assigns the Autonomous System Number 65001.
      • The neighbor directive refers to the layer 3 device, such as the gateway router, with which we are establishing a BGP session.
      • The export line advertises routes from the volatile table, vol1.
      Use OSPF

      An example OSPF configuration section:

      hapee-route.cfg
      text
      protocol ospf anycast {
      tick 2;
      import none;
      # advertise the IP route
      export where proto = "vol1";
      area 0.0.0.0 {
      stub no;
      interface "eth0" {
      hello 10;
      retransmit 6;
      cost 10;
      transmit delay 5;
      dead count 4;
      wait 50;
      type broadcast;
      };
      };
      }
      • The export line advertises routes from the volatile table, vol1.
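To check whether the BGP or OSPF session with the peer router has come up, you could query the routing daemon. The show protocols command is standard BIRD syntax; the hapee-route-cli wrapper is the same one used elsewhere on this page:

```shell
# List configured protocols and their states; an established BGP session
# reports "Established", a running OSPF instance reports "Running"
sudo /opt/hapee-extras/bin/hapee-route-cli show protocols
```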
  4. After making changes to the hapee-route.cfg file, save the file. Then restart the service:

    nix
    sudo systemctl restart hapee-extras-route
  5. Optional: To advertise IPv6 routes, repeat these steps for the hapee-extras-route6 service.

  6. Verify that RHI added a route to BIRD by calling the show route command. The IP address should appear in the output.

    nix
    sudo /opt/hapee-extras/bin/hapee-route-cli show route
    output
    text
    BIRD 1.6.3 ready.
    192.168.1.10/32 dev auto [vol1 18:54:15] * (0)

    When you disable all servers in the backend, the command should not return this route.
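For example, you could drain a backend through the Runtime API and watch the route disappear. This sketch assumes socat is installed and uses hypothetical backend and server names (be_app/app1); substitute the names from your own configuration:

```shell
# Mark a server as down via the Runtime API
echo "disable server be_app/app1" | sudo socat stdio /var/run/hapee-extras/hapee-lb.sock

# Confirm the route is withdrawn (it should no longer be listed)
sudo /opt/hapee-extras/bin/hapee-route-cli show route

# Re-enable the server afterward
echo "enable server be_app/app1" | sudo socat stdio /var/run/hapee-extras/hapee-lb.sock
```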

Intercept traffic destined for the IP

Remember that both of your load balancers must be able to receive packets at the same IP address defined in the route shared with your router, for example 192.168.1.10/32, but without causing a conflict on the network. Here are two ways to accomplish that:

Add a second address to the load balancer's loopback network interface

The IP must be handled on each server’s loopback interface to accept connections, but can’t be advertised on the network or it will be identified as an IP conflict by some network components. To avoid IP address conflicts, disable ARP for IP addresses managed by the loopback interface.

Perform these steps on both load balancers:

  1. Edit the HAProxy Enterprise configuration file, /etc/hapee-3.0/hapee-lb.cfg:

    hapee-lb.cfg
    haproxy
    frontend www
    bind 192.168.1.10:80 name http
    bind 192.168.1.10:443 name https ssl crt site.pem
    • In the frontend section, define one or more bind lines that listen at the IP address you created a route for, such as 192.168.1.10. This address should not be assigned to any network interfaces.

    • Each load balancer should be assigned the same IP address.

  2. Save the changes and then reload the service.

    nix
    sudo systemctl reload hapee-3.0-lb
  3. Manage the IP address through a loopback interface.

    On Debian systems that use ifupdown, edit the file /etc/network/interfaces. Add a new iface section for the lo interface and add the address under it:

    interfaces
    text
    # The loopback network interface
    auto lo
    iface lo inet loopback
    iface lo inet static
    address 192.168.1.10/32

    On Ubuntu systems that use netplan, edit the YAML configuration file located in /etc/netplan. It is typically the file with the lowest number prefix, with a name like 00-installer-config.yaml or 01-netcfg.yaml. Add an lo section under the ethernets level:

    01-netcfg.yaml
    yaml
    network:
    ethernets:
    lo:
    dhcp4: false
    addresses:
    - "192.168.1.10/32"

    Then use sudo netplan try and sudo netplan apply before rebooting to make sure the configuration is valid. Ignore warnings about Open vSwitch.

    To persist the IP address on RHEL 9.2 or newer, use NetworkManager. Previous versions did not support managing the loopback interface with NetworkManager.

    nix
    sudo nmcli connection modify lo +ipv4.addresses 192.168.1.10/32
    sudo nmcli con up 'lo'

    To persist the IP address on RHEL systems older than 9.2, create a new service for loading it at boot:

    nix
    sudo touch /etc/systemd/system/01-static-ip.service
    sudo vi /etc/systemd/system/01-static-ip.service

    Add the following lines to the service file:

    01-static-ip.service
    text
    [Unit]
    Description=Add static IP to loopback
    Wants=network-online.target
    After=network-online.target
    [Service]
    Type=oneshot
    # create the address
    ExecStart=-/usr/sbin/ip address add 192.168.1.10/32 dev lo
    [Install]
    WantedBy=multi-user.target

    Set the service to start on boot:

    nix
    sudo systemctl enable 01-static-ip.service
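Whichever method you used, you can verify that the loopback interface now holds the address:

```shell
# The output should include "inet 192.168.1.10/32" under the lo interface
ip addr show lo
```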
  4. Add the following lines to /etc/sysctl.conf.

    text
    net.ipv4.conf.all.arp_ignore=1
    net.ipv4.conf.all.arp_announce=2
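If you prefer to apply the ARP settings immediately rather than waiting for a restart, you could reload sysctl (this assumes the two lines above were added to /etc/sysctl.conf):

```shell
# Re-read /etc/sysctl.conf and apply the ARP settings now
sudo sysctl -p
```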
  5. Restart the HAProxy Enterprise server.

Configure transparent proxying

Perform these steps on both load balancers to enable transparent proxying:

  1. Edit the HAProxy Enterprise configuration file, /etc/hapee-3.0/hapee-lb.cfg:

    hapee-lb.cfg
    haproxy
    frontend www
    bind 192.168.1.10:80 name http transparent
    bind 192.168.1.10:443 name https ssl crt site.pem transparent
    • In the frontend section, define one or more bind lines that listen at the IP address you created a route for, such as 192.168.1.10. This address should not be assigned to any network interfaces.

    • Because the IP address is not configured on the network interface, add the transparent argument. This indicates that the IP address should be bound even though it does not belong to the local machine. Packets targeting this address will be intercepted as if the address were locally configured. This feature uses the Linux kernel’s TPROXY feature.

    • Each load balancer should be assigned the same IP address.

  2. Save the changes and then reload the service.

    nix
    sudo systemctl reload hapee-3.0-lb
  3. Add firewall rules that intercept packets that have a destination IP address matching a listening socket, which in this case is our transparent proxied IP address. Also add policy-based routing rules to deliver the traffic locally.

    On Debian/Ubuntu systems, create firewall and routing rules with iptables:

    nix
    sudo iptables -t mangle -N DIVERT
    sudo iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
    sudo iptables -t mangle -A PREROUTING -p udp -m socket -j DIVERT
    sudo iptables -t mangle -A DIVERT -j MARK --set-mark 1
    sudo iptables -t mangle -A DIVERT -j ACCEPT
    sudo ip rule add fwmark 1 lookup 100
    sudo ip route add local 0.0.0.0/0 dev lo table 100

    To make the iptables changes persist after a reboot, install the iptables-persistent package and save the current rules with the iptables-save command; the package then restores them at boot.

    nix
    sudo apt install iptables-persistent
    sudo su -c 'iptables-save > /etc/iptables/rules.v4'

    To verify the iptables rules, use the iptables command. Note that the output may show tcp or the protocol number 6, and udp or the number 17:

    nix
    sudo iptables -L -v -n -t mangle
    output
    text
    Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
    pkts bytes target prot opt in out source destination
    1941 335K DIVERT tcp -- * * 0.0.0.0/0 0.0.0.0/0 socket
    0 0 DIVERT udp -- * * 0.0.0.0/0 0.0.0.0/0 socket
    ...
    Chain DIVERT (1 references)
    pkts bytes target prot opt in out source destination
    1941 335K MARK all -- * * 0.0.0.0/0 0.0.0.0/0 MARK set 0x1
    1941 335K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0

    To make the policy route (ip rule) and route table (ip route) changes persist after reboot, your next step depends on whether or not your system uses netplan. Typically, Ubuntu uses it, but Debian does not.

    • If your system uses netplan, persist the policy route (ip rule) and route table (ip route) changes in the netplan YAML configuration file located in /etc/netplan. The configuration file is probably the one having the lowest number and has a name like 00-installer-config.yaml or 01-netcfg.yaml.

      Edit the netplan YAML file, adding an lo section under the ethernets level:

      01-netcfg.yaml
      yaml
      network:
      ethernets:
      lo:
      routing-policy:
      - to: 0.0.0.0/0
      mark: 1
      table: 100
      routes:
      - to: 0.0.0.0/0
      type: local
      table: 100

      Then use sudo netplan try and sudo netplan apply before rebooting to make sure the configuration is valid. Ignore warnings about Open vSwitch.

    • If your system does not use netplan, persist the changes by creating a new service for loading them at boot. Create the systemd service file:

      nix
      sudo touch /etc/systemd/system/01-static-route.service
      sudo vi /etc/systemd/system/01-static-route.service

      Add the following lines to the service file:

      01-static-route.service
      text
      [Unit]
      Description=Add route table 100
      Wants=network-online.target
      After=network-online.target
      [Service]
      Type=oneshot
      # create the route table and rule
      ExecStart=-/usr/sbin/ip route add local 0.0.0.0/0 dev lo table 100
      ExecStart=-/usr/sbin/ip rule add fwmark 1 lookup 100
      [Install]
      WantedBy=multi-user.target

      Set the service to start on boot:

      nix
      sudo systemctl enable 01-static-route.service

      Restart the system. The saved settings will be restored after the restart.

    On RHEL systems that use firewalld, create firewall rules:

    nix
    sudo firewall-cmd --permanent --direct --add-chain ipv4 mangle DIVERT
    sudo firewall-cmd --permanent --direct --add-rule ipv4 mangle PREROUTING 0 -p tcp -m socket -j DIVERT
    sudo firewall-cmd --permanent --direct --add-rule ipv4 mangle PREROUTING 0 -p udp -m socket -j DIVERT
    sudo firewall-cmd --permanent --direct --add-rule ipv4 mangle DIVERT 0 -j MARK --set-mark 1
    sudo firewall-cmd --permanent --direct --add-rule ipv4 mangle DIVERT 1 -j ACCEPT

    Reload the firewall:

    nix
    sudo firewall-cmd --reload

    To persist the IP routing rules, create a new service for loading them at boot:

    nix
    sudo touch /etc/systemd/system/01-static-route.service
    sudo vi /etc/systemd/system/01-static-route.service

    Add the following lines to the service file:

    01-static-route.service
    text
    [Unit]
    Description=Add route table 100
    Wants=network-online.target
    After=network-online.target
    [Service]
    Type=oneshot
    # create the route table and rule
    ExecStart=-/usr/sbin/ip route add local 0.0.0.0/0 dev lo table 100
    ExecStart=-/usr/sbin/ip rule add fwmark 1 lookup 100
    [Install]
    WantedBy=multi-user.target

    Set the service to start on boot:

    nix
    sudo systemctl enable 01-static-route.service

    Restart the system.

    To verify the rule table, use the ip command:

    nix
    sudo ip rule ls
    output
    text
    ...
    32765: from all fwmark 0x1 lookup 100
    ...

    To verify the route table, use the ip command:

    nix
    sudo ip route ls table 100
    output
    text
    local default dev lo scope host

At this point, you have completed the setup of the Route Health Injection module.

View the BIRD volatile table

To see the volatile table definition:

  1. Edit the file /etc/hapee-extras/hapee-route.cfg.

  2. Scroll down to the protocol volatile section.

    hapee-route.cfg
    text
    protocol volatile vol1 {
    # gateway <ip>
    }

    By default, BIRD announces routes through the gateway configured on the network interface, but you can specify a different network gateway by uncommenting the gateway directive and typing its IP address.

    For example:

    hapee-route.cfg
    text
    protocol volatile vol1 {
    gateway 192.168.1.244
    }

    You can add more volatile tables to support advertising routes for different frontends:

    hapee-route.cfg
    text
    protocol volatile vol1 {
    }
    protocol volatile vol2 {
    }

    Then, prefix each route in the RHI module’s configuration, /etc/hapee-extras/hapee-rhi.cfg, with a table’s name:

    hapee-rhi.cfg
    text
    vol1%192.168.1.10/32 = all(b:be_static,b:be_app)
    vol2%192.168.1.11/32 = any(b:k8s_servers)

Reference

This section describes the options available when configuring the RHI module.

RHI configuration file

This section describes the syntax of the RHI configuration file, /etc/hapee-extras/hapee-rhi.cfg.

text
<network>[,<network>,[...]] = <agg>(<b:|f:><name>[,<b:|f:><name>,[...]])
The arguments are:

  • <network>: has the form [%<protoname>]{<ipv4>,<ipv6>}[/<mask>]. Specify an IPv4 or IPv6 CIDR subnet, or a comma-delimited list of subnets. If you do not specify a subnet mask, RHI applies the /32 mask. For advanced configurations, you can name a volatile table in the %<protoname> part (default: vol1).
  • <agg>: Aggregation function. all returns true if all listed proxies are active; any returns true if at least one listed proxy is active; never always returns false, for debugging purposes.
  • <b:|f:>: Prefix of either b for backend or f for frontend.
  • <name>: Name of the frontend or backend.
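Putting these pieces together, a hypothetical rule could advertise two subnets from the vol1 table whenever either a frontend or a backend is active. All names here are illustrative, not taken from the examples above:

```text
vol1%192.168.1.10/32,192.168.1.11/32 = any(f:fe_main,b:be_app)
```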
