Global Profiling Engine

Configure real-time aggregation of stick table data

The Global Profiling Engine collects stick table data from all HAProxy Enterprise nodes in the cluster in real time. It then aggregates that data and pushes it back to all of the nodes. For example, if LoadBalancer1 receives two requests and LoadBalancer2 receives three requests, the Global Profiling Engine will sum those numbers to get a total of five, then push that to both LoadBalancer1 and LoadBalancer2. This is helpful for an active/active load balancer configuration wherein the nodes need to share client request information to form an accurate picture of activity across the cluster.

The aggregated data does not overwrite the data on the load balancer nodes. Instead, it is pushed to secondary stick tables that have, for example, a suffix of .aggregate. You would use a fetch method to retrieve the aggregated data and perform an action, such as rate limiting.
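For example, a minimal sketch of reading the aggregated table with a fetch method (this assumes a peers section named mypeers defining a table named request_rates.aggregate, matching the examples in this guide; the threshold of 1000 is arbitrary):

```haproxy
frontend fe_main
   bind :80
   # Look up this client's cluster-wide HTTP request rate in the
   # aggregated table and deny the request if it exceeds the limit
   http-request deny deny_status 429 if { src_http_req_rate(mypeers/request_rates.aggregate) gt 1000 }
```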

Stick table data is transferred between the HAProxy Enterprise servers and the Global Profiling Engine server by using the peers protocol, a protocol created specifically for this purpose. You must configure which servers should participate, both on the Global Profiling Engine server and on each HAProxy Enterprise node.

Configure HAProxy Enterprise nodes

An HAProxy Enterprise node must be configured to share its stick table data with the Global Profiling Engine server. Once the data is aggregated, the profiling engine sends it back to each node, where it is stored in a secondary stick table.

Follow these steps on each load balancer:

  1. Edit the file /etc/hapee-3.0/hapee-lb.cfg.

    Add a peers section.

    hapee-lb.cfg
    haproxy
    global
    [...]
    # By setting this, you are directing HAProxy Enterprise to use the server line
    # that specifies this name as the local node.
    localpeer enterprise1
    [...]
    peers mypeers
# This is the address and port at which the load balancer receives aggregated data from the GPE server
    bind 0.0.0.0:10000
    # The local HAProxy Enterprise node hostname defined by one of the following:
    # 1) the value provided when the load balancer process is started with the -L argument
    # 2) the localpeer name from the global section of the load balancer configuration (suggested method)
    # 3) the hostname as returned by the system hostname command (default)
    server enterprise1
    # The Global Profiling Engine
    # If you run GPE on the same server, use a different port here
    server gpe 192.168.50.40:10000
    # stick tables definitions
    table request_rates type ip size 100k expire 30s store http_req_rate(10s)
    table request_rates.aggregate type ip size 100k expire 30s store http_req_rate(10s)

    Inside it:

    • Define a bind line to set the IP address and port at which this node should receive data back from the Global Profiling Engine server. In this example, the bind directive listens on all IP addresses at port 10000 and receives aggregated data.

    • Define a server line for the current load balancer server. The server name value is important because it must match the name you set in the Global Profiling Engine server’s configuration for the corresponding peer line. The hostname may be one of the following, in order of precedence:

      • the value provided with the -L argument specified on the command line used to start the load balancer process
      • the localpeer name specified in the global section of the load balancer configuration (this method is used in this example)
      • the host name returned by the system hostname command. This is the default, but we recommend using one of the other two methods

      In this example, the local HAProxy Enterprise node is listed with only its hostname, enterprise1. It is not necessary to specify its IP address and port.

    • Define a server line for the Global Profiling Engine server. Set its IP address and port. The name you set here is also important. It must match the corresponding peer line in the Global Profiling Engine server’s configuration.

    • Define stick tables. For each one, add a duplicate line where the table name has the suffix .aggregate. In this example, the non-aggregated stick table request_rates stores current HTTP request rates: each table records the rate at which clients make requests over a 10-second window, and stale records are cleared after 30 seconds by the expire parameter. The type parameter sets the key for the table, which in this case is an IP address. The stick table request_rates.aggregate receives its data from the Global Profiling Engine. Its suffix, .aggregate, matches the profiling engine's configuration.

    localpeer definition

    If, after specifying the name for your load balancer on the server line, your load balancer configuration produces an error that looks like the following:

    text
    [WARNING] (6125) : config : Removing incomplete section 'peers mypeers' (no peer named 'enterprise1')

    Specify your hostname value for localpeer in your global section:

    hapee-lb.cfg
    haproxy
    global
    localpeer enterprise1

    This global setting is required when your hostname (as returned by the system hostname command) differs from your desired peer name. Be sure to update your GPE configuration to use the name you specify as the localpeer name, and update your load balancer configuration to reference that name on the server line for your load balancer in your peers section.

  2. Add directives to your frontend, backend or listen sections that populate the non-aggregated stick tables with data.

    Below, the http-request track-sc0 line adds request rate information for each client that connects to the load balancer, using the client’s source IP address (src) as the key in the stick table.

    hapee-lb.cfg
    haproxy
    frontend fe_main
    bind :80
    default_backend webservers
    # add records to the stick table using the client's
    # source IP address as the table key
    http-request track-sc0 src table mypeers/request_rates
  3. Add directives that read the aggregated data returned from the Global Profiling Engine server. That data is stored in the table with the suffix .aggregate.

    Below, the http-request deny line rejects clients that have a request rate greater than 1000. The client’s request rate is an aggregate amount calculated from all active load balancers. Note that this line reads data from the request_rates.aggregate table.

    hapee-lb.cfg
    haproxy
    # perform actions like rate limiting
    http-request deny deny_status 429 if { sc_http_req_rate(0,mypeers/request_rates.aggregate) gt 1000 }
  4. Restart HAProxy Enterprise.

    nix
    sudo systemctl restart hapee-3.0-lb
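Taken together, the steps above amount to a load balancer configuration along these lines (a consolidated sketch using the names and addresses from this guide):

```haproxy
global
   localpeer enterprise1

peers mypeers
   bind 0.0.0.0:10000
   server enterprise1
   server gpe 192.168.50.40:10000
   table request_rates type ip size 100k expire 30s store http_req_rate(10s)
   table request_rates.aggregate type ip size 100k expire 30s store http_req_rate(10s)

frontend fe_main
   bind :80
   default_backend webservers
   # record each client's request rate in the local table...
   http-request track-sc0 src table mypeers/request_rates
   # ...and rate limit on the cluster-wide aggregate
   http-request deny deny_status 429 if { sc_http_req_rate(0,mypeers/request_rates.aggregate) gt 1000 }
```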

Configure the Global Profiling Engine

The Global Profiling Engine server collects stick table data from HAProxy Enterprise load balancers in your cluster, but you must set which load balancers will be allowed to participate by listing them in the configuration file.

Use dynamic configuration

Available since

  • HAProxy Enterprise - GPE version 1.0 (hapee-extras-gpe10 package or newer)

Load balancers can connect to the GPE server without you adding them explicitly to the GPE configuration file. Include the dynamic-peers directive in either:

  • the aggregations section to enable it for only that section.
  • the global section to enable it for multiple aggregations sections.

For example, to set dynamic-peers in an aggregations section:

  1. On the Global Profiling Engine server, edit the file /etc/hapee-extras/hapee-gpe-stktagg.cfg. Add an aggregations section that includes dynamic-peers:

    hapee-gpe-stktagg.cfg
    haproxy
    global
    # Enables the Global Profiling Engine API
    stats socket /var/run/hapee-extras/gpe-api.sock
    aggregations data
    # set how to map non-aggregated to aggregated stick tables
    from any to .aggregate
    # the profiling engine listens at this address
    peer gpe 0.0.0.0:10000 local
    # register load balancer on the fly
    dynamic-peers
  2. Optional: If you have multiple aggregations sections, which is useful for serving multiple clusters of load balancers, then you can simplify your setup by setting a bind directive in the global section instead of setting a peer line with the local keyword in each aggregations section. This sets the address at which to listen for incoming stick table data.

    hapee-gpe-stktagg.cfg
    haproxy
    global
    # Enables the Global Profiling Engine API
    stats socket /var/run/hapee-extras/gpe-api.sock
    bind 0.0.0.0:10000
    aggregations data
    # set how to map non-aggregated to aggregated stick tables
    from any to .aggregate
    # register load balancer on the fly
    dynamic-peers

    If you do this, then on the load balancers the peer line for the GPE server must use the same name as the aggregations section. Here, the name is data.

    hapee-lb.cfg
    haproxy
    peers mypeers
    peer data 192.168.56.26:10000
    peer enterprise1 192.168.50.41:10000
    peer enterprise2 192.168.50.42:10000
    ...

Use static configuration

You can specify the IP address of each load balancer that is allowed to connect:

  1. On the Global Profiling Engine server, edit the file /etc/hapee-extras/hapee-gpe-stktagg.cfg.

    In the aggregations section, add a peer line for the Global Profiling Engine itself and for each HAProxy Enterprise node. Each peer’s name (e.g. enterprise1) should match the name you set in the HAProxy Enterprise configuration, since that is how the profiling engine validates the peer.

    Peer names

    Be sure that the peer names you specify in the GPE server’s configuration match exactly the names you specified in your load balancer configuration. For example, the following load balancer configuration sets the load balancer’s localpeer name to enterprise1 and we reference this name again in the peers section:

    hapee-lb.cfg
    haproxy
    global
    localpeer enterprise1
    ...
    peers mypeers
    bind 0.0.0.0:10000
    server enterprise1
    ...

    As such, it must appear in the GPE server's configuration as enterprise1 in order for GPE to connect to the load balancer.

    hapee-gpe-stktagg.cfg
    haproxy
    global
    # Enables the Global Profiling Engine API
    stats socket /var/run/hapee-extras/gpe-api.sock
    aggregations data
    # set how to map non-aggregated to aggregated stick tables
    from any to .aggregate
    # the profiling engine listens at this address
    peer gpe 0.0.0.0:10000 local
    # the load balancers listen at these addresses
    peer enterprise1 192.168.50.41:10000
    peer enterprise2 192.168.50.42:10000

    In this example:

    • The Global Profiling Engine API provides a programmable API, which listens at the socket /var/run/hapee-extras/gpe-api.sock. The stats socket directive enables a CLI that lets you view data that the aggregator has stored.

    • In the aggregations section, the from line defines how non-aggregated stick tables map to aggregated stick tables, and what the suffix for the aggregated stick tables should be. The keyword any means that any stick table found will be aggregated. Aggregated data is pushed to tables with the same name, but ending with the suffix .aggregate. In the example, the engine expects stick tables to be named like request_rates and it will push aggregated data to request_rates.aggregate.

      You can also use a more specific mapping. In the example below, the engine expects stick tables to be named like request_rates.nonaggregate and it will push aggregated data to request_rates.aggregate. Stick tables without the .nonaggregate suffix will be ignored.

      haproxy
      from .nonaggregate to .aggregate
    • The peer line with the local argument indicates the local GPE server.

    • HAProxy Enterprise peer lines must use the same name you set on the server line in the HAProxy Enterprise configuration (e.g. enterprise1), and they must specify the IP addresses and ports where the load balancers are receiving aggregated data.

  2. Restart the Global Profiling Engine service:

    nix
    sudo systemctl restart hapee-extras-gpe

Verify your setup

Check that the Global Profiling Engine and load balancers are set up correctly by using their APIs.

  1. On the load balancer, call the Runtime API function show peers to check that the Global Profiling Engine is listed and that its last_status is ESTA (established):

    Below, the show peers command lists connected peers:

    nix
    echo "show peers" | sudo socat stdio unix-connect:/var/run/hapee-3.0/hapee-lb.sock | head -2
    output
    text
    0x5651d4a03010: [07/Jul/2021:17:02:30] id=mypeers disabled=0 flags=0x2213 resync_timeout=<PAST> task_calls=92
    0x5651d4a06540: id=gpe(remote,active) addr=192.168.50.40:10000 last_status=ESTA last_hdshk=2m17s
  2. Call the Runtime API function show table to see data in non-aggregated and aggregated stick tables.

    Below, we view data in the stick table named request_rates.aggregate:

    nix
    echo "show table mypeers/request_rates.aggregate" | sudo socat stdio unix-connect:/var/run/hapee-3.0/hapee-lb.sock
    output
    text
    # table: mypeers/request_rates.aggregate, type: ip, size:102400, used:1
    0x7fc0e401fb80: key=192.168.50.1 use=0 exp=28056 http_req_rate(10000)=5
  3. On the Global Profiling Engine server, call the show aggrs function to see load balancers that are registered as peers. A state of 0x7 means a successful connection. If you see a state of 0xffffffff, that means that a connection was not successful. Often, this is caused by the peer names not matching between the Global Profiling Engine’s configuration and the HAProxy Enterprise configuration.

    Below, the show aggrs command shows that the peer named enterprise1 has connected:

    nix
    echo "show aggrs" | sudo socat stdio /var/run/hapee-extras/gpe-api.sock
    output
    text
    aggregations data
    peer 'enterprise1'(0) sync_ok: 1 accept: 1(last: 6080) connect: 1(last: 16086) state: 0x7 sync_state: 0x3
    sync_req_cnt: 0 sync_fin_cnt: 0 sync_cfm_cnt: 0

Optional: Bind outgoing connections to an interface

If the server where you are running the Global Profiling Engine has multiple network interfaces, you can configure the engine to bind to a specific one for outgoing data sent to HAProxy Enterprise servers.

To bind outgoing connections to a specific address, use the source directive in the global section.

IPv4 examples

hapee-gpe-stktagg.cfg
haproxy
global
source 126.123.10.12:12345

The port is optional. It defaults to 0 for random ports.

hapee-gpe-stktagg.cfg
haproxy
global
source 126.123.10.12

IPv6 examples

hapee-gpe-stktagg.cfg
haproxy
global
source [2607:f8b0:400e:c00::ef]:12345

The port is optional. It defaults to 0 for random ports.

hapee-gpe-stktagg.cfg
haproxy
global
source [2607:f8b0:400e:c00::ef]

GPE with session persistence

Available since

  • HAProxy Enterprise 2.9r1

A special situation arises when you want to use the Global Profiling Engine to sync session persistence data across load balancers. Session persistence uses a stick table to track which server a client was routed to initially and from then on continues to route that client to the same server.

  1. For example, consider the backend below that enables session persistence, but without GPE:

    haproxy
    backend servers
    stick-table type ip size 1m expire 30m
    stick on src
    server s1 192.168.0.10:80 check
    server s2 192.168.0.11:80 check
  2. We need to make the following changes to the backend:

    • Remove the stick-table line.
    • Make the stick on directive reference the sessions table in the peers section named mypeers.
    • By default, each load balancer can arrange the servers differently. However, we need to ensure consistent server IDs across all load balancers, so we use the id argument to set the IDs explicitly.
    haproxy
    backend servers
    stick on src table mypeers/sessions
    server s1 192.168.0.10:80 check id 1
    server s2 192.168.0.11:80 check id 2
  3. Move the stick-table definition to the peers section:

    haproxy
    peers mypeers
    bind 0.0.0.0:10000
    server enterprise1
    server gpe 192.168.50.40:10000
    table sessions type ip size 1m expire 30m store server_id,server_key
    table sessions.aggregate type ip size 1m expire 30m store server_id,server_key write-to mypeers/sessions

    In this example:

    • We have moved the stick table to the peers section and named it sessions. You must set its store argument to server_id,server_key.
    • A table named sessions.aggregate syncs session persistence data to GPE, which then syncs it to all load balancers. The aggregate table must set the write-to argument so that the data is written back to the sessions table. The write-to parameter allows remote load balancers to update the local sessions table with session persistence data.
  4. Make this same change on the other load balancer.
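Combined, the persistence-related parts of the load balancer configuration look like this sketch (names and addresses follow the examples above):

```haproxy
peers mypeers
   bind 0.0.0.0:10000
   server enterprise1
   server gpe 192.168.50.40:10000
   # local persistence table, plus its aggregate counterpart, which
   # writes entries received from GPE back into the local table
   table sessions type ip size 1m expire 30m store server_id,server_key
   table sessions.aggregate type ip size 1m expire 30m store server_id,server_key write-to mypeers/sessions

backend servers
   stick on src table mypeers/sessions
   # explicit ids keep server identity consistent across load balancers
   server s1 192.168.0.10:80 check id 1
   server s2 192.168.0.11:80 check id 2
```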

Multi-level setup

You can aggregate stick tables from other Global Profiling Engines, which allows you to aggregate stick tables across different data centers, for example.

We will consider the following setup:

GPE multilevel diagram

The top-level aggr3 Global Profiling Engine will sum the counters from the intermediate aggr1 and aggr2 aggregate stick tables. It will then send the top-level aggregate stick table to all HAProxy Enterprise nodes.

You can also host multiple top-level servers for high availability. In that case, intermediate servers simply push their data to both. See below for details.

Configure the top-level Global Profiling Engine

Follow these steps on the server you wish to be the top-level Global Profiling Engine.

  1. Edit the file /etc/hapee-extras/hapee-gpe-stktagg.cfg.

    hapee-gpe-stktagg.cfg
    haproxy
    global
    stats socket /var/run/hapee-extras/gpe-api.sock
    aggregations toplevel
    from .intermediate to .aggregate
    peer top-gpe 0.0.0.0:10000 local
    peer intermediate-gpe1 192.168.56.111:10000 down
    peer intermediate-gpe2 192.168.56.112:10000 down
    • The current server has the local keyword set on its peer line.

    • In this example, two other Global Profiling Engine servers, intermediate-gpe1 and intermediate-gpe2, are listed with the down keyword, which means that they are one level down from the top.

    • The top-level Global Profiling Engine will aggregate stick table data from the intermediate servers. Their stick tables should have the .intermediate suffix.

    • The top-level Global Profiling Engine will push aggregated data back to the intermediate servers. The globally aggregated stick tables should have the .aggregate suffix.

Configure the intermediate Global Profiling Engines

Follow these steps on the servers you wish to be the intermediate-level Global Profiling Engines.

  1. Edit the file /etc/hapee-extras/hapee-gpe-stktagg.cfg.

    intermediate-gpe1

    hapee-gpe-stktagg.cfg
    haproxy
    global
    stats socket /var/run/hapee-extras/gpe-api.sock
    aggregations myaggr
    from any to .intermediate
    forward .aggregate
    peer intermediate-gpe1 0.0.0.0:10000 local
    peer top-gpe 192.168.56.113:10000 up
    peer enterprise1 192.168.50.41:10000
    peer enterprise2 192.168.50.42:10000

    intermediate-gpe2

    hapee-gpe-stktagg.cfg
    haproxy
    global
    stats socket /var/run/hapee-extras/gpe-api.sock
    aggregations myaggr
    from any to .intermediate
    forward .aggregate
    peer intermediate-gpe2 0.0.0.0:10000 local
    peer top-gpe 192.168.56.113:10000 up
    peer enterprise3 192.168.50.51:10000
    peer enterprise4 192.168.50.52:10000
    • The current server has the local keyword set on its peer line.

    • The upper-level Global Profiling Engine peer is denoted by the up keyword.

    • Each intermediate Global Profiling Engine is aware of only the HAProxy Enterprise nodes it manages and of the top-level Global Profiling Engine.

    • The intermediate-level Global Profiling Engines will aggregate stick table data from the HAProxy Enterprise servers.

    • The forward line relays the top-level server’s .aggregate stick tables to the HAProxy Enterprise servers.

    • The intermediate-level Global Profiling Engines will push aggregated data back to the HAProxy Enterprise servers. The aggregated stick tables should have the .aggregate suffix.

Configure for high availability

To create a highly available setup, you can have multiple top-level servers. Add the group parameter to them so that the intermediate servers recognize that there are two top-level servers.

intermediate-gpe1

hapee-gpe-stktagg.cfg
haproxy
global
stats socket /var/run/hapee-extras/gpe-api.sock
aggregations myaggr
from any to .intermediate
forward .aggregate
peer intermediate-gpe1 0.0.0.0:10000 local
peer top-gpe1 192.168.56.113:10000 up group 1
peer top-gpe2 192.168.56.114:10000 up group 1
peer enterprise1 192.168.50.41:10000
peer enterprise2 192.168.50.42:10000
