Troubleshooting

Enable core dumps

In the rare event that an HAProxy Enterprise process crashes or behaves abnormally, you can capture a core dump (also known as a crash dump) that you can send to the Support team. A core dump is a file that encapsulates the state of an application when it crashes and is useful in diagnosing and fixing potential issues. Core dumps are not enabled by default, so you must configure your OS settings to allow the collection of these files.

Enable core dumps

  1. Enable the core dump handler. This sets the core dump location inside the default HAProxy Enterprise change root (chroot) environment.

    Find the chroot directory

    The default chroot environment for HAProxy Enterprise is /var/empty. You can find this value in the configuration file /etc/hapee-3.0/hapee-lb.cfg in the global section.

    The default chroot environment is /var/empty, and we want core dumps to be saved in /var/empty/tmp. Because HAProxy Enterprise runs chrooted, the path /tmp in kernel.core_pattern resolves to /var/empty/tmp on the host. Setting fs.suid_dumpable=1 allows processes that have changed privileges, such as the hapee-lb workers, to produce core dumps. A sample global section is shown at the end of this step.

    nix
    sudo sysctl -w fs.suid_dumpable=1
    sudo sysctl -w kernel.core_pattern=/tmp/core.%P.%u.%g.%s.%t
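
    For reference, the chroot directive in the global section of hapee-lb.cfg may look similar to the following minimal excerpt (other directives omitted):

    hapee-lb.cfg
    haproxy
    global
        chroot /var/empty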
  2. Optional: Persist the configuration so that core dumps remain enabled after a reboot. Add the following lines to /etc/sysctl.d/99-sysctl.conf. This again sets the directory for saving core dumps to /tmp inside the chroot environment, /var/empty.

    99-sysctl.conf
    text
    # add these lines to the end of the file
    fs.suid_dumpable=1
    kernel.core_pattern=/tmp/core.%P.%u.%g.%s.%t
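
    If you skipped step 1, you can load the persisted settings immediately, without a reboot:

    nix
    sudo sysctl --system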
  3. Create a subdirectory inside the chroot environment with permissions that allow the hapee-lb user to write to it. This subdirectory must match the directory you specified for kernel.core_pattern in the previous step. We will create a /tmp directory inside /var/empty and set its permissions:

    nix
    sudo mkdir /var/empty/tmp
    sudo chmod 0777 /var/empty/tmp
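
    You can confirm that the directory exists and has the expected permissions:

    nix
    ls -ld /var/empty/tmp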
  4. To set the maximum size of a core dump file, add the DefaultLimitCORE setting to the file /etc/systemd/system.conf. Below, we set the value to infinity.

    system.conf
    text
    # add this line to the end of the file
    DefaultLimitCORE=infinity
  5. Re-execute the systemd manager so that the new default limit takes effect. Processes running under systemd will not be affected by this restart.

    nix
    sudo systemctl daemon-reexec
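
    You can confirm that the new default is active by querying the systemd manager configuration; the exact output format may vary by systemd version:

    nix
    systemctl show --property DefaultLimitCORE
    output
    text
    DefaultLimitCORE=infinity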
  6. Edit the HAProxy Enterprise configuration so that it includes set-dumpable in the global section:

    haproxy
    global
    set-dumpable
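
    Before restarting, you can optionally verify that the configuration file is still valid. The binary path below assumes a default installation:

    nix
    sudo /opt/hapee-3.0/sbin/hapee-lb -c -f /etc/hapee-3.0/hapee-lb.cfg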
  7. Restart HAProxy Enterprise.

    nix
    sudo systemctl restart hapee-3.0-lb
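
    After the restart, you can check that the core file size limit is unlimited for the running processes. This example assumes the binary is named hapee-lb, as in the default installation:

    nix
    grep -i "core" /proc/$(pidof -s hapee-lb)/limits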

Retrieve core dumps

After a crash in HAProxy Enterprise, the system will generate a core dump file and place it in one of two locations:

  • If the fault occurred in HAProxy Enterprise’s master process, the core dump file will be in /tmp.
  • If it occurred in a worker process, it will be in the location you configured as your kernel.core_pattern (probably /var/empty/tmp).

In one of those locations you will find a file whose name starts with core; this file is the core dump. Its name will look similar to core.17442.997.994.6.1689180587.17442.
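
To locate the file, you can list both directories; depending on which process crashed, only one of them will contain a core file:

nix
ls -lh /tmp/core.* /var/empty/tmp/core.*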

The core dump file name encodes details about the crash. If you configure kernel.core_pattern to name files with the pattern core.%P.%u.%g.%s.%t, the resulting file name will include:

Variable  Description
%P        Process ID of the dumped process (as it appears in the initial PID namespace).
%u        UID of the dumped process.
%g        GID of the dumped process.
%s        Number of the signal that caused the dump.
%t        Unix time of the dump.

The last part of the file name is also the process ID, which the kernel appends when kernel.core_uses_pid is enabled and the pattern does not contain %p.
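
For example, applying this pattern to the file name core.17442.997.994.6.1689180587.17442 shows a dump from PID 17442, UID 997, GID 994, caused by signal 6 (SIGABRT). You can convert the Unix timestamp into a readable date; the exact output depends on your local time zone:

nix
date -d @1689180587
output
text
Wed Jul 12 16:49:47 UTC 2023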

Produce a core dump for a running process

It is possible to retrieve a core dump from HAProxy Enterprise without adjusting resource limits, changing kernel settings, or restarting HAProxy Enterprise by using a utility named gcore. Retrieving the core dump in this way is useful when you cannot complete those steps, or when you need to capture the state of a process that is stuck but has not crashed. The downside is that, unlike the previous procedure, which enables core dumps for any future crash, gcore must be run manually each time.

  1. To produce a core dump for a running HAProxy Enterprise process, first find the process ID, or PID, using ps. The ps command lists two HAProxy Enterprise processes. The first column of the output shows the user. The second column is the PID.

    In this output, the master process (run as root) has a process ID of 19973. The worker process (run as the hapee-lb user) has a process ID of 19975.

    nix
    ps -ef | grep hapee
    output
    text
    root 19973 1 0 17:39 ? 00:00:00 /opt/hapee-3.0/sbin/hapee-lb
    -Ws -f /etc/hapee-3.0/hapee-lb.cfg -p /run/hapee-3.0-lb.pid
    hapee-lb 19975 19973 0 17:39 ? 00:00:00 /opt/hapee-3.0/sbin/hapee-lb
    -Ws -f /etc/hapee-3.0/hapee-lb.cfg -p /run/hapee-3.0-lb.pid
  2. If the gcore utility is not installed, you can install it using your package manager. It is packaged with gdb.

    nix
    sudo apt-get install gdb
    nix
    sudo yum install gdb
  3. Use the gcore command with a process ID to produce a core dump file in your current working directory.

    nix
    sudo gcore 19973
    output
    text
    Using host libthread_db library "/lib64/libthread_db.so.1".
    0x00007ffb49cfdea3 in epoll_wait () from /lib64/libc.so.6
    Saved corefile core.19973
    [Inferior 1 (process 19973) detached]
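
    Before sending the file to the Support team, you can optionally confirm that it is a valid core dump; the file utility should identify it as an ELF core file:

    nix
    file core.19973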

Produce a core dump for a stuck process

If the gcore utility is not available, you can use prlimit to produce a core dump for a running process that is stuck. prlimit gets and sets the resource limits of running processes without restarting them.

Warning

Proceed with this method only as a last resort, and only if the application is totally nonresponsive. The process will be stopped abruptly, and this may result in unexpected behavior (such as unsaved changes).

  1. To produce a core file, find the process ID, or PID, using ps. The ps command lists two HAProxy Enterprise processes. The first column of the output shows the user. The second column is the PID.

    In this output, the master process (run as root) has a process ID of 19973. The worker process (run as the hapee-lb user) has a process ID of 19975.

    nix
    ps -ef | grep hapee
    output
    text
    root 19973 1 0 17:39 ? 00:00:00 /opt/hapee-3.0/sbin/hapee-lb
    -Ws -f /etc/hapee-3.0/hapee-lb.cfg -p /run/hapee-3.0-lb.pid
    hapee-lb 19975 19973 0 17:39 ? 00:00:00 /opt/hapee-3.0/sbin/hapee-lb
    -Ws -f /etc/hapee-3.0/hapee-lb.cfg -p /run/hapee-3.0-lb.pid
  2. Set the core file size limit to unlimited for the process. This example does so for the process with ID 19973:

    nix
    sudo prlimit --core=unlimited:unlimited --pid=19973

    You can check the limits for a process using the prlimit command:

    nix
    sudo prlimit --pid=19973
    output
    text
    RESOURCE DESCRIPTION SOFT HARD UNITS
    AS address space limit unlimited unlimited bytes
    CORE max core file size unlimited unlimited blocks
    CPU CPU time unlimited unlimited seconds
    DATA max data size unlimited unlimited bytes
    FSIZE max file size unlimited unlimited blocks
    LOCKS max number of file locks held unlimited unlimited
    MEMLOCK max locked-in-memory address space 65536 65536 bytes
    MSGQUEUE max bytes in POSIX mqueues 819200 819200 bytes
    NICE max nice prio allowed to raise 0 0
    NOFILE max number of open files 20027 20027
    NPROC max number of processes 7163 7163
    RSS max resident set size unlimited unlimited pages
    RTPRIO max real-time priority 0 0
    RTTIME timeout for real-time tasks unlimited unlimited microsecs
    SIGPENDING max number of pending signals 7163 7163
    STACK max stack size 8388608 unlimited bytes
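
    To display only the core file size limit, pass the resource option without a value:

    nix
    sudo prlimit --core --pid=19973
    output
    text
    RESOURCE DESCRIPTION SOFT HARD UNITS
    CORE max core file size unlimited unlimited blocks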
  3. Enable the core dump handler. This sets the core dump location inside the default HAProxy Enterprise change root (chroot) environment.

    Tip

    The default chroot environment for HAProxy Enterprise is /var/empty. You can find this value in the configuration file /etc/hapee-3.0/hapee-lb.cfg in the global section.

    The default chroot environment is /var/empty, and we want core dumps to be saved in /var/empty/tmp. Because HAProxy Enterprise runs chrooted, the path /tmp in kernel.core_pattern resolves to /var/empty/tmp on the host. Setting fs.suid_dumpable=1 allows processes that have changed privileges, such as the hapee-lb workers, to produce core dumps.

    nix
    sudo sysctl -w fs.suid_dumpable=1
    sudo sysctl -w kernel.core_pattern=/tmp/core.%P.%u.%g.%s.%t
  4. Create a subdirectory inside the chroot environment with permissions that allow the hapee-lb user to write to it. This subdirectory must match the directory you specified for kernel.core_pattern in the previous step. This is required to generate core dumps for HAProxy Enterprise’s worker processes.

    We will create a /tmp directory inside /var/empty and set its permissions:

    nix
    sudo mkdir /var/empty/tmp
    sudo chmod o+w /var/empty/tmp
  5. For the process that is stuck, force a crash. This command will abruptly stop the process with PID 19973:

    nix
    sudo kill -SIGABRT 19973

    After forcing HAProxy Enterprise to stop abruptly, you may need to restart the service for it to resume processing.

    nix
    sudo systemctl restart hapee-3.0-lb
  6. After forcing the crash, the system will generate a core dump file and place it in one of two locations:

    • If you issued the kill command on HAProxy Enterprise’s master process, the core dump file will be in /tmp.
    • If you issued the kill command on one of HAProxy Enterprise’s worker processes, it will be in the location you configured as your kernel.core_pattern (probably /var/empty/tmp).

Enable core dumps for Docker

You can also enable core dumps when running HAProxy Enterprise as a Docker container. To do so:

  1. Configure the kernel settings on your host instance (the instance running Docker) to specify the location for saving core dumps. Because containers share the host kernel, this setting applies to all Docker containers running on the instance.

    This sets kernel.core_pattern so that core dump files are saved to /tmp. Make sure that the directory you specify for core_pattern exists.

    nix
    echo '/tmp/core.%P.%u.%g.%s.%t' | sudo tee /proc/sys/kernel/core_pattern
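
    You can verify that the setting took effect:

    nix
    cat /proc/sys/kernel/core_pattern
    output
    text
    /tmp/core.%P.%u.%g.%s.%t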
  2. Start the HAProxy Enterprise Docker container with the following arguments. These are the same arguments used to start the container normally (see: Install HAProxy Enterprise on Docker), plus a few additional arguments that enable core dumps within the container.

    We provide three additional parameters to the docker run command: --init, --ulimit, and --mount.

    nix
    sudo docker run \
    --name hapee-3.0 \
    --init \
    --ulimit core=-1 \
    --mount type=bind,source=/tmp/,target=/tmp/ \
    -d \
    -p 80:80 \
    -p 443:443 \
    -p 5555:5555 \
    -v $(pwd):/etc/hapee-3.0 \
    --restart=unless-stopped \
    hapee-registry.haproxy.com/haproxy-enterprise:3.0r1

    Be sure to specify the directory containing your configuration files using -v.

    • --init tells Docker to implement signal handling for the container. This is required to catch an application crash.
    • --ulimit core=-1 sets the core dump file size limit to unlimited.
    • --mount type=bind,source=/tmp/,target=/tmp/ tells Docker to mount the /tmp directory on the host instance into the container.

    The Docker container is read-only, and as such, the core dump files cannot be saved inside the container. Specifying this mount argument ensures that the core files still exist on your host system after the container is stopped or deleted.

    In the previous step, we set the location for saving core dump files to /tmp, so we must provide two additional parameters for mount, source and target, each also set to /tmp.

    The Docker container inherits the kernel settings from the host instance, so we expect the Docker container to write core dump files to the /tmp directory.
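
    Once the container is running, you can check that the core file size limit is unlimited inside it. This example assumes the container name hapee-3.0 from the docker run command above and that a shell is available in the image:

    nix
    sudo docker exec hapee-3.0 sh -c 'ulimit -c'
    output
    text
    unlimited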

  3. Core dump files produced by crashes in both HAProxy Enterprise’s master process and its worker processes will be placed in /tmp on the host instance.
