Troubleshooting
Enable core dumps
In the rare event that an HAProxy Enterprise process crashes or behaves abnormally, you can capture a core dump (also known as a crash dump) that you can send to the Support team. A core dump is a file that encapsulates the state of an application when it crashes and is useful in diagnosing and fixing potential issues. Core dumps are not enabled by default, so you must configure your OS settings to allow the collection of these files.
Enable core dumps
- Enable the core dump handler. This sets the core dump handler inside the default HAProxy Enterprise change root (chroot) environment.

  Tip: Find the chroot directory

  The default chroot environment for HAProxy Enterprise is `/var/empty`. You can find this value in the configuration file `/etc/hapee-2.9/hapee-lb.cfg` in the `global` section.

  The default chroot environment is `/var/empty`. We want core dumps to be saved in `/var/empty/tmp`. The kernel setting `kernel.core_pattern` sets this value; because the worker process runs chrooted in `/var/empty`, the `/tmp` path below resolves to `/var/empty/tmp` on the host.

  ```nix
  sudo sysctl -w fs.suid_dumpable=1
  sudo sysctl -w kernel.core_pattern=/tmp/core.%P.%u.%g.%s.%t
  ```
- Optional: Persist the configuration so that core dumps are still enabled after a reboot. Add the following lines to `/etc/sysctl.d/99-sysctl.conf`. This again sets the directory for saving core dumps to `/tmp` inside of the chroot environment, `/var/empty`.

  `99-sysctl.conf`:

  ```text
  # add these lines to the end of the file
  fs.suid_dumpable=1
  kernel.core_pattern=/tmp/core.%P.%u.%g.%s.%t
  ```
- Create a subdirectory inside of the chroot environment with permissions that allow the `hapee-lb` user to write to it. This subdirectory should be the same as the directory you specified for `kernel.core_pattern` in the previous step. We will create a `/tmp` directory inside of `/var/empty` and set its permissions:

  ```nix
  sudo mkdir /var/empty/tmp
  sudo chmod 0777 /var/empty/tmp
  ```
- To set the maximum size of a core dump file, add the `DefaultLimitCORE` setting to the file `/etc/systemd/system.conf`. Below, we set the value to `infinity`.

  `system.conf`:

  ```text
  # add this line to the end of the file
  DefaultLimitCORE=infinity
  ```
- Restart the Systemd daemon. Processes running under Systemd will not be affected by this restart.

  ```nix
  sudo systemctl daemon-reexec
  ```
- Edit the HAProxy Enterprise configuration so that it includes `set-dumpable` in the `global` section:

  ```haproxy
  global
    set-dumpable
  ```
- Restart HAProxy Enterprise. After the restart, you can verify your settings as shown below.

  ```nix
  sudo systemctl restart hapee-2.9-lb
  ```
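To confirm that the settings took effect, you can check the kernel parameters, the dump directory, and the core file size limit of the running worker process. The following is a minimal verification sketch assuming the default paths used in this guide; the `pgrep` pattern is an assumption based on the process names shown later on this page, so adjust it if your process names differ.

```nix
# Check the kernel settings applied above
sysctl fs.suid_dumpable kernel.core_pattern

# Confirm the dump directory exists inside the chroot and is writable
ls -ld /var/empty/tmp

# Check the core file size limit of the running worker process
# (the pgrep pattern is an example; substitute your worker's PID if needed)
cat /proc/$(pgrep -u hapee-lb -f hapee-lb | head -n1)/limits | grep -i "core file"
```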
Retrieve core dumps
After a crash in HAProxy Enterprise, the system will generate a core dump file and place it in one of two locations:
- If the fault occurred in HAProxy Enterprise’s master process, the core dump file will be in `/tmp`.
- If it occurred in a worker process, it will be in the location you configured as your `kernel.core_pattern` (probably `/var/empty/tmp`).
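To check both locations at once, you can list any core files found in either directory. This is a small sketch that assumes the default paths described above:

```nix
# List core files in both possible locations (paths assume the defaults used in this guide)
ls -lh /tmp/core.* /var/empty/tmp/core.* 2>/dev/null
```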
In one of those locations will be a file whose name starts with `core`. This file is the core dump. The core file will look like `core.17442.997.994.6.1689180587.17442`.
The core dump file name has significance. If you configure your `kernel.core_pattern` to name files with the pattern `core.%P.%u.%g.%s.%t`, the resulting file name will include:

| Variable | Description |
|---|---|
| `%P` | Process ID of the dumped process (as it appears in the initial PID namespace). |
| `%u` | UID of the dumped process. |
| `%g` | GID of the dumped process. |
| `%s` | Number of the signal that caused the dump. |
| `%t` | Unix time of the dump. |
The last part of the filename is also the process ID.
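For example, using the sample filename above, you can decode the individual fields from the shell. This is an optional, illustrative sketch (it assumes GNU `date` and a shell whose `kill -l` accepts a signal number):

```nix
# core.17442.997.994.6.1689180587.17442
#       PID . UID . GID . signal . Unix time . PID
date -d @1689180587   # convert the Unix timestamp to a readable date
kill -l 6             # show the name of signal 6 (ABRT)
```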
Produce a core dump for a running process
It is possible to retrieve a core dump from HAProxy Enterprise without adjusting resource limits, changing kernel settings, or restarting HAProxy Enterprise. This is possible with a utility named `gcore`. Retrieving the core dump in this way may be useful when it is not possible to complete those steps, or for capturing process state information when a process is stuck but has not crashed. The downside to this approach is that, unlike the previous procedures, which enable core dumps for any future crashes, using `gcore` is a manual procedure.
- To produce a core dump for a running HAProxy Enterprise process, first find the process ID, or PID, using `ps`. The `ps` command will produce two process IDs. The first column of the output shows the user. The second column is the PID.

  In this output, the master process, the process run under `root`, has a process ID of `19973`. The worker process, the process run under user `hapee-lb`, has a process ID of `19975`.

  ```nix
  ps -ef | grep hapee
  ```

  ```text
  root     19973     1  0 17:39 ?  00:00:00 /opt/hapee-2.9/sbin/hapee-lb -Ws -f /etc/hapee-2.9/hapee-lb.cfg -p /run/hapee-2.9-lb.pid
  hapee-lb 19975 19973  0 17:39 ?  00:00:00 /opt/hapee-2.9/sbin/hapee-lb -Ws -f /etc/hapee-2.9/hapee-lb.cfg -p /run/hapee-2.9-lb.pid
  ```
- If the `gcore` utility is not installed, you can install it using your package manager. It is packaged with `gdb`.

  Debian/Ubuntu:

  ```nix
  sudo apt-get install gdb
  ```

  RHEL/CentOS:

  ```nix
  sudo yum install gdb
  ```
- Use the `gcore` command with a process ID to produce a core dump file in your current working directory.

  ```nix
  sudo gcore 19973
  ```

  ```text
  Using host libthread_db library "/lib64/libthread_db.so.1".
  0x00007ffb49cfdea3 in epoll_wait () from /lib64/libc.so.6
  Saved corefile core.19973
  [Inferior 1 (process 19973) detached]
  ```
Produce a core dump for a stuck process
If the `gcore` utility is not available, `prlimit` can be used, together with a forced crash, to produce a core dump for a running process that is stuck. `prlimit` sets resource limits dynamically for running processes in the current session.
Warning
Proceed with this method only as a last resort, and only if the application is totally nonresponsive. The process will be stopped abruptly, and this may result in unexpected behavior (such as unsaved changes).
- To produce a core file, find the process ID, or PID, using `ps`. The `ps` command will produce two process IDs. The first column of the output shows the user. The second column is the PID.

  In this output, the master process, the process run under `root`, has a process ID of `19973`. The worker process, the process run under user `hapee-lb`, has a process ID of `19975`.

  ```nix
  ps -ef | grep hapee
  ```

  ```text
  root     19973     1  0 17:39 ?  00:00:00 /opt/hapee-2.9/sbin/hapee-lb -Ws -f /etc/hapee-2.9/hapee-lb.cfg -p /run/hapee-2.9-lb.pid
  hapee-lb 19975 19973  0 17:39 ?  00:00:00 /opt/hapee-2.9/sbin/hapee-lb -Ws -f /etc/hapee-2.9/hapee-lb.cfg -p /run/hapee-2.9-lb.pid
  ```
- Set the core file size limit to `unlimited` for the process. This example sets the core file size limit for the process with ID `19973` to `unlimited`:

  ```nix
  sudo prlimit --core=unlimited:unlimited --pid=19973
  ```

  You can check the limits for a process using the `prlimit` command:

  ```nix
  sudo prlimit --pid=19973
  ```

  ```text
  RESOURCE   DESCRIPTION                             SOFT      HARD UNITS
  AS         address space limit                unlimited unlimited bytes
  CORE       max core file size                 unlimited unlimited blocks
  CPU        CPU time                           unlimited unlimited seconds
  DATA       max data size                      unlimited unlimited bytes
  FSIZE      max file size                      unlimited unlimited blocks
  LOCKS      max number of file locks held      unlimited unlimited
  MEMLOCK    max locked-in-memory address space     65536     65536 bytes
  MSGQUEUE   max bytes in POSIX mqueues            819200    819200 bytes
  NICE       max nice prio allowed to raise             0         0
  NOFILE     max number of open files               20027     20027
  NPROC      max number of processes                 7163      7163
  RSS        max resident set size              unlimited unlimited pages
  RTPRIO     max real-time priority                     0         0
  RTTIME     timeout for real-time tasks        unlimited unlimited microsecs
  SIGPENDING max number of pending signals           7163      7163
  STACK      max stack size                       8388608 unlimited bytes
  ```
- Enable the core dump handler. This sets the core dump handler inside the default HAProxy Enterprise change root (chroot) environment.

  Tip

  The default chroot environment for HAProxy Enterprise is `/var/empty`. You can find this value in the configuration file `/etc/hapee-2.9/hapee-lb.cfg` in the `global` section.

  The default chroot environment is `/var/empty`. We want core dumps to be saved in `/var/empty/tmp`. The kernel setting `kernel.core_pattern` sets this value.

  ```nix
  sudo sysctl -w fs.suid_dumpable=1
  sudo sysctl -w kernel.core_pattern=/tmp/core.%P.%u.%g.%s.%t
  ```
- Create a subdirectory inside of the chroot environment with permissions that allow the `hapee-lb` user to write to it. This subdirectory should be the same as the directory you specified for `kernel.core_pattern` in the previous step. This is required to generate core dumps for HAProxy Enterprise’s worker processes.

  We will create a `/tmp` directory inside of `/var/empty` and set its permissions:

  ```nix
  sudo mkdir /var/empty/tmp
  sudo chmod o+w /var/empty/tmp
  ```
- For the process that is stuck, force a crash. This command will abruptly stop the process with PID `19973`:

  ```nix
  sudo kill -SIGABRT 19973
  ```

  After forcing HAProxy Enterprise to stop abruptly, you may need to restart the service for it to resume processing.

  ```nix
  systemctl restart hapee-2.9-lb
  ```
- After forcing the crash, the system will generate a core dump file and place it in one of two locations:

  - If you issued the `kill` command on HAProxy Enterprise’s master process, the core dump file will be in `/tmp`.
  - If you issued the `kill` command on one of HAProxy Enterprise’s worker processes, it will be in the location you configured as your `kernel.core_pattern` (probably `/var/empty/tmp`).
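Note that the limit raised with `prlimit` applies only to the specific process you targeted, so the new master and worker processes started by the restart run with their original limits again. If you expect to repeat this procedure, you can re-check the new PIDs and their core file size limit. The commands below are a small sketch; the PID shown is an example, so substitute the worker PID from your own `ps` output.

```nix
# Find the new master and worker PIDs after the restart
ps -ef | grep hapee

# Show only the core file size limit for a given PID (example PID; replace with yours)
sudo prlimit --core --pid=20112
```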
Enable core dumps for Docker
You can enable core dumps when running HAProxy Enterprise as a Docker container. To enable core dumps in Docker:
- Configure the kernel settings on your host instance (the instance running Docker) to specify the location for saving core dumps. This location is communicated to all Docker containers running on the instance.

  This sets the kernel setting for `core_pattern` to specify that core dump files should be saved to `/tmp`. Make sure that the directory you specify for `core_pattern` exists.

  ```nix
  echo '/tmp/core.%P.%u.%g.%s.%t' | sudo tee /proc/sys/kernel/core_pattern
  ```
- Start the HAProxy Enterprise Docker container with the following arguments. These are similar to the arguments provided for starting the container normally, without enabling core dumps (see: Install HAProxy Enterprise on Docker), with a few additional arguments added to enable core dumps within the container.

  We are providing three additional parameters to the `docker run` command: `--init`, `--ulimit`, and `--mount`.

  ```nix
  sudo docker run \
    --name hapee-2.9 \
    --init \
    --ulimit core=-1 \
    --mount type=bind,source=/tmp/,target=/tmp/ \
    -d \
    -p 80:80 \
    -p 443:443 \
    -p 5555:5555 \
    -v $(pwd):/etc/hapee-2.9 \
    --restart=unless-stopped \
    hapee-registry.haproxy.com/haproxy-enterprise:2.9r1
  ```

  Be sure to specify the directory containing your configuration files using `-v`.

  - `--init` tells Docker to implement signal handling for the container. This is required to catch an application crash.
  - `--ulimit core=-1` sets the core dump file size limit to `unlimited`.
  - `--mount type=bind,source=/tmp/,target=/tmp/` tells Docker to mount the `/tmp` directory on the host instance into the container.

    The Docker container is read-only, and as such, the core dump files cannot be saved inside of the container. Specifying this `mount` argument guarantees that the core files still exist on your host system after the container is stopped or deleted.

    In the previous step, we set the location for saving core dump files to `/tmp`, so we provide two additional parameters for `mount`, `source` and `target`, each also set to `/tmp`.

    The Docker container inherits the kernel settings from the host instance, so we expect the Docker container to write core dump files to the `/tmp` directory.
- Core dump files produced by crashes in both HAProxy Enterprise’s master process and its worker processes will be placed in `/tmp` on the host instance.
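To confirm that the container picked up these settings, you can check the core pattern and core file size limit from inside the running container, and watch the host’s `/tmp` directory for core files. This is an optional sketch; it assumes the container name `hapee-2.9` from the `docker run` command above and that the container image provides a shell.

```nix
# The container inherits core_pattern from the host kernel
sudo docker exec hapee-2.9 cat /proc/sys/kernel/core_pattern

# Confirm the core file size limit set by --ulimit core=-1 (expected output: unlimited)
sudo docker exec hapee-2.9 sh -c 'ulimit -c'

# After a crash, core files appear in /tmp on the host because of the bind mount
ls -lh /tmp/core.* 2>/dev/null
```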