# Installation: External mode on-premises

**Available since:** version 1.7
In external mode, the ingress controller runs outside of your Kubernetes cluster. In this guide, you will learn how to set up external mode using an on-premises cluster and Project Calico.
In this scenario, we deploy a custom Kubernetes installation that uses Project Calico as its Container Networking Interface (CNI) plugin. A CNI plugin is responsible for defining the virtual network that pods use to communicate with one another. Because a pod network is typically accessible only to Kubernetes pods, we need a way to bridge this network with a public-facing, external network.
Project Calico has the ability to perform BGP peering between the pod network and an external network, allowing us to install and run the ingress controller external to Kubernetes, while still receiving IP route advertisements that enable it to relay traffic to pods.
We will use the following components:
| Component | Description |
|---|---|
| HAProxy Kubernetes Ingress Controller | The ingress controller runs as a standalone process outside of your Kubernetes cluster. |
| Project Calico | A network plugin for Kubernetes. It supports BGP peering, which allows pods inside your Kubernetes cluster to share their IP addresses with a server outside of the cluster. |
| BIRD Internet Routing Daemon | A software-defined router. It receives routes from Project Calico and makes them available to the ingress controller. |
## Prepare servers for Kubernetes
Deploy Linux servers that will host your Kubernetes components.
You will need:
- a control plane server: one Linux server to run the Kubernetes control plane and be responsible for managing the cluster and hosting the Kubernetes API.
- worker nodes: one or more Linux servers to act as Kubernetes worker nodes, which host pods.
- ingress controller server: one Linux server to run HAProxy Kubernetes Ingress Controller.
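Throughout this guide, the examples assume the following addresses; substitute your own:

| Server | Example IP address |
|---|---|
| Control plane server | 192.168.56.10 |
| Worker node | 192.168.56.11 |
| Ingress controller server | 192.168.56.13 |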
On the control plane server and worker nodes, perform these steps:
1. Follow the Install Docker Engine guide to install Docker and containerd on the server. Containerd will serve as the container runtime in Kubernetes.

2. By default, the containerd configuration file, `/etc/containerd/config.toml`, disables the Container Runtime Interface (CRI) plugin that Kubernetes needs. We also need to enable systemd cgroups, because `kubeadm` installs the Kubernetes service, kubelet, as a systemd service. The easiest method is to generate a default configuration file and then modify it with `sed`, the search-and-replace tool:

   ```sh
   containerd config default | sudo tee /etc/containerd/config.toml
   sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
   sudo systemctl restart containerd
   ```
3. Disable swap, as required by the Kubernetes kubelet service:

   ```sh
   sudo swapoff -a
   ```
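   Note that `swapoff -a` disables swap only until the next reboot. A common way to make the change persistent, shown here as a sketch rather than part of the original steps, is to comment out swap entries in `/etc/fstab`:

   ```sh
   # Comment out any swap lines so swap stays disabled after a reboot
   sudo sed -i '/ swap / s/^/#/' /etc/fstab
   ```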
4. Follow the Installing kubeadm guide to install the `kubeadm`, `kubectl`, and `kubelet` packages. We will use the `kubeadm` tool to install Kubernetes.
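   As a reference, on Debian or Ubuntu the package installation usually follows the shape below; the v1.30 package stream in the URLs is an assumption, so take the exact commands for your distribution and version from the Installing kubeadm guide:

   ```sh
   sudo apt-get update
   sudo apt-get install -y apt-transport-https ca-certificates curl gpg
   # Add the Kubernetes package repository signing key (v1.30 stream assumed)
   sudo mkdir -p /etc/apt/keyrings
   curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key \
     | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
   echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' \
     | sudo tee /etc/apt/sources.list.d/kubernetes.list
   sudo apt-get update
   sudo apt-get install -y kubelet kubeadm kubectl
   # Hold the packages so routine upgrades don't break the cluster
   sudo apt-mark hold kubelet kubeadm kubectl
   ```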
## Configure the Kubernetes control plane server
At least one server must become the central management server, otherwise known as the control plane. On that server, perform the following additional steps:
1. Call `kubeadm init` to install Kubernetes on this server. Replace the value of `--apiserver-advertise-address` with your server's IP address. Set `--pod-network-cidr` to the IP range you want to use for your Kubernetes cluster's private network. Be sure that this range does not overlap with other IP ranges already in use on your network.

   ```sh
   sudo kubeadm init \
     --cri-socket unix:///run/containerd/containerd.sock \
     --pod-network-cidr 172.16.0.0/16 \
     --apiserver-advertise-address 192.168.56.10
   ```

   | Argument | Description |
   |---|---|
   | `--cri-socket` | Sets the path to the containerd CRI socket. |
   | `--pod-network-cidr` | Sets the range of IP addresses to use for the pod network. Each new pod will receive an IP address in this range. The IP range `172.16.0.0/16` allows up to 65534 unique IP addresses. |
   | `--apiserver-advertise-address` | Optional. Add this argument if your server has more than one IP address assigned to it, to specify the address on which the Kubernetes API should listen. |

   Refer to the kubeadm init documentation for more information about these and other arguments.
2. After the installation, a kubeconfig file is created at `/etc/kubernetes/admin.conf`. It contains settings for connecting to the new Kubernetes cluster. Copy it to your home directory and to the root user's home directory. This lets you connect to the Kubernetes API with `kubectl`; later, we will also configure Project Calico, which connects as root, to use this kubeconfig file.

   ```sh
   sudo mkdir $HOME/.kube
   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
   sudo chown $(id -u):$(id -g) $HOME/.kube/config
   sudo mkdir /root/.kube
   sudo cp -i /etc/kubernetes/admin.conf /root/.kube/config
   sudo chown root:root /root/.kube/config
   ```
3. Optional: If the server has more than one IP address assigned to it, you must configure the Kubernetes kubelet service to use the correct one. In the file `/etc/default/kubelet` (or `/etc/sysconfig/kubelet`), set the `--node-ip` argument to your server's IP address. It's also a good idea to set the path to the containerd socket explicitly via the `--container-runtime-endpoint` argument. Then restart the service.

   ```sh
   sudo touch /etc/default/kubelet
   echo "KUBELET_EXTRA_ARGS=--node-ip=192.168.56.10 --container-runtime-endpoint=unix:///run/containerd/containerd.sock" | sudo tee /etc/default/kubelet
   sudo systemctl daemon-reload
   sudo systemctl restart kubelet
   ```
4. At this point, the Kubernetes control plane should be running. Use the `kubectl get pods` command to check that the pods are running successfully. It is normal for the coredns pods to be in the Pending state at this stage, since the network plugin is not yet installed.

   ```sh
   kubectl get pods -A
   ```

   ```txt
   NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
   kube-system   coredns-787d4945fb-8p8nz               0/1     Pending   0          3m41s
   kube-system   coredns-787d4945fb-dngw9               0/1     Pending   0          3m41s
   kube-system   etcd-controlplane                      1/1     Running   0          3m52s
   kube-system   kube-apiserver-controlplane            1/1     Running   0          3m52s
   kube-system   kube-controller-manager-controlplane   1/1     Running   0          3m54s
   kube-system   kube-proxy-rk7wg                       1/1     Running   0          3m41s
   kube-system   kube-scheduler-controlplane            1/1     Running   0          3m57s
   ```
5. Install the Project Calico Container Network Interface (CNI) plugin. We use Project Calico because it supports BGP peering, which we'll need for connecting the ingress controller to the Kubernetes cluster's private network.

   Refer to the Project Calico Quickstart guide for instructions on installing the operator and custom resource definitions. Note that by default Project Calico expects a pod network CIDR of `192.168.0.0/16`. Since we are using `172.16.0.0/16` instead, edit the `custom-resources.yaml` file before installing it.

   In the example below, we change the `spec.calicoNetwork.ipPools.cidr` field to `172.16.0.0/16`:

   ```yaml
   # custom-resources.yaml
   # This section includes base Calico installation configuration.
   # For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
   apiVersion: operator.tigera.io/v1
   kind: Installation
   metadata:
     name: default
   spec:
     # Configures Calico networking.
     calicoNetwork:
       bgp: Enabled
       # Note: The ipPools section cannot be modified post-install.
       ipPools:
         - blockSize: 26
           cidr: 172.16.0.0/16
           encapsulation: VXLANCrossSubnet
           natOutgoing: Enabled
           nodeSelector: all()
   ---
   # This section configures the Calico API server.
   # For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
   apiVersion: operator.tigera.io/v1
   kind: APIServer
   metadata:
     name: default
   spec: {}
   ```

   Then create the resources:

   ```sh
   kubectl create -f ./custom-resources.yaml
   ```
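   If you haven't already installed the Tigera operator as described in the quickstart, that step (which must come before creating `custom-resources.yaml`) typically looks like the sketch below; the Calico version in the URL is an assumption, so use the one given in the quickstart:

   ```sh
   kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
   ```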
6. Download the `calicoctl` command-line tool. Copy it to the `/usr/local/bin` directory and set its permissions to make it executable:

   ```sh
   sudo cp ./calicoctl /usr/local/bin
   sudo chmod +x /usr/local/bin/calicoctl
   ```
7. Create a file named `/etc/calico/calicoctl.cfg`:

   ```sh
   sudo mkdir /etc/calico
   sudo touch /etc/calico/calicoctl.cfg
   ```

   Add the following contents to it, which configure `calicoctl` to connect to your Kubernetes cluster using the kubeconfig file from the root user's home directory:

   ```yaml
   # calicoctl.cfg
   apiVersion: projectcalico.org/v3
   kind: CalicoAPIConfig
   metadata:
   spec:
     datastoreType: "kubernetes"
     kubeconfig: "/root/.kube/config"
   ```
8. Create a file named `/etc/calico/calico-bgp.yaml`:

   ```sh
   sudo touch /etc/calico/calico-bgp.yaml
   ```

   Add the following to it to enable BGP peering with your external network. Change the `peerIP` field to the IP address of the server where you will run the ingress controller.

   ```yaml
   # calico-bgp.yaml
   apiVersion: projectcalico.org/v3
   kind: BGPConfiguration
   metadata:
     name: default
   spec:
     logSeverityScreen: Info
     nodeToNodeMeshEnabled: true
     asNumber: 65000
   ---
   # ingress controller server
   apiVersion: projectcalico.org/v3
   kind: BGPPeer
   metadata:
     name: my-global-peer
   spec:
     peerIP: 192.168.56.13
     asNumber: 65000
   ```

   | Field | Description |
   |---|---|
   | `asNumber` | Defines the BGP autonomous system (AS) number you wish to use. |
   | `peerIP` | Defines the IP address of the server where you will install the ingress controller. |

   Apply it with the `calicoctl apply` command:

   ```sh
   sudo calicoctl apply -f /etc/calico/calico-bgp.yaml
   ```
9. Create an empty ConfigMap resource in your cluster, which the ingress controller requires at startup:

   ```sh
   sudo kubectl create configmap haproxy-kubernetes-ingress
   ```
10. To verify the setup, call `calicoctl node status`. The Info column should show a connection error, such as the "No route to host" message below. This is expected, because we have not yet configured the ingress controller server to serve as the neighboring BGP peer.

    ```sh
    sudo calicoctl node status
    ```

    ```txt
    Calico process is running.

    IPv4 BGP status
    +---------------+-----------+-------+----------+--------------------------------+
    | PEER ADDRESS  | PEER TYPE | STATE | SINCE    | INFO                           |
    +---------------+-----------+-------+----------+--------------------------------+
    | 192.168.56.13 | global    | start | 22:53:20 | Connect Socket: No route to    |
    |               |           |       |          | host                           |
    +---------------+-----------+-------+----------+--------------------------------+

    IPv6 BGP status
    No IPv6 peers found.
    ```
## Configure the Kubernetes worker nodes
Kubernetes worker nodes host pods. On each server that you wish to register as a worker node in the Kubernetes cluster, after following the steps in the Prepare servers for Kubernetes section, perform these additional steps:
1. On the control plane server, call `kubeadm token create --print-join-command`, which prints the `kubeadm join` command you need to join a server to the cluster:

   ```sh
   kubeadm token create --print-join-command
   ```

   Copy its output and run it on the worker node server:

   ```sh
   sudo kubeadm join 192.168.56.10:6443 \
     --token jqfhgn.bgvy9xko70q82awu \
     --discovery-token-ca-cert-hash sha256:ce4dfb0efa64a0bb9071268c7a94258a9fef56be89e909a21f16f2528d8c880b
   ```

   ```txt
   This node has joined the cluster:
   * Certificate signing request was sent to apiserver and a response was received.
   * The Kubelet was informed of the new secure connection details.

   Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
   ```
2. Optional: If the server has more than one IP address assigned to it, you must configure the Kubernetes kubelet service to use the correct one. In the file `/etc/default/kubelet` (or `/etc/sysconfig/kubelet`), set the `--node-ip` argument to your server's IP address. It's also a good idea to set the path to the containerd socket explicitly via the `--container-runtime-endpoint` argument. Then restart the service.

   ```sh
   sudo touch /etc/default/kubelet
   echo "KUBELET_EXTRA_ARGS=--node-ip=192.168.56.11 --container-runtime-endpoint=unix:///run/containerd/containerd.sock" | sudo tee /etc/default/kubelet
   sudo systemctl daemon-reload
   sudo systemctl restart kubelet
   ```
## Install the ingress controller outside of your cluster
On a separate server not joined to your Kubernetes cluster, follow these steps to install the HAProxy Kubernetes Ingress Controller as a standalone process.
1. Copy the `/etc/kubernetes/admin.conf` kubeconfig file from the control plane server to this server and store it in the root user's home directory. The ingress controller will use it to connect to the Kubernetes API.

   ```sh
   sudo mkdir -p /root/.kube
   sudo cp admin.conf /root/.kube/config
   sudo chown -R root:root /root/.kube
   ```
2. HAProxy Kubernetes Ingress Controller is compatible with a specific version of HAProxy. Install the HAProxy package for your Linux distribution based on the table below. For Ubuntu and Debian, follow the install steps at haproxy.debian.net.

   | Ingress controller version | Compatible HAProxy version |
   |---|---|
   | 3.0 | 3.0 |
   | 1.11 | 2.8 |
   | 1.10 | 2.7 |
   | 1.9 | 2.6 |
   | 1.8 | 2.5 |
   | 1.7 | 2.4 |
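   As an example, on Ubuntu the haproxy.debian.net instructions follow this general shape; the PPA name and version pin below assume ingress controller 1.11 paired with HAProxy 2.8, so adjust both to match the table and the site's instructions:

   ```sh
   sudo apt-get install --no-install-recommends software-properties-common
   # The PPA name encodes the HAProxy branch (2.8 assumed here)
   sudo add-apt-repository ppa:vbernat/haproxy-2.8
   sudo apt-get update
   sudo apt-get install haproxy=2.8.\*
   ```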
3. Stop and disable the HAProxy service:

   ```sh
   sudo systemctl stop haproxy
   sudo systemctl disable haproxy
   ```
4. Call the `setcap` command to allow HAProxy to bind to ports 80 and 443:

   ```sh
   sudo setcap cap_net_bind_service=+ep /usr/sbin/haproxy
   ```
5. Download the ingress controller from the project's GitHub Releases page. Extract it and then copy it to the `/usr/local/bin` directory. For example:

   ```sh
   wget https://github.com/haproxytech/kubernetes-ingress/releases/download/v1.8.8/haproxy-ingress-controller_1.8.8_Linux_x86_64.tar.gz
   tar -xzvf haproxy-ingress-controller_1.8.8_Linux_x86_64.tar.gz
   sudo cp ./haproxy-ingress-controller /usr/local/bin/
   ```
6. Create the file `/lib/systemd/system/haproxy-ingress.service` and add the following to it:

   ```ini
   # haproxy-ingress.service
   [Unit]
   Description="HAProxy Kubernetes Ingress Controller"
   Documentation=https://www.haproxy.com/
   Requires=network-online.target
   After=network-online.target

   [Service]
   Type=simple
   User=root
   Group=root
   ExecStart=/usr/local/bin/haproxy-ingress-controller --external --configmap=default/haproxy-kubernetes-ingress --program=/usr/sbin/haproxy --disable-ipv6 --ipv4-bind-address=0.0.0.0 --http-bind-port=80 --ingress.class=haproxy
   ExecReload=/bin/kill --signal HUP $MAINPID
   KillMode=process
   KillSignal=SIGTERM
   Restart=on-failure
   LimitNOFILE=65536

   [Install]
   WantedBy=multi-user.target
   ```
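   A few flags worth noting: `--external` runs the controller as a standalone process outside the cluster, `--configmap=default/haproxy-kubernetes-ingress` points at the empty ConfigMap you created on the control plane, and `--program` supplies the path to the HAProxy binary you installed earlier.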
7. Enable and start the service:

   ```sh
   sudo systemctl enable haproxy-ingress
   sudo systemctl start haproxy-ingress
   ```
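   To confirm that the controller started cleanly, you can check the service and follow its logs with standard systemd tooling:

   ```sh
   sudo systemctl status haproxy-ingress
   sudo journalctl -u haproxy-ingress -f
   ```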
## Install the BIRD Internet Routing Daemon
To enable the ingress controller to route requests to pods in your Kubernetes cluster, it must get routing information via BGP from the Project Calico network plugin. To do that, install the BIRD Internet Routing Daemon, which acts as a software-defined router that adds IP routes to the ingress controller server.
**Supported BIRD versions:** only BIRD 1.x is supported at this time.
1. On the ingress controller server, install BIRD:

   ```sh
   sudo add-apt-repository -y ppa:cz.nic-labs/bird
   sudo apt update
   sudo apt install bird
   ```
2. Edit the file named `bird.conf` in the `/etc/bird` directory. Add the following contents to it, but change:

   - the `router id` to the current server's IP address. This is the IP address of the ingress controller server.
   - the `local` line's IP address in each `protocol bgp` section to the current server's IP address. Again, this is the IP address of the ingress controller server.
   - the `neighbor` line in each `protocol bgp` section to the IP address of a node in your Kubernetes cluster. One of these should be the control plane server's IP address.
   - the `import filter` to match the pod network's IP range that you set earlier with `kubeadm init`. The `{26,26}` suffix accepts only /26 routes, which matches the `blockSize: 26` set in the Calico IP pool.

   ```
   # bird.conf
   router id 192.168.56.13;
   log syslog all;

   # control plane node
   protocol bgp {
     local 192.168.56.13 as 65000;
     neighbor 192.168.56.10 as 65000;
     direct;
     import filter {
       if ( net ~ [ 172.16.0.0/16{26,26} ] ) then accept;
     };
     export none;
   }

   # worker node
   protocol bgp {
     local 192.168.56.13 as 65000;
     neighbor 192.168.56.11 as 65000;
     direct;
     import filter {
       if ( net ~ [ 172.16.0.0/16{26,26} ] ) then accept;
     };
     export none;
   }

   # Inserts routes into the kernel routing table
   protocol kernel {
     scan time 60;
     export all;
   }

   # Gets information about network interfaces from the kernel
   protocol device {
     scan time 60;
   }
   ```

   Each `protocol bgp` section connects BIRD to a Kubernetes node via iBGP; each node is considered a neighbor. This example uses 65000 as the autonomous system (AS) number, but you can choose a different value.
3. Enable and start the BIRD service:

   ```sh
   sudo systemctl enable bird
   sudo systemctl restart bird
   ```
After completing these steps, the ingress controller is configured to communicate with your Kubernetes cluster and, once you've added an Ingress resource using `kubectl`, it can route traffic to pods. Learn about creating Ingress resources for routing traffic in the section Use HAProxy Kubernetes Ingress Controller to route HTTP traffic.

Be sure to allow the servers to communicate by adding rules to your firewall.
On the ingress controller server, calling `sudo birdc show protocols` should show that connections have been established with the control plane server and any worker nodes:

```sh
sudo birdc show protocols
```

```txt
BIRD 1.6.8 ready.
name     proto    table    state  since       info
bgp1     BGP      master   up     22:38:44    Established
bgp2     BGP      master   up     22:38:43    Established
kernel1  Kernel   master   up     22:38:43
device1  Device   master   up     22:38:43
```

On the control plane server, calling `calicoctl node status` should show that BGP peering has been established with the ingress controller, which has a peer type of global, and with any worker nodes, which are connected through the Project Calico node-to-node mesh:

```sh
sudo calicoctl node status
```

```txt
Calico process is running.

IPv4 BGP status
+---------------+-------------------+-------+----------+-------------+
| PEER ADDRESS  | PEER TYPE         | STATE | SINCE    | INFO        |
+---------------+-------------------+-------+----------+-------------+
| 192.168.56.13 | global            | up    | 23:06:44 | Established |
| 192.168.56.11 | node-to-node mesh | up    | 23:12:00 | Established |
+---------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.
```