Gateway API tutorials

Use TCPRoute

Available since version 1.10

The Gateway API defines specialized resources for routing different types of network traffic. Use a TCPRoute resource to route TCP traffic.

To add routing for TCP traffic, you must define a Gateway that references your GatewayClass. You must also define a TCPRoute that uses that Gateway, specifying the ports required for your Service(s), and update your HAProxy Kubernetes Ingress Controller to listen on those same ports. Your Service(s) will then be able to send and receive TCP traffic through your Route.

Define a Gateway

With Gateway objects, cluster operators can choose which Gateway API implementations to use. In this example, we will use the HAProxy Kubernetes Ingress Controller, which implements the Gateway API.

Prerequisite: Configure the ingress controller

Before proceeding, be sure you have updated your ingress controller to enable the Gateway API.

To create a Gateway that listens on port 8000 and handles routing for Services in the default namespace:

  1. Create a file that defines a Gateway object.

    Below, in a file named example-gateway.yaml, we define a Gateway that uses the ingress controller referenced by the haproxy-ingress-gatewayclass GatewayClass. You created this GatewayClass when you enabled the Gateway API for the ingress controller:

    example-gateway.yaml
    yaml
    apiVersion: gateway.networking.k8s.io/v1alpha2
    kind: Gateway
    metadata:
      name: default-namespace-gateway
      namespace: default
    spec:
      gatewayClassName: haproxy-ingress-gatewayclass
      listeners:
      - allowedRoutes:
          kinds:
          - group: gateway.networking.k8s.io
            kind: TCPRoute
          namespaces:
            from: Same
        name: listener1
        port: 8000
        protocol: TCP

    In this example, the Gateway is deployed to the default namespace and will accept routes from the same namespace. Cluster operators can also deploy Gateways that accept routes from specific namespaces by changing the allowedRoutes.namespaces section to use a from attribute with one of the following values:

    Value      Description
    All        Matches routes from any namespace.
    Same       Matches routes from the same namespace where the Gateway is deployed.
    Selector   Matches routes from namespaces matching the selector attribute. In this case, add a selector attribute to define the match criteria.
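
    For example, if you choose Selector, the namespaces block might look like the following sketch. The environment: production label is only an illustration of match criteria you might use; it is not part of this tutorial's setup:

    yaml
    namespaces:
      from: Selector
      selector:
        matchLabels:
          environment: production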

    This Gateway object includes TCPRoute in its allowedRoutes.kinds section. This advertises that this Gateway, and by extension the ingress controller, watches for routes of this kind.

  2. Apply the changes with kubectl:

    nix
    kubectl apply -f example-gateway.yaml
  3. Optional: If you will be creating TCPRoute objects that are in a namespace different from the namespace of the target Service, you must define a ReferenceGrant object that allows cross-namespace communication.

    The ReferenceGrant definition below allows a Route in the default namespace to reference a Service in the foo namespace. Note that the to section does not need a namespace because a ReferenceGrant can refer only to resources defined in its own namespace.

    foo-referencegrant.yaml
    yaml
    apiVersion: gateway.networking.k8s.io/v1alpha2
    kind: ReferenceGrant
    metadata:
      name: refgrantns1
      namespace: foo
    spec:
      from:
      - group: "gateway.networking.k8s.io"
        kind: "TCPRoute"
        namespace: default
      to:
      - group: ""
        kind: "Service"

    Apply the changes with kubectl:

    nix
    kubectl apply -f foo-referencegrant.yaml
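
    With the ReferenceGrant in place, a TCPRoute in the default namespace could then point at a Service in the foo namespace by adding a namespace field to its backendRefs entry. The sketch below is illustrative only; foo-service is a hypothetical Service name:

    yaml
    rules:
    - backendRefs:
      - group: ''
        kind: Service
        name: foo-service
        namespace: foo
        port: 8000
        weight: 10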

Define Routes

Setting allowed Routes

Earlier when defining the Gateway, you set allowedRoutes to accept Routes of kind TCPRoute. This means that only those types of Routes will be handled by that Gateway.

To define routing for TCP traffic to an application named example-service:

  1. Create a file named example-route.yaml with the following contents:

    example-route.yaml
    yaml
    apiVersion: gateway.networking.k8s.io/v1alpha2
    kind: TCPRoute
    metadata:
      name: example-route
      namespace: default
    spec:
      parentRefs:
      - group: gateway.networking.k8s.io
        kind: Gateway
        name: default-namespace-gateway
        namespace: default
      rules:
      - backendRefs:
        - group: ''
          kind: Service
          name: example-service
          port: 8000
          weight: 10

    In this definition:

    • The parentRefs section references the Gateways to which a Route wants to attach. This TCPRoute will attach to listeners defined in the Gateway whose allowedRoutes have matching kind and namespace rules.
    • The backendRefs section refers to Services where connections should be sent. Each item’s port attribute is the Service’s listening port. The weight attribute sets the proportion of connections that should go to the Service, which is calculated by weight/(sum of all weights in this backendRefs list).
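
    For example, if a rule listed two backendRefs with weights 80 and 20, 80% of connections would go to the first Service and 20% to the second. The Service names below are hypothetical and only illustrate the weighting; this tutorial's Route uses a single backend:

    yaml
    rules:
    - backendRefs:
      - group: ''
        kind: Service
        name: example-service-v1
        port: 8000
        weight: 80
      - group: ''
        kind: Service
        name: example-service-v2
        port: 8000
        weight: 20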
  2. Apply the changes with kubectl:

    nix
    kubectl apply -f example-route.yaml

Update your ingress controller

To enable connectivity between the ingress controller and your Gateway, and therefore Route, you must update the ingress controller’s Service and Deployment objects to include the ports you configured in your Gateway and Route. Whether you installed the ingress controller via Helm or via kubectl will determine how you perform these updates.

Update with Helm

  1. Create a file named values.yaml that sets the TCP ports on which the controller should listen. If you created a values file for your initial installation or for altering other settings, such as enabling the Gateway API, reuse that file and update it with the following:

    values.yaml
    yaml
    controller:
      kubernetesGateway:
        enabled: true
        gatewayControllerName: haproxy.org/gateway-controller
      service:
        tcpPorts:
        - name: listener1
          protocol: TCP
          port: 8000
          targetPort: 8000
    • For each TCP port, add an entry to controller.service.tcpPorts with the following fields:
      • Provide a name for the port. The name cannot exceed 11 characters.
      • Set port and targetPort to the port on which the controller will listen for TCP traffic.
      • Set protocol to TCP.

    Caution

    Be sure your values file also includes the properties you added when you enabled Gateway API, namely the section kubernetesGateway. If you leave these out, Gateway API will be disabled.

  2. Execute the helm upgrade command, providing the name of the YAML values file with -f.

    For the community version:

    nix
    helm upgrade haproxy-kubernetes-ingress haproxytech/kubernetes-ingress \
    --namespace haproxy-controller \
    -f values.yaml
    For the enterprise version:

    nix
    helm upgrade haproxy-kubernetes-ingress haproxytech/kubernetes-ingress \
    --create-namespace \
    --namespace haproxy-controller \
    --set controller.imageCredentials.registry=kubernetes-registry.haproxy.com \
    --set controller.imageCredentials.username=<KEY> \
    --set controller.imageCredentials.password=<KEY> \
    --set controller.image.repository=kubernetes-registry.haproxy.com/hapee-ingress \
    --set controller.image.tag=v3.0 \
    -f values.yaml
    About Helm upgrade

    Performing a helm upgrade in this way uses the values file to automatically update the ingress controller's Deployment and Service. You can view the changes using kubectl get as follows:

    nix
    kubectl get deployment haproxy-kubernetes-ingress -n haproxy-controller -o yaml

    You will see the --gateway-controller-name startup argument was added to the Deployment:

    deployment/haproxy-kubernetes-ingress
    yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      [...]
      name: haproxy-kubernetes-ingress
      namespace: haproxy-controller
    spec:
      [...]
      template:
        [...]
        spec:
          containers:
          - args:
            - --gateway-controller-name=haproxy.org/gateway-controller

    The Deployment and Service are named haproxy-kubernetes-ingress for the community version and haproxy-ingress for the enterprise version. To see the Service, issue this command:

    nix
    kubectl get service haproxy-kubernetes-ingress -n haproxy-controller -o yaml

    You will see that your TCP ports have been added to the Service:

    haproxy-kubernetes-ingress service
    yaml
    apiVersion: v1
    kind: Service
    metadata:
      [...]
      name: haproxy-kubernetes-ingress
      namespace: haproxy-controller
    spec:
      [...]
      ports:
      [...]
      - name: listener1
        nodePort: 30305
        port: 8000
        protocol: TCP
        targetPort: 8000

Update with kubectl

To enable connectivity between your Service and the Gateway and Route using kubectl and YAML files, we will create patch files for the ingress controller's Deployment and Service.

  1. Examine your current ingress controller Deployment. This command will show your Deployment in YAML. Make note of any additional arguments in args for the haproxy-ingress container. You will need these arguments in the next step.

    For the community version:

    nix
    kubectl get deployment haproxy-kubernetes-ingress -n haproxy-controller -o yaml
    For the enterprise version:

    nix
    kubectl get deployment haproxy-ingress -n haproxy-controller -o yaml
  2. Create a new file named deployment-enable-gateway-api-patch.yaml and add the following to it. Be sure to include any additional startup arguments (args) that already exist in your Deployment, because the entire args list is replaced when the patch is applied:

    deployment-enable-gateway-api-patch.yaml
    yaml
    spec:
      template:
        spec:
          containers:
          - name: haproxy-ingress
            args:
            - --configmap=haproxy-controller/haproxy-kubernetes-ingress
            - --gateway-controller-name=haproxy.org/gateway-controller
  3. Apply the Deployment patch:

    For the community version:

    nix
    kubectl patch deployment haproxy-kubernetes-ingress --patch-file=deployment-enable-gateway-api-patch.yaml -n haproxy-controller
    output
    text
    deployment.apps/haproxy-kubernetes-ingress patched
    For the enterprise version:

    nix
    kubectl patch deployment haproxy-ingress --patch-file=deployment-enable-gateway-api-patch.yaml -n haproxy-controller
    output
    text
    deployment.apps/haproxy-ingress patched
  4. (Optional): Add an annotation to the Deployment to track the change within the resource. This will make it so that when you review the rollout history of the Deployment, this change has a record associated with it, which may assist in tracking changes and performing rollbacks. Note that this overwrites the original entry, which was blank.

    For the community version:

    nix
    kubectl annotate deployment haproxy-kubernetes-ingress kubernetes.io/change-cause="Updated haproxy-kubernetes-ingress Deployment to enable Gateway API support" --overwrite=true -n haproxy-controller
    output
    text
    deployment.apps/haproxy-kubernetes-ingress annotated

    Check the rollout history:

    nix
    kubectl rollout history deployment/haproxy-kubernetes-ingress -n haproxy-controller
    output
    text
    REVISION  CHANGE-CAUSE
    1         <none>
    2         Updated haproxy-kubernetes-ingress Deployment to enable Gateway API support
    For the enterprise version:

    nix
    kubectl annotate deployment haproxy-ingress kubernetes.io/change-cause="Updated haproxy-ingress Deployment to enable Gateway API support" --overwrite=true -n haproxy-controller
    output
    text
    deployment.apps/haproxy-ingress annotated

    Check the rollout history:

    nix
    kubectl rollout history deployment/haproxy-ingress -n haproxy-controller
    output
    text
    REVISION  CHANGE-CAUSE
    1         <none>
    2         Updated haproxy-ingress Deployment to enable Gateway API support
  5. Update the ingress controller’s Service object to include the ports you configured in your Gateway and Route. Consider the following snippet from the HAProxy Kubernetes Ingress Controller’s Service:

    Tip

    To retrieve the details for the Service in YAML format, use the following command. This command is for the enterprise version of the ingress controller:

    nix
    kubectl get service haproxy-ingress -n haproxy-controller -o yaml > haproxy-ingress-service.yaml

    The haproxy-ingress-service.yaml file will contain the YAML representation of the Service. Change the Service name you provide to the command from haproxy-ingress to haproxy-kubernetes-ingress to use this command with the community version.

    haproxy-ingress.yaml
    yaml
    spec:
      selector:
        run: haproxy-ingress
      type: NodePort
      ports:
      - name: http
        port: 80
        protocol: TCP
        targetPort: 80
      - name: https
        port: 443
        protocol: TCP
        targetPort: 443
      - name: stat
        port: 1024
        protocol: TCP
        targetPort: 1024
      - name: listener1
        protocol: TCP
        port: 8000
        targetPort: 8000

    We want to add listener1 to the list of ports on which the ingress controller listens. We can accomplish this using a patch. To patch the Service:

  6. Create a new file named service-enable-gateway-api-patch.yaml and add an entry for each new port to the ports section. When you apply the patch, it will append new ports to the existing ports list; it will not overwrite the existing ports.

    service-enable-gateway-api-patch.yaml
    yaml
    spec:
      ports:
      - name: listener1
        protocol: TCP
        port: 8000
        targetPort: 8000
    • For each TCP port, add an entry to ports with the following fields:
      • Provide a name for the port. The name cannot exceed 11 characters.
      • Set protocol to TCP.
      • Set port and targetPort to the port on which you will listen for TCP traffic.
  7. Apply the Service patch:

    For the community version:

    nix
    kubectl patch service haproxy-kubernetes-ingress --patch-file=service-enable-gateway-api-patch.yaml -n haproxy-controller
    output
    text
    service/haproxy-kubernetes-ingress patched
    For the enterprise version:

    nix
    kubectl patch service haproxy-ingress --patch-file=service-enable-gateway-api-patch.yaml -n haproxy-controller
    output
    text
    service/haproxy-ingress patched
    Track changes to Service resources

    Unlike Deployments, Services do not support rollout history. You can, however, view the previous version of the resource by calling kubectl get service as follows. This example is for the enterprise version. Change the service name haproxy-ingress to haproxy-kubernetes-ingress for use with the community version.

    nix
    kubectl get service haproxy-ingress -n haproxy-controller -o yaml > service_last_revision.yaml

    In the output file, there will be a property in metadata.annotations named kubectl.kubernetes.io/last-applied-configuration. Consider this a record of the previous configuration, prior to the patch being applied.

    yaml
    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"run":"haproxy-ingress"},"name":"haproxy-ingress","namespace":"haproxy-controller"},"spec":{"ports":[{"name":"http","port":80,"protocol":"TCP","targetPort":80},{"name":"https","port":443,"protocol":"TCP","targetPort":443},{"name":"stat","port":1024,"protocol":"TCP","targetPort":1024}],"selector":{"run":"haproxy-ingress"},"type":"NodePort"}}

    You can use a utility such as yq to convert this JSON to YAML after saving the JSON to its own file, such as one named last_revision.json:

    nix
    yq -oy last_revision.json &> last_revision.yaml

    You can then examine, edit, save as a backup, or apply the previous YAML that is now in the last_revision.yaml file.
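
    For example, if you need to revert the Service to that earlier configuration, you could re-apply the recovered file (assuming the extracted JSON was complete):

    nix
    kubectl apply -f last_revision.yaml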

Deploy an example application to test the Route

To test connectivity with the Gateway and Route, we will deploy an example application using a Deployment and a Service. Here, we define an example application using the busybox Docker container image. It will run netcat on port 8080, receiving TCP traffic:

  1. Create a file named busybox.yaml and paste the following YAML into it to define the example application. Both the Deployment and the Service are present in this YAML:

    busybox.yaml
    yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: busybox-deployment
      labels:
        app: busybox
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: busybox
      template:
        metadata:
          labels:
            app: busybox
        spec:
          containers:
          - name: busybox
            image: busybox
            command: ["sh", "-c", "while true; do nc -v -lk -p 8080; done"]
            ports:
            - containerPort: 8080
              protocol: TCP
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: example-service
    spec:
      selector:
        app: busybox
      ports:
      - protocol: TCP
        port: 8000
        targetPort: 8080

    Note that the Service is named example-service, the same Service name you specified when you defined your Route in the previous steps. This is how the TCPRoute knows what Service to route to.

  2. Apply the changes with kubectl:

    nix
    kubectl apply -f busybox.yaml
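
    Optionally, you can confirm that the Gateway has accepted the TCPRoute before testing. The status conditions reported depend on your controller version, but you can inspect them with:

    nix
    kubectl get tcproute example-route -n default -o yaml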
Test the connection through the load balancer

To test the connection through the load balancer (and through the Gateway and TCPRoute) to the BusyBox instance running netcat:

  1. Get the name of the BusyBox pod by calling kubectl get pod:

    nix
    kubectl get pod
    Example output
    NAME                                  READY   STATUS    RESTARTS   AGE
    busybox-deployment-6fbb645fd4-cfkwp   1/1     Running   0          12m
  2. Use the following command to find the NodePort assigned to your Service. In this example, the NodePort associated with TCP port 8000 is 30190.

    nix
    kubectl get service -n haproxy-controller
    output
    text
    NAME                         TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                                  AGE
    haproxy-kubernetes-ingress   NodePort   10.106.101.211   <none>        80:30187/TCP,443:32147/TCP,443:32147/UDP,1024:30168/TCP,6060:30721/TCP,8000:30190/TCP   20m
  3. From a server that has a connection to your cluster, such as the server from which you run kubectl, use netcat to connect to the port assigned as the NodePort for your TCP Service. There is no output from this command.

    nix
    nc 127.0.0.1 30190
  4. Check the logs of the BusyBox pod to confirm that a connection was made:

    nix
    kubectl logs busybox-deployment-6fbb645fd4-cfkwp
    output
    connect to [::ffff:10.0.2.134]:8080 from 10-0-1-186.haproxy-kubernetes-ingress.haproxy-controller.svc.cluster.local:55288 ([::ffff:10.0.1.186]:55288)

Optional: route to multiple TCPRoutes from the same Gateway

You can use the same Gateway to route to multiple TCPRoutes. In this case, you would alter your Gateway as follows, adding a separate listener for each TCPRoute. Note that listener1 listens for external traffic on port 8000 and listener2 on port 8001:

example-gateway.yaml
yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: Gateway
metadata:
  name: default-namespace-gateway
  namespace: default
spec:
  gatewayClassName: haproxy-ingress-gatewayclass
  listeners:
  - allowedRoutes:
      kinds:
      - group: gateway.networking.k8s.io
        kind: TCPRoute
      namespaces:
        from: Same
    name: listener1
    port: 8000
    protocol: TCP
  - allowedRoutes:
      kinds:
      - group: gateway.networking.k8s.io
        kind: TCPRoute
      namespaces:
        from: Same
    name: listener2
    port: 8001
    protocol: TCP
[...]

Reference each listener in separate TCPRoutes. In this example, there are two routes, example-route1 corresponding to listener1, and example-route2 corresponding to listener2. Use the sectionName property to denote which listener the TCPRoute should use from the specified Gateway. Note that both Routes will communicate with their respective Services on port 8000:

example-routes.yaml
yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
  name: example-route1
  namespace: default
spec:
  parentRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: default-namespace-gateway
    namespace: default
    sectionName: listener1
  rules:
  - backendRefs:
    - group: ''
      kind: Service
      name: example-service1
      port: 8000
      weight: 10
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
  name: example-route2
  namespace: default
spec:
  parentRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: default-namespace-gateway
    namespace: default
    sectionName: listener2
  rules:
  - backendRefs:
    - group: ''
      kind: Service
      name: example-service2
      port: 8000
      weight: 10
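
As described in the earlier sections, the ingress controller must also listen on any new port you add, so remember to expose port 8001 on the controller's Service as well, either by adding a second entry to controller.service.tcpPorts in your Helm values file or by extending the Service patch. A minimal values.yaml sketch, assuming the Helm setup used in this tutorial:

values.yaml
yaml
controller:
  kubernetesGateway:
    enabled: true
    gatewayControllerName: haproxy.org/gateway-controller
  service:
    tcpPorts:
    - name: listener1
      protocol: TCP
      port: 8000
      targetPort: 8000
    - name: listener2
      protocol: TCP
      port: 8001
      targetPort: 8001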

See also

Kubernetes SIG Network TCP routing guide
