Route

Routes are an OpenShift solution for exposing Kubernetes Services at a public URL so that external clients can reach your applications running in the Kubernetes cluster.

In CSM Operator, you set up external cluster access using OpenShift routes by adding a listener of type route to your Kafka resource (type: route).

Once configuration is done, CSM Operator deploys multiple routes as well as multiple ClusterIP type Kubernetes Services. This means that you will have the following:

  • A route and a corresponding ClusterIP Service that serve as the external bootstrap. Clients use these for the initial connection and for receiving metadata (advertised listeners) from the Kafka cluster.
  • A dedicated route and ClusterIP Service for each Kafka broker. The routes and their corresponding ClusterIP Services are used to access the brokers directly and to distinguish the traffic of the different brokers.

Kafka clients connect to the bootstrap route, which routes the request through the bootstrap ClusterIP Service to one of the brokers. From this broker, the client receives metadata that contains the hostnames of the per-broker routes. The client uses these addresses to connect to the route dedicated to a specific broker. The route then directs traffic through its corresponding ClusterIP Service to the broker.

CSM Operator uses the HAProxy router and sets up routes with passthrough termination. This results in the following:

  • Traffic going through a route is always secured and uses TLS encryption.
  • Encrypted traffic is sent to the ClusterIP Service without data being decrypted in the process.
  • The port that the routes listen on is fixed and is always 443. This is because HAProxy uses port 443 by default for HTTPS requests.

CSM Operator collects the hostnames assigned to the routes and uses them to configure the advertised listeners of the Kafka brokers. As a result, the brokers are automatically configured to advertise the correct addresses and ports. Once setup is complete, you can connect clients running outside of the Kubernetes network by directing them to the bootstrap route. Kubernetes and OpenShift handle everything else and ensure that client requests are routed to the correct brokers.
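The address handling described above can be sketched as follows, using the hypothetical hostnames from the configuration example later in this topic. In a real cluster, the hostnames are taken from the route status rather than set by hand.

```shell
# Sketch only: hypothetical route hostnames matching the example configuration.
BOOTSTRAP_HOST=kafka-bootstrap.router.com
BROKER_HOSTS="kafka-0.router.com kafka-1.router.com kafka-2.router.com"

# 1. The client's only required setting is the bootstrap route,
#    which always listens on the fixed HAProxy HTTPS port 443.
echo "bootstrap.servers=${BOOTSTRAP_HOST}:443"

# 2. The metadata returned through the bootstrap connection advertises
#    one route per broker; the client then connects to these directly.
for host in $BROKER_HOSTS; do
  echo "advertised listener: ${host}:443"
done
```

Note that the per-broker addresses are never configured on the client; they are discovered from the metadata returned by the cluster.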

Configuring route listeners

Complete the following steps to set up and configure a route type listener in CSM Operator. The steps also include an example of how to connect a Kafka console client to the cluster.

These steps demonstrate basic listener configuration with typical customizations. In addition to the configuration shown here, you can further customize your listener and specify a client authentication mechanism with the authentication property and add various additional configurations using the configuration property. For a comprehensive list of available properties, see GenericKafkaListener schema reference in the Strimzi API reference.

  1. Configure your Kafka resource.
    Add an external listener that has its type property set to route. Additionally, you must ensure that tls is set to true, as TLS encryption is mandatory when using routes.

    Optionally, you can further customize the listener. For example, the following configuration snippet shows an example where the hostnames of routes are specified with the host property.

    #...
    kind: Kafka
    spec:
      kafka:
        listeners:
          - name: external
            port: 9094
            type: route
            tls: true
            authentication:
              type: tls
            configuration:
              bootstrap:
                host: kafka-bootstrap.router.com
              brokers:
                - broker: 0
                  host: kafka-0.router.com
                - broker: 1
                  host: kafka-1.router.com
                - broker: 2
                  host: kafka-2.router.com
    
  2. Verify that the configured services are created and ready.
    oc get svc
  3. Get the host of the bootstrap route.
    oc get routes [***CLUSTER NAME***]-kafka-bootstrap \
      --output=jsonpath='{.status.ingress[0].host}{"\n"}'
  4. Extract the TLS certificate from your broker and import it into a Java truststore file.

    Extracting the TLS certificate is required because TLS encryption is mandatory when using routes. Because of this, your clients must trust the cluster CA certificate. You can use the OpenShift CLI (oc) to extract the certificate and the keytool utility to import it into a Java truststore file. For example:

    oc extract secret/[***CLUSTER NAME***]-cluster-ca-cert \
      --keys=ca.crt --to=- > ca.crt
    keytool -import -trustcacerts -alias [***ALIAS***] \
      -file ca.crt \
      -keystore truststore.jks \
      -storepass [***PASSWORD***] \
      -noprompt
  5. Ensure that the resulting truststore is available on the machine where you run your client and that the client has access to the file.
  6. Configure and run your client.
    The following example shows a Kafka console producer.
    kafka-console-producer.sh \
      --bootstrap-server [***BOOTSTRAP ROUTE HOST***]:443 \
      --producer-property security.protocol=SSL \
      --producer-property ssl.truststore.password=[***PASSWORD***] \
      --producer-property ssl.truststore.location=[***TRUSTSTORE LOCATION***] \
      --topic [***TOPIC***]
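As an alternative to passing each SSL property on the command line, the same settings can be collected in a properties file and passed to the client with the --producer.config option. The following is a sketch; the file name is an example.

```
# client-ssl.properties (file name is an example)
security.protocol=SSL
ssl.truststore.location=[***TRUSTSTORE LOCATION***]
ssl.truststore.password=[***PASSWORD***]
```

You can then run kafka-console-producer.sh --bootstrap-server [***BOOTSTRAP ROUTE HOST***]:443 --producer.config client-ssl.properties --topic [***TOPIC***]. The corresponding option for the console consumer is --consumer.config.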