Diagnostics

Learn about the diagnostic tools shipped with CSM Operator, as well as a number of useful kubectl commands that you can use to gather diagnostic information.

Cloudera provides various command line tools that you can use to capture diagnostic bundles, thread dumps, and other types of information about your CSM Operator installation. You use these tools when contacting Cloudera support or when troubleshooting issues.

The following three tools are available:

  • report.sh – A diagnostic bundle tool that captures various information about your CSM Operator installation.
  • java_thread_dump.sh – A thread dump capturing tool that collects thread dumps of containers in a specified pod.
  • kafka_shell.sh – An administrative tool that sets up a pod where you can easily run Kafka command line tools.

Diagnostic tools are not downloaded, deployed, or installed when you install CSM Operator and its components. You must download and run them separately. All tools are available for download from the Cloudera Archive. They are located in the /csm-operator/1.0/tools/ directory.

In addition to the tools provided by Cloudera, you can also use kubectl to gather diagnostics and troubleshooting data.

Capturing a diagnostic bundle with report.sh

Use report.sh to capture diagnostic information about your deployment.

CSM Operator diagnostic bundles are captured using the report.sh command line tool. The bundle that the tool captures is used as the baseline when contacting Cloudera support for assistance with CSM Operator. The bundle captures all available, cluster-wide information about CSM Operator.

  • Ensure that you have access to your Cloudera credentials (username and password).
  • Ensure that the environment where you run the tool has the following:
    • Bash 4 or higher
    • GNU utilities:
      • echo
      • grep
      • sed
      • date
    • base64
    • kubectl or oc
    • A kubeconfig file configured to target the Kubernetes cluster
    • zip
  1. Download the tool.
    curl --user [***USERNAME***] \
      https://archive.cloudera.com/p/csm-operator/1.0/tools/report.sh \
      --output report.sh \
    && chmod +x report.sh

    Replace [***USERNAME***] with your Cloudera username. Enter your Cloudera password when prompted.

  2. Capture a diagnostic bundle.
    ./report.sh

    The tool prints the resources it collects information on. Afterward, it generates a diagnostic bundle ZIP (report file). The path of the generated ZIP is printed to the standard output.
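    Once the bundle is generated, you can verify its contents before sending it to support. The following is a minimal check, assuming the tool wrote a file named report.zip; the actual file name may differ, so use the path printed by report.sh.

    ```shell
    # List the contents of the generated bundle without extracting it.
    # "report.zip" is a placeholder; substitute the path printed by report.sh.
    unzip -l report.zip
    ```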

Capturing a thread dump of a pod with java_thread_dump.sh

Use java_thread_dump.sh to capture a thread dump of a pod.

Some types of issues require investigating the threads of the components running in a CSM Operator installation. You can use the java_thread_dump.sh command line tool to capture the thread dumps of all containers of a specific pod with the specified number of samples and frequency.
  • Ensure that you have access to your Cloudera credentials (username and password).
  • Ensure that the environment where you run the tool has the following:
    • Bash 4 or higher
    • GNU utilities:
      • echo
      • grep
      • sed
      • date
    • kubectl or oc
    • A kubeconfig file configured to target the Kubernetes cluster
    • zip
  1. Download the tool.
    curl --user [***USERNAME***] \
      https://archive.cloudera.com/p/csm-operator/1.0/tools/java_thread_dump.sh \
      --output java_thread_dump.sh \
    && chmod +x java_thread_dump.sh
    Replace [***USERNAME***] with your Cloudera username. Enter your Cloudera password when prompted.
  2. Capture a thread dump of a pod.
    ./java_thread_dump.sh --namespace=[***POD NAMESPACE***] \
      --pod=[***POD NAME***] \
      --dumps=[***NUMBER OF THREAD DUMPS***] \
      --interval=[***DUMP INTERVAL IN SECONDS***]

    The tool collects the specified number of thread dumps for the specified pod with the specified interval. Afterward, it generates a ZIP (report file) containing the thread dumps.
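    For example, assuming a Kafka broker pod named my-cluster-kafka-0 in the kafka namespace (both names are placeholders), the following command collects three thread dumps taken ten seconds apart:

    ```shell
    # Collect 3 thread dumps from all containers of the pod,
    # waiting 10 seconds between samples.
    ./java_thread_dump.sh --namespace=kafka \
      --pod=my-cluster-kafka-0 \
      --dumps=3 \
      --interval=10
    ```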

Using kafka_shell.sh

Use kafka_shell.sh to set up a pod where Kafka CLI tools are readily available.

Kafka is shipped with a number of useful CLI tools. Easy access to these tools is essential for administering and troubleshooting your cluster. The kafka_shell.sh command line tool creates a pod where all Kafka CLI tools are readily available, and full Kafka admin client configurations are prepared.

The pod created by kafka_shell.sh:

  • Uses the Kafka Docker image. This means that Kafka CLI tools are readily accessible within the pod.
  • Has both a truststore and keystore present that give you administrative privileges.
  • Has a ready-to-use client configuration file available at /tmp/client.properties.
  • Has bootstrap server configuration available in the BOOTSTRAP_SERVERS environment variable.

You can use the tool in two ways: either interactively, or by piping one-off commands into it.

  • Ensure that you have access to your Cloudera credentials (username and password).
  • Ensure that the environment where you run the tool has the following:
    • Bash 4 or higher
    • GNU utilities:
      • echo
      • grep
      • sed
      • date
      • cut
      • head
    • kubectl or oc
    • A kubeconfig file configured to target the Kubernetes cluster
    • zip
  1. Download the tool.
    curl --user [***USERNAME***] \
      https://archive.cloudera.com/p/csm-operator/1.0/tools/kafka_shell.sh \
      --output kafka_shell.sh \
    && chmod +x kafka_shell.sh
    Replace [***USERNAME***] with your Cloudera username. Enter your Cloudera password when prompted.
  2. Use the tool.
    You have two choices. You can use the tool interactively: in this case, running the tool opens an interactive shell where you run your Kafka CLI commands. Alternatively, you can use a pipe ( | ) to run Kafka CLI commands one at a time.
    1. Run the tool.
      ./kafka_shell.sh \
        --namespace=[***KAFKA CLUSTER NAMESPACE***] \
        --cluster=[***KAFKA CLUSTER NAME***]
    2. Run your Kafka CLI command within the shell that opens.

      For example, you can list your topics with the following command.

      bin/kafka-topics.sh \
        --list \
        --command-config /tmp/client.properties \
        --bootstrap-server $BOOTSTRAP_SERVERS

      The kafka-shell pod is deleted after you exit the interactive shell.

    To run one-off commands, pipe them into kafka_shell.sh. For example:
    echo 'bin/kafka-topics.sh \
      --list \
      --command-config /tmp/client.properties \
      --bootstrap-server $BOOTSTRAP_SERVERS' \
    | ./kafka_shell.sh --namespace=[***KAFKA CLUSTER NAMESPACE***] \
      --cluster=[***KAFKA CLUSTER NAME***]
    The kafka-shell pod is deleted after you run your command.
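    Other Kafka CLI tools can be run the same way. For example, the following one-off command lists consumer groups; the namespace (kafka) and cluster (my-cluster) names are placeholders:

    ```shell
    # List consumer groups using the prepared admin client configuration.
    echo 'bin/kafka-consumer-groups.sh \
      --list \
      --command-config /tmp/client.properties \
      --bootstrap-server $BOOTSTRAP_SERVERS' \
    | ./kafka_shell.sh --namespace=kafka \
      --cluster=my-cluster
    ```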

Monitoring pod status during reconciliation

After applying a change to the deployment configuration, you can check the status of the pods using kubectl get pods.

kubectl get pods --namespace [***NAMESPACE***] --output wide --watch
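If the namespace contains multiple clusters, you can narrow the watch to the pods of a single Kafka cluster by filtering on the strimzi.io/cluster label that the Strimzi Cluster Operator applies to the resources it manages. The namespace (kafka) and cluster (my-cluster) names below are placeholders.

```shell
# Watch only the pods that belong to the my-cluster Kafka cluster.
kubectl get pods --namespace kafka \
  --selector strimzi.io/cluster=my-cluster \
  --output wide --watch
```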

Reading Strimzi Cluster Operator logs

The Strimzi Cluster Operator log contains useful information about the tasks that the operator performs and details for failed operations. You can check the Strimzi Cluster Operator logs with kubectl logs.

kubectl logs [***STRIMZI CLUSTER OPERATOR POD***] --namespace [***NAMESPACE***]
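If you do not know the name of the Strimzi Cluster Operator pod, you can select it by label and stream its log in one step. This assumes the operator Deployment uses the default name=strimzi-cluster-operator label; the namespace (kafka) is a placeholder.

```shell
# Find the operator pod by its label, then follow its log.
kubectl logs --namespace kafka \
  --selector name=strimzi-cluster-operator \
  --follow
```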

Reading effective generated Kafka broker properties

You can get the effective Kafka properties of a broker using kubectl exec. Broker properties are generated by the Strimzi Cluster Operator.

kubectl exec -it \
  --namespace [***NAMESPACE***] \
  [***KAFKA BROKER POD***] \
  --container kafka \
  -- /bin/bash -c "cat /tmp/strimzi.properties"
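To check a single property rather than the whole file, you can filter the generated configuration with grep. For example, the following command inspects the listener configuration; the namespace (kafka) and pod (my-cluster-kafka-0) names are placeholders.

```shell
# Print only the listener-related properties generated by the operator.
kubectl exec --namespace kafka my-cluster-kafka-0 \
  --container kafka \
  -- /bin/bash -c "grep listener /tmp/strimzi.properties"
```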

Reading Kafka broker logs

You can check the Kafka broker logs with kubectl logs.

kubectl logs [***KAFKA BROKER POD***] --namespace [***NAMESPACE***] -f
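
If a broker container has crashed and restarted, the current log may not contain the failure. You can retrieve the log of the previously terminated container instance with the --previous flag of kubectl logs.

```shell
# Read the log of the previously terminated container in the broker pod.
kubectl logs [***KAFKA BROKER POD***] --namespace [***NAMESPACE***] --previous
```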