Diagnostics
Learn about collecting diagnostic information with the diagnostic tools provided for CSM Operator, as well as a number of useful kubectl commands that you can use to gather troubleshooting data.
Cloudera provides various command line tools that you can use to capture diagnostic bundles, thread dumps, and other types of information about your CSM Operator installation. You use these tools when contacting Cloudera support or when troubleshooting issues.
There are three tools available:
- report.sh – A diagnostic bundle tool that captures various information about your CSM Operator installation.
- java_thread_dump.sh – A thread dump capturing tool that collects thread dumps of the containers in a specified pod.
- kafka_shell.sh – An administrative tool that sets up a pod where you can easily run Kafka command line tools.
Diagnostic tools are not downloaded, deployed, or installed when you install CSM Operator and its components. You must download and run them separately. All tools are available for download from the Cloudera Archive. They are located in the /csm-operator/1.1/tools/ directory.
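For example, you can download a tool with curl. The following is an illustrative sketch only: the exact archive host and URL path depend on your subscription, and the [***USERNAME***] and [***PASSWORD***] placeholders stand for your Cloudera credentials.
# Illustrative URL: verify the actual tool location in the Cloudera Archive
curl --user [***USERNAME***]:[***PASSWORD***] --remote-name \
  https://archive.cloudera.com/p/csm-operator/1.1/tools/report.sh
chmod +x report.sh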
In addition to the tools provided by Cloudera, you can also use kubectl to gather diagnostics and troubleshooting data.
Capturing a diagnostic bundle with report.sh
Use report.sh to capture diagnostic information about your deployment.
CSM Operator diagnostic bundles are captured using the report.sh command line tool. The bundle that the tool captures is used as the baseline when contacting Cloudera support for assistance with CSM Operator. The bundle captures all available, cluster-wide information about CSM Operator.
- Ensure that you have access to your Cloudera credentials (username and password).
- Ensure that the environment where you run the tool has the following:
  - Bash 4 or higher
  - GNU utilities: echo, grep, sed, date, base64
  - kubectl or oc
  - kubeconfig configured to target the Kubernetes cluster
  - zip
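The following commands are an optional, illustrative way to verify these prerequisites before running report.sh; they are not part of the tool itself.
# Check the Bash version and that the required utilities are available
bash --version | head -n 1
for cmd in echo grep sed date base64 kubectl zip; do
  command -v "$cmd" >/dev/null || echo "missing: $cmd"
done
# Confirm that kubectl targets the intended Kubernetes cluster
kubectl config current-context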
Capturing a thread dump of a pod with java_thread_dump.sh
Use java_thread_dump.sh to capture a thread dump of a pod.
Use the java_thread_dump.sh command line tool to capture thread dumps of all containers of a specific pod, with a specified number of samples and sampling frequency. For a manual alternative that uses kubectl directly, see the sketch after the prerequisites.
- Ensure that you have access to your Cloudera credentials (username and password).
- Ensure that the environment where you run the tool has the following:
  - Bash 4 or higher
  - GNU utilities: echo, grep, sed, date
  - kubectl or oc
  - kubeconfig configured to target the Kubernetes cluster
  - zip
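The following is a minimal manual sketch, not the java_thread_dump.sh tool itself. It captures a single thread dump from the kafka container of a broker pod. It assumes that the JDK's jcmd utility is available in the image and that the Kafka JVM runs as PID 1; if it does not, run jcmd without arguments first to list the JVM process IDs.
# Assumes jcmd is present in the image and the Kafka JVM is PID 1
kubectl exec --namespace [***NAMESPACE***] \
  [***KAFKA BROKER POD***] \
  --container kafka \
  -- jcmd 1 Thread.print > broker-thread-dump.txt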
Using kafka_shell.sh
Use kafka_shell.sh to set up a pod where Kafka CLI tools are readily available.
Kafka is shipped with a number of useful CLI tools. Easy access to these tools is essential for administering and troubleshooting your cluster. The kafka_shell.sh command line tool creates a pod where all Kafka CLI tools are readily available and full Kafka admin client configurations are prepared.
The pod created by kafka_shell.sh:
- Uses the Kafka docker image. This means that Kafka CLI tools are readily accessible within the pod.
- Has both a truststore and keystore present that give you administrative privileges.
- Has a ready-to-use client configuration file available at /tmp/client.properties.
- Has the bootstrap server configuration available in the BOOTSTRAP_SERVERS environment variable.
You can use the tool in two ways: you can either use it interactively, or run one-off commands through a pipe. For example commands that you can run inside the pod once it is up, see the sketch after the prerequisites.
- Ensure that you have access to your Cloudera credentials (username and password).
- Ensure that the environment where you run the tool has the following:
  - Bash 4 or higher
  - GNU utilities: echo, grep, sed, date, cut, head
  - kubectl or oc
  - kubeconfig configured to target the Kubernetes cluster
  - zip
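For example, the following commands, run inside the pod created by kafka_shell.sh, list topics and consumer groups using the prepared client configuration and bootstrap server settings described above. This is a sketch that assumes the Kafka CLI tools are on the PATH in the image; adjust the tool paths if they are not.
# Run inside the pod created by kafka_shell.sh
kafka-topics.sh --bootstrap-server "$BOOTSTRAP_SERVERS" \
  --command-config /tmp/client.properties --list
kafka-consumer-groups.sh --bootstrap-server "$BOOTSTRAP_SERVERS" \
  --command-config /tmp/client.properties --list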
Monitoring pod status during reconciliation
You can check the status of the pods after applying a change to the deployment configuration using kubectl get pods.
kubectl get pods --namespace [***NAMESPACE***] --output wide --watch
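If you are only interested in the pods of a specific Kafka cluster, you can also filter by label. The following sketch assumes the standard strimzi.io/cluster label that the Strimzi Cluster Operator applies to the pods it manages; [***CLUSTER NAME***] is the name of your Kafka resource.
kubectl get pods --namespace [***NAMESPACE***] \
  --selector strimzi.io/cluster=[***CLUSTER NAME***] \
  --output wide --watch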
Reading Strimzi Cluster Operator logs
The Strimzi Cluster Operator log contains useful information about the tasks that the operator performs and details for failed operations. You can check the Strimzi Cluster Operator logs with kubectl logs.
kubectl logs [***STRIMZI CLUSTER OPERATOR POD***] --namespace [***NAMESPACE***]
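For example, to follow the operator log and watch for failed reconciliations as they happen, you can filter the output with grep. The pattern below is only an illustration; adjust it to the messages you are interested in.
kubectl logs [***STRIMZI CLUSTER OPERATOR POD***] --namespace [***NAMESPACE***] --follow \
  | grep --ignore-case --extended-regexp "error|failed"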
Reading effective generated Kafka broker properties
You can get the effective Kafka properties of a broker using kubectl exec. Broker properties are generated by the Strimzi Cluster Operator.
kubectl exec -it \
--namespace [***NAMESPACE***] \
[***KAFKA BROKER POD***] \
--container kafka \
-- /bin/bash -c "cat /tmp/strimzi.properties"
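If you only need a subset of the configuration, you can filter the generated file instead of printing it in full. For example, the following sketch prints only the listener-related properties; the property name pattern is an assumption, adjust it as needed.
kubectl exec -it \
--namespace [***NAMESPACE***] \
[***KAFKA BROKER POD***] \
--container kafka \
-- /bin/bash -c "grep listener /tmp/strimzi.properties"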
Reading effective generated Kafka Connect worker properties
You can get the effective properties of a worker using kubectl exec. Worker properties are generated by the Strimzi Cluster Operator.
kubectl exec -it \
--namespace [***NAMESPACE***] \
[***KAFKA CONNECT POD***] \
--container [***CONNECT CLUSTER NAME***]-connect \
-- /bin/bash -c "cat /tmp/strimzi-connect.properties"
Reading Kafka broker logs
You can check the Kafka broker logs with kubectl logs.
kubectl logs [***KAFKA BROKER POD***] --namespace [***NAMESPACE***] -f
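If a broker container has recently restarted, the logs of the previous container instance are often more useful than the current ones. You can retrieve them with the --previous flag.
kubectl logs [***KAFKA BROKER POD***] --namespace [***NAMESPACE***] --previous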