Streams Messaging Manager does not show any metrics
for Kafka or Kafka Connect when multiple listeners are set in Kafka.
Streams Messaging Manager cannot identify multiple listeners and still points to the bootstrap server using the default broker port (9093 for SASL_SSL). You need to override the bootstrap server URL by performing the following steps:
In Cloudera Manager, go to Streams Messaging Manager > Configuration and find the Streams Messaging Manager Rest Admin Server Advanced Configuration Snippet (Safety Valve) property.
Override the bootstrap server URL (hostname:port, as set in the broker listeners) for streams-messaging-manager.yaml, as shown in the example after these steps.
Save your changes.
Restart SMM.
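The exact key used in the safety valve depends on your SMM release; the entry below is only an illustrative sketch with an assumed property name and assumed hostname:port values, so verify the key against the documentation for your version:
kafka.bootstrap.servers=broker-1.example.com:9094,broker-2.example.com:9094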
KAFKA-2561: Performance degradation when SSL is enabled
In some configuration scenarios, significant performance
degradation can occur when SSL is enabled. The impact varies depending on your CPU, JVM
version, Kafka configuration, and message size. Consumers are typically more affected
than producers.
Configure brokers and clients with ssl.secure.random.implementation = SHA1PRNG, as shown in the example below. This often reduces the degradation drastically, but its effect is CPU and JVM dependent.
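For example, add the following line to both the broker and the client configuration (in Cloudera Manager, broker properties can be set through the corresponding advanced configuration snippet). The property is a standard Kafka SSL option, and SHA1PRNG is a SecureRandom implementation shipped with the JVM:
ssl.secure.random.implementation=SHA1PRNG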
RANGER-3809: Idempotent Kafka producer fails to initialize due
to an authorization failure
Kafka producers that have idempotence enabled require the Idempotent Write permission to be set on the cluster resource in Ranger. If this permission is not given, the client fails to initialize, and an error similar to the following is thrown:
org.apache.kafka.common.KafkaException: Cannot execute transactional method because we are in an error state
at org.apache.kafka.clients.producer.internals.TransactionManager.maybeFailWithError(TransactionManager.java:1125)
at org.apache.kafka.clients.producer.internals.TransactionManager.maybeAddPartition(TransactionManager.java:442)
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:1000)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:914)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:800)
...
Caused by: org.apache.kafka.common.errors.ClusterAuthorizationException: Cluster authorization failed.
Idempotence is enabled by default for clients in Kafka 3.0.1, 3.1.1, and any version after 3.1.1, which means that any client updated to one of these versions is affected by this issue.
This issue has two workarounds; do either of the following:
Explicitly disable idempotence for the producers by setting enable.idempotence to false (see the configuration example after this list).
Update your policies in Ranger and ensure that producers have Idempotent Write
permission on the cluster resource.
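For example, to apply the first workaround, add the following line to the producer's configuration properties. enable.idempotence is a standard Kafka producer property:
enable.idempotence=false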
CDPD-49304: AvroConverter does not support composite default
values
The Debezium Db2 Source connector does not support the evolution
(updates) of schemas. In addition, schema change events are not emitted to the schema
change topic if there is a change in the schema of a table that is in capture mode. For
more information, see DBZ-4990.
None.
CFM-3532: The Stateless NiFi Source, Stateless NiFi Sink, and
HDFS Stateless Sink connectors cannot use Snappy compression
This issue only affects the Stateless NiFi Source and Sink connectors if the connector runs a dataflow containing a processor that uses Hadoop libraries and is configured to use Snappy compression. The HDFS Stateless Sink connector is only affected if either the Compression Codec or the Compression Codec for Parquet property is set to SNAPPY.
If you are
affected by this issue, errors similar to the following will be present in the logs.
Failed to write to HDFS due to java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()
Failed to write to HDFS due to java.lang.RuntimeException: native snappy library not available: this version of libhadoop was built without snappy support.
Download and deploy the missing libraries by completing the following steps:
Create the /opt/nativelibs
directory.
mkdir /opt/nativelibs
Change the owner to
kafka.
chown kafka:kafka /opt/nativelibs
Locate the directory containing the Hadoop native libraries and copy its contents to the directory you created (see the example command below).
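For example, on a typical parcel-based installation, the Hadoop native libraries are commonly located under the active CDH parcel. The source path below is an assumption and can differ on your cluster, so verify it before copying:
cp /opt/cloudera/parcels/CDH/lib/hadoop/lib/native/* /opt/nativelibs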
The rolling restart action does not work in Kafka Connect when the ssl.client.auth option is set to required. The health check fails with a timeout, which blocks restarting the subsequent Kafka Connect instances.
You can set ssl.client.auth to requested instead of required and initiate a rolling restart again. Alternatively, you can perform the rolling restart manually by restarting the Kafka Connect instances one by one, periodically checking whether the service endpoint is available before starting the next one (see the example below).
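For example, the following loop is a sketch of such an availability check. The hostname, port, and certificate paths are assumptions that you must adjust to your deployment; a client certificate and key are passed because ssl.client.auth is set to required:
until curl -k -s -o /dev/null --cert /path/to/client-cert.pem --key /path/to/client-key.pem https://connect-host.example.com:28083/connectors; do
  sleep 5
done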
Unsupported Features
The following Kafka features are not supported in Cloudera:
Only Java and .NET-based clients are supported. Clients developed with C, C++, Python, and other languages are currently not supported.
The Kafka default authorizer is not supported. This includes setting ACLs and all
related APIs, broker functionality, and command-line tools.
SASL/SCRAM is only supported for delegation token-based authentication. It is not supported as a standalone authentication mechanism.
Kafka KRaft in this release of Cloudera Runtime is in technical
preview and does not support the following:
Deployments with multiple log directories. This includes deployments that use JBOD
for storage.
Delegation token-based authentication.
Migrating an already running Kafka service from ZooKeeper to KRaft.
Atlas integration.
Limitations
Collection of partition level metrics may cause Cloudera Manager performance to degrade
If the Kafka service operates with a large number of
partitions, collection of partition level metrics may cause Cloudera Manager performance to degrade.
If you are observing
performance degradation and your cluster is operating with a high number of
partitions, you can choose to disable the collection of partition level metrics.
Complete the following steps to turn off the collection of partition
level metrics:
Obtain the Kafka service name.
In Cloudera Manager, select the Kafka service.
Select any available chart, and select Open in Chart Builder
from the configuration icon drop-down.
Find $SERVICENAME= near the top of the display. The Kafka service name is the value of $SERVICENAME. For example, if the display shows $SERVICENAME=kafka, the service name is kafka.
Turn off the collection of partition level metrics.
Go to Hosts > Hosts Configuration.
Find and configure the Cloudera Manager
Agent Monitoring Advanced Configuration Snippet (Safety Valve)
configuration property.
Enter the following to turn off the
collection of partition level
metrics:
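The exact snippet depends on your Cloudera Manager version. Based on comparable releases, the entry is assumed to be similar to the following, where [KAFKA_SERVICE_NAME] must be replaced with the service name you obtained earlier; verify the property name against the documentation for your version:
[KAFKA_SERVICE_NAME]_feature_send_broker_topic_partition_entity_update_enabled=false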