Learn about the known issues and technical limitations for Kafka in Cloudera Runtime 7.3.2, its service packs, and cumulative hotfixes.
Known issues identified in Cloudera Runtime 7.3.2
There are no new known issues identified in this release.
Known issues identified before Cloudera Runtime 7.3.2
Known issues identified before Cloudera Runtime 7.3.2 include only
unresolved issues from previous releases that continue to affect the Cloudera Runtime 7.3.2 base release.
OPSAPS-59553: Streams Messaging Manager bootstrap
server config should be updated based on Kafka's listeners
7.1.7 and its SP and CHF
releases, 7.1.9 and its SP and CHF releases, 7.3.1 and its SP and CHF releases,
7.3.2
Streams Messaging Manager does not show any metrics
for Kafka or Kafka Connect when multiple listeners are set in Kafka.
Streams Messaging Manager cannot identify
multiple listeners and still points to the bootstrap server using the default broker port
(9093 for SASL_SSL). You need to override the bootstrap server URL by
performing the following steps:
In Cloudera Manager, go to Streams Messaging Manager > Configuration > Streams Messaging Manager Rest Admin Server Advanced Configuration
Snippet (Safety Valve)
Override the bootstrap server URL (hostname:port as set in the broker's listeners) for streams-messaging-manager.yaml.
Save your changes.
Restart SMM.
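As an illustration only, the safety valve override might look like the following YAML fragment. The property name and broker address shown here are placeholder assumptions, not the verified keys; confirm the actual property name used by your Streams Messaging Manager version before applying it:

```
# Hypothetical property name -- replace with the bootstrap server
# property your SMM version uses in streams-messaging-manager.yaml
kafka.bootstrap.servers: "broker-1.example.com:9094"
```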
KAFKA-2561: Performance degradation when SSL is enabled
7.1.7 and its SP and CHF
releases, 7.1.9 and its SP and CHF releases, 7.3.1 and its SP and CHF releases,
7.3.2
In some configuration scenarios, significant performance
degradation can occur when SSL is enabled. The impact varies depending on your CPU, JVM
version, Kafka configuration, and message size. Consumers are typically more affected
than producers.
Configure brokers and clients with
ssl.secure.random.implementation = SHA1PRNG. This setting often reduces the
degradation drastically, but its effect is CPU and JVM dependent.
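On the client side, the workaround is a single configuration property. A minimal sketch of a client properties fragment, assuming a SASL_SSL listener on a hypothetical broker host:

```
security.protocol=SASL_SSL
bootstrap.servers=broker-1.example.com:9093
# Workaround: use the SHA1PRNG SecureRandom implementation
ssl.secure.random.implementation=SHA1PRNG
```

On brokers, set the same property through the corresponding Kafka configuration in Cloudera Manager.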
RANGER-3809: Idempotent Kafka producer fails to initialize due
to an authorization failure
7.1.7 and its SP and CHF
releases, 7.1.9 and its SP and CHF releases, 7.3.1 and its SP and CHF releases,
7.3.2
Kafka producers that have idempotence enabled require the
Idempotent Write permission to be set on the cluster resource in Ranger. If permission
is not given, the client fails to initialize and an error similar to the following is
thrown:
org.apache.kafka.common.KafkaException: Cannot execute transactional method because we are in an error state
at org.apache.kafka.clients.producer.internals.TransactionManager.maybeFailWithError(TransactionManager.java:1125)
at org.apache.kafka.clients.producer.internals.TransactionManager.maybeAddPartition(TransactionManager.java:442)
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:1000)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:914)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:800)
...
Caused by: org.apache.kafka.common.errors.ClusterAuthorizationException: Cluster authorization failed.
Idempotence is enabled by default for clients in Kafka 3.0.1,
3.1.1, and any later version. This means that any client updated to one of these
versions is affected by this issue.
This issue has two workarounds; do either of the following:
Explicitly disable idempotence for the producers. This can be done by setting
enable.idempotence to false.
Update your policies in Ranger and ensure that producers have Idempotent Write
permission on the cluster resource.
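If you choose the first workaround, the producer configuration change is a single property. A minimal sketch of a producer properties fragment, assuming a hypothetical broker host:

```
bootstrap.servers=broker-1.example.com:9093
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer
# Workaround: disable idempotence so the Idempotent Write permission
# is not required on the cluster resource in Ranger
enable.idempotence=false
```

Keep in mind that disabling idempotence also disables the delivery guarantees that depend on it, so updating the Ranger policy is usually the preferable fix.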
CDPD-49304: AvroConverter does not support composite default
values
7.1.7 and its SP and CHF
releases, 7.1.9 and its SP and CHF releases, 7.3.1 and its SP and CHF releases,
7.3.2
AvroConverter cannot handle schemas containing a
STRUCT type default value.
None.
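To illustrate what triggers the issue, the following is a sketch of an Avro schema whose field default is a composite (record) value, which AvroConverter maps to a STRUCT type. The field and record names are made up for the example:

```
{
  "type": "record",
  "name": "Customer",
  "fields": [
    {
      "name": "address",
      "type": {
        "type": "record",
        "name": "Address",
        "fields": [ { "name": "street", "type": "string" } ]
      },
      "default": { "street": "unknown" }
    }
  ]
}
```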
CFM-3532: The Stateless NiFi Source, Stateless NiFi Sink, and
HDFS Stateless Sink connectors cannot use Snappy compression
7.1.7 and its SP and CHF
releases, 7.1.9 and its SP and CHF releases, 7.3.1 and its SP and CHF releases,
7.3.2
This issue only affects Stateless NiFi Source and Sink
connectors if the connector is running a dataflow with a processor that uses Hadoop
libraries and is configured to use Snappy compression. The HDFS Stateless Sink connector
is only affected if the Compression Codec or Compression Codec
for Parquet properties are set to SNAPPY.
If you are
affected by this issue, errors similar to the following will be present in the logs.
Failed to write to HDFS due to java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()
Failed to write to HDFS due to java.lang.RuntimeException: native snappy library not available: this version of libhadoop was built without snappy support.
Download and deploy missing libraries.
Create the /opt/nativelibs
directory.
mkdir /opt/nativelibs
Change the owner to
kafka.
chown kafka:kafka /opt/nativelibs
Locate the directory containing the Hadoop native libraries and copy its
contents to the directory you
created.
Change the owner of every entry within /opt/nativelibs to
kafka.
chown -h kafka:kafka /opt/nativelibs/*
In Cloudera Manager, go to Kafka service > Configuration.
Add the following key-value pair to Kafka Connect Environment
Advanced Configuration Snippet (Safety Valve).
Key: LD_LIBRARY_PATH
Value: /opt/nativelibs
Click Save Changes.
Restart the Kafka service.
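The file-system part of the steps above can be sketched as a single shell sequence. The following demo is runnable as an unprivileged user, so it substitutes scratch directories, the current user, and a stand-in library file; on a real cluster the target is /opt/nativelibs, the owner is kafka:kafka, and the source is the Hadoop native-library directory you located:

```shell
# Demo values; on a cluster use TARGET=/opt/nativelibs, OWNER=kafka:kafka,
# and NATIVE_SRC set to the Hadoop native-library directory you located.
TARGET="${TARGET:-$(mktemp -d)/nativelibs}"
OWNER="${OWNER:-$(id -un)}"
NATIVE_SRC="${NATIVE_SRC:-$(mktemp -d)}"
touch "$NATIVE_SRC/libhadoop.so"   # stand-in for the real libraries

mkdir -p "$TARGET"                 # create the directory
cp -R "$NATIVE_SRC"/. "$TARGET"/   # copy the native libraries into it
chown -Rh "$OWNER" "$TARGET"       # set ownership on every entry
ls "$TARGET"
```

The Cloudera Manager part of the procedure, setting LD_LIBRARY_PATH and restarting the Kafka service, still has to be performed as described above.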
Unsupported features
The following Kafka features are not supported in Cloudera:
Only Java and .NET based clients are supported. Clients developed with C, C++, Python,
and other languages are currently not supported.
The Kafka default authorizer is not supported. This includes setting ACLs and all
related APIs, broker functionality, and command-line tools.
SASL/SCRAM is only supported for delegation token based authentication. It is not
supported as a standalone authentication mechanism.
Limitations
Collection of partition level metrics may cause Cloudera Manager performance to degrade
If the Kafka service operates with a large number of
partitions, collection of partition level metrics may cause Cloudera Manager performance to degrade.
If you are observing
performance degradation and your cluster is operating with a high number of
partitions, you can choose to disable the collection of partition level metrics.
Complete the following steps to turn off the collection of
partition level metrics:
Obtain the Kafka service name.
In Cloudera Manager, select the Kafka service.
Select any available chart, and select Open in Chart Builder
from the configuration icon drop-down.
Find $SERVICENAME= near the top of the display.
The
Kafka service name is the value of
$SERVICENAME.
Turn off the collection of partition level metrics.
Go to Hosts > Hosts Configuration.
Find and configure the Cloudera Manager
Agent Monitoring Advanced Configuration Snippet (Safety Valve)
configuration property.
Enter the following to turn off the
collection of partition level
metrics: