Known Issues in Apache Kafka
Learn about the known issues in Kafka, their impact on functionality, and the available workarounds.
- Topics created with the kafka-topics tool are only accessible by the user who created them when the deprecated --zookeeper option is used
By default, all created topics are secured. However, when topic creation and deletion is done with the kafka-topics tool using the --zookeeper option, the tool talks directly to ZooKeeper. Because security is the responsibility of ZooKeeper authorization and authentication, Kafka cannot prevent users from making ZooKeeper changes. As a result, if the --zookeeper option is used, only the user who created the topic will be able to carry out administrative actions on it. In this scenario, Kafka will not have permissions to perform tasks on topics created this way.
- Use the kafka-topics tool with the --bootstrap-server option, which does not require direct access to ZooKeeper.
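For example, a minimal sketch of creating a topic through the brokers rather than ZooKeeper; the broker hostname, port, and topic name are placeholder values:

```
# Create a topic via the brokers so that Kafka authorization applies.
# broker-1.example.com:9092 and my-topic are placeholders.
kafka-topics --create \
  --bootstrap-server broker-1.example.com:9092 \
  --replication-factor 3 \
  --partitions 10 \
  --topic my-topic
```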
- Certain Kafka command line tools require direct access to ZooKeeper
The following command line tools talk directly to ZooKeeper and therefore are not secured via Kafka:
- The offsets.topic.replication.factor property must be less than or equal to the number of live brokers
The offsets.topic.replication.factor broker configuration is now enforced upon auto topic creation. Internal auto topic creation will fail with a GROUP_COORDINATOR_NOT_AVAILABLE error until the cluster size meets this replication factor requirement.
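As an illustration, on a three-broker cluster the broker configuration (for example, in server.properties) must satisfy:

```
# Replication factor of the internal consumer offsets topic; it must not
# exceed the number of live brokers when the topic is auto-created.
offsets.topic.replication.factor=3
```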
- Requests fail when sending to a nonexistent topic with auto.create.topics.enable set to true
The first few produce requests fail when sending to a nonexistent topic with auto.create.topics.enable set to true.
- Increase the number of retries in the producer configuration
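For example, a producer configuration sketch; the values shown are illustrative, not recommendations:

```
# Retry produce requests that fail while the topic is being auto-created.
retries=10
# Pause between retry attempts (milliseconds).
retry.backoff.ms=200
```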
- Custom Kerberos principal names cannot be used for kerberized ZooKeeper and Kafka instances
- When using ZooKeeper authentication and a custom Kerberos principal, Kerberos-enabled Kafka does not start. You must disable ZooKeeper authentication for Kafka or use the default Kerberos principals for ZooKeeper and Kafka.
- Performance degradation when SSL is enabled
- In some configuration scenarios, significant performance degradation can occur when SSL is enabled. The impact varies depending on your CPU, JVM version, Kafka configuration, and message size. Consumers are typically more affected than producers.
- Configure brokers and clients with ssl.secure.random.implementation = SHA1PRNG. This often reduces the degradation drastically, but its effect is CPU and JVM dependent.
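In a broker or client properties file, the setting would look as follows:

```
# Use the SHA1PRNG implementation of SecureRandom for SSL operations.
ssl.secure.random.implementation=SHA1PRNG
```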
- OPSAPS-43236: Kafka garbage collection logs are written to the process directory
- By default, Kafka garbage collection logs are written to the agent process directory. Changing the default path for these log files is currently unsupported.
- CDPD-8546: Repeated ZooKeeper client log trace in Kafka server logs
If the Enable Secure Connection to ZooKeeper property is set to true and the RANGER Service property is configured, both the Kafka ZooKeeper client and the Ranger ZooKeeper client will be configured to connect to ZooKeeper via secure channels. However, the Ranger ZooKeeper client will try to establish a TLS/SSL connection to an unsecured port (2181). This results in the client repeatedly trying and failing to connect to ZooKeeper, which in turn causes the org.apache.zookeeper.Login: TGT refresh thread started log message, as well as other related log messages, to repeatedly appear in the Kafka logs.
- In Cloudera Manager, select the Kafka service.
- Select Configuration and find the Kafka Broker Advanced Configuration Snippet (Safety Valve) for ranger-kafka-audit.xml property.
- Add the following name and value pair to the property:
Name: xasecure.audit.destination.solr.zookeepers
Value: ZOOKEEPER_HOST:SECURE_PORT/solr
Replace ZOOKEEPER_HOST:SECURE_PORT with the hostname and secure port of the ZooKeeper host that Solr depends on.
- Enter a Reason for change, and click Save Changes to commit the changes.
- Restart the service.
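For illustration, the resulting entry in ranger-kafka-audit.xml would look like the following sketch, where zk-1.example.com and 2182 are placeholder values for the ZooKeeper host and its secure port:

```
<property>
  <!-- Point the Ranger Solr audit client at the secure ZooKeeper port. -->
  <name>xasecure.audit.destination.solr.zookeepers</name>
  <value>zk-1.example.com:2182/solr</value>
</property>
```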
The following Kafka features are not supported in Cloudera Data Platform:
- Only Java-based clients are supported. Clients developed with C, C++, Python, .NET, and other languages are currently not supported.
- The Kafka default authorizer is not supported. This includes setting ACLs and all related APIs, broker functionality, and command-line tools.
- Collection of partition level metrics may cause Cloudera Manager’s performance to degrade
If the Kafka service operates with a large number of partitions, collection of partition level metrics may cause Cloudera Manager's performance to degrade. If you are observing performance degradation and your cluster is operating with a high number of partitions, you can choose to disable the collection of partition level metrics. Complete the following steps to turn off the collection of partition level metrics:
- Obtain the Kafka service name:
- In Cloudera Manager, select the Kafka service.
- Select any available chart, and select Open in Chart Builder from the configuration icon drop-down.
The Kafka service name is the value of $SERVICENAME= near the top of the display.
- Turn off the collection of partition level metrics:
- Go to .
- Find and configure the Cloudera Manager Agent Monitoring Advanced Configuration Snippet (Safety Valve) configuration property. Enter the following to turn off the collection of partition level metrics:
Replace [KAFKA_SERVICE_NAME] with the service name of Kafka obtained in step 1. The service name should always be in lower case.
- Click Save Changes.