Known Issues in Streams Messaging
Learn about the known issues in Streams Messaging clusters, the impact or changes to the functionality, and the workaround.
Kafka
Learn about the known issues and limitations in Kafka in this release:
- Topics created with the kafka-topics tool are only accessible by the user who created them when the deprecated --zookeeper option is used
- By default all created topics are secured. However, when topic creation and deletion is done with the kafka-topics tool using the --zookeeper option, the tool talks directly to ZooKeeper. Because security is the responsibility of ZooKeeper authorization and authentication, Kafka cannot prevent users from making ZooKeeper changes. As a result, if the --zookeeper option is used, only the user who created the topic will be able to carry out administrative actions on it. In this scenario Kafka will not have permissions to perform tasks on topics created this way.
- Use the kafka-topics tool with the --bootstrap-server option, which does not require direct access to ZooKeeper, as shown in the example below.
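For example, a topic can be created through the brokers as follows. This is a minimal sketch; the broker address, topic name, partition count, and replication factor are illustrative placeholders:
# Create a topic via the brokers instead of ZooKeeper
kafka-topics --create \
  --bootstrap-server [***BROKER HOST***]:9092 \
  --partitions 3 \
  --replication-factor 3 \
  --topic [***TOPIC NAME***]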
- Certain Kafka command line tools require direct access to ZooKeeper
- A number of command line tools talk directly to ZooKeeper and are therefore not secured via Kafka.
- The offsets.topic.replication.factor property must be less than or equal to the number of live brokers
- The offsets.topic.replication.factor broker configuration is now enforced upon auto topic creation. Internal auto topic creation will fail with a GROUP_COORDINATOR_NOT_AVAILABLE error until the cluster size meets this replication factor requirement.
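For illustration, a minimal sketch of a broker configuration that satisfies the requirement on a cluster with three or more live brokers (the value shown is the Kafka default):
# Replication factor of the internal __consumer_offsets topic;
# must not exceed the number of live brokers
offsets.topic.replication.factor=3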
- Requests fail when sending to a nonexistent topic with auto.create.topics.enable set to true
- The first few produce requests fail when sending to a nonexistent topic with auto.create.topics.enable set to true.
- Increase the number of retries in the producer configuration, as shown in the sketch below.
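A minimal sketch of the relevant producer settings; the values are illustrative:
# Retry produce requests that fail while the topic is being auto-created
retries=10
# Wait between retry attempts (milliseconds)
retry.backoff.ms=500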
- Custom Kerberos principal names cannot be used for kerberized ZooKeeper and Kafka instances
- When using ZooKeeper authentication and a custom Kerberos principal, Kerberos-enabled Kafka does not start. You must disable ZooKeeper authentication for Kafka or use the default Kerberos principals for ZooKeeper and Kafka.
- KAFKA-2561: Performance degradation when SSL is enabled
- In some configuration scenarios, significant performance degradation can occur when SSL is enabled. The impact varies depending on your CPU, JVM version, Kafka configuration, and message size. Consumers are typically more affected than producers.
- Configure brokers and clients with
ssl.secure.random.implementation = SHA1PRNG. It often reduces this degradation drastically, but its effect is CPU and JVM dependent.
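In a properties file this corresponds to the following line, applied to both broker and client configurations:
ssl.secure.random.implementation=SHA1PRNG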
- OPSAPS-43236: Kafka garbage collection logs are written to the process directory
- By default, Kafka garbage collection logs are written to the agent process directory. Changing the default path for these log files is currently unsupported.
- Collection of Partition Level Metrics May Cause Cloudera Manager’s Performance to Degrade
If the Kafka service operates with a large number of partitions, collection of partition level metrics may cause Cloudera Manager's performance to degrade. If you are observing performance degradation and your cluster is operating with a high number of partitions, you can choose to disable the collection of partition level metrics. Complete the following steps to turn off the collection of partition level metrics:
- Obtain the Kafka service name:
- In Cloudera Manager, select the Kafka service.
- Select any available chart, and select Open in Chart Builder from the configuration icon drop-down.
- Find $SERVICENAME= near the top of the display. The Kafka service name is the value of $SERVICENAME.
- Turn off the collection of partition level metrics:
- Go to Hosts > Hosts Configuration.
- Find and configure the Cloudera Manager Agent Monitoring Advanced Configuration Snippet (Safety Valve) configuration property. Enter the following to turn off the collection of partition level metrics:
[KAFKA_SERVICE_NAME]_feature_send_broker_topic_partition_entity_update_enabled=false
- Replace [KAFKA_SERVICE_NAME] with the service name of Kafka obtained in step 1. The service name should always be in lower case.
- Click Save Changes.
Schema Registry
There are no known issues in Schema Registry in this release.
Streams Messaging Manager
- OPSAPS-59553: SMM's bootstrap server config should be updated based on Kafka's listeners
- SMM does not show any metrics for Kafka or Kafka Connect when multiple listeners are set in Kafka.
- SMM cannot identify multiple listeners and still points to the bootstrap server using the default broker port (9093 for SASL_SSL). You must override the bootstrap server URL (hostname:port as set in the listeners for the broker) at the following path:
Cloudera Manager > SMM > Configuration > Streams Messaging Manager Rest Admin Server Advanced Configuration Snippet (Safety Valve) for streams-messaging-manager.yaml > Save Changes > Restart SMM.
- OPSAPS-59597: SMM UI logs are not supported by Cloudera Manager
- Cloudera Manager does not support the log type used by SMM UI.
- View the SMM UI logs on the host.
- OPSAPS-59828: SMM cannot connect to Schema Registry when TLS is enabled
- When TLS is enabled, SMM by default cannot properly connect to Schema Registry. As a result, when viewing topics in the SMM Data Explorer with the deserializer key or value set to Avro, the following error messages are shown:
- Error deserializing key/value for partition [***PARTITION***] at offset [***OFFSET***]. If needed, please seek past the record to continue consumption.
- Failed to fetch value schema versions for topic : '[***TOPIC***]'.
- javax.net.ssl.SSLHandshakeException: PKIX path building failed:...
- Additional security properties must be set for SMM.
- In Cloudera Manager, select the SMM service.
- Go to Configuration.
- Find and configure the SMM_JMX_OPTS property. Add the following JVM SSL properties:
-Djavax.net.ssl.trustStore=[***SMM TRUSTSTORE LOCATION***]
-Djavax.net.ssl.trustStorePassword=[***TRUSTSTORE PASSWORD***]
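As an illustration, the resulting SMM_JMX_OPTS value might look like the following on a cluster that uses Cloudera Manager's Auto-TLS; the trust store path is the usual Auto-TLS location but should be verified for your deployment, and the password placeholder is illustrative:
-Djavax.net.ssl.trustStore=/var/lib/cloudera-scm-agent/agent-cert/cm-auto-global_truststore.jks -Djavax.net.ssl.trustStorePassword=[***TRUSTSTORE PASSWORD***]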
Streams Replication Manager
Learn about the known issues and limitations in Streams Replication Manager in this release:
- CDPD-22089: SRM does not sync re-created source topics until the offsets have caught up with target topic
- Messages written to topics that were deleted and re-created are not replicated until the source topic reaches the same offset as the target topic. For example, if at the time of deletion and re-creation there are 100 messages on the source and target clusters, new messages will only get replicated once the re-created source topic has 100 messages. This leads to messages being lost.
- CDPD-14019: SRM may automatically re-create deleted topics
- When auto.create.topics.enable is enabled, deleted topics are automatically re-created on source clusters.
- Prior to deletion, remove the topic from the topic whitelist with the srm-control tool. This prevents topics from being re-created.
srm-control topics --source [SOURCE_CLUSTER] --target [TARGET_CLUSTER] --remove [TOPIC1],[TOPIC2]
- CDPD-60823: Configuring the SRM Client's secure storage is mandatory for unsecured environments
- In an unsecured environment the srm-control tool should not need any additional configuration to run. However, due to an issue with the automatic generation of the default configuration, configuring the SRM Client's secure storage is mandatory for the srm-control tool. This is true even if none of the clusters that the tool connects to are secured. If a secure storage is not configured, the tool fails with the following NullPointerException:
java.lang.NullPointerException
    at com.cloudera.dim.mirror.SecureConfigProvider.retrievePassword(SecureConfigProvider.java:99)
    at com.cloudera.dim.mirror.SecureConfigProvider.configure(SecureConfigProvider.java:113)
    at org.apache.kafka.common.config.AbstractConfig.instantiateConfigProviders(AbstractConfig.java:533)
    at org.apache.kafka.common.config.AbstractConfig.resolveConfigVariables(AbstractConfig.java:477)
    at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:107)
    at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:142)
    at org.apache.kafka.connect.mirror.MirrorMakerConfig.<init>(MirrorMakerConfig.java:88)
    at com.cloudera.dim.mirror.MirrorControlCommand$SourceTargetCommand.init(MirrorControlCommand.java:97)
    at com.cloudera.dim.mirror.MirrorControlCommand.issueCommand(MirrorControlCommand.java:369)
    at com.cloudera.dim.mirror.MirrorControlCommand.main(MirrorControlCommand.java:346)
- Configure a secure storage password and set it as an environment variable in your CLI session before running the srm-control tool:
- In Cloudera Manager, select the Streams Replication Manager service.
- Go to Configuration.
- Find and configure the SRM Client's Secure Storage Password and Environment Variable Holding SRM Client's Secure Storage Password properties. Take note of the password and environment variable name that you configure.
- Click Save Changes.
- Restart the SRM service.
- SSH into one of the SRM hosts in your cluster.
- Set the secure storage password as an environment variable:
Replace [***SECURE STORAGE ENV VAR***] with the name of the environment variable you specified in Environment Variable Holding SRM Client's Secure Storage Password. Replace [***SRM SECURE STORAGE PASSWORD***] with the password you specified in SRM Client's Secure Storage Password. For example:
export [***SECURE STORAGE ENV VAR***]="[***SRM SECURE STORAGE PASSWORD***]"
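For instance, assuming the environment variable was named SECURESTOREPASS (an illustrative name chosen during configuration), the tool can then be run in the same session; the --list action is shown here as a read-only example command:
export SECURESTOREPASS="[***SRM SECURE STORAGE PASSWORD***]"
srm-control topics --source [SOURCE_CLUSTER] --target [TARGET_CLUSTER] --list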
- OPSAPS-61001: Saving configuration changes for SRM is not possible
- Cloudera Manager incorrectly labels the SRM
Client's Secure Storage Password property as mandatory. Moreover, it
does not offer this property for configuration when SRM is installed with
the Add Service Wizard.
As a result, it is possible to install and start SRM without configuring this property. However, in a case like this, making changes to SRM's configuration is not possible until the SRM Client's Secure Storage Password property is set.
- Configure the SRM Client's Secure Storage Password property.
- SRM cannot replicate Ranger authorization policies to or from Kafka clusters
- Due to a limitation in the Kafka-Ranger plugin, SRM cannot
replicate Ranger policies to or from clusters that are configured to use Ranger for
authorization. If you are using SRM to replicate data to or from a cluster that uses
Ranger, disable authorization policy synchronization in SRM. This can be achieved by
clearing the Sync Topic Acls Enabled property.
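For reference, this setting corresponds to the MirrorMaker 2 style sync.topic.acls.enabled property; a sketch of the equivalent entry, assuming your SRM deployment exposes it directly in its configuration:
sync.topic.acls.enabled=false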