Known issues in Streams Messaging

Learn about the known issues in Streams Messaging clusters, their impact on functionality, and the available workarounds.

Kafka

Learn about the known issues and limitations in Kafka in this release:

Known Issues

OPSAPS-59553: SMM's bootstrap server config should be updated based on Kafka's listeners

SMM does not show any metrics for Kafka or Kafka Connect when multiple listeners are set in Kafka.

Workaround: SMM cannot identify multiple listeners and still points to the bootstrap server using the default broker port (9093 for SASL_SSL). Override the bootstrap server URL (hostname:port as set in the broker listeners) in the following location:

Cloudera Manager > SMM > Configuration > Streams Messaging Manager Rest Admin Server Advanced Configuration Snippet (Safety Valve) for streams-messaging-manager.yaml > Save Changes > Restart SMM.
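Add the following value for the bootstrap servers before saving changes and restarting SMM:

streams.messaging.manager.kafka.bootstrap.servers=<comma-separated list of brokers>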

The offsets.topic.replication.factor property must be less than or equal to the number of live brokers
The offsets.topic.replication.factor broker configuration is now enforced upon auto topic creation. Internal auto topic creation will fail with a GROUP_COORDINATOR_NOT_AVAILABLE error until the cluster size meets this replication factor requirement.
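For example, with the default setting shown below, the internal __consumer_offsets topic can only be created once at least three brokers are live:

offsets.topic.replication.factor=3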
None
Requests fail when sending to a nonexistent topic with auto.create.topics.enable set to true
The first few produce requests fail when sending to a nonexistent topic with auto.create.topics.enable set to true.
Increase the number of retries in the producer configuration by setting the retries property.
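For example, in the producer configuration (the value shown is only an illustration; choose one appropriate for your environment):

retries=10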
KAFKA-2561: Performance degradation when SSL is enabled
In some configuration scenarios, significant performance degradation can occur when SSL is enabled. The impact varies depending on your CPU, JVM version, Kafka configuration, and message size. Consumers are typically more affected than producers.
Configure brokers and clients with ssl.secure.random.implementation = SHA1PRNG. It often reduces this degradation drastically, but its effect is CPU and JVM dependent.
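For example, add the following to the broker and client configuration:

ssl.secure.random.implementation=SHA1PRNG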
OPSAPS-43236: Kafka garbage collection logs are written to the process directory
By default, Kafka garbage collection logs are written to the agent process directory. Changing the default path for these log files is currently unsupported.
None
CDPD-45183: Kafka Connect active topics might be visible to unauthorized users
The Kafka Connect active topics endpoint (/connectors/[***CONNECTOR NAME***]/topics) and the Connect Cluster page on the SMM UI disregard the user permissions configured for the Kafka service in Ranger. As a result, all active topics of connectors might become visible to users who do not have permissions to view them. Note that the user permissions configured for Kafka Connect in Ranger are not affected by this issue and are applied correctly.
None.
RANGER-3809: Idempotent Kafka producer fails to initialize due to an authorization failure
Kafka producers that have idempotence enabled require the Idempotent Write permission to be set on the cluster resource in Ranger. If this permission is not given, the client fails to initialize and an error similar to the following is thrown:
org.apache.kafka.common.KafkaException: Cannot execute transactional method because we are in an error state
    at org.apache.kafka.clients.producer.internals.TransactionManager.maybeFailWithError(TransactionManager.java:1125)
    at org.apache.kafka.clients.producer.internals.TransactionManager.maybeAddPartition(TransactionManager.java:442)
    at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:1000)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:914)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:800)
    .
    .
    .
Caused by: org.apache.kafka.common.errors.ClusterAuthorizationException: Cluster authorization failed.
Idempotence is enabled by default for clients in Kafka 3.0.1, 3.1.1, and any version after 3.1.1. As a result, any client updated to one of these versions is affected by this issue.
This issue has two workarounds; do either of the following:
  • Explicitly disable idempotence for the producers. This can be done by setting enable.idempotence to false, as shown in the example after this list.
  • Update your policies in Ranger and ensure that producers have Idempotent Write permission on the cluster resource.
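For example, to explicitly disable idempotence, add the following to the producer configuration:

enable.idempotence=false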
DBZ-4990: The Debezium Db2 Source connector does not support schema evolution
The Debezium Db2 Source connector does not support the evolution (updates) of schemas. In addition, schema change events are not emitted to the schema change topic if there is a change in the schema of a table that is in capture mode. For more information, see DBZ-4990.
None.
CFM-3532: The Stateless NiFi Source, Stateless NiFi Sink, and HDFS Stateless Sink connectors cannot use Snappy compression
This issue only affects the Stateless NiFi Source and Sink connectors if the connector is running a dataflow that contains a processor that relies on Hadoop libraries and is configured to use Snappy compression. The HDFS Stateless Sink connector is only affected if the Compression Codec or Compression Codec for Parquet properties are set to SNAPPY.
If you are affected by this issue, errors similar to the following will be present in the logs.
Failed to write to HDFS due to java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()
Failed to write to HDFS due to java.lang.RuntimeException: native snappy library not available: this version of libhadoop was built without snappy support.
Download and deploy the missing libraries.
  1. Create the /opt/nativelibs directory.
    mkdir /opt/nativelibs
  2. Change the owner to kafka.
    chown kafka:kafka /opt/nativelibs
  3. Locate the directory containing the Hadoop native libraries and copy its contents to the directory you created.
    cp /opt/cloudera/parcels/CDH/lib/hadoop/lib/native/* /opt/nativelibs
  4. Verify that libsnappy.so was copied to the directory you created (see the verification sketch after this procedure).
  5. Remove the following from /opt/nativelibs.
    libhadoop.a
    libhadoop.so
    libhadoop.so.1.0.0
  6. Run the following command.
    hadoop version

    The command returns the Hadoop version running in the cluster. Note down the first three digits in the version.

  7. Go to https://archive.apache.org/dist/hadoop/common/ and download the Hadoop version that matches the first three digits of the version running in the cluster.

    For example, if your Hadoop version is 3.1.1.7.1.9.0-296, then you need to download Hadoop 3.1.1.

  8. Extract the downloaded archive.
  9. Copy the following libraries from the downloaded archive to /opt/nativelibs on the cluster host.
    libhadoop.a
    libhadoop.so.1.0.0

    The libraries are located in hadoop-[***VERSION***]/lib/native.

  10. Create a symlink named libhadoop.so and point it to /opt/nativelibs/libhadoop.so.1.0.0.
    ln -s /opt/nativelibs/libhadoop.so.1.0.0 /opt/nativelibs/libhadoop.so
  11. Change the owner of every entry within /opt/nativelibs to kafka.
    chown -h kafka:kafka /opt/nativelibs/*
  12. In Cloudera Manager, go to Kafka service > Configuration.
  13. Add the following key-value pair to Kafka Connect Environment Advanced Configuration Snippet (Safety Valve).
    • Key: LD_LIBRARY_PATH
    • Value: /opt/nativelibs
  14. Click Save Changes.
  15. Restart the Kafka service.
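A minimal verification sketch, assuming the directory layout created in the steps above. The listing should show libsnappy.so, a libhadoop.so symlink pointing to libhadoop.so.1.0.0, and kafka:kafka ownership on every entry:

ls -l /opt/nativelibs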
Unsupported features
The following Kafka features are not supported in Cloudera Data Platform:
  • Only Java and .NET-based clients are supported. Clients developed with C, C++, Python, and other languages are currently not supported.
  • The Kafka default authorizer is not supported. This includes setting ACLs and all related APIs, broker functionality, and command-line tools.
  • SASL/SCRAM is only supported for delegation token based authentication. It is not supported as a standalone authentication mechanism.
  • Kafka KRaft in this release of Cloudera Runtime is in technical preview and does not support the following:
    • Deployments with multiple log directories. This includes deployments that use JBOD for storage.
    • Delegation token based authentication.
    • Migrating an already running Kafka service from ZooKeeper to KRaft.
    • Atlas Integration.

Limitations

Collection of Partition Level Metrics May Cause Cloudera Manager’s Performance to Degrade

If the Kafka service operates with a large number of partitions, collection of partition level metrics may cause Cloudera Manager's performance to degrade.

If you are observing performance degradation and your cluster is operating with a high number of partitions, you can choose to disable the collection of partition level metrics.
Complete the following steps to turn off the collection of partition level metrics:
  1. Obtain the Kafka service name:
    1. In Cloudera Manager, select the Kafka service.
    2. Select any available chart, and select Open in Chart Builder from the configuration icon drop-down.
    3. Find $SERVICENAME= near the top of the display.
      The Kafka service name is the value of $SERVICENAME.
  2. Turn off the collection of partition level metrics:
    1. Go to Hosts > Hosts Configuration.
    2. Find and configure the Cloudera Manager Agent Monitoring Advanced Configuration Snippet (Safety Valve) configuration property.
      Enter the following to turn off the collection of partition level metrics:
      [KAFKA_SERVICE_NAME]_feature_send_broker_topic_partition_entity_update_enabled=false
      
      Replace [KAFKA_SERVICE_NAME] with the Kafka service name obtained in step 1. The service name must always be lowercase (see the example after this procedure).
    3. Click Save Changes.
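For example, assuming the Kafka service name obtained in step 1 is kafka, the entry would be:

kafka_feature_send_broker_topic_partition_entity_update_enabled=false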

Schema Registry

Learn about the known issues and limitations in Schema Registry in this release:

OPSAPS-68708: Schema Registry might fail to start if a load balancer address is specified in Ranger
Schema Registry does not start if the address specified in the Load Balancer Address Ranger property does not end with a trailing slash (/).
Set the value of the RANGER_REST_URL Schema Registry environment variable to an address that includes a trailing slash.
  1. In Cloudera Manager, select the Schema Registry service.
  2. Go to Configuration.
  3. Find the Schema Registry Server Environment Advanced Configuration Snippet (Safety Valve) property and add the following:
    Key: RANGER_REST_URL
    Value: [***RANGER REST API URL***]
    Replace [***RANGER REST API URL***] with an address that can be used by Schema Registry to access Ranger. Ensure that the address ends with a trailing slash. For example: https://ranger-1.cloudera.com:6182/
  4. Restart the Schema Registry service.

Streams Messaging Manager

Learn about the known issues in Streams Messaging Manager in this release.
CDPD-39313: Some numbers are not rendered properly in SMM UI
Very large numbers are represented imprecisely on the UI. For example, byte values larger than 8 petabytes lose precision.
None.
CDPD-45183: Kafka Connect active topics might be visible to unauthorized users
The Kafka Connect active topics endpoint (/connectors/[***CONNECTOR NAME***]/topics) and the Connect Cluster page on the SMM UI disregard the user permissions configured for the Kafka service in Ranger. As a result, all active topics of connectors might become visible to users who do not have permissions to view them. Note that the user permissions configured for Kafka Connect in Ranger are not affected by this issue and are applied correctly.
None.
OPSAPS-59553: SMM's bootstrap server config should be updated based on Kafka's listeners
SMM does not show any metrics for Kafka or Kafka Connect when multiple listeners are set in Kafka.
SMM cannot identify multiple listeners and still points to the bootstrap server using the default broker port (9093 for SASL_SSL). Override the bootstrap server URL (hostname:port as set in the broker listeners) by adding the bootstrap server details to the SMM safety valve in the following location:
Cloudera Manager > SMM > Configuration > Streams Messaging Manager Rest Admin Server Advanced Configuration Snippet (Safety Valve) for streams-messaging-manager.yaml > Add the following value for bootstrap servers > Save Changes > Restart SMM.
streams.messaging.manager.kafka.bootstrap.servers=<comma-separated list of brokers>
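For example, assuming hypothetical broker hostnames:

streams.messaging.manager.kafka.bootstrap.servers=broker-1.example.com:9093,broker-2.example.com:9093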
OPSAPS-59597: SMM UI logs are not supported by Cloudera Manager
Cloudera Manager does not support the log type used by SMM UI.
View the SMM UI logs on the host.
Limitations
CDPD-36422: 1MB flow.snapshot freezes Safari
Importing large connector configurations or flow.snapshots reduces the usability of the Streams Messaging Manager Connectors page when using the Safari browser.
Use a different browser (Chrome/Firefox/Edge).

Streams Replication Manager

Learn about the known issues and limitations in Streams Replication Manager in this release:

Known Issues
CDPD-22089: SRM does not sync re-created source topics until the offsets have caught up with target topic
Messages written to topics that were deleted and re-created are not replicated until the source topic reaches the same offset as the target topic. For example, if at the time of deletion and re-creation there are 100 messages on the source and target clusters, new messages will only get replicated once the re-created source topic has 100 messages. This leads to messages being lost.
None
CDPD-11079: Blacklisted topics appear in the list of replicated topics
If a topic was originally replicated but was later disallowed (blacklisted), it will still appear as a replicated topic under the /remote-topics REST API endpoint. As a result, if a call is made to this endpoint, the disallowed topic will be included in the response. Additionally, the disallowed topic will also be visible in the SMM UI. However, its Partitions and Consumer Groups will be 0, and its Throughput, Replication Latency, and Checkpoint Latency will show N/A.
None
CDPD-30275: SRM may automatically re-create deleted topics on target clusters
If auto.create.topics.enable is enabled, deleted topics might get automatically re-created on target clusters. This is a timing issue. It only occurs if remote topics are deleted while the replication of the topic is still ongoing.
  1. Remove the topic from the topic allowlist with srm-control. For example:
    srm-control topics --source [SOURCE_CLUSTER] --target [TARGET_CLUSTER] --remove [TOPIC1]
  2. Wait until SRM is no longer replicating the topic.
  3. Delete the remote topic in the target cluster.
Limitations
SRM cannot replicate Ranger authorization policies to or from Kafka clusters
Due to a limitation in the Kafka-Ranger plugin, SRM cannot replicate Ranger policies to or from clusters that are configured to use Ranger for authorization. If you are using SRM to replicate data to or from a cluster that uses Ranger, disable authorization policy synchronization in SRM. This can be achieved by clearing the Sync Topic Acls Enabled (sync.topic.acls.enabled) checkbox.
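For reference, the property behind this checkbox and the value that disables synchronization are as follows (a sketch, assuming you configure the property directly rather than through the checkbox):

sync.topic.acls.enabled=false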

Cruise Control

Learn about the known issues and limitations in Cruise Control in this release:

Rebalancing with Cruise Control does not work due to the metric reporter failing to report the CPU usage metric
On the Kafka broker, the Cruise Control metrics reporter plugin may fail to report the CPU usage metric.
If the CPU usage metric is not reported, the numValidWindows in Cruise Control will be 0 and proposal generation as well as partition rebalancing will not work. If this issue is present, the following message will be included in the Kafka logs:
WARN com.linkedin.kafka.cruisecontrol.metricsreporter.CruiseControlMetricsReporter:
[CruiseControlMetricsReporterRunner]: Failed reporting CPU util.
java.io.IOException: Java Virtual Machine recent CPU usage is not available.
This issue is only known to affect Kafka broker hosts that have the following specifications:
  • CPU: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
  • OS: Linux 4.18.5-1.el7.elrepo.x86_64 #1 SMP Fri Aug 24 11:35:05 EDT 2018 x86_64
  • Java version: 8-18
Move the broker to a different machine with a different CPU. This can be done by performing a manual repair on the affected nodes. For more information, see the Data Hub documentation.