Known Issues in Streams Replication Manager
This topic describes known issues for using Streams Replication Manager in this release of Cloudera Runtime.
- SRM does not sync re-created source topics until the offsets have caught up with target topic
- Messages written to topics that were deleted and re-created are not replicated until the source topic reaches the same offset as the target topic. For example, if at the time of deletion and re-creation there are 100 messages on the source and target clusters, new messages are only replicated once the re-created source topic again contains 100 messages. This leads to message loss.
- Workaround: N/A.
- SRM may automatically re-create deleted topics
If auto.create.topics.enable is enabled, deleted topics are automatically re-created on source clusters.
- Workaround: Prior to deletion, remove the topic from the topic whitelist with the srm-control tool. This prevents the topic from being re-created.
srm-control topics --source [SOURCE_CLUSTER] --target [TARGET_CLUSTER] --remove [TOPIC1],[TOPIC2]
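After removing a topic, you can confirm that it is no longer whitelisted. The sketch below assumes the srm-control tool supports a --list action for the topics subcommand, as in current SRM releases; cluster aliases are placeholders.

```shell
# List the topics currently on the replication whitelist for this
# source/target pair; the removed topic should no longer appear.
srm-control topics --source [SOURCE_CLUSTER] --target [TARGET_CLUSTER] --list
```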
- CSP-462: Replication failing when SRM driver is present on multiple nodes
- Kafka replication fails when the SRM driver is installed on more than one node.
- Workaround: N/A
- SRM cannot replicate ACLs to or from Kafka clusters that are configured with Sentry
- When Sentry contains Kafka authorization policies for any ConsumerGroup resource, SRM cannot replicate authorization rules from one Kafka cluster to another in environments where Sentry is enabled. This is due to a Kafka resource conversion error in Sentry. For more information regarding the underlying Sentry issue, see SENTRY-2535 and the Kafka known issues in CDH Known Issues.
- Workaround: Disable authorization policy synchronization in SRM by setting the sync.topic.acls.enabled property to false.
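As a minimal sketch, the workaround corresponds to the following configuration fragment; where exactly the property is set (a properties file or a safety valve in the management console) depends on your deployment:

```shell
# Disable ACL synchronization so SRM no longer attempts to replicate
# authorization rules between the Sentry-enabled clusters.
sync.topic.acls.enabled=false
```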
- CDPD-13864 and CDPD-15327: Replication stops after the network configuration of a source or target cluster is changed
- If the network configuration of a cluster that takes part in a replication flow changes, for example, if port numbers change as a result of enabling or disabling TLS, SRM does not update its internal configuration even when SRM is reconfigured and restarted. From SRM's perspective, the cluster identity has changed. SRM cannot determine whether the new identity corresponds to the same cluster; only the owner or administrator of that cluster can know. In this case, SRM tries to use the last known configuration of that cluster, which might no longer be valid, causing replication to halt.
- Workaround: Delete the internal topic that stores SRM's configuration. After a restart, SRM re-creates and re-populates it with the configuration data loaded from its property file. The topic is hosted on the target cluster of the replication flow. The topic name is:
However, changing a replicated cluster's identity is generally not recommended.
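The workaround above could be carried out as follows, assuming the standard Kafka command-line tools are available on a host that can reach the target cluster; the broker address and internal topic name are placeholders, not values confirmed by this document:

```shell
# Delete SRM's internal configuration topic on the target cluster.
# Substitute the actual broker address and the internal topic name
# identified above for your deployment.
kafka-topics --bootstrap-server [TARGET_BROKER:PORT] \
  --delete --topic [SRM_INTERNAL_CONFIG_TOPIC]

# Restart SRM afterwards so it re-creates the topic and re-populates it
# from its property file.
```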