Known Issues in Streams Replication Manager

Learn about the known issues in Streams Replication Manager, their impact on functionality, and the available workarounds.

Known Issues

CDPD-22089: SRM does not sync re-created source topics until the offsets have caught up with the target topic
Messages written to topics that were deleted and re-created are not replicated until the source topic reaches the same offset as the target topic. For example, if at the time of deletion and re-creation there are 100 messages on the source and target clusters, new messages will only get replicated once the re-created source topic has 100 messages. This leads to messages being lost.
Workaround: None
CDPD-11079: Blacklisted topics appear in the list of replicated topics
If a topic was originally replicated but was later excluded from replication, it will still appear as a replicated topic under the /remote-topics REST API endpoint. As a result, if a call is made to this endpoint, this topic will be included in the response. Additionally, the excluded topic will also be visible in the SMM UI. However, its Partitions and Consumer Groups will be 0, and its Throughput, Replication Latency, and Checkpoint Latency will show N/A.
Workaround: None
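The /remote-topics behavior described above can be checked directly. A minimal sketch, assuming the SRM Service REST API is reachable over HTTP; the host name is a placeholder and the port 6670 is an assumption, so adjust both to your deployment:

```shell
# Placeholder host and assumed default SRM Service REST port; adjust as needed.
SRM_HOST="srm-service.example.com"
SRM_PORT=6670
URL="http://${SRM_HOST}:${SRM_PORT}/remote-topics"
echo "GET ${URL}"
# curl -s "${URL}"   # the response also lists topics excluded from replication
```

The actual request is commented out; inspect the returned list for topics you have already excluded from replication.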
CDPD-30275: SRM may automatically re-create deleted topics on target clusters
If auto.create.topics.enable is enabled, deleted topics might get automatically re-created on target clusters. This is a timing issue. It only occurs if remote topics are deleted while the replication of the topic is still ongoing.
Workaround:
  1. Remove the topic from the topic allowlist with srm-control. For example:
    srm-control topics --source [SOURCE_CLUSTER] --target [TARGET_CLUSTER] --remove [TOPIC1]
  2. Wait until SRM is no longer replicating the topic.
  3. Delete the remote topic in the target cluster.
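The steps above can be sketched as a single sequence. The cluster aliases, topic name, and broker address below are placeholders, and the remote-topic name shown assumes the default replication policy, which prefixes remote topics with the source cluster alias:

```shell
# Placeholders for illustration only.
SOURCE="primary"
TARGET="secondary"
TOPIC="test-topic"

# Step 1: remove the topic from the allowlist (run on an SRM host):
REMOVE_CMD="srm-control topics --source $SOURCE --target $TARGET --remove $TOPIC"

# Step 3: once replication has stopped, delete the remote topic on the target
# cluster. With the default replication policy, the remote topic carries the
# source cluster alias as a prefix:
DELETE_CMD="kafka-topics --bootstrap-server target-broker:9092 --delete --topic $SOURCE.$TOPIC"

printf '%s\n%s\n' "$REMOVE_CMD" "$DELETE_CMD"
```

Waiting between the two commands (step 2) is essential; deleting the remote topic while replication is still running re-triggers the issue.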
OPSAPS-71258: Zstandard and Snappy compression do not support /tmp mounted as noexec
SRM cannot process messages compressed with Zstandard or Snappy if /tmp is mounted as noexec.
The workaround steps for Zstandard compression are the following. You need to complete these steps on all affected hosts.
  1. Identify the hosts running SRM.
  2. Verify the Zstandard version by checking the classpath of the running process.
    ps aux
    By default, SRM uses zstd-jni version 1.5.2-1.

    The Zstandard JAR file is located at /opt/cloudera/parcels/CDH/jars/zstd-jni-1.5.2-1.jar. This file includes the architecture and OS-specific binaries. Check the Zstandard JNI GitHub repository for the full list of supported native libraries.

  3. Extract the binary corresponding to the target system, as in the following example.
    /usr/java/default/bin/jar xf /opt/cloudera/parcels/CDH/jars/zstd-jni-1.5.2-1.jar linux/amd64/libzstd-jni-1.5.2-1.so
  4. Copy the extracted binary to a location included in java.library.path, as in the following example.
    cp linux/amd64/libzstd-jni-1.5.2-1.so /lib
  5. Restart SRM.
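Steps 2 through 4 above can be summarized in one sketch. The JAR path and version follow the defaults named in this section; the linux/amd64 path is an example, so substitute the binary matching your host's OS and architecture:

```shell
# Default JAR path and version from this section; the native library path
# (linux/amd64) is an example and must match the host.
JAR=/opt/cloudera/parcels/CDH/jars/zstd-jni-1.5.2-1.jar
NATIVE_LIB=linux/amd64/libzstd-jni-1.5.2-1.so

# Step 2: confirm which zstd-jni JAR the running SRM process loaded:
#   ps aux | grep zstd-jni
# Step 3: extract the native binary for this host:
#   /usr/java/default/bin/jar xf "$JAR" "$NATIVE_LIB"
# Step 4: copy it to a directory on java.library.path:
#   cp "$NATIVE_LIB" /lib
echo "extract $NATIVE_LIB from $JAR, copy to /lib, then restart SRM"
```

The extraction and copy commands are commented out above because they modify the host; run them only on the identified SRM hosts, then restart SRM (step 5).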
The workaround steps for Snappy compression are the following:
  1. In Cloudera Manager, select the SRM service and go to Configuration.
  2. Add the following line to SRM_JVM_PERF_OPTS.
    -Dorg.xerial.snappy.tempdir=[***PATH***]
    Where [***PATH***] is a directory that is not /tmp.
  3. Restart SRM.
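As a concrete example, with a hypothetical directory /var/snappy-tmp the added option would read:

```
-Dorg.xerial.snappy.tempdir=/var/snappy-tmp
```

The directory must exist, be writable by the SRM process, and reside on a filesystem that is not mounted with noexec, since Snappy extracts and loads a native library from it.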

Limitations

SRM cannot replicate Ranger authorization policies to or from Kafka clusters
Due to a limitation in the Kafka-Ranger plugin, SRM cannot replicate Ranger policies to or from clusters that are configured to use Ranger for authorization. If you are using SRM to replicate data to or from a cluster that uses Ranger, disable authorization policy synchronization in SRM. This can be achieved by clearing the Sync Topic Acls Enabled (sync.topic.acls.enabled) checkbox.
SRM cannot ensure the exactly-once semantics of transactional source topics
SRM data replication uses at-least-once guarantees, and as a result cannot ensure the exactly-once semantics (EOS) of transactional topics in the backup/target cluster.
SRM checkpointing is not supported for transactional source topics
SRM does not correctly translate checkpoints (committed consumer group offsets) for transactional topics. Checkpointing assumes that the offset mapping function is monotonically increasing, but transactional source topics violate this assumption. Transactional topics contain control messages, which occupy an offset in the log but are never returned by the consumer API. This causes the mapping to decrease in places, which breaks the checkpointing feature. As a result of this limitation, failover operations are not possible for transactional topics.