Fixed Issues
This section lists fixed issues in the Replication Manager service.
- DMX-348: Export remote Hive Metastore step failed while using Replication Manager
- Problem: While creating a replication policy with no Sentry permissions for the source database/tables on the latest Data Lake clusters, the following error message appears:
The remote command failed with error message: Command (HDFS replication (433)) has failed.
- DMX-355: Currently, Replication Manager does not work efficiently with high-frequency policies.
- Workaround: Use a policy frequency greater than 30 minutes.
- DMX-364: Multiple instances of a replication policy display as "In Progress" in the Replication Manager UI
- Problem: When a replication policy is scheduled to run at a specific interval, the UI shows all the instances as "In Progress" even though one of the instances has failed.
- DMX-391: Schedule Run option displays only "Does not repeat" and "Custom" options
- Workaround: Use the "Does not repeat" or "Run now" option to schedule the replication policy.
- DMX-636: Inconsistency in the value of Timestamp Type column when source ORC is replicated, and source and target clusters are in different time zones
- Problem: When an ORC table with static data is replicated from an Auto TLS-enabled source cluster, the data in the Timestamp Type column does not match on the target cluster.
- DMX-666: Replication fails when the exception "connection timed out" is not handled in Cloudera Manager
- Workaround: Ensure that there is connectivity between the source Cloudera Manager and the SDX Cloudera Manager. If the source hostname does not resolve to an IP address, add a host mapping for the Cloudera Manager host to the /etc/hosts file on the SDX Cloudera Manager host, as shown in the example below.
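For example, assuming the source Cloudera Manager host is cm.source.example.com and its IP address is 10.0.0.5 (both values are hypothetical placeholders), the entry added to /etc/hosts on the SDX Cloudera Manager host would look like:
10.0.0.5 cm.source.example.com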
- CDPSDX-2879: Ranger import fails when you create a Hive replication policy for a medium duty Data Lake cluster
- Problem: When you create a Hive replication policy with the Include Sentry Permissions with Metadata or Skip URI Privileges option for a medium duty Data Lake cluster, the Ranger import fails. Before you choose the Include Sentry Permissions with Metadata option for such a policy, contact Cloudera Support.
- OPSAPS-61288: Replication setup fails when a non-default namespace is not created on the destination cluster
- Workaround: Create a namespace on the destination cluster before you create an HBase replication policy.
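For example, the namespace can be created from the HBase shell on the destination cluster before the policy is created; the namespace name below is a hypothetical placeholder:
create_namespace 'replicated_ns'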
- OPSAPS-61596: HBase policy returns "different schema" when tables on the source and destination clusters have the same column families
- Problem: This issue appears because HBase replication policies handle tables that have table attributes incorrectly. Table attributes on replicated tables might lead to other errors as well.
- Workaround: Remove the table attributes from the HBase tables, as shown in the example below.
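For example, a table attribute can be unset from the HBase shell; the table name and attribute name below are hypothetical placeholders:
alter 'my_table', METHOD => 'table_att_unset', NAME => 'MY_ATTRIBUTE'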
- OPSAPS-62910, OPSAPS-62948, OPSAPS-62956, and OPSAPS-62998
- Problem: When you create, update, or delete multiple HBase replication policies between the same cluster pair at the same time, various failures appear. This is because the HBase peer does not get synchronized during these operations as expected.
- OPSAPS-62995
- Problem: The first-time setup of an HBase replication policy fails when the destination cluster is a Data Hub.
- OPSAPS-63138
- Problem: The HBase Replication First Time Setup command runs successfully on the destination cluster even though the Admin Setup HBase replication subcommand fails on the source cluster.
- OPSAPS-63071
- Problem: HBase replication policies from on-premises (CDH5 and CDH6) clusters fail when the source cluster Cloudera Manager version is 7.6.0.