Known Issues

This section lists known issues in the Replication Manager service.

DMX-348: Export remote Hive Metastore step fails while using Replication Manager
Problem: When you create a replication policy for source databases or tables that have no Sentry permissions, with the latest Data Lake clusters, the following error message appears: The remote command failed with error message: Command (HDFS replication (433)) has failed.
Workaround: Ensure that the CDH cluster(s) do not use the MapReduce 1 service.
DMX-355: Replication Manager currently does not work efficiently with high-frequency policies
Workaround: Use a policy frequency greater than 30 minutes.
DMX-364: Multiple instances in the Replication Manager UI display as "In Progress"
Problem: When a replication policy is scheduled to run at a specific interval, the UI shows all the policy instances as "In Progress" even though one of the instances has failed.
Workaround: Refresh the web browser and try again.
DMX-391: Schedule Run option displays only "Does not repeat" and "Custom" options
Workaround: Use the "Does not repeat" or "Run now" option to schedule the replication policy.
DMX-636: Inconsistency in the value of the Timestamp Type column when a source ORC table is replicated and the source and target clusters are in different time zones
Problem: When an ORC table with static data is replicated from an Auto-TLS source cluster, the data in the Timestamp Type column does not match on the target cluster.
DMX-666: Replication fails when the exception "connection timed out" is not handled in Cloudera Manager
Workaround: Ensure that there is connectivity between the source Cloudera Manager and the SDX Cloudera Manager. If the source hostname does not resolve to an IP address, add a host mapping for the source Cloudera Manager host to the /etc/hosts file on the SDX Cloudera Manager host, as shown in the example below.
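For example, assuming a hypothetical source Cloudera Manager hostname and IP address, you can append the mapping to /etc/hosts on the SDX Cloudera Manager host:

  # Hypothetical values; substitute the IP address and hostname of your source Cloudera Manager host
  echo "10.10.10.10 source-cm-host.example.com" | sudo tee -a /etc/hosts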
CDPSDX-2879: Ranger import fails when you create a Hive replication policy for a medium duty Data Lake cluster
Problem: When you create a Hive replication policy with the Include Sentry Permissions with Metadata or Skip URI Privileges option for a medium duty Data Lake cluster, the Ranger import fails.
Workaround: Contact Cloudera Support before you choose the Include Sentry Permissions with Metadata option for a Hive replication policy for a medium duty Data Lake cluster.
OPSAPS-61288: Replication setup fails when a non-default namespace is not created on the destination cluster
Workaround: Create the required namespace on the destination cluster before you create an HBase replication policy, as shown in the example below.
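For example, if the source tables use a hypothetical namespace named ns1, you can create it in the HBase shell on the destination cluster before creating the replication policy:

  # Run in the HBase shell on the destination cluster; 'ns1' is a hypothetical namespace name
  create_namespace 'ns1'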
OPSAPS-61596: HBase replication policy returns a "different schema" error even though the tables on the source and destination clusters have the same column families
Problem: This issue appears because HBase replication policies handle tables that have table attributes incorrectly. Table attributes on replicated tables might also lead to other errors.

Workaround: Remove the table attributes from the HBase tables, as shown in the example below.
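For example, assuming a hypothetical table named my_table that has the MAX_FILESIZE table attribute set, you can unset the attribute in the HBase shell:

  # 'my_table' and 'MAX_FILESIZE' are hypothetical; substitute your table name and attribute
  alter 'my_table', METHOD => 'table_att_unset', NAME => 'MAX_FILESIZE'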

OPSAPS-62910, OPSAPS-62948, OPSAPS-62956, and OPSAPS-62998
Problem: When you create, update, or delete multiple HBase replication policies between the same cluster pair at the same time, various failures appear. This happens because the HBase peer does not get synchronized during these operations as expected.
Workaround: Create, modify, or delete HBase replication policies one at a time for a given cluster pair.
OPSAPS-62995
Problem: The HBase replication policy first-time setup fails when the destination cluster is a Data Hub. This issue appears when the HBase classpath is not configured automatically as expected.

Workaround: Perform the following steps in Cloudera Manager of the Data Hub before you set up an HBase replication policy:

  1. Navigate to the HBase service.

  2. Search for the following Advanced Configuration Snippet (Safety Valve) values, and add the key and value as shown below:
    • HBase Service Environment Advanced Configuration Snippet (Safety Valve)

      Key: HBASE_CLASSPATH

      Value: HBASE_CLASSPATH="$HBASE_CLASSPATH:/opt/cloudera/parcels/CDH/lib/hbase-cdp-jars/"

    • HBase Client Environment Advanced Configuration Snippet (Safety Valve)

      Key: HBASE_CLASSPATH

      Value: HBASE_CLASSPATH="$HBASE_CLASSPATH:/opt/cloudera/parcels/CDH/lib/hbase-cdp-jars/"

  3. Restart the HBase service.

OPSAPS-63138
Problem: The HBase Replication First Time Setup command runs successfully on the destination cluster even though the Admin Setup HBase replication subcommand fails on the source cluster.
OPSAPS-63071
Problem: HBase replication policies from on-premises (CDH5 and CDH6) clusters fail when the source cluster Cloudera Manager version is 7.6.0.
Workaround: To resolve this issue, perform the following steps in the source cluster Cloudera Manager before you create an HBase replication policy:
  1. Navigate to the HBase service.

  2. Search for the following Advanced Configuration Snippet (Safety Valve) values, and add the key-value pairs as shown below:

    • HBase Service Environment Advanced Configuration Snippet (Safety Valve)

      Key: HBASE_CLASSPATH

      Value: HBASE_CLASSPATH="$HBASE_CLASSPATH:[***location of HBase replication parcel jar file***]"

    • HBase Client Environment Advanced Configuration Snippet (Safety Valve)

      Key: HBASE_CLASSPATH

      Value: HBASE_CLASSPATH="$HBASE_CLASSPATH:[***location of HBase replication parcel jar file***]"

    For example, the location of the HBase replication parcel jar in your CDH source cluster might be /opt/cloudera/parcels/CLOUDERA_OPDB_REPLICATION-1.0-1.CLOUDERA_OPDB_REPLICATION5.14.4.p0.9261092/lib/

  3. Restart the HBase service.