Troubleshooting replication policies in CDP Public Cloud

The troubleshooting scenarios in this topic help you resolve issues in the Replication Manager service in CDP Public Cloud.

What are the different methods to identify errors while troubleshooting a failed replication policy?

Remedy

You can use any of the following methods to identify the errors that caused a replication job failure:
  1. On the Replication Policies page, click the failed job in the Job History pane. The errors for the failed job appear.
  2. In the source and target Cloudera Manager, click Running Commands in the left navigation pane. The recent command history shows the failed commands (see the sketch after this list).
  3. On the source cluster and the target cluster, open the service logs to track the errors (for example, the HBase service logs).

    You can also search on the Cloudera Manager > Diagnostics > Logs page to view the logs.
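
You can also check command status outside the Cloudera Manager UI. The following minimal sketch, which is not part of the product, lists the commands that are currently running on a cluster through the Cloudera Manager REST API. The Cloudera Manager URL, API version, cluster name, and credentials shown here are placeholders that you must adjust for your environment.

import requests

CM_HOST = "https://cm.example.com:7183"  # Cloudera Manager URL (placeholder)
CLUSTER = "my-cluster"                   # cluster name as shown in Cloudera Manager (placeholder)
AUTH = ("admin", "admin-password")       # account with read access (placeholder)

# GET .../clusters/{cluster}/commands returns the commands currently running on the cluster.
resp = requests.get(f"{CM_HOST}/api/v41/clusters/{CLUSTER}/commands", auth=AUTH, verify=True)
resp.raise_for_status()

for command in resp.json().get("items", []):
    # Each item includes the command name, whether it is still active, and a result message.
    print(command.get("name"), command.get("active"), command.get("resultMessage"))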

HDFS replication policies fail when the HTTPS_PROXY environment variable is exported to access AWS through proxy servers

Remedy

To resolve this issue, perform the following steps:
  1. Open the core-site.xml file on the source cluster.
  2. Add the following properties to the file, and set the values to your proxy server host and port:
    <property>
      <name>fs.s3a.proxy.host</name>
      <value>your-proxy-host.example.com</value> <!-- example placeholder; replace with your proxy server host name -->
      <description>Hostname of the (optional) proxy server for S3 connections.</description>
    </property>

    <property>
      <name>fs.s3a.proxy.port</name>
      <value>8080</value> <!-- example placeholder; replace with your proxy server port -->
      <description>Proxy server port. If this property is not set
        but fs.s3a.proxy.host is, port 80 or 443 is assumed (consistent with
        the value of fs.s3a.connection.ssl.enabled).</description>
    </property>
    
  3. Save and close the file.
  4. Restart the source Cloudera Manager.
  5. Run the failed HDFS replication policies in Replication Manager. To confirm that the proxy properties are in effect before this step, see the sketch after these steps.
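
After the restart, you can confirm from a source cluster host that the proxy properties appear in the deployed client configuration. The following minimal sketch, which is not part of the product, reads the keys with the hdfs getconf command and assumes that the hdfs CLI is available on the host where you run it:

import subprocess

# Read each proxy key from the client configuration with `hdfs getconf -confKey`.
for key in ("fs.s3a.proxy.host", "fs.s3a.proxy.port"):
    result = subprocess.run(
        ["hdfs", "getconf", "-confKey", key],
        capture_output=True, text=True, check=False,
    )
    print(f"{key} = {result.stdout.strip() or '<not set>'}")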

Cannot find destination clusters when you ping using their host names

Condition

For HBase replication policies, the source cluster hosts cannot find the destination cluster hosts when you ping them by host name.

Cause

This might occur for on-premises source clusters, such as CDP Private Cloud Base or CDH clusters, because the source cluster is not on the same network as the destination Data Hub. Therefore, the DNS service on the source cluster cannot resolve the destination host names.

Remedy

Add the IP-address-to-host-name mappings of the destination Region Servers and ZooKeeper hosts to the /etc/hosts file of every Region Server on the source cluster.
The following snippet shows the contents of a sample /etc/hosts file:
10.115.74.181 dx-7548-worker2.dx-hbas.x2-8y.dev.dr.work
10.115.72.28 dx-7548-worker1.dx-hbas.x2-8y.dev.dr.work
10.115.73.231 dx-7548-worker0.dx-hbas.x2-8y.dev.dr.work
10.115.72.20 dx-7548-master1.dx-hbas.x2-8y.dev.dr.work
10.115.74.156 dx-7548-master0.dx-hbas.x2-8y.dev.dr.work
10.115.72.70 dx-7548-leader0.dx-hbas.x2-8y.dev.dr.work
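
After you update the /etc/hosts files, you can confirm from each source Region Server that the destination host names resolve. The following minimal sketch, which is not part of the product, checks resolution with Python's standard socket module; the host names are copied from the sample file above, so replace them with your destination cluster hosts:

import socket

# Host names taken from the sample /etc/hosts file above; replace with your destination hosts.
destination_hosts = [
    "dx-7548-worker0.dx-hbas.x2-8y.dev.dr.work",
    "dx-7548-master0.dx-hbas.x2-8y.dev.dr.work",
    "dx-7548-leader0.dx-hbas.x2-8y.dev.dr.work",
]

for host in destination_hosts:
    try:
        print(f"{host} -> {socket.gethostbyname(host)}")
    except socket.gaierror as err:
        print(f"{host} does not resolve: {err}")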

HBase replication policy fails when Perform Initial Snapshot is chosen

Condition

An HBase replication policy for COD on Microsoft Azure fails when the Perform Initial Snapshot option is chosen, but data replication succeeds when the option is not chosen.

Cause

This issue appears when the source cluster's managed identity is not assigned the required roles on the destination storage container.

Remedy

Assign the Storage Blob Data Owner or Storage Blob Data Contributor role on the destination storage data container to the source cluster's managed identity, and vice versa for bidirectional replication.
These roles allow the source cluster to write the snapshot to the destination cluster's container.
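
If you prefer to assign the role from the command line instead of the Azure portal, the following minimal sketch, which is not part of the product, invokes the Azure CLI from Python. The principal ID and container scope are placeholders, and the exact managed identity, role, and scope depend on your COD environment, so confirm them against your Azure setup before you run the command:

import subprocess

# Object (principal) ID of the source cluster's managed identity -- placeholder.
principal_id = "00000000-0000-0000-0000-000000000000"

# Scope of the destination storage container -- placeholder values in angle brackets.
scope = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Storage/storageAccounts/<destination-storage-account>"
    "/blobServices/default/containers/<destination-container>"
)

# Grant the role with the Azure CLI; use "Storage Blob Data Owner" instead if that is the role you need.
subprocess.run(
    [
        "az", "role", "assignment", "create",
        "--assignee", principal_id,
        "--role", "Storage Blob Data Contributor",
        "--scope", scope,
    ],
    check=True,
)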