DLM Release Notes

Known Issues

DLM 1.5.1 has the following known issues, scheduled for resolution in a future release. Where available, a workaround has been provided.

Each entry below lists the Hortonworks bug ID, the affected category, and a summary of the issue.
DMX-606 DLM Engine

Problem: Hive replication fails at the REPL_DUMP stage due to a Knox Gateway timeout.

Description:
Exception occurred while doing replication instance execution:  (QuartzJob:166)
com.hortonworks.beacon.exceptions.BeaconException: com.hortonworks.beacon.client.BeaconClientException: <html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<title>Error 500 Server Error</title>
</head>
<body><h2>HTTP ERROR 500</h2>
<p>Problem accessing /gateway/beacon-proxy/beacon/api/beacon/operation/REPL_DUMP.
Workaround: Add the following parameters to the Custom gateway-site section of the Knox configuration, then restart Knox.
  • gateway.httpclient.connectionTimeout=30 minutes
  • gateway.httpclient.socketTimeout=30 minutes

Then rerun the replication policy.
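If you maintain gateway-site.xml directly rather than through the Ambari field, the two settings above can be expressed as the following fragment. The millisecond values are an assumption here (1800000 ms = 30 minutes); confirm the unit your Knox version expects before applying.

```xml
<!-- Custom gateway-site entries for the Knox Gateway; restart Knox afterward.
     Values assume milliseconds: 1800000 ms = 30 minutes. -->
<property>
    <name>gateway.httpclient.connectionTimeout</name>
    <value>1800000</value>
</property>
<property>
    <name>gateway.httpclient.socketTimeout</name>
    <value>1800000</value>
</property>
```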

BUG-77340 Restart of HiveServer2 and Knox

Problem: HiveServer2 failover requires a Knox restart when cookie-based authentication is enabled for HiveServer2.

Description: When HiveServer2 is accessed via Knox Gateway and HiveServer2 has cookie-based authentication enabled, a HiveServer2 restart requires that Knox also be restarted to get Knox-HiveServer2 interaction working again.

Workaround: Set hive.server2.thrift.http.cookie.auth.enabled=false in hive-site.xml in Ambari.
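Assuming Ambari writes this setting through to hive-site.xml in the usual way, the workaround corresponds to the following fragment. Restart HiveServer2 after making the change.

```xml
<!-- Disable cookie-based authentication for the HiveServer2 HTTP transport,
     so that a HiveServer2 restart no longer requires a Knox restart. -->
<property>
    <name>hive.server2.thrift.http.cookie.auth.enabled</name>
    <value>false</value>
</property>
```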

BUG-111066 DLM UI Problem: DLM App start succeeds even with wrong master password for DataPlane Service keystore

Description: After upgrading the DLM App from an older version to 1.1.3.0-28, all cloud credentials were marked as unregistered.

Workaround: Reinstall the DLM App and initialize it again, providing the valid master password.

BUG-112068 Atlas Problem: Atlas replication does not work for incremental changes.

Description: Incremental export is not seen on a fresh HDP installation.

Workaround: Restart the Atlas service once; afterward, all incremental Atlas replication works correctly.

BUG-114953 DLM UI

Problem: Incorrect statistics are displayed for a failed replication job.

BUG-115909 HDFS replication Problem: HDFS replication fails from HDP 3.1 to HDP 2.6.5 with Atlas replication enabled.

Description: The Atlas data types changed between HDP 3.1 and HDP 2.6.5, so Atlas replication does not work from HDP 3.1 to 2.6.5.

Workaround: You can enable HDFS replication from HDP 3.1 to 2.6.5 by disabling Atlas replication.

BUG-120302 HDFS replication Problem: Policy instances are not triggered; the policy shows no jobs in the instances view.

Description: The replication policy is submitted successfully, but its instances are not triggered even though the schedule requires it. The policy row displays "no jobs", and clicking it displays 0 policy instances. This can happen if HDFS or another required service is stopped or not running. The same applies to Hive replication policies.

Workaround: Check the cluster health and verify that all the required services are up and running.

ENGESC-848 HDFS replication Problem: HDFS replication of an encryption zone using the same-key setup does not work.

Description: In the DLM App, when you create an HDFS replication policy for a dataset inside an encryption zone, select the "Same Key" option for replication, and validate, the raw content is replicated but cannot be decrypted on the target cluster.

Workaround: Create a file named distcp-default.xml at /etc/beacon/conf/distcp-default.xml with the following content, then restart the DLM Engine. Note that distcp.preserve.rawxattrs is the parameter required to enable same-key replication.

<configuration>
    <property>
        <name>distcp.dynamic.strategy.impl</name>
        <value>org.apache.hadoop.tools.mapred.lib.DynamicInputFormat</value>
        <description>Implementation of dynamic input format</description>
    </property>
    <property>
        <name>distcp.static.strategy.impl</name>
        <value>org.apache.hadoop.tools.mapred.UniformSizeInputFormat</value>
        <description>Implementation of static input format</description>
    </property>
    <property>
        <name>mapred.job.map.memory.mb</name>
        <value>1024</value>
    </property>
    <property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx640m</value>
    </property>
    <property>
        <name>mapred.job.reduce.memory.mb</name>
        <value>1024</value>
    </property>
    <property>
        <name>mapred.reducer.new-api</name>
        <value>true</value>
    </property>
    <property>
        <name>mapreduce.reduce.class</name>
        <value>org.apache.hadoop.mapreduce.Reducer</value>
    </property>
    <property>
        <name>distcp.preserve.rawxattrs</name>
        <value>true</value>
    </property>
</configuration>