DLM has the following known issues, scheduled for resolution in a future release. Where available, a workaround has been provided.
| Hortonworks Bug ID | Category | Summary |
|---|---|---|
|  | Restart of HiveServer2 and Knox | Problem: HiveServer2 failover requires a Knox restart when cookie-based authentication is enabled for HiveServer2. Description: When HiveServer2 is accessed through the Knox Gateway and HiveServer2 has cookie-based authentication enabled, restarting HiveServer2 also requires restarting Knox to get Knox-HiveServer2 interaction working again. Workaround: Set hive.server2.thrift.http.cookie.auth.enabled=false in hive-site.xml in Ambari (see the snippet after this table). |
| BUG-111066 | DLM UI | Problem: DLM App start succeeds even with the wrong master password for the DataPlane Service keystore. Description: After upgrading the DLM App from the older version 1.1.3 to 220.127.116.11-28, all cloud credentials were marked as unregistered. Workaround: Reinstall the DLM App and initiate it again, providing the valid password to proceed. |
| BUG-112068 | Atlas | Problem: Atlas replication does not work for incremental export. Description: Incremental export is not seen with a fresh HDP installation. Workaround: Restart the Atlas service once; all subsequent incremental Atlas replication works correctly (see the example after this table). |
|  |  | Problem: Incorrect statistics are displayed for a failed replication job. |
| BUG-115909 | HDFS replication | Problem: HDFS replication from HDP 3.1 to HDP 2.6.5 fails with Atlas. Description: The Atlas data types changed between HDP 2.6.5 and HDP 3.1, so Atlas replication does not work from HDP 3.1 to 2.6.5. Workaround: Enable HDFS replication from HDP 3.1 to 2.6.5 by disabling Atlas replication. |
| BUG-120302 | HDFS replication | Problem: Policy instances are not triggered, and the policy row shows no jobs. Description: The replication policy is submitted successfully, but its instances are not triggered even though the schedule calls for them. The policy row displays no jobs, and clicking it shows 0 policy instances. This can happen when HDFS or any other required service is stopped or not running; the same applies to Hive replication. Workaround: Check the cluster health and verify that all required services are up and running (see the example after this table). |
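For the HiveServer2/Knox cookie issue, a minimal sketch of how the workaround property would appear in hive-site.xml. In Ambari you would normally set this value under the HiveServer2 advanced hive-site configuration rather than editing the file by hand, and HiveServer2 must be restarted for the change to take effect.

```xml
<!-- hive-site.xml: disable cookie-based authentication on HiveServer2's
     HTTP transport so that a HiveServer2 restart no longer forces a Knox restart -->
<property>
  <name>hive.server2.thrift.http.cookie.auth.enabled</name>
  <value>false</value>
</property>
```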
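For the Atlas incremental replication issue (BUG-112068), the one-time Atlas restart can be performed from the Ambari UI or through the Ambari REST API. A sketch of the API approach, assuming a placeholder Ambari server AMBARI_HOST:8080 and a placeholder cluster name CLUSTER_NAME:

```sh
# Stop the Atlas service (Ambari represents a stopped service as state INSTALLED)
curl -u admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Stop Atlas"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/services/ATLAS

# Start Atlas again once the stop request has completed
curl -u admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Start Atlas"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/services/ATLAS
```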
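For BUG-120302, one quick way to verify that the required services are running is to query their states through the Ambari REST API (same placeholder host and cluster names as above); every service a policy depends on, such as HDFS or Hive, should report STARTED:

```sh
# List every service in the cluster with its current state
# (STARTED means running; INSTALLED means stopped)
curl -u admin -H 'X-Requested-By: ambari' \
  'http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/services?fields=ServiceInfo/state'
```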