Known Issues
| Hortonworks Bug ID | DPS Service | Summary |
|---|---|---|
| BUG-87028 | DPS Platform UI | Summary: Cluster status is not consistent between the DLM App and the DPS Platform pages. Description: The status of the NodeManager and the DataNode on a cluster can take up to 10 minutes to update in DPS Platform. Workaround: If the information does not update, wait a few minutes and refresh the page to view the status. |
| BUG-90784 | DLM Service UI | Summary: The Ranger UI does not display Deny policy items. Description: When a policy with deny conditions is created on Ranger-admin in a replication relationship, the Policy Details page in Ranger does not display the deny policy items. Workaround: If Deny policy items do not appear on the Ranger admin Policy Details page, update the corresponding service-def with the `enableDenyAndExceptionsInPolicies="true"` option. Refer to section "2.2 Enhanced Policy model" in https://cwiki.apache.org/confluence/display/RANGER/Deny-conditions+and+excludes+in+Ranger+policies |
| | DLM Service UI | Summary: Under some circumstances, a successful HDFS data transfer displays incorrect information instead of the actual bytes transferred. Description: The bytes transferred are not shown correctly when map tasks are killed because nodes are lost in a cluster. New map tasks are launched in an attempt to recover, resulting in incorrectly displayed statistics. Workaround: None |
| BUG-91081 | DSS | Summary: During installation, the DP Profiler service fails to start with the error "No java installations was detected." Description: There is an issue in the way DP Profiler locates the Java path. If Java is installed from a tarball, as opposed to via a package manager such as yum, it is not added to the system path. Workaround: Ensure that Java is in the PATH variable on the system where DP Profiler is being installed, so that Java can be detected correctly. |
| BUG-91018 | DLM Engine, API | Summary: If a slash is appended to the HDFS path, HDFS replication fails for Ranger. Description: When defining HDFS replication policies, including a slash at the end of the HDFS path causes the replication job to fail. This limitation applies when the HDFS replication policy is created through a REST API call. Workaround: Do not add a slash at the end of the HDFS path. |
| BUG-91161 | DSS | Summary: Spark history from DSS jobs is filling up HDFS capacity. Description: The profiler jobs in DSS generate a large amount of information in spark-history on HDFS, which can fill up HDFS capacity if not managed properly. Workaround: |
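For BUG-90784, the service-def update described in the workaround amounts to setting a flag in the `options` field of the affected Ranger service definition. As a rough illustration only (the service definition is fetched and updated through Ranger's REST API, and the exact endpoint paths and service-def name depend on your Ranger version and setup), the relevant fragment of the service-def JSON looks like:

```json
{
  "name": "hdfs",
  "options": {
    "enableDenyAndExceptionsInPolicies": "true"
  }
}
```

Here `"hdfs"` is a placeholder; apply the change to whichever service-def backs the policies in the replication relationship.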
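The workaround for BUG-91081 is to make the tarball-installed JDK visible on the PATH before installing DP Profiler. A minimal sketch, assuming a hypothetical install location of `/usr/local/jdk` (adjust to wherever the tarball was unpacked):

```shell
# Hypothetical JDK location from a tarball install; unlike a yum/rpm
# install, this is not added to the system PATH automatically.
export JAVA_HOME=/usr/local/jdk
export PATH="$JAVA_HOME/bin:$PATH"

# Confirm that java is discoverable before starting the DP Profiler install.
command -v java
```

Adding the two `export` lines to the installing user's shell profile (or a file under `/etc/profile.d/`) makes the change persistent across sessions.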
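For BUG-91018, clients that create HDFS replication policies through the REST API can defensively normalize the path before submitting it. A minimal sketch; `normalize_hdfs_path` is a hypothetical helper for illustration, not part of any DPS or DLM API:

```python
def normalize_hdfs_path(path: str) -> str:
    """Strip trailing slashes from an HDFS path before using it in a
    replication-policy REST call (see BUG-91018).

    The root path "/" is left intact, since stripping it would leave
    an empty string.
    """
    stripped = path.rstrip("/")
    return stripped if stripped else "/"
```

For example, `normalize_hdfs_path("/data/finance/")` returns `/data/finance`, which is safe to include in the policy definition.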