Known Issues
Hortonworks Bug ID | Apache JIRA | Component | Summary |
---|---|---|---|
BUG-32401 | | | Rolling upgrade/downgrade should not be used if truncate is turned on. Workaround: Before starting a rolling upgrade or downgrade process, turn truncate off. |
BUG-35942 | | YARN |
Users must manually configure ZooKeeper security with ResourceManager High Availability. Right now, the default ZooKeeper ACL used for the RM state store allows open access. To make it more secure, rely on Kerberos for authentication: configure SASL authentication so that only Kerberos-authenticated users can access the zkrmstatestore (a quick ACL check is sketched after this entry).
ZooKeeper Configuration Note: Securing ZooKeeper needs to be done only once for the HDP cluster. If this has already been done to secure HBase, for example, you do not need to repeat these ZooKeeper steps, provided Apache YARN ResourceManager High Availability uses the same ZooKeeper.
Apache YARN Configuration: The following applies to HDP 2.2 and HDP 2.3. Note: All nodes that run the ResourceManager (active/standby) should make these changes.
HDFS Configuration Note: This applies to HDP 2.1, 2.2, and 2.3.
| |
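As a quick sanity check (not part of the original workaround text), the ACL currently applied to the RM state store znode can be inspected with the ZooKeeper CLI. This sketch assumes the default yarn.resourcemanager.zk-state-store.parent-path of /rmstore and the default root znode name ZKRMStateRoot; the ZooKeeper host is a placeholder, and the zkCli.sh path may differ by installation:
# A world:anyone ACL means the state store is still open; after securing ZooKeeper,
# expect a restrictive (SASL/Kerberos) ACL here.
echo "getAcl /rmstore/ZKRMStateRoot" | /usr/hdp/current/zookeeper-client/bin/zkCli.sh -server zk-host.example.com:2181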
BUG-36817 | HBASE-13330, HBASE-13647 | HBase | test_IntegrationTestRegionReplicaReplication[IntegrationTestRegionReplicaReplication] fails with READ FAILURES |
BUG-37042 | | Hive |
Limitations when using the timestamp.formats SerDe parameter. Two issues involve the timestamp.formats SerDe parameter:
| |
BUG-38046 | | Spark |
Spark ATS is missing the Kill event: if a running Spark application is killed, the kill is not reflected in the YARN ATS. | |
BUG-38054 | RANGER-577 | Ranger | Ranger should not change Hive config if authorization is disabled |
BUG-38785 | | Hive |
With RHEL7, the ... Workaround: Create your own directory (such as ...).
If you wish to mount the ... | |
BUG-39265 | OOZIE-2311 | Oozie | An NPE in Oozie logs while running feed replication tests causes jobs to fail. |
BUG-39282 | HIVE-10978 | Hive |
When HDFS is encrypted (data-at-rest encryption is enabled) and the Hadoop Trash feature is enabled, DROP TABLE and DROP PARTITION have unexpected behavior. (The Hadoop Trash feature is enabled by setting ...)
When Trash is enabled, the data file for the table should be "moved" to the Trash bin, but if the table is inside an Encryption Zone, this "move" operation is not allowed.
Workaround: Here are two ways to work around this issue:
1. Use PURGE, as in DROP TABLE ... PURGE. This skips the Trash bin even if Trash is enabled.
2. Set ... |
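For illustration, a minimal form of workaround 1 (the database and table names below are hypothetical):
# PURGE bypasses the Trash bin, so the drop succeeds even when the table sits inside an encryption zone.
hive -e "DROP TABLE IF EXISTS mydb.sales_staging PURGE;"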
BUG-39322 | | HBase |
The HBase bulk load process is a MapReduce job that typically runs under the user ID who owns the source data. HBase data files created as a result of the job are then bulk-loaded into HBase RegionServers. During this process, HBase RegionServers move the bulk-loaded files from the user's directory and rename them under the HBase ...
Workaround: Run the MapReduce job as the ...
| |
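The user named in the truncated workaround above is not shown; the sketch below assumes it is the hbase service user, and the table, column, and path names are hypothetical:
# Generate HFiles and bulk-load them while running as the hbase service user, so the
# resulting files can be renamed into HBase-owned directories without ownership problems.
sudo -u hbase hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dimporttsv.columns=HBASE_ROW_KEY,cf:value \
  -Dimporttsv.bulk.output=/tmp/bulkload-out \
  my_table /tmp/bulkload-in
sudo -u hbase hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /tmp/bulkload-out my_table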
BUG-39412 | | Hive |
Users should not use ... Setting ... | |
BUG-39424 | YARN-2194 | YARN | NM fails to come up with error "Not able to enforce CPU weights; cannot write to cgroup." |
BUG-39468 | | Spark |
When accessing an HDFS file from pyspark, the HADOOP_CONF_DIR environment variable must be set. For example:
export HADOOP_CONF_DIR=/etc/hadoop/conf
[hrt_qa@ip-172-31-42-188 spark]$ pyspark
>>> lines = sc.textFile("hdfs://ip-172-31-42-188.ec2.internal:8020/tmp/PySparkTest/file-01")
.......
If HADOOP_CONF_DIR is not set properly, you might receive the following error:
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) | |
BUG-39674 | | Spark | Spark does not yet support wire encryption, dynamic executor allocation, SparkR, GraphX, Spark Streaming, IPython, or Zeppelin. |
BUG-39756 | | YARN | The NM web UI drops ?user.name when redirecting the URL to the MR JHS. |
BUG-39988 | HIVE-11110 | Hive | CBO: Default partition filter from the MetaStore query causes TPC-DS to regress by 3x. |
BUG-40536 | HBASE-13832, HDFS-8510 | HBase |
When a rolling upgrade is performed for HDFS, the HBase Master can sometimes run out of DataNodes on which to keep its write pipeline active. When this occurs, the HBase Master aborts after a few attempts to keep the pipeline going.
Workaround: To avoid this situation, ...
Note: There is a window of time during the rolling upgrade of HDFS when the HBase Master might be working with just one node; if that node fails, the WAL data might be lost. In practice, this is an extremely rare situation. Alternatively, the HBase Master can be turned off during the rolling upgrade of HDFS to avoid the above procedure. If this strategy is taken, client DDL operations and RegionServer failures cannot be handled during this time. As a final alternative, if the HBase Master fails during the rolling upgrade of HDFS, a manual start can be performed. |
BUG-40608 | | Tez |
The Tez UI View/Download link fails if the URL does not match the cookie.
Workaround: The Tez UI View/Download link works if the browser accesses a URL that matches the cookie. Example: the MapReduce JHS cookie is set with an external IP address; if a user clicks the link from inside the cluster, the URL will differ and the request will fail with a ... | |
BUG-40682 | SLIDER-909 | Slider | The Slider HBase app package fails in a secure cluster with wire encryption on. |
BUG-40761 | | Hue |
Hue is not supported in CentOS 7. Workaround: Deploy Hue on CentOS 6. | |
BUG-41215 | HDFS-8782 | HDFS |
Upgrade to the block ID-based DataNode storage layout delays DataNode registration. When upgrading from a pre-HDP-2.2 release, a DataNode with many disks, or with blocks that have random block IDs, can take a long time (potentially hours) to upgrade. The DataNode will not register with the NameNode until it finishes upgrading the storage directories. |
BUG-41366 | | Hue |
Hue by default is using ...
Impact: May cause a performance impact.
Steps to reproduce: Install Hue in a cluster. View the ...
Workaround: Modify the hue.ini file in /etc/hue/conf. Change from ... | |
BUG-41369 | | Hue | Hue About page may not display the correct version information. |
BUG-41644, BUG-41484 | | Spark | Apache and custom Spark builds need an HDP-specific configuration. For details, see the Troubleshooting Spark section: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_spark-quickstart/content/ch_troubleshooting-spark-quickstart.html |
BUG-42065 | HADOOP-11618, HADOOP-12304 | HDFS and Cloud Deployment |
HDP 2.3: Cannot set a non-HDFS file system as the default. This prevents S3, WASB, and GCC from being configured as the default file system.
HDP cannot be configured to use an external file system as the default file system, such as Azure WASB, Amazon S3, or Google Cloud Storage. The default file system is configured in core-site.xml using the fs.defaultFS property, and only HDFS can be configured as the default file system. These external file systems can still be configured for access as optional file systems, just not as the default file system.
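To illustrate the supported configuration (the bucket and container names are hypothetical):
# fs.defaultFS stays on HDFS; external stores are addressed by their full URIs instead.
hdfs getconf -confKey fs.defaultFS        # e.g. hdfs://namenode.example.com:8020
hadoop fs -ls s3a://my-bucket/data/
hadoop fs -ls wasb://my-container@my-account.blob.core.windows.net/data/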
BUG-42186 | | HBase |
The HDP 2.3 HBase install needs the MapReduce classpath modified for HBase functions to work.
Clusters that have Phoenix enabled place the following config in hbase-site.xml:
Property: hbase.rpc.controllerfactory.class
Value: org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory
This property points to a class found only in the phoenix-server jar. To resolve this class at run time for the above-listed MapReduce jobs, it needs to be part of the MapReduce classpath.
Workaround: Update the mapreduce.application.classpath property in the mapred-site.xml file to point to the /usr/hdp/current/phoenix-client/phoenix-server.jar file. | |
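A quick way to confirm the prerequisite and the change (a sketch; it assumes client configs live under /etc/hadoop/conf):
# Verify the Phoenix server jar exists at the path referenced by the workaround.
ls -l /usr/hdp/current/phoenix-client/phoenix-server.jar
# After editing mapred-site.xml, confirm the jar was appended to the MapReduce classpath.
grep -A 1 "mapreduce.application.classpath" /etc/hadoop/conf/mapred-site.xml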
BUG-42355 | | HBase |
After moving an application from HDP 2.2 to HDP 2.3, ACLs do not appear to function the same way.
Workaround: Set the following property:
<property>
  <name>hbase.security.access.early_out</name>
  <value>false</value>
</property> | |
BUG-42500 | HIVE-11587 | Hive |
Hive Hybrid Grace MapJoin can cause OutOfMemory issues.
Hive Hybrid Grace MapJoin is a new feature in HDP 2.3 (Hive 1.2). MapJoin joins two tables, holding the smaller one in memory. Hybrid Grace MapJoin spills parts of the small table to disk when the map join does not fit in memory at runtime. Currently there is a bug in the code that can cause this implementation to use too much memory, causing an OutOfMemory error. This applies to the Tez execution engine only.
Workaround: Turn off hybrid grace map join by setting this property in hive-site.xml: ...
|
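The property name is missing from the note above. In Apache Hive 1.2 the hybrid grace hash join is controlled by hive.mapjoin.hybridgrace.hashtable, so a session-level version of the workaround would look roughly like this (an assumption, not taken from the original text):
# Disable hybrid grace map join for this session; SELECT 1 stands in for the affected query.
hive -e "SET hive.mapjoin.hybridgrace.hashtable=false; SELECT 1;"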
BUG-43524 | STORM-848 | Storm | Issue: STORM-848 (Clean up dependencies and shade as much as possible) is not fixed in HDP 2.3.0. |
BUG-45664 | | Kafka | Memory leak in the Kafka Broker caused by a leak in an instance of ConcurrentHashMap/socketContainer |
BUG-45688 | KAFKA-2012 | Kafka | Kafka index file corruption |
BUG-50531 | | Kafka |
Kafka file system support. Issue: Encrypted file systems such as SafenetFS are not supported for Kafka; index file corruption can occur. For more information, see Install Kafka. | |
BUG-55196 | HIVE-12937 | Hive | DbNotificationListener unable to clean up old notification events |
Technical Service Bulletin | Apache JIRA | Apache Component | Summary |
---|---|---|---|
TSB-405 | N/A | N/A |
Impact of LDAP Channel Binding and LDAP signing changes in Microsoft Active Directory
Microsoft has introduced changes in LDAP Signing and LDAP Channel Binding to increase the security of communications between LDAP clients and Active Directory domain controllers. These optional changes will have an impact on how third-party products integrate with Active Directory using the LDAP protocol.
Workaround: Disable the LDAP Signing and LDAP Channel Binding features in Microsoft Active Directory if they are enabled.
For more information on this issue, see the corresponding Knowledge article: TSB-2021 405: Impact of LDAP Channel Binding and LDAP signing changes in Microsoft Active Directory |
TSB-406 | N/A | HDFS |
CVE-2020-9492 Hadoop filesystem bindings (ie: webhdfs) allows credential stealing
WebHDFS clients might send the SPNEGO authorization header to a remote URL without proper verification. A maliciously crafted request can trigger services to send server credentials to a webhdfs path (ie: webhdfs://…), capturing the service principal.
For more information on this issue, see the corresponding Knowledge article: TSB-2021 406: CVE-2020-9492 Hadoop filesystem bindings (ie: webhdfs) allows credential stealing |
TSB-434 | HADOOP-17208, HADOOP-17304 | Hadoop |
KMS Load Balancing Provider Fails to invalidate Cache on Key Delete
For more information on this issue, see the corresponding Knowledge article: TSB 2020-434: KMS Load Balancing Provider Fails to invalidate Cache on Key Delete |
TSB-465 | N/A | HBase |
Corruption of HBase data stored with MOB feature
For more information on this issue, see the corresponding Knowledge article: TSB 2021-465: Corruption of HBase data stored with MOB feature on upgrade from CDH 5 and HDP 2 |
TSB-497 | N/A | Solr |
CVE-2021-27905: Apache Solr SSRF vulnerability with the Replication handler
The Apache Solr ReplicationHandler (normally registered at "/replication" under a Solr core) has a "masterUrl" (also "leaderUrl" alias) parameter. The "masterUrl" parameter is used to designate another ReplicationHandler on another Solr core to replicate index data into the local core. To help prevent the CVE-2021-27905 SSRF vulnerability, Solr should check these parameters against a similar configuration used for the "shards" parameter.
For more information on this issue, see the corresponding Knowledge article: TSB 2021-497: CVE-2021-27905: Apache Solr SSRF vulnerability with the Replication handler |
TSB-512 | N/A | HBase |
HBase MOB data loss
HBase tables with the MOB feature enabled may encounter problems which result in data loss.
For more information on this issue, see the corresponding Knowledge article: TSB 2021-512: HBase MOB data loss |