Known Issues
Hortonworks Bug ID | Apache JIRA | Apache Component | Summary
---|---|---|---
BUG-50023 | PHOENIX-3916 | Phoenix |
Description of Problem: The hbck repair tool sometimes generates local indexes that are inconsistent with table data when overlapping regions are encountered. Workaround: If you know the database schema, fix this issue by dropping and recreating all local indexes of the table after the hbck tool completes its operation. Alternatively, rebuild the local indexes with an ALTER query, as in the sketch below.
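A minimal sketch of the rebuild statement, assuming a hypothetical local index IDX_LOCAL on table MY_TABLE (substitute your own index and table names):

```sql
-- Rebuild a local index in place (Phoenix SQL; names are placeholders)
ALTER INDEX IDX_LOCAL ON MY_TABLE REBUILD;
```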
BUG-60904 | KNOX-823 | Knox |
Description of Problem: When Ambari is proxied by Apache Knox, the QuickLinks are not rewritten to go back through the gateway. Workaround: If all access to Ambari in the deployment is through Knox, the new Ambari quicklink profile may be used to hide and/or change URLs to go through Knox permanently. A future release will make these links reflect the gateway appropriately.
BUG-65977 | SPARK-14922 | Spark |
Description of Problem: Since Spark 2.0.0, the `DROP PARTITION` grammar does not support range comparisons; only '=' is supported, while '<', '>', '<=', and '>=' are not. Associated Error Message: scala> sql("alter table t drop partition (b<1)").show org.apache.spark.sql.catalyst.parser.ParseException: mismatched input '<' expecting {')', ','}(line 1, pos 31) == SQL == alter table t drop partition (b<1) -------------------------------^^^ Workaround: To drop a partition, use an exact match with '=', as in the sketch below.
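For illustration, a minimal sketch using an exact-match partition spec (table and column names are hypothetical):

```sql
-- Rejected by the parser: ALTER TABLE t DROP PARTITION (b < 1);
-- Drop each matching partition with an exact match instead:
ALTER TABLE t DROP PARTITION (b = 0);
```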
BUG-70956 | N/A | Zeppelin |
Description of Problem: A Hive query submitted to the %jdbc interpreter returns a proxy validation error. Associated error messages:
Workaround:
BUG-70956 | N/A | Zeppelin |
Description of Problem: When used with Hive, the %jdbc interpreter might require Hadoop common jar files that need to be added manually. Workaround:
BUG-74152 | PHOENIX-3688 | Phoenix |
Description of Problem: Rebuilding indexes (ALTER INDEX IDX ON TABLE REBUILD) created on a table that has a row_timestamp column results in no data being visible to the user for that index. Workaround: Drop the index and recreate the same index, as in the sketch below; recreating the index has no extra overhead compared with rebuilding it.
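A minimal sketch of the drop-and-recreate workaround, assuming a hypothetical index IDX on table MY_TABLE over an indexed column COL1 (adjust to your schema):

```sql
-- Drop the index that was rebuilt against a row_timestamp table
DROP INDEX IDX ON MY_TABLE;
-- Recreate it with the same definition
CREATE INDEX IDX ON MY_TABLE (COL1);
```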
BUG-75179 | ZEPPELIN-2170 | Zeppelin |
Description of Problem: Zeppelin does not show all WARN messages thrown by spark-shell. The log level that appears as output at the Zeppelin notebook level cannot be changed. Workaround: Currently, there is no known workaround.
BUG-76996 | N/A | Spark 2 (Livy) |
Description of Problem: When upgrading from HDP-2.5.x to HDP-2.6.0 and using Spark2, the Livy interpreter is configured with a scope of 'global' and should be changed to 'scoped'. Workaround: After upgrading from HDP 2.5 to HDP 2.6, set the interpreter mode for %livy (Spark 2) to "scoped" using the pulldown menu in the %livy section of the Interpreters page.
BUG-78919 | N/A | Zeppelin |
Description of problem: "ValueError: No JSON object could be decoded" when restarting Zeppelin, when the disk is 100% full. Associated error message: Get following in error logs Traceback (most recent call last): File , line 312, in <module> Master().execute() File , line 280, in execute method(env) File , line 182, in start self.update_kerberos_properties() File , line 232, in update_kerberos_properties config_data = self.get_interpreter_settings() File , line 207, in get_interpreter_settings config_data = json.loads(config_content) File , line 339, in loads _default_decoder.decode(s) Workaround: Free up some space in disk, then delete /etc/zeppelin/conf/*.json, then restart zeppelin server | |||
BUG-79238 | N/A | Ranger |
Components Affected: Ranger (all) Description of Problem: SSL is deprecated; its use in production is not recommended. Use TLS. Workaround: For Ambari: Use
BUG-80656 | N/A | Zeppelin |
Description of Problem: Zeppelin fails to start during the upgrade process from HDP 2.5 to HDP 2.6. The error starts with: Exception in thread "main" org.apache.shiro.config.ConfigurationException: Unable to instantiate class org.apache.zeppelin.server.ActiveDirectoryGroupRealm for object named 'activeDirectoryRealm'. Please ensure you've specified the fully qualified class name correctly. Workaround: This error is due to a change in the configuration class for Active Directory. In HDP 2.5: org.apache.zeppelin.server.ActiveDirectoryGroupRealm
In HDP 2.6: org.apache.zeppelin.realm.ActiveDirectoryGroupRealm
To resolve this issue, choose one of the following two alternatives:
BUG-80901 | N/A | Zeppelin |
Component Affected: Zeppelin/Livy Description of Problem: This occurs when running applications through Zeppelin/Livy that require 3rd-party libraries. These libraries are not installed on all nodes in the cluster, but they are installed on the edge nodes. In yarn-client mode this works, because the job is submitted and runs on the edge node, where the libraries are installed. In yarn-cluster mode this fails, because the libraries are missing. Workaround: Set the location of the required libraries in spark.jars.
BUG-81637 | N/A | Spark |
Description of Problem: Executing concurrent queries over Spark via the Spark1-llap package spawns multiple threads. This may cause multiple queries to fail. However, it does not break the Spark Thrift server. Spark 1.6 is built with Scala 2.10, which is where this issue manifests ("synchronize reflection code as scala 2.10 reflection is not threadsafe"); it was subsequently fixed in Scala 2.11 by this patch: https://issues.scala-lang.org/browse/SI-6240. Associated error messages:
Workaround: Isolate the broken queries and re-run them one by one. This limits the query to one spawned thread.
BUG-86418 | N/A | Zeppelin |
Description of Problem: After upgrading from Ambari 2.4.2 to Ambari 2.5.2 and the subsequent HDP stack upgrade from 2.5 to 2.6, the jdbc(hive) interpreter fails to work correctly in Zeppelin. Associated Error Message: You might see one of the following errors in the Zeppelin stack trace after running jdbc(hive):
Workaround:
BUG-87128 | N/A | Mahout |
Since Mahout is deprecated in favor of Spark ML, and every code change carries the risk of creating additional incompatibilities, we will document these difficulties rather than change these established behaviors in Mahout. These issues affect only Mahout.
BUG-88614 | N/A | Hive |
Description of Problem: The RDBMS schema for the Hive metastore contains an index HL_TXNID_INDEX defined as CREATE INDEX HL_TXNID_INDEX ON HIVE_LOCKS USING hash (HL_TXNID); Hash indexes are not recommended by Postgres; details can be found in https://www.postgresql.org/docs/9.4/static/indexes-types.html. Workaround: It is recommended that this index be changed to type btree, as in the sketch below.
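A minimal sketch of the swap on the metastore database, using the table and column names from the definition above (verify against your metastore schema before running):

```sql
-- Drop the hash index and recreate it as the default btree type
DROP INDEX HL_TXNID_INDEX;
CREATE INDEX HL_TXNID_INDEX ON HIVE_LOCKS USING btree (HL_TXNID);
```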
BUG-90316 | N/A | Druid |
Description of Problem: Router is an optional component in a Druid deployment. If the router component of Druid is deployed on a cluster with Kerberos security, Hive queries on Druid tables sometimes fail. Workaround: If you want to use the optional router component, ensure that it is hosted on a broker node.
BUG-91304 | HIVE-18099 | Ambari, Hive, MapReduce, Tez |
Description of problem: Running Hive with Tez fails to load a configured native library, for example, the Snappy compression library. Associated error message: java.lang.RuntimeException: java.io.IOException: Unable to get CompressorType for codec (org.apache.hadoop.io.compress.SnappyCodec). This is most likely due to missing native libraries for the codec. Workaround: Add the configuration parameter: <property> <name>mapreduce.admin.user.env</name> <value>LD_LIBRARY_PATH=./tezlib/lib/</value> </property>
BUG-91364 | AMBARI-22506 | Zeppelin |
Description of problem: The pie chart does not display the correct distribution of the data. This occurs when there is a "," in the data, i.e., number formatting has been applied to the data. Associated error message: No error message. Workaround: Manually add "phoenix.phoenix.query.numberFormat" with the value "#.#" in Zeppelin's JDBC interpreter settings.
BUG-91996 | LIVY-299 | Livy, Zeppelin |
Description of Problem: The Livy Spark interpreter prints only the last line of code in the output. For example, if you submit the following: print(10) print(11) Livy will print only "11" and ignore the first line. Workaround: If you want to see the output of a particular line, it must be the last line in the code block in a paragraph.
BUG-92483 | HIVE-17900 | Hive |
Description of Problem: Compaction of an ACID table might fail if the table is partitioned by more than one column; see the sketch below for the kind of operation affected. Associated Error Message: java.io.IOException: Could not update stats for table ... Workaround: Currently, there is no known workaround for this issue.
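For context, a minimal sketch of a compaction request on a multi-column partition, assuming a hypothetical ACID table t partitioned by ds and hr:

```sql
-- Manual major compaction on a partition keyed by two columns (hypothetical names);
-- this is the kind of request that may fail with the stats-update IOException
ALTER TABLE t PARTITION (ds = '2018-01-01', hr = '00') COMPACT 'major';
```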
BUG-92957 | HIVE-11266 | Hive |
Description of Problem: Hive returns a wrong count result on an external table with table statistics if you change the table's data files. Workaround: Currently, there is no known workaround for this issue.
BUG-93550 | N/A | Zeppelin |
Description of Problem: Zeppelin Spark R notebooks can fail due to a Scala version discrepancy between Zeppelin and Spark. Workaround: There are two options you can use as a workaround:
BUG-94081 | HIVE-18384 | Hive2 |
Description of Problem: The log4j version used in this release can occasionally lead to a failed query or a failure of an LLAP daemon. This is caused by a race condition in the library and can result in a ConcurrentModificationException. LLAP daemons are restarted transparently by the system; however, the system will log queries that cannot recover as failed. Workaround: Re-run the query to resolve the problem.
RMP-7861 | HBASE-14138 | HBase |
Description of Problem: Only an HBase superuser can perform HBase backup-and-restore.
N/A | N/A | N/A |
Description of problem: OpenJDK 8u242 is not supported because it causes Kerberos failures. Workaround: Use a different version of OpenJDK.
Technical Service Bulletin | Apache JIRA | Apache Component | Summary
---|---|---|---
TSB-327 | HDFS-5698 | HDFS |
CVE-2018-11768: HDFS FSImage Corruption (potential DoS, file/dir takeover) In very large clusters, the in-memory format used to store the user, group, ACL, and extended attributes may exceed the size of the on-disk format, causing corruption of the fsImage. For more information on this issue, see the corresponding Knowledge article: TSB 2021-327: CVE-2018-11768: HDFS FSImage Corruption (potential DoS, file/dir takeover)
TSB-405 | N/A | N/A |
Impact of LDAP Channel Binding and LDAP signing changes in Microsoft Active Directory Microsoft has introduced changes in LDAP Signing and LDAP Channel Binding to increase the security of communications between LDAP clients and Active Directory domain controllers. These optional changes will have an impact on how third-party products integrate with Active Directory using the LDAP protocol. Workaround: Disable the LDAP Signing and LDAP Channel Binding features in Microsoft Active Directory if they are enabled. For more information on this issue, see the corresponding Knowledge article: TSB 2021-405: Impact of LDAP Channel Binding and LDAP signing changes in Microsoft Active Directory
TSB-406 | N/A | HDFS |
CVE-2020-9492: Hadoop filesystem bindings (i.e., webhdfs) allow credential stealing WebHDFS clients might send the SPNEGO authorization header to a remote URL without proper verification. A maliciously crafted request can trigger services to send server credentials to a webhdfs path (i.e., webhdfs://…), capturing the service principal. For more information on this issue, see the corresponding Knowledge article: TSB 2021-406: CVE-2020-9492 Hadoop filesystem bindings (ie: webhdfs) allows credential stealing
TSB-434 | HADOOP-17208, HADOOP-17304 | Hadoop |
KMS Load Balancing Provider Fails to invalidate Cache on Key Delete For more information on this issue, see the corresponding Knowledge article: TSB 2020-434: KMS Load Balancing Provider Fails to invalidate Cache on Key Delete
TSB-465 | N/A | HBase |
Corruption of HBase data stored with the MOB feature For more information on this issue, see the corresponding Knowledge article: TSB 2021-465: Corruption of HBase data stored with MOB feature on upgrade from CDH 5 and HDP 2
TSB-497 | N/A | Solr |
CVE-2021-27905: Apache Solr SSRF vulnerability with the Replication handler The Apache Solr ReplicationHandler (normally registered at "/replication" under a Solr core) has a "masterUrl" (also "leaderUrl" alias) parameter. The "masterUrl" parameter is used to designate another ReplicationHandler on another Solr core to replicate index data into the local core. To help prevent the CVE-2021-27905 SSRF vulnerability, Solr should check these parameters against a similar configuration used for the "shards" parameter. For more information on this issue, see the corresponding Knowledge article: TSB 2021-497: CVE-2021-27905: Apache Solr SSRF vulnerability with the Replication handler
TSB-512 | N/A | HBase |
HBase MOB data loss HBase tables with the MOB feature enabled may encounter problems which result in data loss. For more information on this issue, see the corresponding Knowledge article: TSB 2021-512: HBase MOB data loss