Known Issues
Summary of known issues for this release.
Hortonworks Bug ID | Apache JIRA | Apache component | Summary |
---|---|---|---|
BUG-79238 | N/A | Documentation, HBase, HDFS, Hive, MapReduce, ZooKeeper | Description of the problem or behavior SSL is deprecated and its use in production is not recommended. Use TLS. Workaround In Ambari: Use ssl.enabled.protocols=TLSv1|TLSv1.1|TLSv1.2 and security.server.disabled.protocols=SSL|SSLv2|SSLv3. For help configuring TLS for other components, contact customer support. Documentation will be provided in a future release. |
BUG-106494 | N/A | Documentation, Hive | Description of the problem or behavior When you partition a Hive column of type double and the column value is 0.0, the partition directory is created as "0", and an ArrayIndexOutOfBounds exception occurs. Associated error message Workaround Do not partition columns of type double. |
BUG-101082 | N/A | Documentation, Hive | Description of the problem or behavior When running Beeline in batch mode, queries killed by the Workload Management process can on rare occasions mistakenly return success on the command line. Workaround There is currently no workaround. |
BUG-106379 | N/A | Documentation, Hive | Description of the problem or behavior The upgrade process fails to perform the necessary compaction of ACID tables and can cause permanent data loss. Workaround If you have ACID tables in your Hive metastore, enable ACID operations in Ambari or set the Hive configuration properties to enable ACID. If ACID operations are disabled, the upgrade process does not convert ACID tables, which causes permanent data loss; you cannot recover the data in your ACID tables later. |
BUG-106286 | N/A | Documentation, Hive | Description of the problem or behavior The upgrade process might fail to make a backup of the Hive metastore, and having a backup is critically important: the upgrade can succeed even if the backup fails. Workaround Make a manual backup of your Hive metastore database before upgrading. A manual backup is especially important if you did not use Ambari to install Hive and create the metastore database, because Ambari might not have the permissions needed to perform the backup automatically, but it is highly recommended in all cases. |
BUG-98628 | HBASE-20530 | HBase | Description of the problem or behavior When restoring an incremental backup, the restore task may intermittently fail with the error "java.io.IOException: No input paths specified in job". Workaround Because the exact causes of the error are unknown, there is no known workaround. Re-running the restore task may or may not succeed. |
BUG-103495 | N/A | HBase | Description of the problem or behavior Because region assignment was refactored in HBase, there are unresolved issues that may affect the stability of the RegionServer Groups feature. If you rely on this feature, we recommend waiting for a future HDP 3.x release, which will restore the stability this feature had in HBase 1.x/HDP 2.x releases. Workaround There is currently no workaround. |
BUG-98727 | N/A | HBase | Description of the problem or behavior Because region assignment was refactored in HBase, there are unresolved issues that may affect the stability of the region replication feature. If you rely on this feature, we recommend waiting for a future HDP 3.x release, which will restore the stability this feature had in HBase 1.x/HDP 2.x releases. Workaround There is currently no workaround. |
BUG-105983 | N/A | HBase | Description of the problem or behavior An HBase service (Master or RegionServer) stops participating with the rest of the HBase cluster. Associated error message The service's log contains stack traces that contain "Kerberos principal name does NOT have the expected hostname part..." Workaround Retrying the connection solves the problem. |
BUG-94954 | HBASE-20552 | HBase | Description of the problem or behavior After a rolling restart of HBase, the HBase Master may not correctly assign all regions to the cluster. Associated error message There are regions in transition, including hbase:meta, which result in "Region is not online on RegionServer" messages on the Master or RegionServers, or errors involving ServerCrashProcedure in the Master. Workaround Restart the HBase Master. |
BUG-96402 | HIVE-18687 | Hive | Description of the problem or behavior When HiveServer2 is running in HA (high-availability) mode in HDP 3.0.0, resource plans are loaded in memory by all HiveServer2 instances. If a client makes changes to a resource plan, the changes are reflected (pushed) only in the HiveServer2 instance to which the client is connected. Workaround For the resource plan changes to be reflected on all HiveServer2 instances, all HiveServer2 instances must be restarted so that they reload the resource plan from the metastore. |
BUG-88614 | N/A | Hive | Description of the problem or behavior The RDBMS schema for the Hive metastore contains an index HL_TXNID_INDEX defined as CREATE INDEX HL_TXNID_INDEX ON HIVE_LOCKS USING hash (HL_TXNID); Hash indexes are not recommended by PostgreSQL. For more information, see https://www.postgresql.org/docs/9.4/static/indexes-types.html Workaround It is recommended to change this index to type btree (see the sketch after this table). |
BUG-101836 | N/A | Hive | Description of the problem or behavior Statistics-based optimizations for metadata-only queries, such as count, count(distinct <partcol>), do not currently work for managed tables. Workaround There is currently no workaround. |
BUG-107434 | N/A | Hive | Description of the problem or behavior Tables that have buckets must be recreated when you upgrade a cluster to HDP 3.0. The hash function for bucketing changed in HDP 3.0, causing problems in certain operations, such as INSERT and JOIN: Hive does not handle queries correctly if you mix old and new bucketed tables in the same query. Workaround Recreate bucketed tables after upgrading, before running any queries on the cluster (see the sketch after this table). |
BUG-120655 | N/A | Hive | Description of the problem or behavior The Hive Warehouse Connector does not support non-ORC file formats for writes. Workaround There is currently no workaround. |
BUG-60904 | KNOX-823 | Knox | Description of the problem or behavior When Ambari is being proxied by Apache Knox, the QuickLinks are not rewritten to go back through the gateway. If all access to Ambari in the deployment is through Knox, the new Ambari QuickLink profile may be used to hide and/or change URLs to go through Knox permanently. A future release will make these links reflect the gateway appropriately. Workaround There is currently no workaround. |
BUG-107399 | N/A | Knox | Description of the problem or behavior After an upgrade from previous HDP versions, certain topology deployments may return a 503 error. This includes, but may not be limited to, knoxsso.xml for KnoxSSO-enabled services. Workaround When this is encountered, make a minor change (even whitespace) to the knoxsso topology (or any other affected topology) through Ambari and restart the Knox gateway server; this should eliminate the issue. |
BUG-91996 | LIVY-299 | Livy, Zeppelin | Description of the problem or behavior The Livy Spark interpreter prints only the output of the last line of code. For example, if you submit: print(10) print(11) only "11" is printed; the output of the first line, "10", is ignored. Workaround If you want to see the output of a particular line, it must be the last line in the code block. |
BUG-106266 | OOZIE-3156 | Oozie | Description of the problem or behavior When the check() method of SshActionExecutor is invoked, Oozie executes the command "ssh <host-ip> ps -p <pid>" to determine whether the ssh action has completed. If the connection to the host fails during the action status check, the command returns with an error code, but the action status is reported as OK, which may not be correct. Associated error message The ssh command exits with the exit status of the remote command, or with 255 if an error occurred. Workaround Retrying the connection solves the problem. |
BUG-107236 | N/A | Ranger | Description of the problem or behavior The Atlas REST sync source is not supported for tagsync. Workaround Using Kafka is recommended. |
BUG-101227 | N/A | Spark | Description of the problem or behavior When Spark Thriftserver runs several queries concurrently, some of them can fail with a timeout exception when performing a broadcast join. Associated error message Workaround You can resolve this issue by increasing the spark.sql.broadcastTimeout value (see the sketch after this table). |
BUG-100187 | SPARK-23942 | Spark | Description of the problem or behavior In Spark, users can register a QueryExecutionListener to add callbacks for query executions, for example, for actions such as collect, foreach, and show on a DataFrame. You can use spark.session().listenerManager().register(...) or the spark.sql.queryExecutionListeners configuration to set the query execution listener. This works in the other API languages as well; however, due to a bug, the callback was not being called for the Python collect API. It is now called correctly on the Spark side. Workaround Manually call the callbacks right after collect in the Python API, wrapped in a try-catch. |
BUG-109607 | N/A | Spark | Description of the problem or behavior With wire encryption enabled for containerized Spark on YARN with Docker, Spark submit fails in "cluster" deployment mode. Spark submit in "client" deployment mode works successfully. Workaround There is currently no workaround. |
BUG-65977 | SPARK-14922 | Spark | Description of the problem or behavior Since Spark 2.0.0, `DROP PARTITION` by range is not supported grammatically. In other words, only '=' is supported, while '<', '>', '<=', and '>=' are not. Associated error message Workaround To drop a partition, use an exact match with '=' (see the sketch after this table). |
BUG-110970 | N/A | Spark | Description of the problem or behavior For long-running SparkSQL jobs on a Kerberized cluster, some JDBC clients may randomly fail with a "no token found in cache" error after the delegation token expiry period. Workaround |
BUG-106917 | N/A | Sqoop | Description of the problem or behavior In HDP 3, managed Hive tables must be transactional (hive.strict.managed.tables=true). Associated error message Workaround When using --hive-import with --as-parquetfile, users must also provide --external-table-dir with a fully qualified location of the table. |
BUG-102672 | N/A | Sqoop | Description of the problem or behavior In HDP 3, managed Hive tables must be transactional (hive.strict.managed.tables=true). Writing to a transactional table with HCatalog is not supported by Hive. This leads to errors during HCatalog Sqoop imports if the specified Hive table does not exist or is not external. Associated error message Store into a transactional table db.table from Pig/Mapreduce is not supported Workaround Before running the HCatalog import with Sqoop, the user must create the external table in Hive; the --create-hcatalog-table option does not support creating external tables (see the sketch after this table). |
RMP-11408 | ZEPPELIN-2170 | Zeppelin | Description of the problem or behavior Zeppelin does not show all WARN messages thrown by spark-shell at the notebook level. Workaround There is currently no workaround. |
BUG-91364 | AMBARI-22506 | Zeppelin | Description of the problem or behavior Pie charts in Zeppelin do not display the correct distribution for the provided data. This occurs when there is a "," in the data, that is, when number formatting has been applied to the data. Workaround Add a manual configuration setting in Zeppelin's JDBC interpreter settings: add "phoenix.phoenix.query.numberFormat" with the value "#.#". |
N/A | N/A | N/A | Description of the problem or behavior OpenJDK 8u242 is not supported because it causes Kerberos failures. Workaround Use a different version of OpenJDK. |
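
The following sketches illustrate some of the workarounds above. For BUG-88614, a minimal sketch of recreating the metastore index as a btree index, run against a PostgreSQL-backed Hive metastore database:

```sql
-- Drop the hash index and recreate it with the PostgreSQL default btree type.
DROP INDEX HL_TXNID_INDEX;
CREATE INDEX HL_TXNID_INDEX ON HIVE_LOCKS USING btree (HL_TXNID);
```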
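
For BUG-107434, a minimal sketch of recreating a bucketed table after the upgrade; the table name legacy_bucketed is hypothetical. CREATE TABLE ... LIKE copies the table definition (including bucketing), and the INSERT rewrites the data with the new hash function:

```sql
-- Recreate the table so its buckets are written with the HDP 3.0 hash function.
CREATE TABLE legacy_bucketed_new LIKE legacy_bucketed;
INSERT INTO legacy_bucketed_new SELECT * FROM legacy_bucketed;
DROP TABLE legacy_bucketed;
ALTER TABLE legacy_bucketed_new RENAME TO legacy_bucketed;
```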
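
For BUG-101227, the timeout can be raised per session, for example from a Beeline session connected to the Spark Thriftserver; 600 is an arbitrary example value (the default is 300 seconds):

```sql
-- Allow broadcast joins up to 10 minutes before timing out.
SET spark.sql.broadcastTimeout=600;
```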
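
For BUG-65977, partitions must be dropped with an exact match on the partition value; the table and partition names here are hypothetical:

```sql
-- Supported: exact match with '='.
ALTER TABLE sales DROP PARTITION (dt = '2017-12-31');
-- Not supported since Spark 2.0.0: range comparisons such as
-- ALTER TABLE sales DROP PARTITION (dt < '2018-01-01');
```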
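
For BUG-102672, a minimal sketch of pre-creating the external target table in Hive before running the Sqoop HCatalog import; the database, table, columns, storage format, and location are all hypothetical:

```sql
-- Pre-create the external table that the HCatalog import will write into;
-- the --create-hcatalog-table option cannot create external tables itself.
CREATE EXTERNAL TABLE db.imported_orders (
  id INT,
  name STRING
)
STORED AS ORC
LOCATION '/warehouse/external/imported_orders';
```
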
Technical Service Bulletin | Apache JIRA | Apache component | Summary |
---|---|---|---|
TSB-327 | HDFS-5698 | HDFS | CVE-2018-11768: HDFS FSImage Corruption (potential DoS, file/dir takeover) In very large clusters, the in-memory format to store the user, group, ACL, and extended attributes may exceed the size of the on-disk format, causing corruption of the fsImage. For the latest update on this issue, see the corresponding Knowledge article: CVE-2018-11768: HDFS FSImage Corruption (potential DoS, file/dir takeover) |
TSB-405 | N/A | N/A | Impact of LDAP Channel Binding and LDAP signing changes in Microsoft Active Directory Microsoft has introduced changes in LDAP Signing and LDAP Channel Binding to increase the security of communications between LDAP clients and Active Directory domain controllers. These optional changes will have an impact on how third-party products integrate with Active Directory using the LDAP protocol. Workaround Disable the LDAP Signing and LDAP Channel Binding features in Microsoft Active Directory if they are enabled. For the latest update on this issue, see the corresponding Knowledge article: TSB-2021 405: Impact of LDAP Channel Binding and LDAP signing changes in Microsoft Active Directory |
TSB-406 | N/A | HDFS | CVE-2020-9492 Hadoop filesystem bindings (ie: webhdfs) allows credential stealing WebHDFS clients might send the SPNEGO authorization header to a remote URL without proper verification. A maliciously crafted request can trigger services to send server credentials to a webhdfs path (ie: webhdfs://…) for capturing the service principal. For the latest update on this issue, see the corresponding Knowledge article: TSB-2021 406: CVE-2020-9492 Hadoop filesystem bindings (ie: webhdfs) allows credential stealing |
TSB-450 | HBASE-21000 | HBase | Default limits for PressureAwareCompactionThroughputController are too low HDP and CDH releases suffer from low compaction throughput limits, which cause storefiles to back up faster than compactions can rewrite them. For the latest update on this issue, see the corresponding Knowledge article: Cloudera Customer Advisory: Default limits for PressureAwareCompactionThroughputController are too low |
TSB-463 | N/A | HBase | HBase Performance Issue The HDFS short-circuit setting dfs.client.read.shortcircuit is overwritten to disabled by hbase-default.xml. HDFS short-circuit reads allow access to data in HDFS through a domain socket (file) instead of a network socket, alleviating the TCP overhead of reading data from HDFS, which can meaningfully improve HBase performance (by as much as 30-40%). For the latest update on this issue, see the corresponding Knowledge article: TSB 2021-463: HBase Performance Issue |
TSB-480/2 | HIVE-24224 | Hive | Hive ignores the property to skip a header or footer in a compressed file Incorrect results can occur when running SELECT queries if the header or footer count value is greater than 0. For the latest update on this issue, see the corresponding Knowledge article: TSB 2021-480.2: Hive ignores the property to skip a header or footer in a compressed file |
TSB-494 | N/A | HBase | Accumulated WAL Files Cannot be Cleaned up When Using Phoenix Secondary Global Indexes The write-ahead log (WAL) files for Phoenix tables that have secondary global indexes defined on them cannot be automatically cleaned up by HBase, leading to excess storage usage and possible errors due to filling up the storage. Workaround Perform a rolling restart of HBase if the number of znodes under hbase-secure/splitWAL in ZooKeeper is greater than 8000. For the latest update on this issue, see the corresponding Knowledge article: TSB 2021-494: Accumulated WAL Files Cannot be Cleaned up When Using Phoenix Secondary Global Indexes |
TSB-497 | N/A | Solr | CVE-2021-27905: Apache Solr SSRF vulnerability with the Replication handler The Apache Solr ReplicationHandler (normally registered at "/replication" under a Solr core) has a "masterUrl" (also "leaderUrl" alias) parameter. The "masterUrl" parameter is used to designate another ReplicationHandler on another Solr core to replicate index data into the local core. To help prevent the CVE-2021-27905 SSRF vulnerability, Solr should check these parameters against a similar configuration used for the "shards" parameter. For the latest update on this issue, see the corresponding Knowledge article: TSB 2021-497: CVE-2021-27905: Apache Solr SSRF vulnerability with the Replication handler |
TSB-512 | N/A | HBase | HBase MOB data loss HBase tables with the MOB feature enabled may encounter problems which result in data loss. For the latest update on this issue, see the corresponding Knowledge article: TSB 2021-512: HBase MOB data loss |