Fixed Issues in CDH 6.2.1
CDH 6.2.1 fixes the following issues:
- Data loss with restore snapshot
- CDH users must not use Apache HBase's OfflineMetaRepair tool
- Potential to bypass transaction and idempotent ACL checks in Apache Kafka
- Kafka Broker Java configuration options in Cloudera Manager 6.2.0 are not applied to the broker JVM process
- Oozie database upgrade fails when PostgreSQL version 9.6 or higher is used
- GPU and Custom Resource Types Are Not Added to the YARN Client's Configuration File When Enabled
- Error when executing Java classes from a CDH cluster running on Ubuntu 18
- Hadoop LdapGroupsMapping does not support LDAPS for self-signed LDAP server
- The Idempotent and Transactional Capabilities of Kafka are Incompatible with Sentry
- WebHCat service cannot log
- Attempt to move table between encryption zones corrupts metadata
- Upstream Issues Fixed
Data loss with restore snapshot
The restore snapshot command causes data loss if the target table was split or truncated after the snapshot was created.
Products affected: HBase
Releases affected:
- CDH 6.0.x
- CDH 6.1.x
- CDH 6.2.0
- CDH 6.3.0
Users affected: Users relying on restore snapshot functionality.
Impact: The restored table can be missing data if a split or truncate occurred after the snapshot was created.
Immediate action required: Update to a version of CDH containing the fix. Alternatively, instead of restoring the snapshot, drop the affected table and clone it from the snapshot:
hbase> disable 'table'
hbase> drop 'table'
hbase> clone_snapshot 'snapshot_name', 'table'
hbase> enable 'table'
Addressed in release/refresh/patch:
- CDH 6.2.1
- CDH 6.3.2
Knowledge article: For the latest update on this issue see the corresponding Knowledge article: TSB 2020-379: Data loss with restore snapshot
CDH users must not use Apache HBase's OfflineMetaRepair tool
OfflineMetaRepair helps you rebuild the HBase meta table from the underlying file system. The tool is often used to correct meta table corruption or loss. It is designed to work only with hbase-1.x (CDH 5.x). Users must not run the OfflineMetaRepair tool against CDH 6.x, which uses hbase-2.x. Running the OfflineMetaRepair tool in CDH 6.x breaks or corrupts the HBase meta table.
If you have already corrupted your meta table, or you believe your meta table requires the use of something like the former OfflineMetaRepair tool, do not attempt any further changes; contact Cloudera Support.
Products affected: CDH
Releases affected:
- CDH 6.0.0, 6.0.1
- CDH 6.1.0, 6.1.1
- CDH 6.2.0
- CDH 6.3.0
Users affected: Clusters with HBase installed.
Impact: Cluster becomes inoperable.
Immediate action required: Update to a version of CDH containing the fix.
Workaround: Do not run OfflineMetaRepair tool.
Addressed in release/refresh/patch:
- CDH 6.2.1
- CDH 6.3.2
Knowledge article: For the latest update on this issue see the corresponding Knowledge article: TSB 2020-376: CDH users must not use Apache HBase's OfflineMetaRepair tool
Potential to bypass transaction and idempotent ACL checks in Apache Kafka
It is possible to manually craft a Produce request which bypasses transaction and idempotent ACL validation. Only authenticated clients with Write permission on the respective topics are able to exploit this vulnerability.
Products affected:
- CDH
- CDK Powered by Apache Kafka
Releases affected:
- CDH versions 6.0.x, 6.1.x, 6.2.0
- CDK versions 3.0.x, 3.1.x, 4.0.x
Users affected: All users who run Kafka in CDH and CDK.
Date/time of detection: September 2018
Severity (Low/Medium/High): 7.1 (High) (CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:L/I:H/A:H)
Impact: Attackers can exploit this issue to bypass certain security restrictions to perform unauthorized actions. This can aid in further attacks.
CVE: CVE-2018-17196
Immediate action required: Update to a version of CDH containing the fix.
Addressed in release/refresh/patch:
- CDH 6.2.1, 6.3.2
- CDK 4.1.0
Knowledge article: For the latest update on this issue see the corresponding Knowledge article: TSB 2020-378: Potential to bypass transaction and idempotent ACL checks in Apache Kafka
Kafka Broker Java configuration options in Cloudera Manager 6.2.0 are not applied to the broker JVM process
Cloudera Manager allows the configuration of JVM options for Kafka brokers via the Additional Broker Java Options (broker_java_opts) service parameter. In Cloudera Manager 6.2.0, when managing CDH 6.2.0 clusters, broker_java_opts is ignored when starting the Kafka broker process, so the broker starts with default JVM options. Depending on other environment variables, this can lead to the following problems:
- The Kafka broker process does not use the recommended garbage collector settings, leading to poor performance and increased resource (heap memory) utilization.
- The Kafka broker process allows remote connections to the JMX interface, making the process vulnerable to remote code execution on the broker nodes.
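For illustration, broker_java_opts typically carries settings of the following kind. The GC flags shown here mirror upstream Kafka's stock defaults and are examples only, not Cloudera-specific recommendations:

```shell
# Illustrative only: an example of the kind of JVM options passed via broker_java_opts.
# The GC flags mirror upstream Kafka's defaults; tune for your own workload.
broker_java_opts="-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 \
  -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent \
  -Dcom.sun.management.jmxremote.local.only=true"
```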
Products affected: Apache Kafka
Releases affected:
- CDH 6.2.0
- Cloudera Manager 6.2.0
Addressed in release/refresh/patch:
- CDH 6.2.1, 6.3.0
Knowledge article: For the latest update on this issue see the corresponding Knowledge article: TSB 2019-377: Kafka Broker Java configuration options in Cloudera Manager 6.2.0 are not applied to the broker JVM process
Oozie database upgrade fails when PostgreSQL version 9.6 or higher is used
Oozie database upgrade fails when PostgreSQL version 9.6 or higher is used, due to a system table change between PostgreSQL 9.5 and 9.6. The failure only happens if Oozie uses a JDBC driver earlier than 9.4.1209.
Workaround:
- After the parcels of the new version are distributed, replace the PostgreSQL JDBC driver with a newer one (version 9.4.1209 or higher) in the new parcel, at the following locations:
  - /opt/cloudera/parcels/${newparcel.version}/lib/oozie/lib/
  - /opt/cloudera/parcels/${newparcel.version}/lib/oozie/libtools/
  For package-based installations, the corresponding locations are:
  - /usr/lib/oozie/lib/
  - /usr/lib/oozie/libtools/
- Perform the upgrade.
You can download the driver from the PostgreSQL JDBC driver homepage.
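As a sketch, the driver replacement can be scripted as follows. The parcel directory and jar file names here are placeholders, not actual values; verify the real file names in your parcel before deleting anything:

```shell
# Sketch only: parcel path and driver jar names are placeholders.
PARCEL="/opt/cloudera/parcels/CDH"            # placeholder: use the new parcel's real directory
NEW_DRIVER="postgresql-9.4.1212.jar"          # placeholder: any driver version 9.4.1209 or higher

for dir in "$PARCEL/lib/oozie/lib" "$PARCEL/lib/oozie/libtools"; do
  rm -f "$dir"/postgresql-*.jar               # remove the old JDBC driver
  cp "$NEW_DRIVER" "$dir/"                    # install the newer driver
done
```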
Affected Versions: CDH 6.0.0 and higher
Fixed Version: CDH 6.2.1 and higher
Cloudera Issue: CDH-75951
GPU and Custom Resource Types Are Not Added to the YARN Client's Configuration File When Enabled
When a GPU or other custom resource type is configured in Cloudera Manager, the appropriate resource (for example, yarn.io/gpu) is not added to the YARN client's configuration file (yarn-site.xml). As a result, jobs that use GPUs or the configured custom resource type fail.
Workaround:
- In Cloudera Manager, select the YARN service and go to Configuration.
- Search for YARN Client Advanced Configuration Snippet (Safety Valve) for yarn-site.xml.
- Add the following snippet:
<property>
  <name>yarn.resource-types</name>
  <value>yarn.io/gpu</value>
</property>
Affected Versions: CDH 6.2.0
Cloudera Issue: OPSAPS-49507
Error when executing Java classes from a CDH cluster running on Ubuntu 18
Executing Java classes through the hadoop launcher script fails with "bad substitution" errors similar to the following:
# hadoop org.apache.hadoop.conf.Configuration
/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.914039/bin/../lib/hadoop/libexec//hadoop-functions.sh: line 2366: HADOOP_ORG.APACHE.HADOOP.CONF.CONFIGURATION_USER: bad substitution
/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.x.p0.914039/bin/../lib/hadoop/libexec//hadoop-functions.sh: line 2331: HADOOP_ORG.APACHE.HADOOP.CONF.CONFIGURATION_USER: bad substitution
/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.x.p0.914039/bin/../lib/hadoop/libexec//hadoop-functions.sh: line 2426: HADOOP_ORG.APACHE.HADOOP.CONF.CONFIGURATION_OPTS: bad substitution
This issue occurs only in CDH 6.2 clusters running on Ubuntu 18 and the error messages can be safely ignored.
Workaround: Run the java command directly, using hadoop classpath to obtain the classpath. For example, instead of hadoop org.apache.hadoop.conf.Configuration, run java -cp `hadoop classpath` org.apache.hadoop.conf.Configuration.
Affected Versions: CDH 6.2.0
Fixed Versions: CDH 6.2.1
Apache Issue: HADOOP-16167
Hadoop LdapGroupsMapping does not support LDAPS for self-signed LDAP server
Hadoop LdapGroupsMapping does not work with LDAP over SSL (LDAPS) if the LDAP server certificate is self-signed. This use case is currently not supported, even if Hadoop User Group Mapping LDAP TLS/SSL Enabled, Hadoop User Group Mapping LDAP TLS/SSL Truststore, and Hadoop User Group Mapping LDAP TLS/SSL Truststore Password are set correctly.
Affected Versions: CDH 5.x and 6.0.x versions
Fixed Versions: CDH 6.1.0
Apache Issue: HADOOP-12862
Cloudera Issue: CDH-37926
The Idempotent and Transactional Capabilities of Kafka are Incompatible with Sentry
The idempotent and transactional capabilities of Kafka are not compatible with Sentry. The issue is due to Sentry being unable to handle authorization policies for Kafka transactions. As a result, users cannot use Kafka transactions in combination with Sentry.
Workaround: Use the Sentry super user in applications where idempotent producing is a requirement or disable Sentry.
Affected Versions: CDK 4.0 and later, CDH 6.0.0, 6.0.1, 6.1.0, 6.1.1, 6.2.0, 6.3.0
Fixed Versions: CDH 6.2.1, 6.3.1
Apache Issue: N/A
Cloudera Issue: CDH-80606
WebHCat service cannot log
WebHCat commands that reference Hive fail with an error such as: main ERROR Cannot access RandomAccessFile java.io.IOException: Could not create directory /opt/cloudera/parcels/CDH-6.2.x-XXX/lib/hive/logs.
Affected Versions: 6.1.0, 6.1.1, 6.2.0
Fixed Versions: 6.2.1, 6.3.0
Apache Issue: N/A
Cloudera Issue: CDH-77160
Attempt to move table between encryption zones corrupts metadata
An attempt to move a table between different encryption zones fails as expected, but the remaining table is unusable and can only be deleted. This fix prevents metadata corruption.
Affected Versions: 5.15.1, 5.16.2, 6.1.1, 6.2.0
Fixed Versions: 5.16.3, 6.2.1, 6.3.0
Apache Issue: N/A
Cloudera Issue: CDH-77745
Upstream Issues Fixed
The following upstream issues are fixed in CDH 6.2.1:
Apache Accumulo
There are no notable fixed issues in this release.
Apache Avro
There are no notable fixed issues in this release.
Apache Crunch
There are no notable fixed issues in this release.
Apache Flume
There are no notable fixed issues in this release.
Apache Hadoop
The following issues are fixed in CDH 6.2.1:
- HADOOP-16011 - OsSecureRandom very slow compared to other SecureRandom implementations
- HADOOP-16018 - DistCp won't reassemble chunks when blocks per chunk > 0.
- HADOOP-16167 - Fixed Hadoop shell script for Ubuntu 18.
- HADOOP-16238 - Add the possbility to set SO_REUSEADDR in IPC Server Listener
HDFS
The following issues are fixed in CDH 6.2.1:
- HDFS-10477 - Stop decommission a rack of DataNodes caused NameNode fail over to standby
- HDFS-12781 - After Datanode down, In Namenode UI Datanode tab is throwing warning message.
- HDFS-13101 - Yet another fsimage corruption related to snapshot
- HDFS-13244 - Add stack, conf, metrics links to utilities dropdown in NN webUI
- HDFS-13677 - Dynamic refresh Disk configuration results in overwriting VolumeMap
- HDFS-14111 - hdfsOpenFile on HDFS causes unnecessary IO from file offset 0
- HDFS-14314 - fullBlockReportLeaseId should be reset after registering to NN
- HDFS-14359 - Inherited ACL permissions masked when parent directory does not exist
- HDFS-14389 - getAclStatus returns incorrect permissions and owner when an iNodeAttributeProvider is configured
- HDFS-14687 - Standby Namenode never come out of safemode when EC files are being written
- HDFS-14746 - Trivial test code update after HDFS-14687
MapReduce 2
The following issue is fixed in CDH 6.2.1:
- MAPREDUCE-7225 - Fix broken current folder expansion during MR job start
YARN
There are no notable fixed issues in this release.
Apache HBase
The following issues are fixed in CDH 6.2.1:
- HBASE-19893 - restore_snapshot is broken in master branch when region splits
- HBASE-21736 - Remove the server from online servers before scheduling SCP for it in hbck
- HBASE-21800 - RegionServer aborted due to NPE from MetaTableMetrics coprocessor
- HBASE-21960 - RESTServletContainer not configured for REST Jetty server
- HBASE-21978 - Should close AsyncRegistry if we fail to get cluster id when creating AsyncConnection
- HBASE-21991 - Fix MetaMetrics issues - [Race condition, Faulty remove logic], few improvements
- HBASE-22128 - Move namespace region then master crashed make deadlock
- HBASE-22144 - Correct MultiRowRangeFilter to work with reverse scans
- HBASE-22169 - Open region failed cause memory leak
- HBASE-22200 - WALSplitter.hasRecoveredEdits should use same FS instance from WAL region dir
- HBASE-22581 - User with "CREATE" permission can grant, but not revoke permissions on created table
- HBASE-22615 - Make TestChoreService more robust to timing
- HBASE-22617 - Recovered WAL directories not getting cleaned up
- HBASE-22690 - Deprecate / Remove OfflineMetaRepair in hbase-2+
- HBASE-22759 - Extended grant and revoke audit events with caller info - ADDENDUM
Apache Hive
The following issues are fixed in CDH 6.2.1:
- HIVE-13278 - Avoid FileNotFoundException when map/reduce.xml is not available
- HIVE-16811 - Estimate statistics in absence of stats
The corresponding Cloudera issue is CDH-80169 (a query fails with IllegalArgumentException: Size requested for unknown type: java.util.Collection). It is a Cloudera-specific fix, a partial backport of HIVE-16811.
Hue
The following issues are fixed in CDH 6.2.1:
- HUE-4327 - [editor] Turn off batch mode for query editors
- HUE-8140 - [editor] Additional improvements to multi statement execution
- HUE-8691 - [useradmin] Fix group sync fail to import member
- HUE-8717 - [oozie] Fix Sqoop1 editor fail to execute
- HUE-8720 - [importer] Fix importer with custom separator
- HUE-8727 - [frontend] Prevent Chrome from autofilling user name in various input elements
- HUE-8734 - [editor] Fix zero width column filter in the results
- HUE-8746 - [pig] Add hcat support in the Pig Editor in Hue
- HUE-8759 - [importer] Fix import to index, importing to hive instead
- HUE-8802 - [assist] Fix js exception on assist index refresh
- HUE-8829 - [core] Fix redirect stops at /hue/accounts/login
- HUE-8860 - [beeswax] Truncate column size to 5000 if too large
- HUE-8878 - [oozie] Fix Hive Document Action variable with prefilled value
- HUE-8879 - [core] Fix ldaptest not allow space in user_filter
- HUE-8880 - [oozie] Fix KeyError when execute coordinator
- HUE-8922 - [frontend] Show dates and times in local format with timezone offset details
- HUE-8933 - [editor] Make sure to clear any previous result when the execute call returns
- HUE-8950 - [core] Fix error of saving copied document
Apache Impala
The following issues are fixed in CDH 6.2.1:
- IMPALA-7800 - Impala now times out new connections after it reaches the maximum number of concurrent client connections, specified by the --fe_service_threads startup flag. The default value is 64, which allows 64 queries to run simultaneously. Previously, connection attempts that could not be serviced hung indefinitely.
- IMPALA-7802 - Idle client connections are now closed to conserve front-end service threads.
- IMPALA-8469 - Fixed the issue where Impala clusters with dedicated coordinators incorrectly rejected queries destined for memory pools with configured limits.
- IMPALA-8549 - Added support for scanning DEFLATE text files.
- IMPALA-8595 - Impala supports TLS v1.2 with the Python version 2.7.9 and older in impala-shell.
- IMPALA-8673 - Added the DEFAULT_HINTS_INSERT_STATEMENT query option for setting the default hints for the INSERT statements when no optimizer hint was specified.
Authenticated user with access to active session or query id can hijack other Impala session or query
If an authenticated Impala user supplies a valid query id to Impala's HS2 and Beeswax interfaces, they can perform operations on other sessions or queries when normally they do not have privileges to do so.
Releases affected:
- CDH 5.16.x and lower
- CDH 6.0.x
- CDH 6.1.x
- CDH 6.2.0
Users affected: All Impala users of affected versions.
Date/time of detection: May 21, 2019
Severity (Low/Medium/High): 7.5 (High) (CVSS 3.0: AV:N/AC:H/PR:L/UI:N/S:U/C:H/I:N/A:N)
Impact: Neither the original issue nor the fix affects the normal use of the system.
CVE: CVE-2019-10084
Immediate action required: There is no workaround; upgrade to a version of CDH containing the fix.
Addressed in release/refresh/patch: CDH 6.2.1 and higher versions
Apache Kafka
The following issue is fixed in CDH 6.2.1:
- KAFKA-7697 - Process DelayedFetch without holding leaderIsrUpdateLock
Apache Kudu
There are no notable fixed issues in this release.
Apache Oozie
The following issues are fixed in CDH 6.2.1:
- OOZIE-3365 - Workflow and coordinator action status remain as RUNNING after rerun.
- OOZIE-3397 - Improve logging in NotificationXCommand.
- OOZIE-3478 - Oozie needs execute permission on the submitting user's home directory.
Apache Parquet
There are no notable fixed issues in this release.
Apache Pig
There are no notable fixed issues in this release.
Cloudera Search
There are no notable fixed issues in this release.
Apache Sentry
The following issues are fixed in CDH 6.2.1:
- SENTRY-2276 - Sentry-Kafka integration does not support Kafka's Alter/DescribeConfigs and IdempotentWrite operations
- SENTRY-2511 - Debug level logging on HMSPaths significantly affects performance
- SENTRY-2528 - Format exception when fetching a full snapshot
Apache Spark
The following issues are fixed in CDH 6.2.1:
- SPARK-25139 - [SPARK-18406][CORE][2.4] Avoid NonFatals to kill the Executor in PythonRunner
- SPARK-25429 - [SQL] Use Set instead of Array to improve lookup performance
- SPARK-26003 - Improve SQLAppStatusListener.aggregateMetrics performance
- SPARK-26089 - [CORE] Handle corruption in large shuffle blocks
- SPARK-26349 - [PYSPARK] Forbid insecure py4j gateways
- SPARK-27094 - [YARN] Work around RackResolver swallowing thread interrupt.
- SPARK-27112 - [CORE] : Create a resource ordering between threads to resolve the deadlocks encountered ...
- SPARK-28150 - [CORE] Log in user before getting delegation tokens.
- SPARK-28335 - [DSTREAMS][TEST] DirectKafkaStreamSuite wait for Kafka async commit
Apache Sqoop
There are no notable fixed issues in this release.
Apache ZooKeeper
The following issues are fixed in CDH 6.2.1:
- ZOOKEEPER-1392 - Request READ or ADMIN permission for getAcl()
- ZOOKEEPER-2141 - ACL cache in DataTree never removes entries