Fixed Issues in CDH 6.3.4

High DDL usage in Hue Impala Editor may issue flood of INVALIDATE Calls

Issuing DDL statements using Hue’s Impala editor or invoking Hue’s “Refresh Cache” function in the left-side metadata browser results in Hue issuing INVALIDATE METADATA calls to the Impala service. This call is expensive and can result in a significant system impact, up to and including full system outage, when repeated in sufficient volume. This has been corrected in HUE-8882.
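
For background on why these calls are expensive, the Impala statements below illustrate the difference in scope. This is general Impala behavior and the table name is a placeholder; the advisory does not specify exactly which form Hue issued:

  -- Unqualified: discards cached metadata for every table in every database,
  -- forcing metadata to be reloaded as tables are next referenced.
  -- This is the expensive case described above.
  INVALIDATE METADATA;

  -- Table-scoped variants are considerably cheaper.
  INVALIDATE METADATA my_db.my_table;
  REFRESH my_db.my_table;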

Components affected:
  • Hue
  • Impala
Products affected:
  • Cloudera Enterprise 5
  • Cloudera Enterprise 6
Releases affected:
  • CDH 5.15.1, 5.15.2
  • CDH 5.16.x
  • CDH 6.1.1
  • CDH 6.2.x
  • CDH 6.3.0, 6.3.1, 6.3.2, 6.3.3

Users affected: End users using the Impala editor in Hue.

Severity: High

Impact: Running DDL statements in the Hue Impala editor or invoking Hue’s Refresh Cache function causes INVALIDATE METADATA commands to be sent to Impala. Metadata invalidation is an expensive operation in Impala and can degrade the performance of subsequent queries, potentially affecting the entire cluster, up to and including a whole-system outage.

Action required:
  • CDH 6.x customers: Upgrade to CDH 6.3.4, which contains the fix.
  • CDH 5.x customers: Contact Cloudera Support for further assistance.

Apache issue: HUE-8882

Knowledge article: For the latest update on this issue see the corresponding Knowledge article: Cloudera Customer Advisory: High DDL usage in Hue Impala Editor may issue flood of INVALIDATE Calls

Default limits for PressureAwareCompactionThroughputController are too low

The affected HDP and CDH releases ship with default compaction throughput limits that are too low, causing store files to accumulate faster than compactions can rewrite them. This was originally identified upstream in HBASE-21000.

Products affected:
  • HDP
  • CDH
Releases affected:
  • HDP 3.0.0 through HDP 3.1.2
  • CDH 6.0.x
  • CDH 6.1.x
  • CDH 6.2.x
  • CDH 6.3.0, 6.3.1, 6.3.2, 6.3.3

Users affected: Users of the above-mentioned HDP and CDH versions.

Severity: Medium

Impact: For non-read-only workloads, this will eventually cause back-pressure on new writes when the blocking store file limit is reached.

Action required:
  • Upgrade: Upgrade to a release that contains the fix: CDP 7.1.4, HDP 3.1.5, or CDH 6.3.4.
  • Workaround:
    • Set the hbase.hstore.compaction.throughput.higher.bound property to 104857600 (100 MB/sec) and the hbase.hstore.compaction.throughput.lower.bound property to 52428800 (50 MB/sec) in hbase-site.xml, as shown in the sketch after this list.
    • Alternatively, set the hbase.regionserver.throughput.controller property to org.apache.hadoop.hbase.regionserver.throttle.NoLimitThroughputController, which removes all compaction throughput limits. Note that removing the limits entirely has been observed to cause other pressure on the system.
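
A minimal hbase-site.xml sketch of the first workaround, using the property names and values from the advisory (both limits are in bytes per second). In Cloudera Manager deployments the same properties are typically set through the HBase service safety valve for hbase-site.xml, with a RegionServer restart afterwards:

  <!-- Workaround sketch: raise compaction throughput limits (values in bytes/sec). -->
  <property>
    <name>hbase.hstore.compaction.throughput.higher.bound</name>
    <value>104857600</value>
  </property>
  <property>
    <name>hbase.hstore.compaction.throughput.lower.bound</name>
    <value>52428800</value>
  </property>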

Apache issue: HBASE-21000

Knowledge article: For the latest update on this issue see the corresponding Knowledge article: Cloudera Customer Advisory: Default limits for PressureAwareCompactionThroughputController are too low

Kudu tablet server might crash in certain workflows where a tablet is dropped right after ALTER TABLE statement

DDL and DML operations can accumulate in the Kudu tablet replica's write ahead log (WAL) during normal operation. Upon the shutdown of a tablet replica (for example, right before removing the replica), information on the accumulated operations (the first 50) is printed into the tablet server's INFO log file.

A bug was introduced with the fix for KUDU-2690. The code contains a flipped if-condition that results in dereferencing an invalid pointer while reporting on a pending ALTER TABLE operation in the tablet replica's WAL. The issue manifests itself in kudu-tserver processes crashing with SIGSEGV (segmentation fault).

The occurrence of the issue is limited to scenarios that result in at least one pending ALTER TABLE operation accumulating in the tablet replica's WAL at the time the tablet replica is shut down. An example scenario is an ALTER TABLE request (for example, adding a column) immediately followed by a request to drop a tablet (for example, dropping a range partition); this sequence is sketched below. Another example scenario is shutting down a tablet server while it is still processing an ALTER TABLE request for one of its tablet replicas. Slow file system operations increase the chances of the issue manifesting.
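
As an illustration only, the following Impala SQL against a hypothetical Kudu-backed table shows the kind of statement sequence described in the first example scenario; the table, column, and partition bounds are invented for this sketch:

  -- Add a column, then immediately drop a range partition (which drops the tablet).
  ALTER TABLE metrics ADD COLUMNS (note STRING);
  -- If the tablet replica is shut down while the ALTER TABLE above is still
  -- pending in its WAL, the kudu-tserver process can crash on affected releases.
  ALTER TABLE metrics DROP RANGE PARTITION 100 <= VALUES < 200;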

Apache issue: KUDU-2690

Component affected: Kudu

Products affected: CDH

Releases affected:
  • CDH 6.2.0, 6.2.1
  • CDH 6.3.0, 6.3.1, 6.3.2, 6.3.3

Users affected: Kudu clusters with the impacted releases.

Impact: In the worst case, multiple kudu-tserver processes can crash in a Kudu cluster, making data unavailable until the affected tablet servers are started again.

Severity: High

Action required:
  • Workaround: Avoid dropping range partitions and tablets right after issuing an ALTER TABLE request. Wait for pending ALTER TABLE requests to complete before dropping tablets or shutting down tablet servers.
  • Solution: Upgrade to CDH 6.3.4 or CDP.

Knowledge article: For the latest update on this issue see the corresponding Knowledge article: TSB 2020-449: Kudu tablet server might crash in certain workflows where a tablet is dropped right after ALTER TABLE statement

YARN Resource Managers will stay in standby state after failover or startup

On startup or failover, the YARN Resource Manager stays in the standby state due to a failure to load the recovery data. The failure is logged as a NullPointerException in the YARN Resource Manager log:
ERROR org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Failed to load/recover state
java.lang.NullPointerException at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.addApplicationAttempt

This issue is fixed by YARN-7913.

Products affected: CDH with Fair Scheduler

Releases affected:
  • CDH 6.0.x
  • CDH 6.1.x
  • CDH 6.2.0, CDH 6.2.1
  • CDH 6.3.0, CDH 6.3.1, CDH 6.3.2, CDH 6.3.3

Users affected: Any cluster running the Hadoop YARN service with the following configuration:
  • Scheduler set to Fair Scheduler
  • The YARN Resource Manager Work Preserving Recovery feature is enabled; this includes High Availability setups. (A yarn-site.xml sketch of this configuration follows this list.)
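
A minimal yarn-site.xml sketch of a configuration matching the description above, assuming the standard Hadoop property names; in Cloudera Manager deployments these settings are normally managed through the YARN service configuration rather than edited directly:

  <!-- Illustrative only: Fair Scheduler combined with work-preserving RM recovery. -->
  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.work-preserving-recovery.enabled</name>
    <value>true</value>
  </property>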

Impact:

On startup or failover, the YARN Resource Manager processes the state store to recover the workload currently running in the cluster. The recovery fails, and a NullPointerException is logged.

Due to the recovery failure, the YARN Resource Manager does not become active. In a cluster with High Availability configured, the standby YARN Resource Manager fails with the same exception, leaving both YARN Resource Managers in the standby state. Even if the YARN Resource Managers are restarted, they stay in the standby state.

Immediate action required:
  • Customers requiring an urgent fix who are using CDH 6.2.x or earlier: Raise a support case to request a new patch.
  • Customers on CDH 6.3.x: Upgrade to the latest maintenance release.
Addressed in release/refresh/patch:
  • CDH 6.3.4

Knowledge article: For the latest update on this issue see the corresponding Knowledge article: TSB 2020-408: YARN Resource Managers will stay in standby state after failover or startup

Upstream Issues Fixed

The following upstream issues are fixed in CDH 6.3.4:

Apache Accumulo

There are no notable fixed issues in this release.

Apache Avro

The following issues are fixed in CDH 6.3.4:
  • Dependency upgrade: org.codehaus.plexus:plexus-utils:1.5.6 to org.codehaus.plexus:plexus-utils:3.3.0 due to CVE-2017-1000487 (fixed in AVRO-2710 and AVRO-2865).
  • Dependency upgrade: the Tukaani XZ library upgraded to version 1.8 due to a CVE.

Apache Crunch

There are no notable fixed issues in this release.

Apache Flume

There are no notable fixed issues in this release.

Apache Hadoop

The following issues are fixed in CDH 6.3.4:

  • HADOOP-14154 - Persist isAuthoritative bit in Dynamo DB MetaStore.
  • HADOOP-14734 - Add an option to tag the created Dynamo DB tables.
  • HADOOP-14833 - Remove s3a user:secret authentication.
  • HADOOP-15168 - Add kdiag tool to the hadoop command.
  • HADOOP-15281 - Distcp to add no-rename copy option.
  • HADOOP-15370 - S3A log message on rm s3a://bucket/ not intuitive
  • HADOOP-15426 - Make s3guard client resilient to Dynamo DB throttle events and network failures.
  • HADOOP-15428 - The s3guard bucket-info command creates the s3guard table if FS is set to do this automatically.
  • HADOOP-15495 - Upgrade commons-lang version to 3.7 in hadoop-common-project and hadoop-tools.
  • HADOOP-15552 - Move logging APIs over to slf4j in hadoop-tools.
  • HADOOP-15583 - Stabilize S3A Assumed Role support.
  • HADOOP-15621 - S3Guard: Implement time-based (TTL) expiry for Authoritative Directory Listing.
  • HADOOP-15635 - The s3guard set-capacity command to fail if the bucket is unguarded.
  • HADOOP-15642 - Update aws-sdk version to 1.11.375.
  • HADOOP-15709 - Move the s3Guard LocalMetadataStore constants to org.apache.hadoop.fs.s3a.Constants.
  • HADOOP-15729 - [s3a] Allow core threads to time out.
  • HADOOP-15837 - DynamoDB table Update can fail s3a filesystem initialization.
  • HADOOP-15843 - The s3guard bucket-info command to not print a stack trace on bucket-not-found.
  • HADOOP-15845 - Require explicit URI on CLI for the s3guard init and destroy commands.
  • HADOOP-15882 - Upgrade maven-shade-plugin from 2.4.3 to 3.2.0.
  • HADOOP-15926 - Document upgrading the section in NOTICE.txt when upgrading the version of AWS SDK.
  • HADOOP-15932 - Oozie unable to create sharelib in the s3a filesystem.
  • HADOOP-15970 - Upgrade plexus-utils from 2.0.5 to 3.1.0.
  • HADOOP-15988 - DynamoDBMetadataStore#innerGet should support empty directory flag when using authoritative listings.
  • HADOOP-15999 - S3Guard: Better support for out-of-band operations.
  • HADOOP-16093 - Move DurationInfo from hadoop-aws to hadoop-common org.apache.hadoop.util.
  • HADOOP-16117 - Update AWS SDK to 1.11.563.
  • HADOOP-16124 - Extend documentation in testing.md about S3 endpoint constants.
  • HADOOP-16201 - S3AFileSystem#innerMkdirs builds needless lists
  • HADOOP-16278 - With the s3a filesystem, long running services perform a lot of garbage collection and eventually crash.
  • HADOOP-16385 - Namenode crashes with 'RedundancyMonitor thread received Runtime exception'.
  • HADOOP-16393 - The s3guard init command uses global settings and not those of the target bucket.
  • HADOOP-16580 - Disable retry of FailoverOnNetworkExceptionRetry in case of AccessControlException
  • HADOOP-16683 - Disable retry of FailoverOnNetworkExceptionRetry in case of wrapped AccessControlException
  • HADOOP-17068 - The client fails when the NameNode address is changed.
  • HADOOP-17209 - Fix to the erasure coding native library memory leak.

HDFS

The following issues are fixed in CDH 6.3.4:

  • HDFS-10659 - NameNode crashes after JournalNode re-installation in an HA cluster due to missing paxos directory.
  • HDFS-12339 - NFS Gateway on shutdown gives unregistration failure.
  • HDFS-12748 - NameNode memory leak when accessing the webhdfs GETHOMEDIRECTORY.
  • HDFS-12914 - Block report leases cause missing blocks until next report.
  • HDFS-13101 - An fsimage corruption issue related to snapshots.
  • HDFS-14218 - The hdfs dfs -ls -e command fails with an exception when the directory erasure coding policy is disabled
  • HDFS-14274 - Exception when listing for a directory that its EC policy set as replicate.
  • HDFS-14535 - The default 8KB buffer in requestFileDescriptors#BufferedOutputStream is causing lots of heap allocation in HBase when using short-circuit read
  • HDFS-14668 - Support Fuse with users from multiple security realms.
  • HDFS-14699 - Erasure Coding: Storage not considered in live replica when the replication streams hard limit is reached.
  • HDFS-14754 - Erasure Coding: The number of under replicated blocks does not reduce.
  • HDFS-14847 - Erasure Coding: Blocks are over-replicated when EC is decommissioning.
  • HDFS-14849 - Erasure Coding: The internal block is replicated many times when the DataNode is decommissioning
  • HDFS-14920 - Erasure Coding: Decommission might get stuck if one or more DataNodes are out of service.
  • HDFS-14946 - Erasure Coding: Block recovery fails during decommissioning.
  • HDFS-15012 - NameNode fails to parse edit logs after applying HDFS-13101.
  • HDFS-15186 - Erasure Coding: Decommission might generate the parity block's content with all 0s in some cases.
  • HDFS-15313 - Ensure that inodes in the active filesystem are not deleted during a snapshot delete operation.
  • HDFS-15372 - Files in snapshots no longer see attribute provider permissions.
  • HDFS-15386 - The ReplicaNotFoundException is observed after removing the data directories of multiple DataNodes.
  • HDFS-15446 - Snapshot creation fails during edit log loading for /.reserved/raw/path with java.io.FileNotFoundException: Directory does not exist: /.reserved/raw/path.

MapReduce 2

The following issues are fixed in CDH 6.3.4:

  • MAPREDUCE-7240 - Fix Invalid event: TA_TOO_MANY_FETCH_FAILURE at SUCCESS_FINISHING_CONTAINER.
  • MAPREDUCE-7249 - Fix Invalid event TA_TOO_MANY_FETCH_FAILURE at SUCCESS_CONTAINER_CLEANUP causes job failure.
  • MAPREDUCE-7273 - Fixed Job History server token renewal.
  • MAPREDUCE-7278 - Speculative execution behavior is observed even when mapreduce.map.speculative and mapreduce.reduce.speculative are false

YARN

The following issues are fixed in CDH 6.3.4:

  • YARN-5714 - ContainerExecutor does not order environment map
  • YARN-7818 - Remove privileged operation warnings during container launch for the ContainerRuntimes
  • YARN-7913 - Improve error handling when application recovery fails with exception
  • YARN-7962 - Race Condition When Stopping DelegationTokenRenewer causes RM crash during failover.
  • YARN-8242 - YARN NM: OOM error while reading back the state store on recovery
  • YARN-8373 - RM Received RMFatalEvent of type CRITICAL_THREAD_CRASH
  • YARN-8751 - Reduce conditions that mark node manager as unhealthy.
  • YARN-9639 - DecommissioningNodesWatcher cause memory leak
  • YARN-9984 - FSPreemptionThread can cause NullPointerException while app is unregistered with containers running on a node
  • YARN-10107 - Fix GpuResourcePlugin#getNMResourceInfo to honor Auto Discovery Enabled
  • YARN-10286 - PendingContainers bugs in the scheduler outputs

Apache HBase

The following issues are fixed in CDH 6.3.4:

  • HBASE-7191 - HBCK - Add offline create/fix hbase.version and hbase.id
  • HBASE-22403 - Balance in RSGroup should consider throttling and a failure affects the whole
  • HBASE-22527 - [hbck2] Add a master web ui to show the problematic regions
  • HBASE-22709 - Add a chore thread in master to do hbck checking
  • HBASE-22737 - Add a new admin method and shell cmd to trigger the hbck chore to run
  • HBASE-22741 - Show catalogjanitor consistency complaints in new 'HBCK Report' page
  • HBASE-22771 - [HBCK2] fixMeta method and server-side support
  • HBASE-22777 - Add a multi-region merge
  • HBASE-22796 - [HBCK2] Add fix of overlaps to fixMeta hbck Service
  • HBASE-22803 - Modify config value range to enable turning off of the hbck chore
  • HBASE-22807 - HBCK Report showed wrong orphans regions on FileSystem
  • HBASE-22808 - HBCK Report showed the offline regions which belong to disabled table
  • HBASE-22824 - Show filesystem path for the orphans regions on filesystem
  • HBASE-22827 - Expose multi-region merge in shell and Admin API
  • HBASE-22859 - [HBCK2] Fix the orphan regions on filesystem
  • HBASE-22970 - split parents show as overlaps in the HBCK Report
  • HBASE-23014 - Should not show split parent regions in hbck report UI
  • HBASE-23044 - CatalogJanitor#cleanMergeQualifier may clean wrong parent regions
  • HBASE-23153 - PrimaryRegionCountSkewCostFunction SLB function should implement CostFunction#isNeeded
  • HBASE-23175 - Yarn unable to acquire delegation token for HBase Spark jobs
  • HBASE-23192 - CatalogJanitor consistencyCheck does not log problematic row on exception
  • HBASE-23247 - [hbck2] Schedule SCPs for 'Unknown Servers'
  • HBASE-24139 - Balancer should avoid leaving idle region servers
  • HBASE-24273 - HBCK's "Orphan Regions on FileSystem" reports regions with referenced HFiles (#1613)
  • HBASE-24794 - hbase.rowlock.wait.duration should not be less than or equal to 0

Apache Hive

The following issues are fixed in CDH 6.3.4:

  • HIVE-15211 - Provide support for complex expressions in ON clauses for INNER joins
  • HIVE-15251 - Provide support for complex expressions in ON clauses for OUTER joins
  • HIVE-15369 - Extend column pruner to account for residual filter expression in Join operator
  • HIVE-15370 - Include Join residual filter expressions in user level EXPLAIN
  • HIVE-15388 - HiveParser spends lots of time in parsing queries with lots of "("
  • HIVE-15578 - Simplify IdentifiersParser
  • HIVE-16683 - ORC WriterVersion gets ArrayIndexOutOfBoundsException on newer ORC files
  • HIVE-16907 - "INSERT INTO" overwrite old data when destination table encapsulated by backquote
  • HIVE-18390 - IndexOutOfBoundsException when query a partitioned view in ColumnPruner
  • HIVE-18624 - Parsing time is extremely high (~10 min) for queries with complex select expressions
  • HIVE-19631 - Reduce epic locking in AbstractService
  • HIVE-19799 - Remove jasper dependency
  • HIVE-20051 - Skip authorization for temp tables
  • HIVE-20621 - GetOperationStatus called in resultset.next causing incremental slowness
  • HIVE-21377 - Using Oracle as HMS DB with DirectSQL
  • HIVE-22416 - MR-related operation logs missing when parallel execution is enabled
  • HIVE-22513 - Constant propagation of casted column in filter ops can cause incorrect results
  • HIVE-22713 - Constant propagation shouldn't be done for Join-Fil(*)-RS structure
  • HIVE-22741 - Speed up ObjectStore method getTableMeta
  • HIVE-22772 - Log opType and session level information for each operation
  • HIVE-22889 - Trim trailing and leading quotes for HCatCli query processing
  • HIVE-22931 - HoS dynamic partitioning fails with blobstore optimizations off
  • HIVE-23306 and HIVE-22901 - Backport: RESET command does not work if there is a config set by System.getProperty
  • HIVE-23868 - Backport: Windowing function spec: support 0 preceding/following

Hue

The following issues are fixed in CDH 6.3.4:

  • HUE-7474 - [impala] Log query plan only in debug mode
  • HUE-8882 - [editor] Replace invalidate on DDL with clearCache
  • HUE-8882 - [impala] Fix invalidate delta when hive is missing
  • HUE-8882 - [impala] Fix get_hive_metastore_interpreters filtering
  • HUE-8882 - [tb] Improve invalidate logic when refreshing missing tables in the table browser
  • HUE-8980 - [jb] Fix coordinator cannot sync with saved documents
  • HUE-9070 - [editor] Integrate primary keys info in the interface
  • HUE-9070 - [editor] API for retrieving Table Primary Keys
  • HUE-9080 - [editor] PK icons are now missing in Kudu tables
  • HUE-9080 - [impala] Workaround missing PK information in table description
  • HUE-9180 - [useradmin] Convert LDAP names to unicode to reduce length
  • HUE-9212 - [core] Fix missing login-modal causes auto logout failed
  • HUE-9250 - [useradmin] Prevent login failed due to user.last_login is None type
  • HUE-9273 - [notebook] Encoding Error when use non-ascii characters in sql-editor-variables

Apache Impala

The following issues are fixed in CDH 6.3.4:

  • IMPALA-4551 - Limit the size of SQL statements
  • IMPALA-6159 - DataStreamSender should transparently handle some connection reset by peer
  • IMPALA-6503 - Support reading complex types from ORC
  • IMPALA-6772 - Enable test_scanners_fuzz for ORC
  • IMPALA-6772 - Bump ORC version to 1.6.2-p6
  • IMPALA-7604 - part 1: tests for agg cardinality
  • IMPALA-7604 - part 2: fixes for AggregationNode cardinality
  • IMPALA-7802 - Implement support for closing idle sessions
  • IMPALA-7957 - Fix slot equivalences may be enforced multiple times
  • IMPALA-8184 - Add timestamp validation to ORC scanner
  • IMPALA-8254 - Fix error when running compute stats with compression_codec set
  • IMPALA-8557 - Add '.txt' to text files, remove '.' at end of filenames
  • IMPALA-8595 - THRIFT-3505 breaks IMPALA-5775
  • IMPALA-8612 - NPE when DropTableOrViewStmt analysis leaves serverName_ NULL
  • IMPALA-8634 - Catalog client should retry RPCs
  • IMPALA-8673 - Add query option to force plan hints for insert queries
  • IMPALA-8718 - project out collection slots in analytic's sort tuple
  • IMPALA-8748 - Must pass hostname to RpcMgr::GetProxy()
  • IMPALA-8790 - IllegalStateException: Illegal reference to non-materialized slot
  • IMPALA-8797 - Support database and table blacklist
  • IMPALA-8851 - Drop table if exists throws authorization exception when table does not exist
  • IMPALA-8890 - Advance read page in UnpinStream
  • IMPALA-8912 - Avoid sampling hbase table twice for HBaseScanNode
  • IMPALA-8913 - Add query option to disable hbase row estimation
  • IMPALA-8923 - remove synchronized in HBaseTable.getEstimatedRowStats
  • IMPALA-8969 - Grouping aggregator can cause segmentation fault when doing multiple aggregations
  • IMPALA-9002 - Add flag to only check SELECT privilege in GET_TABLES
  • IMPALA-9116 - KUDU-2989. Work around SASL bug when FQDN is >=64 characters
  • IMPALA-9136 - Table.getUniqueName() reimplemented not to use table lock
  • IMPALA-9162 - Incorrect redundant predicate applied to outer join
  • IMPALA-9231 - Use simplified privilege checks for show databases
  • IMPALA-9249 - Fix ORC scanner crash when root type is not struct
  • IMPALA-9272 - Fix PlannerTest.testHdfs depending on year(now())
  • IMPALA-9277 - Catch exception thrown from orc::ColumnSelector::updateSelectedByTypeId
  • IMPALA-9324 - Correctly handle ORC UNION type in scanner
  • IMPALA-9549 - Handle catalogd startup delays when using local catalog
  • IMPALA-9707 - fix Parquet stat filtering when min/max values are cast to NULL
  • IMPALA-9809 - Multi-aggregation query on particular dataset crashes impalad
  • IMPALA-10005 - Fix Snappy decompression for non-block filesystems
  • IMPALA-10103 - upgrade jquery to 3.5.1

Apache Kafka

The following issues are fixed in CDH 6.3.4:

  • KAFKA-9254 - Overridden topic configs are reset after dynamic default change
  • KAFKA-9839 - IllegalStateException on metadata update when broker learns about its new epoch after the controller

Kite SDK

There are no notable fixed issues in this release.

Apache Kudu

The following issues are fixed in CDH 6.3.4:

  • KUDU-2635 - ignore failures to delete orphaned blocks
  • KUDU-2727 - [consensus] lock-free CheckLeadershipAndBindTerm()
  • KUDU-2836 - Release memory to OS periodically
  • KUDU-2929 - don't do nothing when under memory pressure
  • KUDU-2947 - [consensus] fix voting in case of slow WAL
  • KUDU-2977 - Sharding block map to speed up tserver startup
  • KUDU-2987 - Intra location rebalance crashes in special case.
  • KUDU-2992 - Avoid sending duplicated requests in catalog_manager
  • KUDU-3002 - prioritize WAL unanchoring when under memory pressure
  • KUDU-3001 - Multi-thread to load containers in a data directory
  • KUDU-3023 - [tablet] validate RPC vs transaction size limit
  • KUDU-3035 - [java] Pass last propagated timestamp in Batch
  • KUDU-3036 - [master] reject DDLs which would lead to DoS
  • KUDU-3099 - Remove System.exit() calls from KuduBackup/KuduRestore
  • KUDU-3106 - [security] update on getEndpointChannelBindings()

Apache Oozie

The following issues are fixed in CDH 6.3.4:

  • OOZIE-1624 - Exclusion pattern for sharelib JARs
  • OOZIE-3544 - Upgrade commons-beanutils to 1.9.4
  • OOZIE-3549 - Add back support for truststore passwords
  • OOZIE-3561 - Forkjoin validation is slow when there are many actions in chain
  • OOZIE-3578 - MapReduce counters cannot be used over 120
  • OOZIE-3592 - Do not print misleading SecurityException for successful jobs
  • OOZIE-3584 - Fork-join action issue when action param cannot be resolved
  • Removed one of the two conflicting logging libraries from one part of Oozie, removing a blocker for Apache Spark customers who use Spark through Oozie.
  • CWE-693 - Protection Mechanism Failure

Apache Parquet

There are no notable fixed issues in this release.

Apache Phoenix

There are no notable fixed issues in this release.

Apache Pig

The following issue is fixed in CDH 6.3.4:

  • PIG-5395 - Pig build is failing due to maven repo access point change

Apache Solr/Cloudera Search

The following issues are fixed in CDH 6.3.4:

  • SOLR-6117 - Unify ReplicationHandler error handling
  • SOLR-11676 - Fix a SolrJ test to not expect replicationFactor that is not being set anymore
  • SOLR-11676 - Keep nrtReplicas and replicationFactor in sync while creating a collection and modifying a collection
  • SOLR-11807 - Simplify testing of createNodeSet with restoring collection and fixing the test failure
  • SOLR-11807 - Restoring collection now treats maxShardsPerNode=-1 as unlimited
  • SOLR-11807 - Test code didn't take into account changing maxShardsPerNode for one code path
  • SOLR-12489 - User-specified replicationFactor and maxShardsPerNode are used when specified during a restore operation.
  • SOLR-12489 - Fix test failures
  • SOLR-12489 - remove unused imports
  • SOLR-12617 - Remove Commons BeanUtils as a dependency
  • SOLR-13779 - Use the safe fork of simple-xml for clustering contrib

Apache Sentry

The following issue is fixed in CDH 6.3.4:

  • SENTRY-2557 - Queries run too slowly when there is a huge number of roles and permissions granted to them.

Apache Spark

The following issues are fixed in CDH 6.3.4:

  • SPARK-25903 - [CORE] TimerTask should be synchronized on ContextBarrierState
  • SPARK-26989 - [CORE][TEST][2.4] DAGSchedulerSuite: ensure listeners are fully processed before checking recorded values
  • SPARK-27494 - [SS] Null values don't work in Kafka source v2
  • SPARK-28005 - [YARN] Remove unnecessary log from SparkRackResolver
  • SPARK-30238 - [SQL] hive partition pruning can only support string and integral types
  • SPARK-31559 - [YARN] Re-obtain tokens at the startup of AM for yarn cluster mode if principal and keytab are available
  • SPARK-32003 - [CORE][2.4] When external shuffle service is used, unregister outputs for executor on fetch failure after executor is lost

Apache Sqoop

There are no notable fixed issues in this release.

Apache ZooKeeper

There are no notable fixed issues in this release.