Release Notes

Behavioral Changes

Behavioral changes denote a marked change in behavior from the previously released version to this version of software. In HDP 2.6.3, behavioral changes affect the following Hadoop components.

Table 1.3. Behavioral Changes

Hortonworks Bug ID | Apache Component | Apache JIRA | Summary | Details
BUG-66121 | Hive | HIVE-14251 | Union All of different types resolves to incorrect data

Scenario: UNION result handling

Previous Behavior: Queries that used the UNION operator could produce an invalid output type. For example, the column type is ambiguous in the following query:

select cast(1 as int) union select cast(1 as string)

Selecting an inappropriate type could cause the value to be converted to NULL.

New Behavior: The types are checked prior to execution, and ambiguous cases are rejected:

FAILED: SemanticException Schema of both sides of union should match: Column _c0 is of type int on first table and type string on second table. Cannot tell the position of null AST. (state=42000,code=40000)

The query should be clarified with explicit casts.
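For instance, the earlier query can be rewritten with matching explicit casts (a sketch; whether int or string is the intended type depends on the query):

```sql
-- Cast both sides to the same type so the UNION schema is unambiguous
SELECT CAST(1 AS string)
UNION ALL
SELECT CAST(1 AS string);
```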

BUG-80021 | Oozie | N/A | Modify references to yarn-client mode for Oozie Spark action

Summary: Yarn-client mode for the Spark action is not supported.

Component Affected: Spark action in Oozie

Scenario: The yarn-client mode of the Spark action is no longer supported as of HDP 2.6.0.

New Behavior: If you use yarn-client mode in Oozie or Falcon workflows, you must change the workflow to use yarn-cluster mode instead. Workflow Manager automatically converts imported workflows to yarn-cluster mode.
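As a sketch, the change in an Oozie Spark action definition amounts to replacing the value of the master element (element name per the Oozie Spark action schema; the surrounding workflow XML is unchanged):

```xml
<!-- Before (no longer supported as of HDP 2.6.0): -->
<master>yarn-client</master>

<!-- After: -->
<master>yarn-cluster</master>
```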

BUG-85566 | Ranger | RANGER-1727 | Ranger allows user to change an external user's password with 'null' old password

Summary: Ranger allows a user to change an external user's password

Scenario: Password changes for external users should be performed at the external source (LDAP, AD, and so on). The ability to change such a password through the Ranger API, which does not change the password in the source system, is not useful.

Previous Behavior: An API call to change an external user's password was allowed, although it did not change the password in the external source.

New Behavior: An external user's password can no longer be changed through the API.

BUG-86663 | Atlas | ATLAS-2017 | Import API: update to make the new parameter to be optional

Components Affected: Atlas, REST API end point api/admin/import

Scenario: Users of the Atlas REST API for importing data must update their requests for changes to the Content-Type header

Previous Behavior: The import REST API required the Content-Type application/octet-stream

New Behavior: The REST API now requires the Content-Type multipart/form-data. In addition, the request parameter is now optional.

For example:

curl -g -X POST -u adminuser:password -H "Content-Type: multipart/form-data" -H "Cache-Control: no-cache" -F request=@importOptions.json -F data=@fileToBeImported.zip "http://localhost:21000/api/atlas/admin/import"

BUG-87531 | HDFS | HDFS-10220 | A large number of expired leases can make namenode unresponsive and cause failover

Component Affected: NameNode

Scenario: A large number of expired leases can make the NameNode unresponsive and cause failover.

Previous Behavior: The NameNode tried to recover all expired leases in a loop

New Behavior: While releasing a large number of leases, the NameNode now times out after the interval configured by dfs.namenode.max-lock-hold-to-release-lease-ms, to avoid holding the lock for long periods.
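The property named above is set in hdfs-site.xml; a minimal sketch (the value shown is illustrative, not a recommended setting):

```xml
<!-- hdfs-site.xml: maximum time the NameNode holds the lock
     while releasing expired leases, in milliseconds -->
<property>
  <name>dfs.namenode.max-lock-hold-to-release-lease-ms</name>
  <value>25</value> <!-- illustrative value -->
</property>
```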

BUG-88870 | HDFS | HDFS-10326 | Disable setting tcp socket send/receive buffers for write pipelines

Component Affected: DataNode and the DFS client

Previous Behavior: HDFS would explicitly set hardcoded values for TCP socket buffer sizes.

New Behavior: The TCP socket buffer sizes are no longer hardcoded by default. Instead, the OS now automatically tunes the buffer size.
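The hdfs-site.xml properties below are, to the best of our knowledge, the ones associated with this change (treat the names as an assumption for your HDFS version); a value of 0 leaves sizing to the OS:

```xml
<!-- hdfs-site.xml: 0 means let the OS auto-tune the TCP buffer size -->
<property>
  <name>dfs.client.socket.send.buffer.size</name>
  <value>0</value>
</property>
<property>
  <name>dfs.datanode.transfer.socket.send.buffer.size</name>
  <value>0</value>
</property>
```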

BUG-91290 | Hive, Ranger | N/A | Additional ranger hive policies required for INSERT OVERWRITE

Scenario: Additional Ranger Hive policies are required for INSERT OVERWRITE.

Previous Behavior: Hive INSERT OVERWRITE queries succeed as usual.

New Behavior: Hive INSERT OVERWRITE queries fail unexpectedly after upgrading to HDP-2.6.x with the error:

Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user jdoe does not have WRITE privilege on /tmp/*(state=42000,code=40000)

As of HDP-2.6.0, Hive INSERT OVERWRITE queries require a Ranger URI policy to allow write operations, even if the user has write privilege granted through HDFS policy.

Workaround/Expected Customer Action:

  1. Create a new policy under the Hive repository.

  2. In the dropdown where you see Database, select URI.

  3. Update the path (for example: /tmp/*).

  4. Add the users and groups, and save.

  5. Retry the insert query.

RMP-9153 | Zeppelin | ZEPPELIN-1515 | Support Zeppelin HDFS storage

Previous Behavior: In releases of Zeppelin earlier than HDP-2.6.3, notebooks and configuration files were stored on the local disk of the Zeppelin server.

New Behavior: With HDP-2.6.3+, the default storage is now in HDFS.

Workaround/Expected Customer Action: When upgrading to HDP-2.6.3+ from versions earlier than HDP-2.6.3, perform the steps described in Enabling HDFS Storage for Zeppelin Notebooks and Configuration in HDP-2.6.3+.