Release Notes

Behavioral Changes

Behavioral changes denote a marked change in behavior from the previously released version to this version of software. In HDP 2.5.0, behavioral changes affect the following Hadoop components.

Table 1.15. Behavioral Changes

Hortonworks Bug ID | Apache Component | Apache JIRA | Summary | Details
N/A | Falcon | N/A | Berkeley DB JAR file must be downloaded

Starting with the HDP 2.5.0 release, customers must obtain the Berkeley DB JAR file (available under an open source license from Oracle) as part of a Falcon installation or upgrade. See the Data Movement and Integration guide for more information.
BUG-59164 | Hive | N/A | Data type conversions are different between Hive 1 and Hive 2

Component Affected: Hive

Scenario: In Hive 2, the behavior of table column data type conversion with ALTER TABLE CHANGE COLUMNS and ALTER TABLE REPLACE COLUMNS has changed.

Previous Behavior: Hive 1 was very permissive and allowed changing any primitive data type to any other primitive data type with the ALTER DDL statement.

New Behavior: Hive 2 is more restrictive. By default, it allows only a small set of safe conversions, that is, conversions where the target data type can represent every value of the source type.

For example, changing a table column data type from INT to BIGINT is safe because BIGINT can represent more values than INT.

In Hive 2, when the configuration property hive.metastore.disallow.incompatible.col.type.changes is set to true, ALTER TABLE CHANGE COLUMNS and ALTER TABLE REPLACE COLUMNS are restricted to safe conversions.

The default for hive.metastore.disallow.incompatible.col.type.changes is true. To permit the old, very permissive data type conversion behavior, set this property to false.
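For example, restoring the old permissive behavior is a hive-site.xml change (a sketch; leave the property at its default of true to keep the safe-conversion checks):

```xml
<!-- hive-site.xml: allow incompatible column type changes (pre-HDP-2.5 behavior) -->
<property>
  <name>hive.metastore.disallow.incompatible.col.type.changes</name>
  <value>false</value>
</property>
```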

Here are the safe conversions:


The last line shows the increasing value ranges of the numeric types. For example, INT to FLOAT is a valid progression; FLOAT to INT is not.
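The widening rule mirrors plain Java numeric conversions (int roughly corresponds to Hive INT, long to BIGINT); this is an illustrative sketch, not Hive code:

```java
public class WideningDemo {
    public static void main(String[] args) {
        int i = 2_000_000_000;
        long widened = i;              // safe: every int value fits in a long
        System.out.println(widened);   // 2000000000

        long big = 3_000_000_000L;
        int narrowed = (int) big;      // unsafe: the value overflows and changes
        System.out.println(narrowed);  // -1294967296
    }
}
```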





Ranger API change of behavior for HDP 2.4.0

Component Affected: Ranger admin

Scenario: The search filter was not working as expected.

For example: the expected search result comes after the first record, and the page size in the search request is 1.


A search policy URL like the one above did not return the policy as expected. Filtering happened after retrieving the first n policies, where n is the pageSize, whereas it should fetch all matching results first and apply the pageSize limit afterward.

Previous Behavior: A search for a policy by name was limited to the default page size, so the search returned no records if the policy was farther down the list. If pageSize was large enough to include the policy being searched for, the expected results were returned. Filtering happened after retrieving the first n policies, where n is the pageSize.

New Behavior: The search now covers all policies. The new pagination implementation filters the results first and then returns them according to the requested page size.
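The difference can be sketched with a toy example (hypothetical policy names, not Ranger code): the old ordering pages first and then filters, so a match outside the first page is lost; the new ordering filters first.

```java
import java.util.List;
import java.util.stream.Collectors;

public class PaginationDemo {
    // Old ordering (sketch): take the first pageSize policies, then filter.
    public static List<String> pageThenFilter(List<String> all, String match, int pageSize) {
        return all.stream()
                  .limit(pageSize)
                  .filter(name -> name.contains(match))
                  .collect(Collectors.toList());
    }

    // New ordering (sketch): filter all policies first, then apply pageSize.
    public static List<String> filterThenPage(List<String> all, String match, int pageSize) {
        return all.stream()
                  .filter(name -> name.contains(match))
                  .limit(pageSize)
                  .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> policies = List.of("hdfs-policy", "hive-policy", "kafka-policy");
        // The matching policy is third, so with pageSize = 1 the old ordering misses it.
        System.out.println(pageThenFilter(policies, "kafka", 1));  // []
        System.out.println(filterThenPage(policies, "kafka", 1));  // [kafka-policy]
    }
}
```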

BUG-60495 | Hive | HIVE-14022 | Left semi join should throw SemanticException if WHERE clause contains column name from right table

Scenario: A left semi join provides predictable results but exposes columns from only the left table.

Previous Behavior: You could specify a left semi join while also referencing columns from the right table in the WHERE clause.

New Behavior: Referencing right-table columns in a left semi join now throws a SemanticException. You might experience this as a regression, and affected queries must be rewritten.

BUG-61629 | Zeppelin | N/A | Interpreters are now available in the top right corner, as a dropdown from the user Login button

Component Affected: Zeppelin JDBC Interpreters

Scenario: Configuration of the JDBC (generic), Spark, Livy, and Shell interpreters

Previous Behavior: The interpreter configuration was under the Notebook drop-down; it has now moved to the top right of the page. The MySQL, PostgreSQL, Hive, and Phoenix interpreters had to be configured separately.

New Behavior: These interpreters are now covered by a generic JDBC interpreter configuration, which additionally requires you to provide the driver class (for example, org.apache.hadoop.hive.jdbc.HiveDriver) as part of the interpreter properties.
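A generic JDBC interpreter pointed at Hive might carry properties along these lines (a sketch only: the property names follow the Zeppelin JDBC interpreter's default.* convention, the driver class is the one from the example above, and the URL and user are placeholders for your environment):

```
default.driver    org.apache.hadoop.hive.jdbc.HiveDriver
default.url       jdbc:hive2://your-hiveserver2-host:10000
default.user      hive
```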




Migrate APIs to org.apache.storm, but try to provide backwards compatibility as a bridge

Component Affected: Storm core / trident APIs

Scenario: The package name changed from backtype.storm to org.apache.storm.
Previous Behavior: You created a dependency on storm-core to build topologies and imported the relevant classes in your code. For example:

import backtype.storm.topology.BasicOutputCollector;

New Behavior: With Apache Storm 1.0, all of the core and trident classes moved from backtype.storm to org.apache.storm. You can import the same storm-core and trident API classes by using org.apache.storm instead of backtype.storm:

import org.apache.storm.topology.BasicOutputCollector;

For existing topologies, you can deploy without changing the code by adding the following setting to storm.yaml:

client.jartransformer.class: org.apache.storm.hack.StormShadeTransformer

BUG-63146 | Storm | N/A | Parameter type change in org.apache.storm.spout.Scheme

Component Affected: Storm

Scenario: Any user who implements the Scheme interface from Storm is affected.

Previous Behavior: The Scheme interface took a byte[] parameter.

New Behavior: The Scheme interface now takes a ByteBuffer parameter instead of byte[].

See the following link for a code example:
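A minimal sketch of the migration (the Scheme interface below is a local stand-in, not Storm's: the real org.apache.storm.spout.Scheme also declares getOutputFields() and uses Storm's own types):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class SchemeMigration {
    // Local stand-in for the relevant part of org.apache.storm.spout.Scheme.
    public interface Scheme {
        List<Object> deserialize(ByteBuffer ser); // was: deserialize(byte[] ser)
    }

    public static class StringScheme implements Scheme {
        @Override
        public List<Object> deserialize(ByteBuffer ser) {
            // Copy the remaining bytes out of the buffer before decoding;
            // the buffer may wrap a shared array with a non-zero position.
            byte[] bytes = new byte[ser.remaining()];
            ser.get(bytes);
            return List.of(new String(bytes, StandardCharsets.UTF_8));
        }
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(new StringScheme().deserialize(buf)); // [hello]
    }
}
```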

RMP-4106 | Falcon | FALCON-1107 | Server-side extension infrastructure

Scenario: Mirroring jobs executed from the command line

Previous Behavior: The Falcon recipe tool was the client interface for executing mirroring jobs.

New Behavior: The extension support in the Falcon CLI is used to execute mirroring jobs. See the upgrade documentation.

RMP-4486 | Atlas | N/A | HBase Integration

Component Affected: HBase

Scenario: HBase is now available as the default Atlas storage backend via Ambari.

Previous Behavior: This was previously an undocumented feature.

New Behavior: Atlas can now be configured to use either an HBase instance managed via Ambari or a custom HBase instance.







Ranger: Remove option to store audit in DB

In Ranger audits, Audit to DB is no longer available. Users using Audit to DB must migrate to Solr; see the HDP Security Guide, Migrating Audit Logs from DB to Solr in Ambari Clusters.

Scenario: Ranger Audits users who are currently using Audit to DB must migrate to Audit to Solr.

Previous Behavior: Ranger audit could be configured to use any of the following destinations: DB, Solr, and HDFS.

New Behavior: Ranger audit can no longer be configured to use the DB destination; only the Solr and HDFS destinations are supported.

During an upgrade to HDP 2.5, if you have not enabled Ranger audit to Solr, you must configure audit to Solr post-upgrade; otherwise, you will not see audit activity in the Ranger UI. You can use either an externally managed Solr or an Ambari-managed Solr. For details on configuring these, refer to the Solr audit configuration section in the installation guide.
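After the migration, the Solr audit destination is typically switched on through properties of this shape (a sketch: the names follow Ranger's xasecure.audit.destination convention, and the Solr URL is a placeholder for your own host and collection):

```
xasecure.audit.destination.solr=true
xasecure.audit.destination.solr.urls=http://your-solr-host:8886/solr/ranger_audits
```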



Hive Hook Phase II

Component Affected: Hive

Scenario: Added support for capturing metadata changes to a table, database, or column.

Previous Behavior: Metadata changes were ignored or were known to have issues.

New Behavior: These commands will successfully preserve the noted metadata changes:

Added support for Capturing dataset lineage in the following cases:

Added support for metadata capture when tables and databases are dropped: DROP TABLE, DROP DATABASE.

The hive_partition entity is deprecated, and no lineage is captured for Hive table partitions.

A few data model changes in the Hive metadata deprecate unused Hive types and normalize the data types for consistent metadata capture.

RMP-5498 | Atlas | ATLAS-491 | Business Taxonomy (Catalog)

Component Affected: Atlas

Scenario: Enhanced search and data management

New Behavior:

  • Browse business taxonomy hierarchically through graphical interface

  • Create child taxonomy terms

  • Search by taxonomy term

  • Search by tags

  • Search by combination of keyword, tag, free text in search field

  • Assign tags to assets (Hive, Falcon, HDFS, HBase, Storm, Kafka, Sqoop)

  • Assign terms to assets (Hive, Falcon, HDFS, HBase, Storm, Kafka, Sqoop)

  • Show audit of Atlas activity

  • Show current state of object (deleted or active)