HDP-2.3.0 Release Notes

Behavior Changes

Behavioral changes denote a marked change in behavior from the previously released version to this version of the software. In HDP 2.3.0, behavioral changes affect the following Hadoop components.

Table 1.12. HBase

Hortonworks Bug ID | Apache JIRA | Description

HBase default ports have changed in HDP 2.3.

All ports numbered "61xxx" should be changed to "16xxx".
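As a quick way to audit configuration for the old port scheme, something like the following can flag any remaining "61xxx" values and print the new-format equivalent. This is a sketch: the /etc/hbase/conf path is an assumption and may differ per installation.

```shell
# Print the new-format (16xxx) port for an old-format (61xxx) port.
rewrite_port() {
  old="$1"
  printf '16%s\n' "${old#61}"   # e.g. 61010 -> 16010
}

# Scan HBase config files (assumed location) for old-format ports.
for f in /etc/hbase/conf/*.xml; do
  [ -e "$f" ] || continue
  grep -o '61[0-9]\{3\}' "$f" | sort -u | while read -r old; do
    printf '%s: %s -> %s\n' "$f" "$old" "$(rewrite_port "$old")"
  done
done
```

Any ports the scan reports must be updated in both server and client configuration before the upgrade is validated.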

Table 1.13. Spark


Spark reads data from HDFS/Hive (ORC).

  • Upgrade your HDP cluster first, resubmit Spark jobs, and validate job results.

API changes:

  • SchemaRDD changed to DataFrame

  • SparkSQL implicits package: import sqlContext._ is now import sqlContext.implicits._

  • UDF registration moved to sqlContext.udf
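Taken together, a Spark 1.2-era snippet migrates roughly as follows. This is a sketch against the Spark 1.3 Scala API, not a runnable standalone program (it assumes an existing SparkContext on a cluster, and the table name, schema, and UDF are made up for illustration):

```scala
import org.apache.spark.sql.SQLContext

// Assumes an existing SparkContext named `sc`.
val sqlContext = new SQLContext(sc)

// Old (Spark 1.2): import sqlContext._
// New (Spark 1.3): only the implicits subpackage is imported.
import sqlContext.implicits._

// sql() previously returned a SchemaRDD; it now returns a DataFrame.
// `people` is a hypothetical registered table.
val df = sqlContext.sql("SELECT name, age FROM people")

// UDF registration has moved under sqlContext.udf.
sqlContext.udf.register("strLen", (s: String) => s.length)
```

Existing jobs compiled against the old names must be updated and recompiled before being resubmitted on the upgraded cluster.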

Table 1.14. HDP Search

Hortonworks Bug ID | Description

Solr is now installed via RPM packages rather than tarballs.

Table 1.15. HDFS: High Availability

Hortonworks Bug ID | Problem

HDFS-6376 allows distcp to copy data between HA clusters. Users can use the new configuration property dfs.internal.nameservices to explicitly specify the name services belonging to the local cluster, while continuing to use the configuration property dfs.nameservices to specify all of the name services in the local and remote clusters.


Modify the following in hdfs-site.xml for both clusters A and B:

  1. Add both name services: dfs.nameservices = HAA,HAB

  2. Add property dfs.internal.nameservices

    • In cluster A:

      dfs.internal.nameservices = HAA

    • In cluster B:

      dfs.internal.nameservices = HAB

  3. Add dfs.ha.namenodes.<nameservice> to both clusters

    • In cluster A:

      dfs.ha.namenodes.HAB = nn1,nn2

    • In cluster B:

      dfs.ha.namenodes.HAA = nn1,nn2

  4. Add property dfs.namenode.rpc-address.<cluster>.<nn>

    • In cluster A:

      dfs.namenode.rpc-address.HAB.nn1 = <NN1_fqdn>:8020

      dfs.namenode.rpc-address.HAB.nn2 = <NN2_fqdn>:8020

    • In cluster B:

      dfs.namenode.rpc-address.HAA.nn1 = <NN1_fqdn>:8020

      dfs.namenode.rpc-address.HAA.nn2 = <NN2_fqdn>:8020

  5. Add property dfs.client.failover.proxy.provider.<cluster>

    • In cluster A:

      dfs.client.failover.proxy.provider.HAB = org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider

    • In cluster B:

      dfs.client.failover.proxy.provider.HAA = org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider

  6. Restart the HDFS service.

    Then run the distcp command using the name service. For example, to copy from cluster A to cluster B:

    hadoop distcp hdfs://HAA/tmp/testDistcp hdfs://HAB/tmp/
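The steps above correspond to an hdfs-site.xml fragment like the following. This sketch shows only cluster A's view of the remote cluster (cluster B mirrors it with HAA and HAB swapped); the host names are placeholders, and cluster A's own local HA settings (e.g. dfs.ha.namenodes.HAA) are assumed to be configured already.

```xml
<!-- hdfs-site.xml on cluster A (sketch; host names are placeholders) -->

<!-- Step 1: all name services, local and remote -->
<property>
  <name>dfs.nameservices</name>
  <value>HAA,HAB</value>
</property>

<!-- Step 2: only the name service local to this cluster -->
<property>
  <name>dfs.internal.nameservices</name>
  <value>HAA</value>
</property>

<!-- Step 3: NameNode IDs of the remote name service -->
<property>
  <name>dfs.ha.namenodes.HAB</name>
  <value>nn1,nn2</value>
</property>

<!-- Step 4: RPC addresses of the remote NameNodes -->
<property>
  <name>dfs.namenode.rpc-address.HAB.nn1</name>
  <value>nn1.clusterB.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.HAB.nn2</name>
  <value>nn2.clusterB.example.com:8020</value>
</property>

<!-- Step 5: client failover proxy for the remote name service -->
<property>
  <name>dfs.client.failover.proxy.provider.HAB</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

After editing, restart HDFS on both clusters (step 6) before running distcp against the name service URIs.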

Table 1.16. JDK Support


HDP 2.3 supports JDK 1.7 and 1.8.