HDP 3.1.5 Release Notes

Behavioral Changes

Behavioral changes denote a marked change in behavior from the previously released version to this version of HDP.

Table 1. Behavioral Changes
Summary: Hive core capabilities
Details: Support for a single catalog for Hive and Spark using the Hive Metastore (HMS) translation layer.
Summary: Accessing HDP repositories

Scenario:

Starting with HDP 3.1.5, access to HDP repositories requires authentication. To access the binaries, you must first have the required authentication credentials (username and password).

Previous behavior:

Previously, HDP repositories were located on AWS S3. As of HDP 3.1.5/Ambari 2.7.5, repositories have been moved to https://archive.cloudera.com.

New behavior:

Authentication credentials for new customers and partners are provided in an email sent from Cloudera to registered support contacts. Existing users can file a non-technical case within the support portal (https://my.cloudera.com) to obtain credentials.

Expected customer action:

When you obtain your authentication credentials, use them to form the URL for accessing the HDP repository in the HDP archive.
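As a minimal sketch, the credentials can be embedded in the repository base URL. The username, password, and repository path below are illustrative placeholders; use the credentials Cloudera provides and consult the HDP archive for the exact path for your OS and HDP version.

```shell
#!/bin/sh
# Hypothetical credentials -- replace with the values supplied by Cloudera.
USERNAME="exampleuser"
PASSWORD="examplepass"

# Build the authenticated base URL for the HDP repository.
# The path segment after the host is illustrative only.
HDP_BASE_URL="https://${USERNAME}:${PASSWORD}@archive.cloudera.com/p/HDP/"
echo "${HDP_BASE_URL}"
```

This URL form can then be used in a repository definition or with a download tool that understands HTTP basic authentication.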

Summary: Apache HBase's built-in scripts now rely on the downstream-facing shaded artifacts where possible.

Scenario:

The Apache HBase command `hbase classpath` gives you just the classpath needed for a downstream client of HBase to access the cluster, and `hbase mapredcp` gives you those classes plus the ones needed to use the HBase-provided MapReduce integration.

Previous behavior:

In HDP 3.1.0 and earlier, the `hbase classpath` command behaved as it did in HDP 2.x: it returned the entire server-side HBase classpath. You used `hbase mapredcp` to get the runtime set of JARs for MapReduce integration.

New behavior:

The `hbase classpath` and `hbase mapredcp` commands now return the relevant shaded client artifact and only those third-party jars needed to make use of them (for example, slf4j-api, commons-logging, htrace and so on). The `hbase classpath` command still treats having `hadoop` on the shell's PATH as an implicit request to include the output of the `hadoop classpath` command in the returned classpath.

Expected customer action:

Include the HBase client configuration directory in your launch classpath. Use either the `hbase classpath` or the `hbase mapredcp` command, depending on your application. If you are using HBase with a MapReduce application, or with something that builds on the same integration (for example, delegation tokens for Apache Spark), use `hbase mapredcp` to get the required runtime set of JARs.
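The two launch patterns above can be sketched as follows. This assumes `hbase` (and, for the second command, `hadoop`) is on the PATH of a configured cluster node; `MyClient`, `MyMapReduceJob`, `my-app.jar`, and the `/etc/hbase/conf` location are hypothetical placeholders, not names from this release.

```shell
# Plain HBase client: put the client config directory first, then the
# shaded client classpath reported by `hbase classpath`.
java -cp "/etc/hbase/conf:$(hbase classpath):my-app.jar" MyClient

# MapReduce job (or anything built on the same integration, such as
# delegation tokens for Apache Spark): use `hbase mapredcp` instead.
HADOOP_CLASSPATH="$(hbase mapredcp)" hadoop jar my-app.jar MyMapReduceJob
```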

Behavioral changes in HDP 3.1.4
Behavioral changes in HDP 3.1.0