Preparing Hive for upgrade

Hive requires some housekeeping before an upgrade. Several housekeeping tasks are mandatory: skipping them can cause the upgrade to fail, and some problems prevent conversion of old Hive tables to Hive 3 ACID tables. The remaining cleanup is worth the time and effort because it makes for a smooth upgrade experience.

To prepare Hive for upgrade, verify that you meet the following prerequisites and perform the tasks below.


  • Your Hive tables reside in HDP 2.6.5.x. Upgrades from HDP 3.x are not supported in this release of CDP Private Cloud Base.
  • JDK is installed on the node running Hive Metastore.
  • Hive Metastore is running and connected to the node where you will run the Hive pre-upgrade tool.
  • If the Hive metastore contains ACID tables, you have enabled ACID operations using Ambari Web or set the following Hive configuration property to enable ACID. Failure to set this property before upgrading Hive can result in corrupt or unreadable ACID tables.
    • hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
  • Optionally, you shut down HiveServer2. This is recommended, but not required, to prevent operations on ACID tables while the tool executes.
  • You set hive.compactor.worker.threads to a value that accommodates your data, ensuring the cluster has sufficient capacity to execute any compaction jobs that the upgrade tool might submit.

  • You obtained the following permissions to perform the steps for preparing Hive for upgrade.
    • Hive service user permissions, or all permissions to access Hive that Ranger provides
    • If you use Kerberos, a valid Kerberos ticket for starting Hive as the Hive service user

      The Hive service user is usually the default hive user. If you don’t know who the Hive service user is, go to the Ambari Web UI, and click Cluster Admin > Service Accounts, and then look for Hive User.

    • Capability to log in as the HDFS superuser.
    • If you use Kerberos, you have become the HDFS superuser with a valid ticket.
  • If you use Oracle as the backend database for Hive 1.x through Hive 3.x with the ojdbc7 JAR, replace that JAR with the ojdbc6 JAR, as described in the Cloudera Community article "Unable to start Hive Metastore during HDP upgrade".
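
The ACID and compactor settings named above would appear in hive-site.xml roughly as follows. This is an illustrative sketch only: the property names and the transaction manager value come from the prerequisites, but the worker-thread count of 4 is a placeholder, not a recommendation; size it to your data volume.

```xml
<!-- hive-site.xml fragment (illustrative) -->
<property>
  <!-- Required for ACID tables; must be set before upgrading. -->
  <name>hive.txn.manager</name>
  <value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
</property>
<property>
  <!-- Placeholder value: set this to accommodate the compaction
       jobs the upgrade tool might submit for your data. -->
  <name>hive.compactor.worker.threads</name>
  <value>4</value>
</property>
```

In an Ambari-managed cluster, these values are typically set through Ambari Web rather than by editing hive-site.xml directly, so that Ambari does not overwrite manual changes.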