To perform a rolling upgrade, your cluster must meet the following prerequisites. If you do not meet these upgrade prerequisites, you can consider a Manual Upgrade of the cluster from HDP 2.2 to 2.3.
Requirement | Description |
---|---|
Current HDP Version | Must be running HDP 2.2 or later to perform a rolling upgrade. The rolling upgrade capability is not available for clusters running HDP 2.0 or 2.1. |
Target HDP Version | All hosts must have the target version installed. See the Register Version and Install Version sections for more information. |
Ambari Agent Heartbeats | All Ambari Agents must be heartbeating to Ambari Server. Any hosts that are not heartbeating must be in Maintenance Mode. |
Host Maintenance Mode | Any hosts in Maintenance Mode must not be hosting any Service master components. |
Service Maintenance Mode | No Services can be in Maintenance Mode. |
Services Started | All Services must be started and the Service Check must pass. |
Requirement | Description |
---|---|
NameNode HA | NameNode HA must be enabled and working properly. See the Ambari User's Guide for more information, Configuring NameNode High Availability. |
NameNode Truncate | HDP 2.2.6 introduced the NameNode Truncate option. Truncate must not be enabled. |
Client Retry | The HDFS client retry policy should be enabled. Check the Services > HDFS > Configs property dfs.client.retry.policy.enabled. |
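
A minimal hdfs-site.xml sketch of the client retry prerequisite is shown below; in practice this property is managed through Services > HDFS > Configs in Ambari rather than edited by hand.

```xml
<!-- hdfs-site.xml: enable the HDFS client retry policy so clients retry
     through a NameNode restart or failover instead of failing immediately -->
<property>
  <name>dfs.client.retry.policy.enabled</name>
  <value>true</value>
</property>
```
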
Requirement | Description |
---|---|
ResourceManager HA | YARN ResourceManager HA should be enabled to prevent a disruption in service during the upgrade. See the Ambari User's Guide for more information on Configuring ResourceManager High Availability. |
Start Preserving Recovery | YARN start preserving recovery should be enabled. Check the Services > YARN > Configs property yarn.timeline-service.recovery.enabled. |
Work Preserving Restart | YARN Work Preserving Restart must be configured. Check the Services > YARN > Configs property yarn.resourcemanager.work-preserving-recovery.enabled. |
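
For reference, the two YARN recovery settings above correspond to the following yarn-site.xml properties; this is a sketch, and in an Ambari-managed cluster they are set under Services > YARN > Configs.

```xml
<!-- yarn-site.xml -->
<!-- Timeline Server state recovery across restarts -->
<property>
  <name>yarn.timeline-service.recovery.enabled</name>
  <value>true</value>
</property>

<!-- Work preserving ResourceManager restart, so running applications
     survive a ResourceManager restart during the upgrade -->
<property>
  <name>yarn.resourcemanager.work-preserving-recovery.enabled</name>
  <value>true</value>
</property>
```
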
Requirement | Description |
---|---|
MapReduce Distributed Cache | MapReduce should reference Hadoop libraries from the distributed cache in HDFS. Refer to the YARN Resource Management guide for more information. |
State Preserving Recovery | JobHistory state preserving recovery should be enabled. |
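
The two MapReduce prerequisites map roughly to the mapred-site.xml properties below; the framework path shown assumes the standard HDP layout for the MapReduce tarball in HDFS and must match where the tarball is actually staged on your cluster.

```xml
<!-- mapred-site.xml -->
<!-- Reference the MapReduce framework from the HDFS distributed cache;
     the path below is an example and depends on your cluster layout -->
<property>
  <name>mapreduce.application.framework.path</name>
  <value>/hdp/apps/${hdp.version}/mapreduce/mapreduce.tar.gz#mr-framework</value>
</property>

<!-- JobHistory Server state preserving recovery -->
<property>
  <name>mapreduce.jobhistory.recovery.enable</name>
  <value>true</value>
</property>
```
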
Requirement | Description |
---|---|
Tez Distributed Cache | Tez should reference Hadoop libraries from the distributed cache in HDFS. |
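
The Tez requirement is typically satisfied by pointing tez.lib.uris at the Tez tarball in HDFS; the path below follows the usual HDP layout and is only illustrative.

```xml
<!-- tez-site.xml: load Tez libraries from the HDFS distributed cache;
     adjust the path to wherever the Tez tarball is staged on your cluster -->
<property>
  <name>tez.lib.uris</name>
  <value>/hdp/apps/${hdp.version}/tez/tez.tar.gz</value>
</property>
```
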
Requirement | Description |
---|---|
Multiple Hive Metastore | Multiple Hive Metastore instances are recommended for Rolling Upgrade. This ensures that there is at least one Hive Metastore running during the upgrade process. |
Hive Dynamic Service Discovery | HiveServer2 dynamic service discovery is recommended for Rolling Upgrade. |
HiveServer2 Port | During the upgrade, Ambari will switch the HiveServer2 port from 10000 to 10010 (or 10011 if using HTTP transport mode). |
Hive Client Retry | Hive client retry properties must be configured. Review the Services > Hive > Configs configuration and confirm hive.metastore.failure.retries and hive.metastore.client.connect.retry.delay are specified. |
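
As a hedged example, the Hive client retry and dynamic service discovery settings would look roughly like this in hive-site.xml; the retry count, delay, and ZooKeeper quorum shown are placeholders, not recommended values.

```xml
<!-- hive-site.xml -->
<!-- Client retry so Hive clients ride out a brief Metastore restart;
     the values shown are examples only -->
<property>
  <name>hive.metastore.failure.retries</name>
  <value>24</value>
</property>
<property>
  <name>hive.metastore.client.connect.retry.delay</name>
  <value>5s</value>
</property>

<!-- HiveServer2 dynamic service discovery through ZooKeeper;
     replace the quorum with your own ZooKeeper hosts -->
<property>
  <name>hive.server2.support.dynamic.service.discovery</name>
  <value>true</value>
</property>
<property>
  <name>hive.zookeeper.quorum</name>
  <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
</property>
```
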
Requirement | Description |
---|---|
Oozie Client Retry | Oozie client retry properties must be configured. Review the Services > Oozie > Configs > oozie-env configuration and confirm export OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} -Doozie.connection.retry.count=<number of retries>" is specified. |
If you do not meet the upgrade prerequisite requirements listed above, you can consider a Manual Maintenance Upgrade of the cluster.