Step 1: Getting Started Upgrading a Cluster
Tasks you should perform before starting the upgrade.
The version of CDH or Cloudera Runtime that you can upgrade to depends on the version of Cloudera Manager that is managing the cluster. You may need to upgrade Cloudera Manager before upgrading your clusters. Upgrades are not supported when using Cloudera Manager 7.0.3.
Before you upgrade a cluster, you need to gather information, review the limitations and release notes, and run some checks on the cluster. See the Collect Information section below. Fill in the My Environment form below to customize your upgrade procedures.
Minimum Required Role: Cluster Administrator (also provided by Full Administrator) This feature is not available when using Cloudera Manager to manage Data Hub clusters.
Collect the following information about your environment and fill in the form above. This information will be remembered by your browser on all pages in this Upgrade Guide.
- Log in to the Cloudera Manager Server host.
- Run the following command to find the current version of the
- Log in to the Cloudera Manager Admin console and find the following:
- The version of Cloudera Manager used in your cluster. Go to .
- The version of the JDK deployed in the cluster. Go to .
- Whether High Availability is enabled for HDFS.
If you see a standby NameNode instead of a secondary NameNode listed, High Availability is enabled.
- The Install Method and Current cluster version. The cluster version number and Install Method are displayed on the Cloudera Manager Home page, to the right of the cluster name.
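The HDFS High Availability check above can also be made from the command line; a minimal sketch, assuming the hdfs CLI is on the PATH and that the NameNode service IDs (nn1 and nn2 here are illustrative) come from dfs.ha.namenodes.<nameservice> in your hdfs-site.xml:

```shell
# Hedged sketch: `hdfs haadmin -getServiceState <serviceId>` reports
# "active" or "standby" per NameNode when HA is enabled; on a non-HA
# cluster the haadmin command fails instead.
ha_state() {
  # $1 = NameNode service id from hdfs-site.xml (assumed, e.g. nn1)
  hdfs haadmin -getServiceState "$1"
}
ha_state nn1 || echo "haadmin unavailable (non-HA cluster or hdfs CLI missing)"
ha_state nn2 || true
```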
Preparing to Upgrade a Cluster
- You must have SSH access to the Cloudera Manager server hosts and be able to log in using the root account or an account that has password-less sudo permission to all the hosts.
- Review the Requirements and Supported Versions for the new versions you are upgrading to. See CDP Private Cloud Base 7.1 Requirements and Supported Versions. If your hosts require an operating system upgrade, you must perform that upgrade before upgrading the cluster. See Upgrading the Major Version Operating System.
- Ensure that a supported version of Java is installed on all hosts in the cluster. See the links above. For installation instructions and recommendations, see Upgrading the JDK.
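A quick way to confirm the JDK on every host is to query each one over SSH; a sketch assuming passwordless SSH and a hosts.txt inventory file (one hostname per line) — both assumptions, not part of the documented procedure:

```shell
# Prints "<host>: <first line of java -version>" for each host in the file.
check_jdk() {
  while read -r host; do
    printf '%s: %s\n' "$host" \
      "$(ssh -o BatchMode=yes "$host" 'java -version 2>&1 | head -n 1')"
  done < "$1"
}
# Usage (hypothetical inventory file):
#   check_jdk hosts.txt
```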
- Review the following documents:
- Review the following when upgrading to Cloudera Runtime 7.1 or higher:
CDP Private Cloud Base 7.1 Requirements and Supported Versions
- If your deployment has defined a Compute cluster and an associated Data Context, you will need to delete the Compute cluster and Data Context before upgrading the base cluster, and then recreate the Compute cluster and Data Context after the upgrade.
See Starting, Stopping, Refreshing, and Restarting a Cluster and Virtual Private Clusters and Cloudera SDX.
- Review the upgrade procedure and reserve a maintenance window with enough time allotted to perform all steps. For production clusters, Cloudera recommends allocating up to a full day maintenance window to perform the upgrade, depending on the number of hosts, the amount of experience you have with Hadoop and Linux, and the particular hardware you are using.
- If the cluster uses Impala, check your SQL against the newest reserved words listed in incompatible changes. If upgrading across multiple versions, or in case of any problems, check against the full list of Impala reserved words.
- If the cluster uses Hive, validate the Hive Metastore Schema:
- In the Cloudera Manager Admin Console, Go to the Hive service.
- Select .
- Fix any reported errors.
- Select again to ensure that the schema is now valid.
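Outside of Cloudera Manager, a similar validation can be run with Hive's schematool CLI; a sketch assuming a MySQL-backed metastore whose connection details are already in hive-site.xml (the -dbType value is an assumption — substitute your database type):

```shell
# `schematool -validate` checks the metastore schema for missing or
# malformed tables, columns, and version records.
validate_metastore() {
  schematool -dbType mysql -validate
}
# Usage (on a host with the Hive client configuration deployed):
#   validate_metastore
```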
- Run the Security Inspector and fix any reported errors.
- Log in to any cluster node as the hdfs user, run the following commands, and correct any reported errors:
hdfs fsck / -includeSnapshots -showprogress
hdfs dfsadmin -report
See HDFS Commands Guide in the Apache Hadoop documentation.
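The two checks above can be wrapped in a small gate script; a minimal sketch, assuming the reports are first written to files (the "Status: HEALTHY" marker is standard fsck output; the file paths are illustrative):

```shell
# On the cluster you would generate the reports with, e.g.:
#   sudo -u hdfs hdfs fsck / -includeSnapshots -showprogress > /tmp/fsck.out
#   sudo -u hdfs hdfs dfsadmin -report > /tmp/dfsadmin.out
# A healthy fsck report contains the line "Status: HEALTHY"; anything else
# (corrupt or under-replicated blocks) should block the upgrade.
fsck_ok() {
  grep -q 'Status: HEALTHY' "$1"
}
printf 'Status: HEALTHY\n' > /tmp/sample_fsck.out   # stand-in for a real report
if fsck_ok /tmp/sample_fsck.out; then
  echo "HDFS healthy"
else
  echo "HDFS has errors; fix before upgrading"
fi
```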
- Log in to any DataNode as the hbase user, run the following command, and correct any reported errors:
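The exact command is elided above; as an assumption only, the standard consistency check in HBase 1.x is hbck, which could be invoked as sketched below (treat this as illustrative, not the documented step):

```shell
# Hedged assumption: `hbase hbck` reports region and table inconsistencies;
# a clean run reports zero inconsistencies.
run_hbck() {
  sudo -u hbase hbase hbck
}
# Usage: run_hbck   (then correct any reported inconsistencies)
```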
- If the cluster uses Kudu, log in to any cluster host and run the ksck command as the kudu user (sudo -u kudu). If the cluster is Kerberized, first kinit as kudu, then run the command:
kudu cluster ksck <master_addresses>
For the full syntax of this command, see Checking Cluster Health with ksck.
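As a sketch of the full check (the master hostnames below are hypothetical), ksck exits non-zero when it detects problems, so its exit code alone can gate the upgrade:

```shell
run_ksck() {
  # $1 = comma-separated Kudu master RPC addresses (hypothetical example:
  # "master-1:7051,master-2:7051,master-3:7051")
  sudo -u kudu kudu cluster ksck "$1"
}
# Usage (kinit as the kudu user first on a Kerberized cluster):
#   run_ksck "master-1:7051,master-2:7051,master-3:7051" \
#     && echo "Kudu healthy" || echo "fix ksck findings before upgrading"
```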
- If your cluster uses Impala and Llama, note that the Llama role has been deprecated as of CDH 5.9 and you must remove it from the Impala service before starting the upgrade. If you do not remove this role, the upgrade wizard halts the upgrade.
To determine if Impala uses Llama:
- Go to the Impala service.
- Select the Instances tab.
- Examine the list of roles in the Role Type column. If Llama appears, the Impala service is using Llama.
To remove the Llama role:
- Go to the Impala service and select
The Disable YARN and Impala Integrated Resource Management wizard displays.
- Click Continue.
The Disable YARN and Impala Integrated Resource Management Command page displays the progress of the commands to disable the role.
- When the commands have completed, click Finish.
- If your cluster uses the Ozone technical preview, you must stop and delete this service before upgrading the cluster.
- If your cluster uses Kafka, you must explicitly set the Kafka protocol version to match what is currently in use among the brokers and clients. Update kafka.properties on all brokers as follows:
- Log in to the Cloudera Manager Admin Console.
- Choose the Kafka service.
- Click Configuration.
- Use the Search field to find the Kafka Broker Advanced Configuration Snippet (Safety Valve) for kafka.properties configuration property.
- Add the following properties to the snippet:
inter.broker.protocol.version = [***CURRENT KAFKA VERSION***]
log.message.format.version = [***CURRENT KAFKA VERSION***]
If these properties are not set correctly, brokers can fail to start after the upgrade with an error similar to the following:
2018-06-14 14:25:47,818 FATAL kafka.Kafka$: java.lang.IllegalArgumentException: Version `0.10` is not a valid version
at kafka.api.ApiVersion$$anonfun$apply$1.apply(ApiVersion.scala:72)
at kafka.api.ApiVersion$$anonfun$apply$1.apply(ApiVersion.scala:72)
at scala.collection.MapLike$class.getOrElse(MapLike.scala:128)
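For example, if the brokers currently run Kafka 2.4 (the version shown is illustrative; substitute the version actually in use on your brokers), the safety valve snippet would read:

```
inter.broker.protocol.version=2.4
log.message.format.version=2.4
```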
- If your cluster uses Streams Replication Manager and you configured the Log Format property, you must take note of your configuration. The value set for Log Format is cleared during the upgrade and must be manually reconfigured following the upgrade.
- The following services are no longer supported as of CDP Private Cloud Base:
- Sqoop 2
- MapReduce 1
- Record Service
You must stop and delete these services before upgrading a cluster.
- Open the Cloudera Manager Admin console and collect the following information about your environment:
- The version of Cloudera Manager. Go to .
- The version of the JDK deployed. Go to .
- The version of CDH or Cloudera Runtime and whether the cluster was installed using parcels or packages. It is displayed next to the cluster name on the Home page.
- The services enabled in your cluster.
- Back up Cloudera Manager before beginning the upgrade. See Step 2: Backing Up Cloudera Manager 7.