Upgrading from CDH 5.0.0 or Later to the Latest Release
Use the steps on this page to upgrade from CDH 5.0.0 or later to the latest release.
- Before You Begin
- Step 1: Prepare the cluster for the upgrade
- Step 2: If necessary, download the CDH 5 "1-click" package on each of the hosts in your cluster
- Step 3: Upgrade the Packages on the Appropriate Hosts
- Step 4: In an HA Deployment, Upgrade and Start the Journal Nodes
- Step 5: Start HDFS
- Step 6: Start MapReduce (MRv1) or YARN
- Step 7: Set the Sticky Bit
- Step 8: Upgrade Components
- Step 9: Apply Configuration File Changes if Necessary
Use the instructions on this page only to upgrade from CDH 5.0.0 or later.
- To upgrade from a CDH 5 Beta release, use the instructions in Upgrading from a CDH 5 Beta Release to the Latest Release instead.
- To upgrade from a CDH 4 release, use the instructions for upgrading from CDH 4 to CDH 5 instead.
Before You Begin
- Before upgrading, be sure to read about the latest Incompatible Changes and Known Issues in CDH 5 in the CDH 5 Release Notes.
- If you are upgrading a cluster that is part of a production system, be sure to plan ahead. As with any operational work, be sure to reserve a maintenance window with enough extra time allotted in case of complications. The Hadoop upgrade process is well understood, but it is best to be cautious. For production clusters, Cloudera recommends allocating up to a full day maintenance window to perform the upgrade, depending on the number of hosts, the amount of experience you have with Hadoop and Linux, and the particular hardware you are using.
- The procedure that follows assumes you are upgrading a multi-node cluster. If you are running a pseudo-distributed (single-machine) cluster, Cloudera recommends that you copy your data off the cluster, remove the old CDH release, install Hadoop from CDH 5, and then restore your data.
Step 1: Prepare the cluster for the upgrade
- Put the NameNode into safe mode and save the fsimage:
- Put the NameNode (or active NameNode in an HA
configuration) into safe mode:
$ sudo -u hdfs hdfs dfsadmin -safemode enter
- Perform a saveNamespace operation:
$ sudo -u hdfs hdfs dfsadmin -saveNamespace
This will result in a new fsimage being written out with no edit log entries.
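To confirm the NameNode is actually in safe mode before continuing, you can query its state (an optional check, not part of the original procedure):
$ sudo -u hdfs hdfs dfsadmin -safemode get
Safe mode is ON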
- With the NameNode still in safe mode, shut down all services as instructed below.
- Shut down Hadoop services across your entire cluster by
running the following command on every host in your cluster:
$ for x in `cd /etc/init.d ; ls hadoop-*` ; do sudo service $x stop ; done
- Check each host (as root) to make sure that there are no processes running as the hdfs, yarn, mapred, or httpfs users:
# ps -aef | grep java
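If the output of that broad grep is hard to scan, a narrower variant (optional) filters on the relevant service users:
# ps -aef | grep java | grep -E 'hdfs|yarn|mapred|httpfs'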
Important: When you are sure that all Hadoop services have been shut down, do the following step. It is particularly important that the NameNode service is not running so that you can make a consistent backup.
- Back up the HDFS metadata on the NameNode machine, as follows.
Note:
- Cloudera recommends backing up HDFS metadata on a regular basis, as well as before a major upgrade.
- dfs.name.dir is deprecated but still works; dfs.namenode.name.dir is preferred. This example uses dfs.name.dir.
- Find the location of your dfs.name.dir (or dfs.namenode.name.dir); for
example:
$ grep -C1 dfs.name.dir /etc/hadoop/conf/hdfs-site.xml
<property>
 <name>dfs.name.dir</name>
 <value>/mnt/hadoop/hdfs/name</value>
</property>
- Back up the directory. The path inside the
<value> XML element is the path to your HDFS metadata. If you see
a comma-separated list of paths, there is no need to back up all of
them; they store the same data. Back up the first directory, for
example, by using the following commands:
$ cd /mnt/hadoop/hdfs/name
# tar -cvf /root/nn_backup_data.tar .
./
./current/
./current/fsimage
./current/fstime
./current/VERSION
./current/edits
./image/
./image/fsimage
Warning: If you see a file containing the word lock, the NameNode is probably still running. Repeat the preceding steps from the beginning; start at Step 1 and shut down the Hadoop services.
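As an optional sanity check, you can list the contents of the backup archive and look for a lock file (typically named in_use.lock) in the metadata directory; the paths below follow the example above:
$ tar -tvf /root/nn_backup_data.tar | head
$ ls /mnt/hadoop/hdfs/name | grep -i lock
If the second command prints anything, the NameNode is probably still running; shut it down and repeat the backup.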
Step 2: If necessary, download the CDH 5 "1-click" package on each of the hosts in your cluster
Before you begin: Check whether you have the CDH 5 "1-click" repository installed.
- On Red Hat/CentOS-compatible and SLES systems:
rpm -q cdh5-repository
If you are upgrading from CDH 5 Beta 1 or later, you should see:
cdh5-repository-1-0
In this case, skip to Step 3. If instead you see:
package cdh5-repository is not installed
proceed with this step.
- On Ubuntu and Debian systems:
dpkg -l | grep cdh5-repository
If the repository is installed, skip to Step 3; otherwise proceed with this step.
If the CDH 5 "1-click" repository is not already installed on each host in the cluster, follow the instructions below for that host's operating system:
On Red Hat-compatible systems:
- Download the CDH 5 "1-click Install" package.
Click the entry in the table below that matches your Red Hat or CentOS system, choose Save File, and save the file to a directory to which you have write access (it can be your home directory).
OS Version | Click this Link
---|---
Red Hat/CentOS/Oracle 5 | Red Hat/CentOS/Oracle 5 link
Red Hat/CentOS/Oracle 6 | Red Hat/CentOS/Oracle 6 link
- Install the RPM:
- Red Hat/CentOS/Oracle 5
$ sudo yum --nogpgcheck localinstall cloudera-cdh-5-0.x86_64.rpm
- Red Hat/CentOS/Oracle 6
$ sudo yum --nogpgcheck localinstall cloudera-cdh-5-0.x86_64.rpm
On SLES systems:
- Download the CDH 5 "1-click Install" package.
Click this link, choose Save File, and save it to a directory to which you have write access (it can be your home directory).
- Install the RPM:
$ sudo rpm -i cloudera-cdh-5-0.x86_64.rpm
- Update your system package index by running:
$ sudo zypper refresh
For instructions on how to add a repository or build your own repository, see Installing CDH 5.
On Ubuntu and Debian systems:
- Download the CDH 5 "1-click Install" package:
OS Version | Click this Link
---|---
Wheezy | Wheezy link
Precise | Precise link
- Install the package. Do one of the following:
- Choose Open with in the download window to use the package manager.
- Choose Save File, save the package to a directory to which you have write access (it can be your home directory) and install it from the command line, for example:
sudo dpkg -i cdh5-repository_1.0_all.deb
For instructions on how to add a repository or build your own repository, see Installing CDH 5.
Step 3: Upgrade the Packages on the Appropriate Hosts
Upgrade MRv1 (Step 3a), YARN (Step 3b), or both, depending on what you intend to use.
- Remember that you can install and configure both MRv1 and YARN, but you should not run them both on the same set of nodes at the same time.
- If you are using HA for the NameNode, do not install hadoop-hdfs-secondarynamenode.
Before installing MRv1 or YARN: (Optionally) add a repository key on each system in the cluster, if you have not already done so. Add the Cloudera Public GPG Key to your repository by executing one of the following commands:
- For Red Hat/CentOS/Oracle 5 systems:
$ sudo rpm --import http://archive.cloudera.com/cdh5/redhat/5/x86_64/cdh/RPM-GPG-KEY-cloudera
- For Red Hat/CentOS/Oracle 6 systems:
$ sudo rpm --import http://archive.cloudera.com/cdh5/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera
- For all SLES systems:
$ sudo rpm --import http://archive.cloudera.com/cdh5/sles/11/x86_64/cdh/RPM-GPG-KEY-cloudera
- For Ubuntu Precise systems:
$ curl -s http://archive.cloudera.com/cdh5/ubuntu/precise/amd64/cdh/archive.key | sudo apt-key add -
- For Debian Wheezy systems:
$ curl -s http://archive.cloudera.com/cdh5/debian/wheezy/amd64/cdh/archive.key | sudo apt-key add -
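If you want to confirm that the key was added, you can list the keys known to the package manager (an optional check):
- On Red Hat/CentOS/Oracle and SLES systems:
$ rpm -q gpg-pubkey
- On Ubuntu and Debian systems:
$ sudo apt-key list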
Step 3a: If you are using MRv1, upgrade the MRv1 packages on the appropriate hosts.
Skip this step if you are using YARN exclusively. Otherwise upgrade each type of daemon package on the appropriate hosts as follows:
- Install and deploy ZooKeeper.
Important: Cloudera recommends that you install (or update) and start a ZooKeeper cluster before proceeding. This is a requirement if you are deploying high availability (HA) for the NameNode or JobTracker.
Follow the instructions under ZooKeeper Installation.
- Install each type of daemon package on the appropriate system(s), as follows.
Where to install
Install commands
JobTracker host running:
Red Hat/CentOS compatible
$ sudo yum clean all; sudo yum install hadoop-0.20-mapreduce-jobtracker
SLES
$ sudo zypper clean --all; sudo zypper install hadoop-0.20-mapreduce-jobtracker
Ubuntu or Debian
$ sudo apt-get update; sudo apt-get install hadoop-0.20-mapreduce-jobtracker
NameNode host running:
Red Hat/CentOS compatible
$ sudo yum clean all; sudo yum install hadoop-hdfs-namenode
SLES
$ sudo zypper clean --all; sudo zypper install hadoop-hdfs-namenode
Ubuntu or Debian
$ sudo apt-get update; sudo apt-get install hadoop-hdfs-namenode
Secondary NameNode host (if used) running:
Red Hat/CentOS compatible
$ sudo yum clean all; sudo yum install hadoop-hdfs-secondarynamenode
SLES
$ sudo zypper clean --all; sudo zypper install hadoop-hdfs-secondarynamenode
Ubuntu or Debian
$ sudo apt-get update; sudo apt-get install hadoop-hdfs-secondarynamenode
All cluster hosts except the JobTracker, NameNode, and Secondary (or Standby) NameNode hosts, running:
Red Hat/CentOS compatible
$ sudo yum clean all; sudo yum install hadoop-0.20-mapreduce-tasktracker hadoop-hdfs-datanode
SLES
$ sudo zypper clean --all; sudo zypper install hadoop-0.20-mapreduce-tasktracker hadoop-hdfs-datanode
Ubuntu or Debian
$ sudo apt-get update; sudo apt-get install hadoop-0.20-mapreduce-tasktracker hadoop-hdfs-datanode
All client hosts, running:
Red Hat/CentOS compatible
$ sudo yum clean all; sudo yum install hadoop-client
SLES
$ sudo zypper clean --all; sudo zypper install hadoop-client
Ubuntu or Debian
$ sudo apt-get update; sudo apt-get install hadoop-client
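If you have many hosts of a given role, you can run the matching install command on all of them from a single machine. This is a minimal sketch, assuming passwordless SSH to each host as a user with sudo privileges and a hypothetical worker-hosts.txt file listing the TaskTracker/DataNode hosts (Red Hat/CentOS command shown; substitute the zypper or apt-get equivalent as needed):
$ for host in $(cat worker-hosts.txt); do ssh -t "$host" 'sudo yum clean all; sudo yum -y install hadoop-0.20-mapreduce-tasktracker hadoop-hdfs-datanode'; done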
Step 3b: If you are using YARN, upgrade the YARN packages on the appropriate hosts.
Skip this step if you are using MRv1 exclusively. Otherwise upgrade each type of daemon package on the appropriate hosts as follows:
- Install and deploy ZooKeeper.
Important: Cloudera recommends that you install (or update) and start a ZooKeeper cluster before proceeding. This is a requirement if you are deploying high availability (HA) for the NameNode or JobTracker.
Follow the instructions under ZooKeeper Installation.
- Install each type of daemon package on the appropriate system(s), as follows.
Where to install
Install commands
Resource Manager host (analogous to MRv1 JobTracker) running:
Red Hat/CentOS compatible
$ sudo yum clean all; sudo yum install hadoop-yarn-resourcemanager
SLES
$ sudo zypper clean --all; sudo zypper install hadoop-yarn-resourcemanager
Ubuntu or Debian
$ sudo apt-get update; sudo apt-get install hadoop-yarn-resourcemanager
NameNode host running:
Red Hat/CentOS compatible
$ sudo yum clean all; sudo yum install hadoop-hdfs-namenode
SLES
$ sudo zypper clean --all; sudo zypper install hadoop-hdfs-namenode
Ubuntu or Debian
$ sudo apt-get update; sudo apt-get install hadoop-hdfs-namenode
Secondary NameNode host (if used) running:
Red Hat/CentOS compatible
$ sudo yum clean all; sudo yum install hadoop-hdfs-secondarynamenode
SLES
$ sudo zypper clean --all; sudo zypper install hadoop-hdfs-secondarynamenode
Ubuntu or Debian
$ sudo apt-get update; sudo apt-get install hadoop-hdfs-secondarynamenode
All cluster hosts except the Resource Manager (analogous to MRv1 TaskTrackers) running:
Red Hat/CentOS compatible
$ sudo yum clean all; sudo yum install hadoop-yarn-nodemanager hadoop-hdfs-datanode hadoop-mapreduce
SLES
$ sudo zypper clean --all; sudo zypper install hadoop-yarn-nodemanager hadoop-hdfs-datanode hadoop-mapreduce
Ubuntu or Debian
$ sudo apt-get update; sudo apt-get install hadoop-yarn-nodemanager hadoop-hdfs-datanode hadoop-mapreduce
One host in the cluster running:
Red Hat/CentOS compatible
$ sudo yum clean all; sudo yum install hadoop-mapreduce-historyserver hadoop-yarn-proxyserver
SLES
$ sudo zypper clean --all; sudo zypper install hadoop-mapreduce-historyserver hadoop-yarn-proxyserver
Ubuntu or Debian
$ sudo apt-get update; sudo apt-get install hadoop-mapreduce-historyserver hadoop-yarn-proxyserver
All client hosts, running:
Red Hat/CentOS compatible
$ sudo yum clean all; sudo yum install hadoop-client
SLES
$ sudo zypper clean --all; sudo zypper install hadoop-client
Ubuntu or Debian
$ sudo apt-get update; sudo apt-get install hadoop-client
Note: The hadoop-yarn and hadoop-hdfs packages are installed on each system automatically as dependencies of the other packages.
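To confirm that the expected packages landed on a given host, you can query the package manager (optional):
- On Red Hat/CentOS-compatible and SLES systems:
$ rpm -qa | grep hadoop
- On Ubuntu and Debian systems:
$ dpkg -l | grep hadoop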
Step 4: In an HA Deployment, Upgrade and Start the Journal Nodes
- Install the JournalNode daemons on each of the machines where they
will run.
To install JournalNode on Red Hat-compatible systems:
$ sudo yum install hadoop-hdfs-journalnode
To install JournalNode on Ubuntu and Debian systems:
$ sudo apt-get install hadoop-hdfs-journalnode
To install JournalNode on SLES systems:
$ sudo zypper install hadoop-hdfs-journalnode
- Start the JournalNode daemons on each of the machines where they
will run:
$ sudo service hadoop-hdfs-journalnode start
Wait for the daemons to start before proceeding to the next step.
In an HA deployment, the JournalNodes must be up and running CDH 5 before you proceed.
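One way to confirm this is to check for the JournalNode process on each JournalNode host (optional):
$ sudo jps | grep JournalNode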
Step 5: Start HDFS
Run the following command on each host in the cluster:
$ for x in `cd /etc/init.d ; ls hadoop-hdfs-*` ; do sudo service $x start ; done
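Once the daemons are up, you can optionally wait for the NameNode to leave safe mode and confirm that the DataNodes have registered:
$ sudo -u hdfs hdfs dfsadmin -safemode wait
$ sudo -u hdfs hdfs dfsadmin -report | head -20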
Step 6: Start MapReduce (MRv1) or YARN
You are now ready to start and test MRv1 or YARN.
For MRv1 | For YARN
---|---
Start MRv1 (Step 6a) | Start YARN and the MapReduce JobHistory Server (Step 6b)
Step 6a: Start MapReduce (MRv1)
Make sure you are not trying to run MRv1 and YARN on the same set of nodes at the same time. This is not supported; it will degrade your performance and may result in an unstable MapReduce cluster deployment. Steps 6a and 6b are mutually exclusive.
After you have verified HDFS is operating correctly, you are ready to start MapReduce. On each TaskTracker system:
$ sudo service hadoop-0.20-mapreduce-tasktracker start
On the JobTracker system:
$ sudo service hadoop-0.20-mapreduce-jobtracker start
Verify that the JobTracker and TaskTracker started properly.
$ sudo jps | grep Tracker
If the permissions of directories are not configured correctly, the JobTracker and TaskTracker processes start and immediately fail. If this happens, check the JobTracker and TaskTracker logs and set the permissions correctly.
Verify basic cluster operation for MRv1.
At this point your cluster is upgraded and ready to run jobs. Before running your production jobs, verify basic cluster operation by running an example from the Apache Hadoop web site.
For important configuration information, see Deploying MapReduce v1 (MRv1) on a Cluster.
- Create a home directory on HDFS for the user who will be
running the job (for example, joe):
sudo -u hdfs hadoop fs -mkdir -p /user/joe
sudo -u hdfs hadoop fs -chown joe /user/joe
Do the following steps as the user joe.
- Make a directory in HDFS called input and copy some XML files
into it by running the following commands:
$ hadoop fs -mkdir input
$ hadoop fs -put /etc/hadoop/conf/*.xml input
$ hadoop fs -ls input
Found 3 items:
-rw-r--r-- 1 joe supergroup 1348 2012-02-13 12:21 input/core-site.xml
-rw-r--r-- 1 joe supergroup 1913 2012-02-13 12:21 input/hdfs-site.xml
-rw-r--r-- 1 joe supergroup 1001 2012-02-13 12:21 input/mapred-site.xml
- Run an example Hadoop job to grep with a regular
expression in your input data.
$ /usr/bin/hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar grep input output 'dfs[a-z.]+'
- After the job completes, you can find the output in the
HDFS directory named output
because you specified that output directory to Hadoop.
$ hadoop fs -ls
Found 2 items
drwxr-xr-x - joe supergroup 0 2009-08-18 18:36 /user/joe/input
drwxr-xr-x - joe supergroup 0 2009-08-18 18:38 /user/joe/output
You can see that there is a new directory called output.
- List the output files.
$ hadoop fs -ls output
Found 3 items
drwxr-xr-x - joe supergroup 0 2009-02-25 10:33 /user/joe/output/_logs
-rw-r--r-- 1 joe supergroup 1068 2009-02-25 10:33 /user/joe/output/part-00000
-rw-r--r-- 1 joe supergroup 0 2009-02-25 10:33 /user/joe/output/_SUCCESS
- Read the results in the output file; for example:
$ hadoop fs -cat output/part-00000 | head
1 dfs.datanode.data.dir
1 dfs.namenode.checkpoint.dir
1 dfs.namenode.name.dir
1 dfs.replication
1 dfs.safemode.extension
1 dfs.safemode.min.datanodes
You have now confirmed your cluster is successfully running CDH 5.
If you have client hosts, make sure you also update them to CDH 5, and upgrade the components running on those clients as well.
Step 6b: Start MapReduce with YARN
Make sure you are not trying to run MRv1 and YARN on the same set of nodes at the same time. This is not supported; it will degrade your performance and may result in an unstable MapReduce cluster deployment. Steps 6a and 6b are mutually exclusive.
After you have verified HDFS is operating correctly, you are ready to start YARN. First, if you have not already done so, create directories and set the correct permissions.
$ sudo -u hdfs hadoop fs -mkdir -p /user/history
$ sudo -u hdfs hadoop fs -chmod -R 1777 /user/history
$ sudo -u hdfs hadoop fs -chown yarn /user/history
$ sudo -u hdfs hadoop fs -mkdir -p /var/log/hadoop-yarn
$ sudo -u hdfs hadoop fs -chown yarn:mapred /var/log/hadoop-yarn
Verify the directory structure, ownership, and permissions:
$ sudo -u hdfs hadoop fs -ls -R /
drwxrwxrwt - hdfs supergroup 0 2012-04-19 14:31 /tmp
drwxr-xr-x - hdfs supergroup 0 2012-05-31 10:26 /user
drwxrwxrwt - yarn supergroup 0 2012-04-19 14:31 /user/history
drwxr-xr-x - hdfs supergroup 0 2012-05-31 15:31 /var
drwxr-xr-x - hdfs supergroup 0 2012-05-31 15:31 /var/log
drwxr-xr-x - yarn mapred 0 2012-05-31 15:31 /var/log/hadoop-yarn
To start YARN, start the ResourceManager and NodeManager services:
Make sure you always start ResourceManager before starting NodeManager services.
On the ResourceManager system:
$ sudo service hadoop-yarn-resourcemanager start
On each NodeManager system (typically the same ones where DataNode service runs):
$ sudo service hadoop-yarn-nodemanager start
To start the MapReduce JobHistory Server
On the MapReduce JobHistory Server system:
$ sudo service hadoop-mapreduce-historyserver start
For each user who will be submitting MapReduce jobs using MapReduce v2 (YARN), or running Pig, Hive, or Sqoop 1 in a YARN installation, set the HADOOP_MAPRED_HOME environment variable as follows:
$ export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
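To make this setting persist across logins, you can append it to each such user's shell profile (a common convention, not a requirement of the upgrade); for example:
$ echo 'export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce' >> ~/.bash_profile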
Verify basic cluster operation for YARN.
At this point your cluster is upgraded and ready to run jobs. Before running your production jobs, verify basic cluster operation by running an example from the Apache Hadoop web site.
For important configuration information, see Deploying MapReduce v2 (YARN) on a Cluster.
- Create a home directory on HDFS for the user who will be
running the job (for example, joe):
$ sudo -u hdfs hadoop fs -mkdir -p /user/joe
$ sudo -u hdfs hadoop fs -chown joe /user/joe
Do the following steps as the user joe.
- Make a directory in HDFS called input and copy some XML files into it by running the following commands:
$ hadoop fs -mkdir input
$ hadoop fs -put /etc/hadoop/conf/*.xml input
$ hadoop fs -ls input
Found 3 items:
-rw-r--r-- 1 joe supergroup 1348 2012-02-13 12:21 input/core-site.xml
-rw-r--r-- 1 joe supergroup 1913 2012-02-13 12:21 input/hdfs-site.xml
-rw-r--r-- 1 joe supergroup 1001 2012-02-13 12:21 input/mapred-site.xml
- Set HADOOP_MAPRED_HOME for user joe:
$ export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
- Run an example Hadoop job to grep with a regular expression
in your input data.
$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar grep input output23 'dfs[a-z.]+'
- After the job completes, you can find the output in the
HDFS directory named output23 because you specified that output directory to
Hadoop.
$ hadoop fs -ls
Found 2 items
drwxr-xr-x - joe supergroup 0 2009-08-18 18:36 /user/joe/input
drwxr-xr-x - joe supergroup 0 2009-08-18 18:38 /user/joe/output23
You can see that there is a new directory called output23.
- List the output files:
$ hadoop fs -ls output23
Found 2 items
-rw-r--r-- 1 joe supergroup 0 2009-02-25 10:33 /user/joe/output23/_SUCCESS
-rw-r--r-- 1 joe supergroup 1068 2009-02-25 10:33 /user/joe/output23/part-r-00000
- Read the results in the output file:
$ hadoop fs -cat output23/part-r-00000 | head
1 dfs.safemode.min.datanodes
1 dfs.safemode.extension
1 dfs.replication
1 dfs.permissions.enabled
1 dfs.namenode.name.dir
1 dfs.namenode.checkpoint.dir
1 dfs.datanode.data.dir
You have now confirmed your cluster is successfully running CDH 5.
If you have client hosts, make sure you also update them to CDH 5, and upgrade the components running on those clients as well.
Step 7: Set the Sticky Bit
For security reasons Cloudera strongly recommends you set the sticky bit on directories if you have not already done so.
The sticky bit prevents anyone except the superuser, directory owner, or file owner from deleting or moving the files within a directory. (Setting the sticky bit for a file has no effect.) Do this for directories such as /tmp. (For instructions on creating /tmp and setting its permissions, see these instructions).
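For example, assuming /tmp already exists in HDFS, you can set the sticky bit on it as follows:
$ sudo -u hdfs hadoop fs -chmod 1777 /tmp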
Step 8: Upgrade Components
- For important information on new and changed components, see the Release Notes. To see whether there is a new version of a particular component in CDH 5, check the CDH Version and Packaging Information.
- Cloudera recommends that you regularly update the software on each system in the cluster (for example, on a RHEL-compatible system, regularly run yum update) to ensure that all the dependencies for any given component are up to date. (If you have not been in the habit of doing this, be aware that the command may take a while to run the first time you use it.)
To upgrade or add CDH components, see the following sections:
- Crunch. For more information, see "Crunch Installation" in this guide.
- Flume. For more information, see "Flume Installation" in this guide.
- HBase. For more information, see "HBase Installation" in this guide.
- HCatalog. For more information, see "Installing and Using HCatalog" in this guide.
- Hive. For more information, see "Hive Installation" in this guide.
- Hue. For more information, see "Hue Installation" in this guide.
- Impala. For more information, see "About Impala" in this guide.
- Llama. For more information, see "Llama Installation" in this guide.
- Mahout. For more information, see "Mahout Installation" in this guide.
- Oozie. For more information, see "Oozie Installation" in this guide.
- Pig. For more information, see "Pig Installation" in this guide.
- Search. For more information, see "Search Installation" in this guide.
- Sentry. For more information, see "Sentry Installation" in this guide.
- Snappy. For more information, see "Snappy Installation" in this guide.
- Spark. For more information, see "Spark Installation" in this guide.
- Sqoop 1. For more information, see "Sqoop Installation" in this guide.
- Sqoop 2. For more information, see "Sqoop 2 Installation" in this guide.
- Whirr. For more information, see "Whirr Installation" in this guide.
- ZooKeeper. For more information, see "ZooKeeper Installation" in this guide.
Step 9: Apply Configuration File Changes if Necessary
- If you install a newer version of a package that is already on the system, configuration files that you have modified will remain intact.
- If you uninstall a package, the package manager renames any configuration files you have modified from <file> to <file>.rpmsave. If you then re-install the package (probably to install a new version) the package manager creates a new <file> with applicable defaults. You are responsible for applying any changes captured in the original configuration file to the new configuration file. In the case of Ubuntu and Debian upgrades, you will be prompted if you have made changes to a file for which there is a new version; for details, see Automatic handling of configuration files by dpkg.
For example, if you have modified your zoo.cfg configuration file (/etc/zookeeper/zoo.cfg), the upgrade renames and preserves a copy of your modified zoo.cfg as /etc/zookeeper/zoo.cfg.rpmsave. If you have not already done so, you should now compare this to the new /etc/zookeeper/conf/zoo.cfg, resolve differences, and make any changes that should be carried forward (typically where you have changed property value defaults). Do this for each component you upgrade.
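A quick way to see what changed is to diff the preserved file against the new default; the paths below follow the zoo.cfg example above:
$ diff /etc/zookeeper/zoo.cfg.rpmsave /etc/zookeeper/conf/zoo.cfg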