Getting Ready to Upgrade
The HDP Stack upgrade involves upgrading from HDP 2.2 to the new HDP version and adding the new services. These instructions change your configurations.
Note You must use kinit before running the commands as any particular user.
Hardware recommendations
Although there is no single hardware requirement for installing HDP, there are some basic guidelines. The HDP packages for a complete installation consume about 6.5 GB of disk space.
The first step is to ensure you keep a backup copy of your HDP 2.2 configurations.
Back up the HDP directories for any Hadoop components you have installed; a backup sketch follows the list below.
The following is a list of all HDP directories:
/etc/hadoop/conf
/etc/hbase/conf
/etc/phoenix/conf
/etc/hive-hcatalog/conf
/etc/hive-webhcat/conf
/etc/accumulo/conf
/etc/hive/conf
/etc/pig/conf
/etc/sqoop/conf
/etc/flume/conf
/etc/mahout/conf
/etc/oozie/conf
/etc/hue/conf
/etc/knox/conf
/etc/zookeeper/conf
/etc/tez/conf
/etc/falcon/conf
/etc/slider/conf/
/etc/storm/conf/
/etc/storm-slider-client/conf/
/etc/spark/conf/
/etc/ranger/admin/conf, /etc/ranger/usersync/conf
If Ranger is installed, also back up the install.properties files for all the plugins, Ranger Admin, and Ranger Usersync.
Optional: Back up your userlogs directories, ${mapred.local.dir}/userlogs.
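The guide does not mandate a particular backup method; the following is a minimal sketch that archives whichever of the configuration directories listed above exist on a host. The destination directory /root/hdp22-conf-backup and the subset of directories shown are illustrative choices; extend the list to cover every component you have installed.
# Illustrative sketch only: archive the HDP 2.2 configuration directories present on this host.
# /root/hdp22-conf-backup is an arbitrary destination; extend the directory list as needed.
mkdir -p /root/hdp22-conf-backup
for d in /etc/hadoop/conf /etc/hbase/conf /etc/hive/conf /etc/oozie/conf \
         /etc/zookeeper/conf /etc/knox/conf /etc/ranger/admin/conf /etc/ranger/usersync/conf; do
    [ -d "$d" ] && tar -czf "/root/hdp22-conf-backup/$(echo "$d" | tr '/' '_').tar.gz" "$d"
done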
Oozie runs a periodic purge on the shared library directory. The purge can delete libraries that are needed by jobs that started before the upgrade began and that finish after the upgrade. To minimize the chance of job failures, you should extend the oozie.service.ShareLibService.purge.interval and oozie.service.ShareLibService.temp.sharelib.retention.days settings.
Add the following content to the oozie-site.xml file prior to performing the upgrade:
<property>
    <name>oozie.service.ShareLibService.purge.interval</name>
    <value>1000</value>
    <description>
        How often, in days, Oozie should check for old ShareLibs and LauncherLibs to purge from HDFS.
    </description>
</property>
<property>
    <name>oozie.service.ShareLibService.temp.sharelib.retention.days</name>
    <value>1000</value>
    <description>
        ShareLib retention time in days.
    </description>
</property>
Stop all long-running applications deployed using Slider:
su - yarn -c "/usr/hdp/current/slider-client/bin/slider list"
For each application returned by the previous command, run:
su - yarn -c "/usr/hdp/current/slider-client/bin/slider stop <app_name>"
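A minimal sketch of stopping several applications in one pass is shown below; the application names are hypothetical placeholders that you would replace with the names printed by slider list.
# Illustrative sketch only: stop each Slider application by name.
# APP_NAMES is a hypothetical placeholder; replace it with the application
# names reported by the "slider list" command above.
APP_NAMES="app1 app2"
for app_name in $APP_NAMES; do
    su - yarn -c "/usr/hdp/current/slider-client/bin/slider stop $app_name"
done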
Stop all services (including MapReduce) except HDFS, ZooKeeper, and Ranger, and client applications deployed on HDFS.
See Stopping HDP Services for more information.
Component
Command
Accumulo
/usr/hdp/current/accumulo-client/bin/stop-all.sh
Knox
cd $GATEWAY_HOME
su - knox -c "bin/gateway.sh stop"
Falcon
su - falcon -c "/usr/hdp/current/falcon-server/bin/falcon-stop"
Oozie
su - oozie -c "/usr/hdp/current/oozie-server/bin/oozie-stop.sh"
WebHCat
su - webhcat -c "/usr/hdp/current/hive-webhcat/sbin/webhcat_server.sh stop"
Hive
Run this command on the Hive Metastore and Hive Server2 host machine:
ps aux | awk '{print $1,$2}' | grep hive | awk '{print $2}' | xargs kill >/dev/null 2>&1
Or you can use the following:
killall -u hive -s 15 java
HBase RegionServers
su - hbase -c "/usr/hdp/current/hbase-regionserver/bin/hbase-daemon.sh --config /etc/hbase/conf stop regionserver"
HBase Master host machine
su - hbase -c "/usr/hdp/current/hbase-master/bin/hbase-daemon.sh --config /etc/hbase/conf stop master"
YARN & Mapred History
Run this command on all NodeManagers:
su - yarn -c "export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec && /usr/hdp/current/hadoop-yarn-nodemanager/sbin/yarn-daemon.sh --config /etc/hadoop/conf stop nodemanager"
Run this command on the History Server host machine:
su - mapred -c "export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec && /usr/hdp/current/hadoop-mapreduce-historyserver/sbin/mr-jobhistory-daemon.sh --config /etc/hadoop/conf stop historyserver"
Run this command on the ResourceManager host machine(s):
su - yarn -c "export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec && /usr/hdp/current/hadoop-yarn-resourcemanager/sbin/yarn-daemon.sh --config /etc/hadoop/conf stop resourcemanager"
Run this command on the ResourceManager host machine:
su - yarn -c "export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec && /usr/hdp/current/hadoop-yarn-timelineserver/sbin/yarn-daemon.sh --config /etc/hadoop/conf stop timelineserver"
Run this command on the YARN Timeline Server node:
su -l yarn -c "export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec && /usr/hdp/current/hadoop-yarn-timelineserver/sbin/yarn-daemon.sh --config /etc/hadoop/conf stop timelineserver"
Storm
Deactivate all running topologies:
storm kill topology-name
Delete all states under ZooKeeper:
/usr/hdp/current/zookeeper-client/bin/zkCli.sh (in a secure environment, optionally specify -server zk.server:port)
rmr /storm
Delete all states under the storm-local directory:
rm -rf <value of storm.local.dir>
Stop Storm services on the storm node. If you are using the supervisord process control system, use:
sudo service supervisord stop
Otherwise, find the Storm processes and kill them:
ps -ef | grep -i storm.home
Stop ZooKeeper Services on the storm node:
su - zookeeper -c "export ZOOCFGDIR=/etc/zookeeper/conf; export ZOOCFG=zoo.cfg; source /etc/zookeeper/conf/zookeeper-env.sh; /usr/hdp/current/zookeeper-server/bin/zkServer.sh stop"
Spark (History server)
su - spark -c "/usr/hdp/current/spark-client/sbin/stop-history-server.sh"
If you have the Hive component installed, back up the Hive Metastore database.
The following instructions are provided for your convenience. For the latest backup instructions, see your database documentation.
Table 3.1. Hive Metastore Database Backup and Restore
MySQL
Backup:
mysqldump $dbname > $outputfilename.sql
For example:
mysqldump hive > /tmp/mydir/backup_hive.sql
Restore:
mysql $dbname < $inputfilename.sql
For example:
mysql hive < /tmp/mydir/backup_hive.sql
PostgreSQL
Backup:
sudo -u $username pg_dump $databasename > $outputfilename.sql
For example:
su - postgres -c "pg_dump hive > /tmp/mydir/backup_hive.sql"
Restore:
sudo -u $username psql $databasename < $inputfilename.sql
For example:
sudo -u postgres psql hive < /tmp/mydir/backup_hive.sql
Oracle
Backup (export the database):
exp username/password@database full=yes file=output_file.dmp
Restore (import the database):
imp username/password@database file=input_file.dmp
If you have the Oozie component installed, back up the Oozie metastore database.
These instructions are provided for your convenience. Check your database documentation for the latest backup instructions.
Table 3.2. Oozie Metastore Database Backup and Restore
MySQL
Backup:
mysqldump $dbname > $outputfilename.sql
For example:
mysqldump oozie > /tmp/mydir/backup_oozie.sql
Restore:
mysql $dbname < $inputfilename.sql
For example:
mysql oozie < /tmp/mydir/backup_oozie.sql
PostgreSQL
Backup:
sudo -u $username pg_dump $databasename > $outputfilename.sql
For example:
su - postgres -c "pg_dump oozie > /tmp/mydir/backup_oozie.sql"
Restore:
sudo -u $username psql $databasename < $inputfilename.sql
For example:
sudo -u postgres psql oozie < /tmp/mydir/backup_oozie.sql
Oracle
Backup (export the database):
exp username/password@database full=yes file=output_file.dmp
Restore (import the database):
imp username/password@database file=input_file.dmp
Optional: Back up the Hue database.
The following instructions are provided for your convenience. For the latest backup instructions, please see your database documentation. For database types that are not listed below, follow your vendor-specific instructions.
Table 3.3. Hue Database Backup and Restore
MySQL
Backup:
mysqldump $dbname > $outputfilename.sql
For example:
mysqldump hue > /tmp/mydir/backup_hue.sql
Restore:
mysql $dbname < $inputfilename.sql
For example:
mysql hue < /tmp/mydir/backup_hue.sql
PostgreSQL
Backup:
sudo -u $username pg_dump $databasename > $outputfilename.sql
For example:
sudo -u postgres pg_dump hue > /tmp/mydir/backup_hue.sql
Restore:
sudo -u $username psql $databasename < $inputfilename.sql
For example:
sudo -u postgres psql hue < /tmp/mydir/backup_hue.sql
Oracle
Backup: connect to the Oracle database using sqlplus, then export the database. For example:
exp username/password@database full=yes file=output_file.dmp
Restore (import the database). For example:
imp username/password@database file=input_file.dmp
SQLite
Backup:
/etc/init.d/hue stop
su $HUE_USER
mkdir ~/hue_backup
sqlite3 desktop.db .dump > ~/hue_backup/desktop.bak
/etc/init.d/hue start
Restore:
/etc/init.d/hue stop
cd /var/lib/hue
mv desktop.db desktop.db.old
sqlite3 desktop.db < ~/hue_backup/desktop.bak
/etc/init.d/hue start
Back up the Knox data/security directory.
cp -RL /etc/knox/data/security ~/knox_backup
Save the namespace by executing the following commands:
su - hdfs
hdfs dfsadmin -safemode enter
hdfs dfsadmin -saveNamespace
Note In secure mode, you must have Kerberos credentials for the hdfs user.
Run the fsck command as the HDFS service user and fix any errors. (The resulting file contains a complete block map of the file system.)
su - hdfs -c "hdfs fsck / -files -blocks -locations > dfs-old-fsck-1.log"
Note In secure mode, you must have Kerberos credentials for the hdfs user.
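As an optional, illustrative check (not part of the original procedure), you can scan the fsck log for reported problems; the exact strings depend on the fsck output of your Hadoop version.
# Illustrative check only: look for problem blocks and the overall health line.
# The strings CORRUPT, MISSING, and HEALTHY match typical hdfs fsck output,
# but the exact wording can vary between Hadoop versions.
grep -iE 'corrupt|missing' dfs-old-fsck-1.log
grep -i 'healthy' dfs-old-fsck-1.log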
Use the following instructions to compare status before and after the upgrade.
The following commands must be executed by the user running the HDFS service (by default, the user is hdfs).
Capture the complete namespace of the file system. (The following command does a recursive listing of the root file system.)
Important Make sure the NameNode is started.
su - hdfs -c "hdfs dfs -ls -R / > dfs-old-lsr-1.log"
Note In secure mode, you must have Kerberos credentials for the hdfs user.
Run the report command to create a list of DataNodes in the cluster.
su - hdfs -c "hdfs dfsadmin -report > dfs-old-report-1.log"
Optional: You can copy all of the data, or only the data that would be unrecoverable if lost, from HDFS to a local file system or to a backup instance of HDFS.
Optional: You can also repeat steps 3 (a) through 3 (c) and compare the results with the previous run to ensure that the state of the file system remains unchanged.
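One hedged way to make that comparison, assuming you re-run the same commands after the upgrade and write the output to new files (the *-2.log names are arbitrary):
# Illustrative comparison only; the *-2.log file names are arbitrary choices for the post-upgrade runs.
su - hdfs -c "hdfs dfs -ls -R / > dfs-new-lsr-2.log"
su - hdfs -c "hdfs dfsadmin -report > dfs-new-report-2.log"
diff dfs-old-lsr-1.log dfs-new-lsr-2.log
# The dfsadmin report will naturally differ in usage counters; review the diff for
# missing DataNodes rather than expecting identical output.
diff dfs-old-report-1.log dfs-new-report-2.log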
Finalize any prior HDFS upgrade, if you have not done so already.
su - hdfs -c "hdfs dfsadmin -finalizeUpgrade"
Note In secure mode, you must have Kerberos credentials for the hdfs user.
Stop remaining services (HDFS, ZooKeeper, and Ranger).
See Stopping HDP Services for more information.
Component
Command
HDFS
On all DataNodes:
If you are running a secure cluster, run the following command as root:
/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /etc/hadoop/conf stop datanode
Otherwise:
su - hdfs -c "/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /etc/hadoop/conf stop datanode"
If you are not running a highly available HDFS cluster, stop the Secondary NameNode by executing this command on the Secondary NameNode host machine:
su - hdfs -c "/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /etc/hadoop/conf stop secondarynamenode"
On the NameNode host machine(s)
su - hdfs -c "/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /etc/hadoop/conf stop namenode"
If you are running NameNode HA, stop the ZooKeeper Failover Controllers (ZKFC) by executing this command on the NameNode host machine:
su - hdfs -c "/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /etc/hadoop/conf stop zkfc"
If you are running NameNode HA, stop the JournalNodes by executing this command on the JournalNode host machines:
su - hdfs -c "/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /etc/hadoop/conf stop journalnode"
ZooKeeper Host machines
su - zookeeper -c "/usr/hdp/current/zookeeper-server/bin/zookeeper-server stop"
Ranger (XA Secure)
service ranger-admin stop
service ranger-usersync stop
Back up your NameNode metadata.
Note It is recommended to back up the full /hadoop/hdfs/namenode path.
Copy the following checkpoint files into a backup directory; a copy sketch follows below.
The NameNode metadata is stored in a directory specified in the hdfs-site.xml configuration file under the configuration value "dfs.namenode.name.dir".
For example, if the configuration value is:
<property>
    <name>dfs.namenode.name.dir</name>
    <value>/hadoop/hdfs/namenode</value>
</property>
Then the NameNode metadata files are all housed inside the directory /hadoop/hdfs/namenode.
Store the layoutVersion of the NameNode:
${dfs.namenode.name.dir}/current/VERSION
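The guide does not show a specific copy command for this step; as a minimal sketch, assuming dfs.namenode.name.dir is /hadoop/hdfs/namenode and /root/nn-metadata-backup is an arbitrary backup location:
# Illustrative sketch only: copy the checkpoint files (fsimage*, VERSION, edits*) to a backup directory.
# /root/nn-metadata-backup is an arbitrary, illustrative path.
mkdir -p /root/nn-metadata-backup
cp -r /hadoop/hdfs/namenode/current /root/nn-metadata-backup/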
Verify that the edit logs in ${dfs.namenode.name.dir}/current/edits* are empty.
Run the following command on the active NameNode machine:
hdfs oev -i ${dfs.namenode.name.dir}/current/edits_inprogress_* -o edits.out
Verify the edits.out file. It should contain only the OP_START_LOG_SEGMENT transaction. For example:
<?xml version="1.0" encoding="UTF-8"?>
<EDITS>
  <EDITS_VERSION>-56</EDITS_VERSION>
  <RECORD>
    <OPCODE>OP_START_LOG_SEGMENT</OPCODE>
    <DATA>
      <TXID>5749</TXID>
    </DATA>
  </RECORD>
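As an optional, illustrative check (not part of the original text), you can list the distinct opcodes recorded in edits.out and confirm that only OP_START_LOG_SEGMENT appears:
# Illustrative check only: list the distinct opcodes in edits.out.
# Any opcode other than OP_START_LOG_SEGMENT means the edit log is not empty.
grep -o '<OPCODE>[^<]*</OPCODE>' edits.out | sort -u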
If edits.out has transactions other than OP_START_LOG_SEGMENT, run the following steps and then verify edit logs are empty.
Start the existing version NameNode.
Ensure there is a new FS image file.
Shut the NameNode down:
hdfs dfsadmin -saveNamespace
Rename or delete any paths that are reserved in the new version of HDFS.
When upgrading to a new version of HDFS, it is necessary to rename or delete any paths that are reserved in the new version of HDFS. If the NameNode encounters a reserved path during upgrade, it prints an error such as the following:
/.reserved is a reserved path and .snapshot is a reserved path component in this version of HDFS. Please rollback and delete or rename this path, or upgrade with the -renameReserved key-value pairs option to automatically rename these paths during upgrade.
Specifying -upgrade -renameReserved with optional key-value pairs causes the NameNode to automatically rename any reserved paths found during startup.
For example, to rename all paths named .snapshot to .my-snapshot and change paths named .reserved to .my-reserved, specify:
-upgrade -renameReserved .snapshot=.my-snapshot,.reserved=.my-reserved
If no key-value pairs are specified with -renameReserved, the NameNode suffixes reserved paths with:
.<LAYOUT-VERSION>.UPGRADE_RENAMED
For example:
.snapshot.-51.UPGRADE_RENAMED
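These flags are passed when the NameNode is started in upgrade mode later in the upgrade procedure, not at this point. The following is only a hedged sketch of where the flags would appear; use the exact NameNode start command given in the upgrade steps for your HDP version.
# Illustrative sketch only: the rename flags accompany the -upgrade option when the
# NameNode is started for the upgrade, not as a standalone command at this stage.
su - hdfs -c "hdfs namenode -upgrade -renameReserved .snapshot=.my-snapshot,.reserved=.my-reserved"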
Note We recommend that you perform a -saveNamespace before renaming paths (running -saveNamespace appears in a previous step in this procedure). This is because a data inconsistency can result if an edit log operation refers to the destination of an automatically renamed file.
Also note that running -renameReserved renames all applicable existing files in the cluster. This may impact cluster applications.
Upgrade the JDK on all nodes to JDK 7 before upgrading HDP. If you are already running JDK 7, no action is required.
Important If you want to upgrade from JDK 7 to JDK 8, you must update the HDP stack before upgrading to JDK 8. The high-level process is as follows:
Run HDP 2.2 with JDK 7.
Perform the stack upgrade to the new HDP version. See Upgrade HDP 2.2 Components.
Upgrade JDK from 7 to 8.
Optional: If you plan to use the Falcon service, you must install the Berkeley DB JAR file on all Falcon server hosts on the cluster, prior to upgrading to HDP 2.5.5 or later.
Log in to the Falcon server as user falcon.
su - falcon
Download the required Berkeley DB implementation file.
wget -O je-5.0.73.jar http://search.maven.org/remotecontent?filepath=com/sleepycat/je/5.0.73/je-5.0.73.jar
Copy the file to the Falcon library folder.
cp je-5.0.73.jar /usr/hdp/<version>/falcon/webapp/falcon/WEB-INF/lib
Set permissions on the file to owner=read/write, group=read, other=read.
chmod 644 /usr/hdp/<version>/falcon/webapp/falcon/WEB-INF/lib/je-5.0.73.jar
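As an optional, illustrative check (not part of the original steps), confirm that the JAR is in place with the expected 644 (rw-r--r--) permissions:
# Illustrative check only: verify the copied JAR and its permissions.
# Replace <version> with your actual HDP version directory.
ls -l /usr/hdp/<version>/falcon/webapp/falcon/WEB-INF/lib/je-5.0.73.jar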