Start Ambari Server and Ambari Agents.
On the Server host:
ambari-server start
On all of the Agent hosts:
ambari-agent start
Update the repository Base URLs in Ambari Server for the HDP-2.1 stack. Browse to Ambari Web > Admin > Clusters and set the value of the HDP and HDP-UTILS repository Base URLs. For more information about viewing and editing repository Base URLs, see Managing Stack Repositories.
Important For a remote, accessible, public repository, the HDP and HDP-UTILS Base URLs are the same as the baseurl values in the HDP.repo file downloaded in Upgrade the Stack: Step 1. For a local repository, use the local repository Base URL that you configured for the HDP Stack. For links to download the HDP repository files for your version of the Stack, see HDP Repositories.
Using the Ambari Web Services view, start the ZooKeeper service.
If you are upgrading from an HA NameNode configuration, start all JournalNodes. On each JournalNode host, run the following command:
su -l {HDFS_USER} -c "/usr/lib/hadoop/sbin/hadoop-daemon.sh start journalnode"
Important All JournalNodes must be running when performing the upgrade, rollback, or finalization operations. If any JournalNodes are down when running any such operation, the operation will fail.
Because the file system version has now changed, you must start the NameNode manually. On the NameNode host:
su -l {HDFS_USER} -c "export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh start namenode -upgrade"
To check whether the upgrade is in progress, verify that a "/previous" directory has been created in the NameNode and JournalNode directories. The "/previous" directory contains a snapshot of the data before the upgrade.
Note In a NameNode HA configuration, this NameNode will not enter the standby state as usual. Instead, it immediately enters the active state, upgrades its local storage directories, and upgrades the shared edit log. At this point, the standby NameNode in the HA pair is still down and out of sync with the upgraded active NameNode. To re-synchronize the two NameNodes and re-establish HA, re-bootstrap the standby NameNode by running the NameNode with the '-bootstrapStandby' flag. Do NOT start this standby NameNode with the '-upgrade' flag.
su -l {HDFS_USER} -c "hdfs namenode -bootstrapStandby -force"
The bootstrapStandby command downloads the most recent fsimage from the active NameNode into the $dfs.name.dir directory of the standby NameNode. You can enter that directory to make sure the fsimage has been downloaded successfully. After verifying, start the ZKFailoverController via Ambari, then start the standby NameNode via Ambari. You can check the status of both NameNodes using the Web UI.
Prepare the NameNode to work with Ambari:
Open the Ambari Web GUI. If it has been open throughout the process, do a hard reset on your browser to force a reload.
On the Services view, click HDFS to open the HDFS service.
Click View Host to open the NameNode host details page.
Use the drop-down menu to stop the NameNode.
On the Services view, restart the HDFS service. Make sure it passes the ServiceCheck. It is now under Ambari's control.
After the DataNodes are started, HDFS exits safemode. To monitor the status, run the following command:
sudo su -l $HDFS_USER -c "hdfs dfsadmin -safemode get"
Depending on the size of your system, a response may not display for up to 10 minutes. When HDFS exits safemode, the following message displays:
Safe mode is OFF
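Rather than re-running the command by hand, the polling described above can be sketched as a small helper. This is a minimal sketch, not part of the official procedure; the check command is parameterized only so the helper can be exercised without a live cluster, and by default it runs the same dfsadmin command shown above.

```shell
# Poll HDFS safemode status until it reports OFF, with a timeout.
wait_for_safemode_off() {
  local check_cmd="${1:-hdfs dfsadmin -safemode get}"
  local timeout_secs="${2:-600}" interval_secs="${3:-10}" elapsed=0
  while [ "$elapsed" -lt "$timeout_secs" ]; do
    # The real command prints "Safe mode is OFF" once HDFS exits safemode.
    if $check_cmd | grep -q "Safe mode is OFF"; then
      echo "HDFS has left safemode after ${elapsed}s"
      return 0
    fi
    sleep "$interval_secs"
    elapsed=$((elapsed + interval_secs))
  done
  echo "Timed out waiting for HDFS to leave safemode" >&2
  return 1
}
```

On a cluster you would call it with no arguments (as the HDFS service user), letting the default `hdfs dfsadmin -safemode get` run; the 10-minute default timeout matches the response delay noted above.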
Make sure that the HDFS upgrade was successful. Execute steps 2 and 3 in Preparing for the Upgrade to create new versions of the logs and reports, substituting "new" for "old" in the file names as necessary. Compare the old and new versions of the following files:
dfs-old-fsck-1.log versus dfs-new-fsck-1.log. The files should be identical unless the hadoop fsck reporting format has changed in the new version.
dfs-old-lsr-1.log versus dfs-new-lsr-1.log. The files should be identical unless the format of hadoop fs -lsr reporting or the data structures have changed in the new version.
dfs-old-report-1.log versus dfs-new-report-1.log. Make sure all DataNodes previously belonging to the cluster are up and running.
Update the configuration properties required for Application Timeline Server. Using Ambari Web -> Services -> Configs, choose a service, then add/modify values for each of the following properties:
YARN (Custom yarn-site.xml):
yarn.timeline-service.leveldb-timeline-store.path=/var/log/hadoop-yarn/timeline
yarn.timeline-service.leveldb-timeline-store.ttl-interval-ms=300000
If you are upgrading to HDP 2.1.3, use the following setting: yarn.timeline-service.store-class=org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore
If you are upgrading to HDP 2.1.2, use the following setting: yarn.timeline-service.store-class=org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.LeveldbTimelineStore
yarn.timeline-service.ttl-enable=true
yarn.timeline-service.ttl-ms=2678400000
yarn.timeline-service.generic-application-history.store-class=org.apache.hadoop.yarn.server.applicationhistoryservice.NullApplicationHistoryStore
yarn.timeline-service.webapp.address={PUT_THE_FQDN_OF_ATS_HOST_NAME_HERE}:8188
yarn.timeline-service.webapp.https.address={PUT_THE_FQDN_OF_ATS_HOST_NAME_HERE}:8190
yarn.timeline-service.address={PUT_THE_FQDN_OF_ATS_HOST_NAME_HERE}:10200
HIVE (hive-site.xml):
hive.execution.engine=mr
hive.exec.failure.hooks=org.apache.hadoop.hive.ql.hooks.ATSHook
hive.exec.post.hooks=org.apache.hadoop.hive.ql.hooks.ATSHook
hive.exec.pre.hooks=org.apache.hadoop.hive.ql.hooks.ATSHook
hive.tez.container.size={map-container-size} (If mapreduce.map.memory.mb > 2 GB, set it equal to mapreduce.map.memory.mb; otherwise, set it equal to mapreduce.reduce.memory.mb.)
hive.tez.java.opts="-server -Xmx" + Math.round(0.8 * map-container-size) + "m -Djava.net.preferIPv4Stack=true -XX:NewRatio=8 -XX:+UseNUMA -XX:+UseParallelGC"
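The hive.tez sizing rule above can be sketched in shell. This is only an illustration of the arithmetic (container size chosen from the MapReduce memory settings, -Xmx at roughly 80% of it); the function names are hypothetical, the memory values are examples, and integer division is used in place of Math.round, which can differ from the rounded value by 1 MB.

```shell
# Choose hive.tez.container.size: map memory if it exceeds 2 GB,
# otherwise reduce memory (per the rule stated above).
tez_container_size() {
  local map_mb="$1" reduce_mb="$2"
  if [ "$map_mb" -gt 2048 ]; then echo "$map_mb"; else echo "$reduce_mb"; fi
}

# Derive hive.tez.java.opts with -Xmx at ~80% of the container size.
tez_java_opts() {
  local container_mb="$1"
  local xmx=$(( container_mb * 8 / 10 ))   # ~0.8 * size, integer math
  echo "-server -Xmx${xmx}m -Djava.net.preferIPv4Stack=true -XX:NewRatio=8 -XX:+UseNUMA -XX:+UseParallelGC"
}

# Example: mapreduce.map.memory.mb=4096 (> 2 GB), mapreduce.reduce.memory.mb=3072
size=$(tez_container_size 4096 3072)
echo "hive.tez.container.size=${size}"
echo "hive.tez.java.opts=\"$(tez_java_opts "$size")\""
```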
Use the Ambari Web Services view to start YARN.
Use the Ambari Web Services view to start MapReduce2.
Upgrade HBase:
Make sure that all HBase components - RegionServers and HBase Master - are stopped.
Using the Ambari Web Services view, start the ZooKeeper service. Wait until the ZooKeeper service is up and running.
On the HBase Master host, make these configuration changes:
In HBASE_CONFDIR/hbase-site.xml, set the property dfs.client.read.shortcircuit to false.
In the configuration file, find the value of the hbase.tmp.dir property and make sure that the directory exists and is readable and writeable for the HBase service user and group:
chown -R $HBASE_USER:$HADOOP_GROUP {hbase.tmp.dir}
Go to the Upgrade Folder and check the saved global configuration file named global_<$TAG> for the values of the properties hbase_pid_dir and hbase_log_dir. Make sure that those directories are readable and writeable for the HBase service user and group:
chown -R $HBASE_USER:$HADOOP_GROUP $hbase_pid_dir
chown -R $HBASE_USER:$HADOOP_GROUP $hbase_log_dir
Do this on every host where a RegionServer is installed as well as on the HBase Master host.
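The per-host checks above can be sketched as a small helper that verifies a directory exists and is accessible, run as the HBase service user (for example via su -l $HBASE_USER). This is an illustrative sketch, not part of the official procedure; the function name is hypothetical.

```shell
# Check that a directory exists and is readable/writeable (and traversable)
# by the current user; print a diagnostic either way.
dir_usable() {
  local dir="$1"
  [ -d "$dir" ] || { echo "missing: $dir"; return 1; }
  if [ -r "$dir" ] && [ -w "$dir" ] && [ -x "$dir" ]; then
    echo "ok: $dir"
  else
    echo "not readable/writeable: $dir"
    return 1
  fi
}
```

You might run it against hbase.tmp.dir, hbase_pid_dir, and hbase_log_dir on each RegionServer host and the HBase Master host before starting the services.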
Check for HFiles in V1 format. HBase 0.96.0 discontinues support for HFileV1. Before the actual upgrade, run the following command to check if there are HFiles in V1 format:
hbase upgrade -check
HFileV1 was a common format prior to HBase 0.94. You may see output similar to:
Tables Processed:
hdfs://localhost:41020/myHBase/.META.
hdfs://localhost:41020/myHBase/usertable
hdfs://localhost:41020/myHBase/TestTable
hdfs://localhost:41020/myHBase/t
Count of HFileV1: 2
HFileV1:
hdfs://localhost:41020/myHBase/usertable/fa02dac1f38d03577bd0f7e666f12812/family/249450144068442524
hdfs://localhost:41020/myHBase/usertable/ecdd3eaee2d2fcf8184ac025555bb2af/family/249450144068442512
Count of corrupted files: 1
Corrupted Files:
hdfs://localhost:41020/myHBase/usertable/fa02dac1f38d03577bd0f7e666f12812/family/1
Count of Regions with HFileV1: 2
Regions to Major Compact:
hdfs://localhost:41020/myHBase/usertable/fa02dac1f38d03577bd0f7e666f12812
hdfs://localhost:41020/myHBase/usertable/ecdd3eaee2d2fcf8184ac025555bb2af
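If the check output is saved to a file, the region paths listed under "Regions to Major Compact" could be pulled out with a short awk sketch, so they can be fed to major compaction in the hbase shell. This helper and the report file name are purely illustrative, not part of the official procedure.

```shell
# Print the hdfs:// region paths that follow the
# "Regions to Major Compact:" header in a saved check report.
extract_regions_to_compact() {
  awk '/^Regions to Major Compact:/ { grab = 1; next }
       grab && /^hdfs:/ { print; next }
       grab { exit }' "$1"
}
```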
When you run the upgrade check, if "Count of HFileV1" returns any files, start the hbase shell and use major compaction on regions that have HFileV1 format. For example, in the sample output above, you must compact the fa02dac1f38d03577bd0f7e666f12812 and ecdd3eaee2d2fcf8184ac025555bb2af regions.
Upgrade HBase. You must be the HBase service user.
sudo su -l $HBASE_USER -c "hbase upgrade -execute"
Make sure that the output contains the string "Successfully completed Znode upgrade".
Use the Services view to start the HBase service. Make sure that Service Check passes.
Upgrade Oozie.
Note You must replace your Oozie configuration after upgrading.
Perform the following preparation steps on each Oozie server and client:
Copy files from the backup conf folder to the /etc/oozie/conf directory:
cp {temp.folder.name}/oozie-site.xml /etc/oozie/conf
chmod -R 777 /etc/alternatives/oozie-tomcat-conf/conf
rm /usr/lib/oozie/conf
ln -s /etc/oozie/conf /usr/lib/oozie/conf
Create the /usr/lib/oozie/libext-{customer} directory:
mkdir /usr/lib/oozie/libext-{customer}
Grant read/write access to the Oozie user:
chmod -R 777 /usr/lib/oozie/libext-{customer}
Copy the JDBC jar of your Oozie database to both /usr/lib/oozie/libext-{customer} and /usr/lib/oozie/libtools.
Copy these files to the /usr/lib/oozie/libext-{customer} directory:
cp /usr/lib/hadoop/lib/hadoop-lzo*.jar /usr/lib/oozie/libext-{customer}
cp /usr/share/HDP-oozie/ext-2.2.zip /usr/lib/oozie/libext-{customer}
Upgrade steps:
On the Services view, make sure YARN and MapReduce2 are running.
Make sure that the Oozie service is stopped.
Upgrade Oozie. You must be the Oozie service user. On the Oozie host:
sudo su -l $OOZIE_USER -c "/usr/lib/oozie/bin/ooziedb.sh upgrade -run"
Make sure that the output contains the string "Oozie DB has been upgraded to Oozie version 'OOZIE Build Version'".
Prepare the Oozie WAR file. Run as root:
Note The Oozie server must not be running for this step. If you get the message "ERROR: Stop Oozie first", the script still thinks the server is running. Check, and if needed, remove the process id (pid) file indicated in the output.
sudo su -l oozie -c "/usr/lib/oozie/bin/oozie-setup.sh prepare-war -d /usr/lib/oozie/libext-{customer}"
Make sure that the output contains the string "New Oozie WAR file with added".
Using Ambari Web UI Services > Oozie > Configs, expand Advanced, then edit the following properties:
In oozie.service.coord.push.check.requeue.interval, replace the existing property value with the following one: 30000
In oozie.service.SchemaService.wf.ext.schemas, append (using copy/paste) the following string to the existing property value: shell-action-0.2.xsd,oozie-sla-0.1.xsd,oozie-sla-0.2.xsd,hive-action-0.3.xsd
Note If you have customized schemas, append this string to your custom schema name string. Do not overwrite custom schemas.
If you have no customized schemas, you can replace the existing string with the following one:
shell-action-0.1.xsd,email-action-0.1.xsd,hive-action-0.2.xsd,sqoop-action-0.2.xsd,ssh-action-0.1.xsd,distcp-action-0.1.xsd,shell-action-0.2.xsd,oozie-sla-0.1.xsd,oozie-sla-0.2.xsd,hive-action-0.3.xsd
In oozie.service.URIHandlerService.uri.handlers, append the following string to the existing property value: org.apache.oozie.dependency.FSURIHandler,org.apache.oozie.dependency.HCatURIHandler
In oozie.services, append the following string to the existing property value: org.apache.oozie.service.XLogStreamingService,org.apache.oozie.service.JobsConcurrencyService
Note If you have customized properties, append this string to your custom property value string. Do not overwrite custom properties.
If you have no customized properties, you can replace the existing string with the following one:
org.apache.oozie.service.SchedulerService, org.apache.oozie.service.InstrumentationService, org.apache.oozie.service.CallableQueueService, org.apache.oozie.service.UUIDService, org.apache.oozie.service.ELService, org.apache.oozie.service.AuthorizationService, org.apache.oozie.service.UserGroupInformationService, org.apache.oozie.service.HadoopAccessorService, org.apache.oozie.service.URIHandlerService, org.apache.oozie.service.MemoryLocksService, org.apache.oozie.service.DagXLogInfoService, org.apache.oozie.service.SchemaService, org.apache.oozie.service.LiteWorkflowAppService, org.apache.oozie.service.JPAService, org.apache.oozie.service.StoreService, org.apache.oozie.service.CoordinatorStoreService, org.apache.oozie.service.SLAStoreService, org.apache.oozie.service.DBLiteWorkflowStoreService, org.apache.oozie.service.CallbackService, org.apache.oozie.service.ActionService, org.apache.oozie.service.ActionCheckerService, org.apache.oozie.service.RecoveryService, org.apache.oozie.service.PurgeService, org.apache.oozie.service.CoordinatorEngineService, org.apache.oozie.service.BundleEngineService, org.apache.oozie.service.DagEngineService, org.apache.oozie.service.CoordMaterializeTriggerService, org.apache.oozie.service.StatusTransitService, org.apache.oozie.service.PauseTransitService, org.apache.oozie.service.GroupsService, org.apache.oozie.service.ProxyUserService, org.apache.oozie.service.XLogStreamingService, org.apache.oozie.service.JobsConcurrencyService
In oozie.services.ext, append the following string to the existing property value: org.apache.oozie.service.PartitionDependencyManagerService,org.apache.oozie.service.HCatAccessorService
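The "append, do not overwrite" rule for these comma-separated property values can be sketched as a small helper that adds entries only if they are not already present. This is purely illustrative string handling; the function name is hypothetical and the example values are abbreviated, not the real property strings.

```shell
# Append comma-separated entries to an existing comma-separated value,
# skipping entries that are already present.
append_csv() {
  local current="$1" additions="$2" out="$1"
  local IFS=','
  for item in $additions; do
    case ",$out," in
      *",$item,"*) ;;              # already present, keep as-is
      *) out="$out,$item" ;;
    esac
  done
  echo "$out"
}
```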
After modifying all properties on the Oozie Configs page, scroll down, then choose Save to update oozie-site.xml with the modified configurations.
Replace the content of /user/oozie/share in HDFS. On the Oozie server host:
Extract the Oozie sharelib into a tmp folder:
mkdir -p /tmp/oozie_tmp
cp /usr/lib/oozie/oozie-sharelib.tar.gz /tmp/oozie_tmp
cd /tmp/oozie_tmp
tar xzvf oozie-sharelib.tar.gz
Back up the /user/oozie/share folder in HDFS and then delete it. If you have any custom files in this folder, back them up separately and then add them back after the share folder is updated.
su -l $HDFS_USER -c "hdfs dfs -copyToLocal /user/oozie/share /tmp/oozie_tmp/oozie_share_backup"
su -l $HDFS_USER -c "hdfs dfs -rm -r /user/oozie/share"
Add the latest share libs that you extracted in step 1. After you have added the files, modify ownership and permissions:
su -l $HDFS_USER -c "hdfs dfs -copyFromLocal /tmp/oozie_tmp/share /user/oozie/."
su -l $HDFS_USER -c "hdfs dfs -chown -R oozie:hadoop /user/oozie"
su -l $HDFS_USER -c "hdfs dfs -chmod -R 755 /user/oozie"
Use the Services view to start the Oozie service. Make sure that ServiceCheck passes for Oozie.
Update WebHCat.
Modify the webhcat-site config type. On the Ambari Server host, use /var/lib/ambari-server/resources/scripts/configs.sh to modify the templeton.storage.class configuration property:
configs.sh set $HOSTNAME $CLUSTERNAME $CONFIGURATION-TYPE $PROPERTY-NAME $PROPERTY-VALUE
For example:
configs.sh set <yourhostname> <yourclustername> webhcat-site "templeton.storage.class" "org.apache.hive.hcatalog.templeton.tool.ZooKeeperStorage"
Update the Pig and Hive tar bundles, by updating the following files:
/apps/webhcat/pig.tar.gz
/apps/webhcat/hive.tar.gz
Note You will find these files on a host where WebHCat is installed.
For example, to update a *.tar.gz file:
Move the file to a local directory.
su -l $HCAT_USR -c "hadoop --config /etc/hadoop/conf fs -copyToLocal /apps/webhcat/*.tar.gz ${local_backup_dir}"
Remove the old file.
su -l $HCAT_USR -c "hadoop --config /etc/hadoop/conf fs -rm /apps/webhcat/*.tar.gz"
Copy the new file.
su -l $HCAT_USR -c "hadoop --config /etc/hadoop/conf fs -copyFromLocal /usr/share/HDP-webhcat/*.tar.gz /apps/webhcat"
Update the /apps/webhcat/hadoop-streaming.jar file.
Move the file to a local directory.
su -l $HCAT_USR -c "hadoop --config /etc/hadoop/conf fs -copyToLocal /apps/webhcat/hadoop-streaming*.jar ${local_backup_dir}"
Remove the old file.
su -l $HCAT_USR -c "hadoop --config /etc/hadoop/conf fs -rm /apps/webhcat/hadoop-streaming*.jar"
Copy the new hadoop-streaming.jar file.
su -l $HCAT_USR -c "hadoop --config /etc/hadoop/conf fs -copyFromLocal /usr/lib/hadoop-mapreduce/hadoop-streaming*.jar /apps/webhcat"
Make sure Ganglia no longer attempts to monitor JobTracker.
Make sure Ganglia is stopped.
Log into the host where JobTracker was installed (and where ResourceManager is installed after the upgrade).
Back up the folder /etc/ganglia/hdp/HDPJobTracker.
Remove the folder /etc/ganglia/hdp/HDPJobTracker.
Remove the folder $ganglia_runtime_dir/HDPJobTracker.
Note For the value of $ganglia_runtime_dir, check the saved global configuration file global_<$TAG> in the Upgrade Folder.
Use the Services view to start the remaining services back up.
The upgrade is now fully functional but not yet finalized. Using the finalize command removes the previous version of the NameNode and DataNode storage directories.
Important After the upgrade is finalized, the system cannot be rolled back. Usually this step is not taken until thorough testing of the upgrade has been performed.
The upgrade must be finalized before another upgrade can be performed.
Note Directories used by Hadoop 1 services set in /etc/hadoop/conf/taskcontroller.cfg are not automatically deleted after upgrade. Administrators can choose to delete these directories after the upgrade.
To finalize the upgrade:
sudo su -l $HDFS_USER -c "hdfs dfsadmin -finalizeUpgrade"
where $HDFS_USER is the HDFS service user (by default, hdfs).
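As a quick sanity check after finalization, the "/previous" snapshot directory described earlier should be gone from each NameNode and DataNode storage directory. The following sketch is illustrative only; the function name is hypothetical, and the directories to check come from your dfs.name.dir and DataNode data directory settings.

```shell
# Report whether a storage directory still contains the pre-upgrade
# "previous" snapshot (present = upgrade not yet finalized).
is_finalized() {
  local storage_dir="$1"
  if [ -d "$storage_dir/previous" ]; then
    echo "not finalized: $storage_dir/previous still exists"
    return 1
  fi
  echo "finalized: $storage_dir"
}
```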