Deploying HDFS High Availability
After you have set all of the necessary configuration options, you are ready to start the JournalNodes and the two HA NameNodes.
- If you are setting up a new HDFS cluster, first format the NameNode you will use as your primary NameNode; see Formatting the NameNode.
- Make sure you have performed all the configuration and setup tasks described under Configuring Hardware for HDFS HA and Configuring Software for HDFS HA, including initializing the HA state in ZooKeeper if you are deploying automatic failover. (Example commands for both tasks follow this list.)
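As a rough sketch, these two preparation steps map to the following commands; both assume the standard packaging and that you run them as the hdfs superuser, so adjust for your environment:
$ sudo -u hdfs hdfs namenode -format    (format the primary NameNode; new clusters only)
$ sudo -u hdfs hdfs zkfc -formatZK    (initialize the HA state in ZooKeeper; automatic failover only)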
Install and Start the JournalNodes
- Install the JournalNode daemons on each of the machines where they will run.
To install JournalNode on Red Hat-compatible systems:
$ sudo yum install hadoop-hdfs-journalnode
To install JournalNode on Ubuntu and Debian systems:
$ sudo apt-get install hadoop-hdfs-journalnode
To install JournalNode on SLES systems:
$ sudo zypper install hadoop-hdfs-journalnode
- Start the JournalNode daemons on each of the machines where they will run:
$ sudo service hadoop-hdfs-journalnode start
Wait for the daemons to start before starting the NameNodes.
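To confirm that a JournalNode daemon is up on a given machine, you can check for its process, for example:
$ sudo jps | grep JournalNode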
Initialize the Shared Edits Directory
If you are converting a non-HA NameNode to HA, run the following command on the primary NameNode, before starting it, to initialize the JournalNodes with the edits data from the local NameNode metadata directories:
$ hdfs namenode -initializeSharedEdits
Start the NameNodes
- Start the primary (formatted) NameNode:
$ sudo service hadoop-hdfs-namenode start
- Start the standby NameNode:
$ sudo -u hdfs hdfs namenode -bootstrapStandby
$ sudo service hadoop-hdfs-namenode start
Note: If Kerberos is enabled, do not use commands in the form sudo -u <user> <command>; they will fail with a security error. Instead, use one of the following commands:
$ kinit <user> (if you are using a password)
$ kinit -kt <keytab> <principal> (if you are using a keytab)
and then, for each command executed by this user:
$ <command>
Starting the standby NameNode with the -bootstrapStandby option copies the contents of the primary NameNode's metadata directories (including the namespace information and the most recent checkpoint) to the standby NameNode. (The location of the directories containing the NameNode metadata is configured via dfs.namenode.name.dir and/or dfs.namenode.edits.dir.)
You can visit each NameNode's web page by browsing to its configured HTTP address. Next to the configured address, the page shows the HA state of the NameNode ("Standby" or "Active"). Whenever an HA NameNode starts and automatic failover is not enabled, it is initially in the Standby state. If automatic failover is enabled, the first NameNode that is started becomes active.
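You can also query the HA state from the command line with the haadmin tool. A minimal example, assuming nn1 is one of the NameNode IDs defined in dfs.ha.namenodes.<nameservice> (the Kerberos note above applies here as well):
$ sudo -u hdfs hdfs haadmin -getServiceState nn1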
Restart Services
If you are converting from a non-HA to an HA configuration, you need to restart the DataNodes, plus either the JobTracker and TaskTrackers (if you are using MRv1) or the ResourceManager, NodeManagers, and JobHistory Server (if you are using YARN):
On each DataNode:
$ sudo service hadoop-hdfs-datanode start
On each TaskTracker system (MRv1):
$ sudo service hadoop-0.20-mapreduce-tasktracker start
On the JobTracker system (MRv1):
$ sudo service hadoop-0.20-mapreduce-jobtracker start
Verify that the JobTracker and TaskTracker started properly:
$ sudo jps | grep Tracker
On the ResourceManager system (YARN):
$ sudo service hadoop-yarn-resourcemanager start
On each NodeManager system (YARN; typically the same ones where DataNode service runs):
$ sudo service hadoop-yarn-nodemanager start
On the MapReduce JobHistory Server system (YARN):
$ sudo service hadoop-mapreduce-historyserver start
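Similarly, you can confirm that the YARN daemons started properly, for example:
$ sudo jps | grep -E 'ResourceManager|NodeManager|JobHistoryServer'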
Deploy Automatic Failover
If you have configured automatic failover using the ZooKeeper FailoverController (ZKFC), you must install and start the zkfc daemon on each of the machines that runs a NameNode. Proceed as follows.
To install ZKFC on Red Hat-compatible systems:
$ sudo yum install hadoop-hdfs-zkfc
To install ZKFC on Ubuntu and Debian systems:
$ sudo apt-get install hadoop-hdfs-zkfc
To install ZKFC on SLES systems:
$ sudo zypper install hadoop-hdfs-zkfc
To start the zkfc daemon:
$ sudo service hadoop-hdfs-zkfc start
You do not need to start the ZKFC and NameNode daemons in any particular order; on any given node you can start the ZKFC before or after its corresponding NameNode.
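To confirm that the ZKFC is running on a NameNode host, you can check for its process; in the standard packaging the ZKFC appears in jps output as DFSZKFailoverController:
$ sudo jps | grep DFSZKFailoverController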
You should add monitoring on each host that runs a NameNode to ensure that the ZKFC remains running. During some types of ZooKeeper failures, for example, the ZKFC may exit unexpectedly and should be restarted to ensure that the system is ready for automatic failover.
Additionally, you should monitor each of the servers in the ZooKeeper quorum. If the ZooKeeper cluster crashes, no automatic failovers will be triggered; however, HDFS will continue to run without any impact, and when ZooKeeper is restarted, HDFS will reconnect with no issues.
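One lightweight way to check a quorum member is ZooKeeper's four-letter-word interface; for example, assuming a ZooKeeper server at zk1.example.com on port 2181 (a healthy server replies imok; recent ZooKeeper releases require these commands to be whitelisted via 4lw.commands.whitelist):
$ echo ruok | nc zk1.example.com 2181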
Verifying Automatic Failover
After the initial deployment of a cluster with automatic failover enabled, you should test its operation. To do so, first locate the active NameNode. As mentioned above, you can tell which node is active by visiting the NameNode web interfaces.
Once you have located your active NameNode, you can cause a failure on that node. For example, you can use kill -9 <pid of NN> to simulate a JVM crash. Or you can power-cycle the machine or its network interface to simulate different kinds of outages. After you trigger the outage you want to test, the other NameNode should automatically become active within several seconds. The amount of time required to detect a failure and trigger a failover depends on the configuration of ha.zookeeper.session-timeout.ms, but defaults to 5 seconds.
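As a sketch, a minimal test sequence might look like the following; nn2 stands in for the NameNode ID of the surviving NameNode, and the Kerberos note above applies:
$ sudo jps | grep NameNode    (on the active NameNode host, find its pid)
$ sudo kill -9 <pid of NN>    (simulate a JVM crash)
$ sudo -u hdfs hdfs haadmin -getServiceState nn2    (after a few seconds, should report "active")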
If the test does not succeed, you may have a misconfiguration. Check the logs for the zkfc daemons as well as the NameNode daemons in order to further diagnose the issue.