HDFS HA

You must verify HDFS HA and configure the Failover Controller.

Perform the following steps:
  1. Cloudera Manager does not support comma-separated entries for the `dfs.data.transfer.protection` and `hadoop.rpc.protection` parameters; review their values. If TLS/SSL is not enabled, click Undo to clear the selections.
  2. Start the HDFS service.
    If the Failover Controller fails to start, continue with step 3 or step 4.
  3. Format the ZooKeeper Failover Controller ZNode. SSH to the Failover Controller host and run the following commands, for example:
    `cd /var/run/cloudera-scm-agent/process/<xx>-hdfs-FAILOVERCONTROLLER`
    `/opt/cloudera/parcels/CDH/bin/hdfs --config /var/run/cloudera-scm-agent/process/<xx>-hdfs-FAILOVERCONTROLLER/ zkfc -formatZK`
    1. Check the log output for success. A possible error is: `ERROR tools.DFSZKFailoverController: DFSZKFailOverController exiting due to earlier exception java.io.IOException: Running in secure mode, but config doesn't have a keytab`.
    2. If this error appears, the ZooKeeper ZNodes were not created. You must create them manually so that the Failover Controllers can start and an active NameNode can be elected.
    3. Using the zookeeper-client, open a ZooKeeper shell: `zookeeper-client -server <a_zk_server>`
    4. Create the required ZNodes:
      `create /hadoop-ha`
      `create /hadoop-ha/<namenode_namespace>`
  4. As an alternative to step 3, initialize the ZNode from Cloudera Manager:
    1. Log in to Cloudera Manager.
    2. Navigate to Clusters.
    3. Click HDFS.
    4. Go to the Configuration tab.
    5. Under Filters > Scope, click Failover Controller.
    6. Navigate to the Failover Controller instance.
    7. Click Initialize Automatic Failover ZNode.
  5. Start the HDFS service again.
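The manual format in step 3 can be sketched as a small script. This is a sketch, not a definitive implementation: the `XX` in the process directory name is a placeholder for the per-host process number, and the parcel path assumes a default CDH parcel install.

```shell
#!/usr/bin/env sh
# Sketch of step 3, assuming a CDH parcel install. XX in the process
# directory name varies per host and must be replaced with the real value.
PROC_DIR=/var/run/cloudera-scm-agent/process/XX-hdfs-FAILOVERCONTROLLER
HDFS_BIN=/opt/cloudera/parcels/CDH/bin/hdfs

if [ -d "$PROC_DIR" ] && [ -x "$HDFS_BIN" ]; then
  # Format the ZKFC ZNode using the Failover Controller's own config directory.
  "$HDFS_BIN" --config "$PROC_DIR" zkfc -formatZK
else
  echo "skipping: Failover Controller process dir or hdfs binary not found" >&2
fi
```

Guarding on the directory and binary keeps the script safe to run on a host that is not actually a Failover Controller.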
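If the format fails with the keytab error, the manual ZNode creation from steps 3.3 and 3.4 can also be done non-interactively by feeding the ZooKeeper shell on stdin. The server address and nameservice name below are hypothetical placeholders; substitute your cluster's values.

```shell
#!/usr/bin/env sh
# Hypothetical placeholders: replace with a real ZooKeeper server address
# and the cluster's actual NameNode nameservice before running.
ZK_SERVER=zk1.example.com:2181
NN_NAMESPACE=nameservice1

if command -v zookeeper-client >/dev/null 2>&1; then
  # Pipe the two create commands from steps 3.3-3.4 into the ZooKeeper shell.
  zookeeper-client -server "$ZK_SERVER" <<EOF
create /hadoop-ha
create /hadoop-ha/$NN_NAMESPACE
EOF
else
  echo "skipping: zookeeper-client not installed on this host" >&2
fi
```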
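After restarting HDFS in step 5, one way to confirm that an active NameNode was elected is `hdfs haadmin -getServiceState`. The NameNode IDs `nn1` and `nn2` below are hypothetical and cluster-specific; the real IDs can be listed with `hdfs getconf -confKey dfs.ha.namenodes.<nameservice>`.

```shell
#!/usr/bin/env sh
# Hypothetical NameNode IDs; replace with the IDs configured for your nameservice.
NN_IDS="nn1 nn2"

if command -v hdfs >/dev/null 2>&1; then
  for nn in $NN_IDS; do
    # Prints "active" or "standby" for each NameNode.
    hdfs haadmin -getServiceState "$nn"
  done
else
  echo "skipping: hdfs command not available on this host" >&2
fi
```

Exactly one NameNode should report `active`; if both report `standby`, the Failover Controllers still cannot complete an election.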