Decommissioning and Recommissioning Hosts
Decommissioning a host decommissions and stops all roles on the host without requiring you to go to each service and decommission the roles individually. Decommissioning applies only to HDFS DataNode, MapReduce TaskTracker, YARN NodeManager, and HBase RegionServer roles. If the host has other roles running on it, those roles are stopped.
Once all roles on the host have been decommissioned and stopped, the host can be removed from service. You can decommission multiple hosts in parallel.
Decommissioning Hosts
Minimum Required Role: Limited Operator (also provided by Operator, Configurator, Cluster Administrator, or Full Administrator)
You cannot decommission a DataNode, or a host with a DataNode, if the number of DataNodes equals the replication factor (which by default is three) of any file stored in HDFS. For example, if the replication factor of any file is three and you have only three DataNodes, no DataNode (or host running one) can be decommissioned.
- If the host has a DataNode, perform the steps in Tuning HDFS Prior to Decommissioning DataNodes.
- Click the Hosts tab.
- Check the checkboxes next to one or more hosts.
- Select Actions for Selected > Hosts Decommission.
A confirmation pop-up informs you of the roles that will be decommissioned or stopped on the hosts you have selected. To proceed with the decommissioning, click Confirm.
A Command Details window appears, showing each stop or decommission command as it runs, service by service. You can click one of the decommission links to see the subcommands that run when decommissioning a given role. Depending on the role, the steps may include adding the host to an "exclusions list" and refreshing the NameNode, JobTracker, or ResourceManager; stopping the Balancer (if it is running); and moving data blocks or regions. Roles that do not have specific decommission actions are stopped.
While decommissioning is in progress, the host displays the Decommissioning icon. Once all roles have been decommissioned or stopped, the host displays the Decommissioned icon. If one host in a cluster has been decommissioned, the DECOMMISSIONED facet displays in the Filters on the Hosts page and you can filter the hosts according to their decommission status.
You cannot start roles on a decommissioned host.
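If you prefer to script this workflow, the same operation is exposed through the Cloudera Manager REST API. Below is a minimal sketch using Python and the requests library; the server URL, API version, credentials, and host names are placeholders you would replace with your own.

```python
# Minimal sketch: decommission hosts via the Cloudera Manager REST API.
# The URL, API version, credentials, and host names are placeholders.
import requests

CM_API = "http://cm-server.example.com:7180/api/v11"  # adjust version to your CM
AUTH = ("admin", "admin")

# POST /cm/commands/hostsDecommission takes a list of host names and starts
# the same asynchronous per-role decommission that the UI runs.
resp = requests.post(
    CM_API + "/cm/commands/hostsDecommission",
    auth=AUTH,
    json={"items": ["worker-1.example.com", "worker-2.example.com"]},
)
resp.raise_for_status()
command = resp.json()
print(command["id"], command["name"])  # poll /commands/<id> to track progress
```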
Tuning HDFS Prior to Decommissioning DataNodes
Minimum Required Role: Configurator (also provided by Cluster Administrator, Full Administrator)
When a DataNode is decommissioned, the NameNode ensures that every block from the DataNode will still be available across the cluster as dictated by the replication factor. This procedure involves copying blocks off the DataNode in small batches. In cases where a DataNode has thousands of blocks, decommissioning can take several hours. Before decommissioning hosts with DataNodes, you should first tune HDFS:
- Raise the heap size of the DataNodes. DataNodes should be configured with at least 4 GB heap size to allow for the increase in iterations and max streams.
- Go to the HDFS service page.
- Click the Configuration tab.
- Under each DataNode role group (DataNode Default Group and any additional DataNode role groups) go to the Resource Management category, and set the Java Heap Size of DataNode in Bytes property as recommended.
- Click Save Changes to commit the changes.
- Set the DataNode balancing bandwidth:
- Expand the DataNode Default Group > Performance category.
- Set the DataNode Balancing Bandwidth property to the bandwidth available on your disks and network.
- Click Save Changes to commit the changes.
- Increase the replication work multiplier per iteration to a larger number (the default is 2; 10 is recommended):
- Expand the NameNode Default Group > Advanced category.
- Configure the Replication Work Multiplier Per Iteration property to a value such as 10.
- Click Save Changes to commit the changes.
- Increase the replication maximum threads and maximum replication thread hard limits:
- Expand the NameNode Default Group > Advanced category.
- Configure the Maximum number of replication threads on a Datanode and Hard limit on the number of replication threads on a Datanode properties to 50 and 100 respectively.
- Click Save Changes to commit the changes.
- Restart the HDFS service.
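For reference, the Cloudera Manager properties above correspond to standard HDFS settings. The sketch below collects this section's recommended values in one place, keyed by the underlying hdfs-site.xml property where one exists; the dictionary name and the sample bandwidth value are illustrative, and on a managed cluster you would still set these through the properties named in the steps above rather than editing hdfs-site.xml by hand.

```python
# Illustrative summary of this section's recommended decommission tuning.
RECOMMENDED_TUNING = {
    # DataNode heap (set via Java Heap Size of DataNode in Bytes): >= 4 GB
    "datanode_java_heap_bytes": 4 * 1024 ** 3,
    # DataNode Balancing Bandwidth: size to your disk and network bandwidth
    "dfs.datanode.balance.bandwidthPerSec": 10 * 1024 ** 2,  # e.g. 10 MB/s
    # Replication Work Multiplier Per Iteration (default 2)
    "dfs.namenode.replication.work.multiplier.per.iteration": 10,
    # Maximum number of replication threads on a Datanode
    "dfs.namenode.replication.max-streams": 50,
    # Hard limit on the number of replication threads on a Datanode
    "dfs.namenode.replication.max-streams-hard-limit": 100,
}
```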
Recommissioning Hosts
Minimum Required Role: Operator (also provided by Configurator, Cluster Administrator, Full Administrator)
Only hosts that are decommissioned using Cloudera Manager can be recommissioned.
- Click the Hosts tab.
- Select one or more hosts to recommission.
- Select Actions for Selected > Hosts Recommission.
The Decommissioned icon is removed from the host and from the roles that reside on the host. However, the roles themselves are not restarted.
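Recommissioning can likewise be scripted through the REST API. This sketch mirrors the decommission example above, with the same placeholder server details, credentials, and host names.

```python
# Minimal sketch: recommission hosts via the Cloudera Manager REST API.
import requests

CM_API = "http://cm-server.example.com:7180/api/v11"  # placeholder URL/version
resp = requests.post(
    CM_API + "/cm/commands/hostsRecommission",
    auth=("admin", "admin"),  # placeholder credentials
    json={"items": ["worker-1.example.com"]},
)
resp.raise_for_status()
```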
Restarting All The Roles on a Recommissioned Host
Minimum Required Role: Operator (also provided by Configurator, Cluster Administrator, Full Administrator)
- Click the Hosts tab.
- Select one or more hosts on which to start recommissioned roles.
- Select Actions for Selected > Start Roles on Hosts.
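As with decommissioning and recommissioning, starting the roles on recommissioned hosts maps to a single REST command. The sketch below uses the same placeholder details as the earlier examples.

```python
# Minimal sketch: start all roles on recommissioned hosts via the REST API.
import requests

CM_API = "http://cm-server.example.com:7180/api/v11"  # placeholder URL/version
resp = requests.post(
    CM_API + "/cm/commands/hostsStartRoles",
    auth=("admin", "admin"),  # placeholder credentials
    json={"items": ["worker-1.example.com"]},
)
resp.raise_for_status()
```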