HDFS Balancers

HDFS data might not always be distributed uniformly across DataNodes. One common reason is the addition of new DataNodes to an existing cluster. HDFS provides a balancer utility that analyzes block placement and balances data across the DataNodes. The balancer moves blocks until the cluster is deemed balanced, meaning that the utilization of every DataNode (the ratio of used space on the node to the total capacity of the node) differs from the utilization of the cluster (the ratio of used space on the cluster to the total capacity of the cluster) by no more than a given threshold percentage. The balancer does not balance between individual volumes on a single DataNode.

Configuring and Running the HDFS Balancer Using Cloudera Manager

Minimum Required Role: Cluster Administrator (also provided by Full Administrator)

In Cloudera Manager, the HDFS balancer utility is implemented by the Balancer role. The Balancer role usually shows a health of None on the HDFS Instances tab because it does not run continuously.

The Balancer role is added by default when the HDFS service is installed. If it has not been added, you must add a Balancer role to rebalance HDFS and to see the Rebalance action.

Configuring the Balancer Threshold

The Balancer has a default threshold of 10%, which ensures that disk usage on each DataNode differs from the overall usage in the cluster by no more than 10%. For example, if overall usage across all the DataNodes in the cluster is 40% of the cluster's total disk-storage capacity, the balancer ensures that each DataNode's disk usage is between 30% and 50% of that DataNode's disk-storage capacity. To change the threshold:
  1. Go to the HDFS service.
  2. Click the Configuration tab.
  3. Select Scope > Balancer.
  4. Select Category > Main.
  5. Set the Rebalancing Threshold property.

    To apply this configuration property to other role groups as needed, edit the value for the appropriate role group. See Modifying Configuration Properties Using Cloudera Manager.

  6. Enter a Reason for change, and then click Save Changes to commit the changes.

Configuring Concurrent Moves

The dfs.datanode.balance.max.concurrent.moves property sets the maximum number of threads the DataNode uses for balancer pending moves. It is a throttling mechanism that prevents the balancer from taking too many resources from the DataNode and interfering with normal cluster operations. Increasing the value allows the balancing process to complete more quickly; decreasing the value makes rebalancing slower, but less likely to compete for resources with other tasks on the DataNode. To use this property, you must set the value on both the DataNode and the Balancer.

  • To configure the DataNode:
    1. Go to the HDFS service.
    2. Click the Configuration tab.
    3. Search for DataNode Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml.
    4. Add the following code to the configuration field, for example, setting the value to 50:
      <property>
        <name>dfs.datanode.balance.max.concurrent.moves</name>
        <value>50</value>
      </property>
    5. Restart the DataNode.
  • To configure the Balancer:
    1. Go to the HDFS service.
    2. Click the Configuration tab.
    3. Search for Balancer Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml.
    4. Add the following code to the configuration field, for example, setting the value to 50:
      <property>
        <name>dfs.datanode.balance.max.concurrent.moves</name>
        <value>50</value>
      </property>

Running the Balancer

  1. Go to the HDFS service.
  2. Ensure the service has a Balancer role.
  3. Select Actions > Rebalance.
  4. Click Rebalance to confirm. If you see a Finished status, the Balancer ran successfully.

Configuring Block Size

You can configure the Block Metadata Batch Size (dfs.balancer.getBlocks.size) and Minimum Block Size (dfs.balancer.getBlocks.min-block-size) properties for HDFS. The Block Metadata Batch Size property controls how much block metadata the Balancer retrieves in a single request. The Minimum Block Size property sets the smallest block that the Balancer considers for moving.

Tuning these properties can improve performance during balancing (an illustrative sketch of the corresponding hdfs-site.xml entries follows these steps):
  1. In the Cloudera Manager Admin Console, select Clusters > <HDFS cluster>.
  2. On the Configuration tab, search for the following properties:
    • Block Metadata Batch Size (dfs.balancer.getBlocks.size)
    • Minimum Block Size (dfs.balancer.getBlocks.min-block-size)
  3. Set the property values as needed.
  4. Enter a Reason for change, and then click Save Changes to commit the changes.
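
The Cloudera Manager property names above correspond to the hdfs-site.xml entries sketched below. This is an illustration only: the byte values shown (a 2 GB metadata batch and a 10 MB minimum block size) are example settings, not recommendations, so choose values appropriate to your cluster.

    <property>
      <!-- How much block metadata the Balancer retrieves per request, in bytes (example value: 2 GB) -->
      <name>dfs.balancer.getBlocks.size</name>
      <value>2147483648</value>
    </property>
    <property>
      <!-- Smallest block the Balancer considers for moving, in bytes (example value: 10 MB) -->
      <name>dfs.balancer.getBlocks.min-block-size</name>
      <value>10485760</value>
    </property>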