DataNodes

A DataNode stores data in a Hadoop cluster; DataNode is also the name of the daemon that manages that data. File data is replicated on multiple DataNodes for reliability and so that localized computation can be executed near the data. The default replication factor for HDFS is three; that is, three copies of each block are maintained at all times.
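As an illustration, you can confirm the replication factor in effect on a cluster with standard HDFS tooling; the file path shown below is only a placeholder:

    # Print the effective replication factor (dfs.replication, default 3)
    hdfs getconf -confKey dfs.replication

    # Check the replication of an existing file; the second column of the
    # listing is that file's replication factor (placeholder path)
    hdfs dfs -ls /user/hdfs/example.txt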

Within a cluster, DataNodes should be uniform. If they are not, issues can occur. For example, DataNodes with less disk capacity fill up more quickly than DataNodes with more capacity, which can result in job failures.
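One way to see how evenly capacity is distributed across DataNodes is the dfsadmin report, which lists configured capacity, DFS used, and DFS remaining for each node; this is shown only as an illustration:

    # Summarize capacity and usage for every DataNode in the cluster
    hdfs dfsadmin -report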

How NameNode Manages Blocks on a Failed DataNode

A DataNode is considered dead after a set period without any heartbeats (10.5 minutes by default; the note after the following list shows how this interval is derived). When this happens, the NameNode performs the following actions to maintain the configured replication factor (3x replication by default):
  1. The NameNode determines which blocks were on the failed DataNode.
  2. The NameNode locates other DataNodes with copies of these blocks.
  3. The DataNodes with block copies are instructed to copy those blocks to other DataNodes to maintain the configured replication factor.
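The 10.5-minute default is derived from two standard HDFS heartbeat properties. The commands below are a quick way to check the values in effect on your cluster; the defaults are noted in the comments:

    # Interval between DataNode heartbeats (seconds; default 3)
    hdfs getconf -confKey dfs.heartbeat.interval

    # NameNode heartbeat recheck interval (milliseconds; default 300000, i.e. 5 minutes)
    hdfs getconf -confKey dfs.namenode.heartbeat.recheck-interval

    # A DataNode is marked dead after:
    #   2 * recheck interval + 10 * heartbeat interval
    #   = 2 * 300 s + 10 * 3 s = 630 s = 10.5 minutes (with default values)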
Keep the following in mind when working with dead DataNodes:
  • If the DataNode failed due to a disk failure, follow the procedure in Replacing a Disk on a DataNode Host or Performing Disk Hot Swap for DataNodes to bring the repaired DataNode back online. If a DataNode failed to heartbeat for other reasons, it must be recommissioned to be added back to the cluster. For more information, see Recommissioning Hosts.
  • If a DataNode rejoins the cluster, some of the blocks it hosts may temporarily have more replicas than required. The NameNode removes the excess replicas at random while adhering to rack-awareness policies.
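If you need to confirm where the replicas of a particular file ended up after a DataNode rejoins, fsck can list block locations; the path below is only a placeholder:

    # List each block of the file along with the DataNodes and racks
    # holding its replicas (placeholder path)
    hdfs fsck /user/hdfs/example.txt -files -blocks -locations -racks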

Replacing a Disk on a DataNode Host

Minimum Required Role: Operator (also provided by Configurator, Cluster Administrator, Full Administrator)

For CDH 5.3 and higher, see Performing Disk Hot Swap for DataNodes.

If one of your DataNode hosts experiences a disk failure, follow this process to replace the disk:
  1. Stop managed services.
  2. Decommission the DataNode role instance.
  3. Replace the failed disk.
  4. Recommission the DataNode role instance.
  5. Run the HDFS fsck utility to validate the health of HDFS (see the example after this procedure). The utility normally reports over-replicated blocks immediately after a DataNode is reintroduced to the cluster; this condition is corrected automatically over time.
  6. Start managed services.
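A typical health check for step 5 is a cluster-wide fsck against the root path; its summary includes counts of over- and under-replicated blocks, though the exact output wording varies by release:

    # Validate HDFS health across the whole namespace; review the
    # over-replicated and under-replicated block counts in the summary
    hdfs fsck /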