Known Issues in HDFS

Learn about the known issues in HDFS, the impact or changes to the functionality, and the workaround.

CDPD-65530: HDFS requests throw UnknownHostException during OS upgrade
During VM replacement as part of an OS upgrade, every new node gets a new IP address. If the old IP address is still cached anywhere, HDFS requests fail with UnknownHostException until the stale entry expires; the requests recover on their own after at most 10 minutes.

This issue is seen during Cloudera Operational Database (COD) and Data Lake (DL) Zero Downtime Upgrades (ZDU).

None.
CDPSDX-5302: A long delay on the HBase master can occur during upgrade. To avoid it, apply the following configuration change:
  1. Log in to Cloudera Manager.
  2. Select the HDFS service.
  3. Select the Configuration tab.
  4. Search for hdfs-site.xml.
  5. Set ipc.client.connect.timeout = 5000.
  6. Set ipc.client.connect.max.retries.on.timeouts = 5.
  7. Click Save Changes.
The above configuration changes (shown as an hdfs-site.xml sketch after this list) ensure that:
  1. The long delay on the HBase master does not happen during upgrade.
  2. The long delay on HBase master recovery does not happen during upgrade.
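As a sketch, the equivalent hdfs-site.xml entries would look like the following. Apply them through the Cloudera Manager safety valve rather than by editing the file directly; both property names are standard Hadoop IPC client settings.
  <property>
    <name>ipc.client.connect.timeout</name>
    <value>5000</value>  <!-- connection timeout, in milliseconds -->
  </property>
  <property>
    <name>ipc.client.connect.max.retries.on.timeouts</name>
    <value>5</value>  <!-- retries after a connection timeout -->
  </property>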
CDPD-67230: Rolling restart can cause failed writes on small clusters
During a rolling restart of a cluster that has fewer than 10 DataNodes, existing writers can fail with an error indicating that a new block cannot be allocated and all nodes are excluded. This happens because the client has attempted to use every DataNode in the cluster and failed to write to each of them while it was restarting. The problem only occurs on small clusters of fewer than 10 DataNodes; larger clusters have enough spare nodes to allow the write to continue.
None.
CDPD-60873: java.io.IOException: Encountered "status=ERROR, status message, ack with firstBadLink" while fixing HDFS corrupt files during rollback.
Increase the value of dfs.client.block.write.retries to the number of nodes in the cluster (see the sketch below) and perform the Deploy Client Configuration procedure for rectification.
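A minimal sketch of the resulting hdfs-site.xml entry, assuming a hypothetical 8-node cluster; substitute your actual node count and apply the change through the Cloudera Manager safety valve:
  <property>
    <name>dfs.client.block.write.retries</name>
    <value>8</value>  <!-- assumption: 8 nodes in the cluster -->
  </property>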
CDPD-60431: Configuration differences between 7.1.7 SP2 and 7.1.9.0

Component | Configuration | Old Value | New Value | Description
HDFS | dfs.permissions.ContentSummary.subAccess | Not set | true | Performance optimization for the NameNode content summary API
HDFS | dfs.datanode.handler.count | 3 | 10 | Optimal value for DataNode server threads on large clusters
None.
CDPD-60387: Configuration differences between 7.1.8.3 and 7.1.9.0

Component | Configuration | Old Value | New Value | Description
HDFS | dfs.namenode.accesstime.precision | Not set | 0 | Optimal value for NameNode performance on large clusters
HDFS | dfs.datanode.handler.count | 3 | 10 | Optimal value for DataNode server threads on large clusters
None.
OPSAPS-64307: When the JournalNodes on a cluster are restarted, the Add new NameNode wizard for the HDFS service might fail to bootstrap the new NameNode. The failure occurs if no new fsImage has been created since the JournalNodes restarted, because the edit logs were rolled during the restart.
If the bootstrapping fails during the Add new NameNode wizard, perform the following steps (a command-line sketch of steps 2 through 4 follows this list):
  1. Delete the newly added NameNode and FailoverController.
  2. Move the active HDFS NameNode to safe mode.
  3. Perform the Save Namespace operation on the active HDFS NameNode.
  4. Leave safe mode on the active HDFS NameNode.
  5. Add the new NameNode again.
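Steps 2 through 4 can also be performed with the standard hdfs dfsadmin commands, sketched here; run them as the HDFS superuser on a host with client configuration deployed:
  hdfs dfsadmin -safemode enter    # step 2: move the active NameNode to safe mode
  hdfs dfsadmin -saveNamespace     # step 3: save a new fsImage
  hdfs dfsadmin -safemode leave    # step 4: leave safe mode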
OPSAPS-64363: Deleting an additional standby NameNode does not delete the corresponding ZKFC role; the ZKFC role must be deleted manually.
None.
CDPD-28390: Rolling restart of the HDFS JournalNodes may time out on Ubuntu 20.
If the restart operation times out, you can manually stop and restart the NameNode and JournalNode services one by one, as in the sketch below.
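A minimal sketch, assuming you manage the Hadoop daemons directly on each host; on a Cloudera Manager cluster, stop and start the individual role instances from Cloudera Manager instead:
  # On each JournalNode host, one at a time:
  hdfs --daemon stop journalnode
  hdfs --daemon start journalnode
  # On each NameNode host, one at a time:
  hdfs --daemon stop namenode
  hdfs --daemon start namenode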
OPSAPS-55788: WebHDFS is always enabled. The Enable WebHDFS option does not take effect.
None.
OPSAPS-63299: The Disable HA command for a nameservice does not work if the nameservice has more than two NameNodes defined.
None.
OPSAPS-63301: The Delete nameservice command does not delete all the NameNodes belonging to the nameservice if more than two NameNodes are assigned to it.
None.
Unsupported features
The following HDFS features are currently not supported in Cloudera Data Platform:

Technical Service Bulletins

TSB 2022-549: Possible HDFS Erasure Coded (EC) data loss when EC blocks are over-replicated
Cloudera has detected a bug that can cause loss of data that is stored in HDFS Erasure Coded (EC) files in an unlikely scenario.
Some EC blocks may be inadvertently deleted due to a bug in how the NameNode chooses excess or over-replicated block replicas for deletion. One possible cause of over-replication is running the HDFS balancer soon after a NameNode goes into failover mode.
In a rare situation, the redundant blocks can be placed in such a way that one replica is in one rack and a few redundant replicas are in the same rack. Such placement triggers a counting bug (HDFS-16420): instead of deleting only the redundant replicas, the original replica may also be deleted.
Usually this is not an issue, because the lost replica can be detected and reconstructed from the remaining data and parity blocks. However, if multiple blocks in an EC block group are affected by this counting bug within a short time, the block group can no longer be reconstructed. For example, under the RS(6,3) policy a block group has 9 blocks (6 data and 3 parity) and tolerates the loss of at most 3 of them; if 4 of the 9 blocks are affected, the data is lost.
Another situation is recommissioning multiple nodes back into the same rack of the cluster where the current live replica exists.
Upstream JIRA
HDFS-16420
Knowledge article
For the latest update on this issue see the corresponding Knowledge article: TSB 2022-549: Possible HDFS Erasure Coded (EC) data loss when EC blocks are over-replicated