Known Issues in HDFS
This topic describes known issues and unsupported features for using HDFS in this release of Cloudera Runtime.
- OPSAPS-60958: The dfs.access.time.precision and dfs.namenode.accesstime.precision parameters are both available in Cloudera Manager > HDFS > Configuration.
- Workaround: Configure both dfs.access.time.precision and dfs.namenode.accesstime.precision with the same value, because Cloudera Manager still sends both parameters to the HDFS service configuration.
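As a quick sanity check, both parameter names should resolve to the same value once the client configuration has been redeployed. A minimal sketch using standard HDFS tooling, run on any host with the HDFS gateway configuration deployed:

```
# Both commands should print the same value, in milliseconds, after the
# updated client configuration has been deployed from Cloudera Manager.
hdfs getconf -confKey dfs.access.time.precision
hdfs getconf -confKey dfs.namenode.accesstime.precision
```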
- OPSAPS-55788: WebHDFS is always enabled. The Enable WebHDFS checkbox in Cloudera Manager does not take effect.
- Workaround: None.
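Because the checkbox has no effect, WebHDFS stays reachable regardless of its setting. A minimal sketch for verifying this against the NameNode REST endpoint, where <namenode-host> is a placeholder, 9870 is the default NameNode HTTP port in Hadoop 3, and on a Kerberized cluster curl additionally needs --negotiate -u ::

```
# Expect an HTTP 200 response with a JSON FileStatus payload even when
# the Enable WebHDFS checkbox is cleared in Cloudera Manager.
curl -s "http://<namenode-host>:9870/webhdfs/v1/tmp?op=GETFILESTATUS"
```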
Unsupported Features
The following HDFS features are currently not supported in Cloudera Data Platform:
- ACLs for the NFS gateway (HADOOP-11004)
- Aliyun Cloud Connector (HADOOP-12756)
- Allow HDFS block replicas to be provided by an external storage system (HDFS-9806)
- Consistent reads from standby NameNodes (HDFS-12943)
- Cost-Based RPC FairCallQueue (HDFS-14403)
- HDFS Router Based Federation (HDFS-10467)
- More than two NameNodes (HDFS-6440)
- NameNode Federation (HDFS-1052)
- NameNode Port-based Selective Encryption (HDFS-13541)
- Non-Volatile Storage Class Memory (SCM) in HDFS Cache Directives (HDFS-13762)
- OpenStack Swift (HADOOP-8545)
- SFTP FileSystem (HADOOP-5732)
- Storage policy satisfier (HDFS-10285)
Technical Service Bulletins
- TSB 2021-406: CVE-2020-9492 Hadoop filesystem bindings (ie: webhdfs) allows credential stealing
- WebHDFS clients might send the SPNEGO authorization header to a remote URL without proper verification. A maliciously crafted request can trigger services to send server credentials to a webhdfs path (ie: webhdfs://…), allowing an attacker to capture the service principal.
- Knowledge article
- For the latest update on this issue, see the corresponding Knowledge article: TSB 2021-406: CVE-2020-9492 Hadoop filesystem bindings (ie: webhdfs) allows credential stealing
- TSB 2021-458: Possible HDFS Erasure Coded (EC) Data Files Corruption in EC Reconstruction
- Cloudera has detected two bugs that can cause corruption of HDFS Erasure Coded (EC) files during the data reconstruction process.
The first bug can be hit during DataNode decommissioning. Due to a flaw in the data reconstruction logic used during decommissioning, some parity blocks may be generated with all-zero content.
The second bug occurs in a corner case where a DataNode times out during reconstruction and the read is rescheduled to another healthy DataNode. The stale DataNode reader, however, may have already polluted the shared buffer, and any subsequent reconstruction that uses that buffer produces corrupted EC blocks. A command sketch for scoping which paths use EC follows this bulletin.
- Knowledge article
- For the latest update on this issue, see the corresponding Knowledge article: Cloudera Customer Advisory: Possible HDFS Erasure Coded (EC) Data Files Corruption in EC Reconstruction
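To scope potential exposure to the two reconstruction bugs, it helps to identify which paths actually use erasure coding. A minimal sketch using standard HDFS commands, where /data/warehouse is a placeholder path:

```
# List the erasure coding policies known to the cluster and their state.
hdfs ec -listPolicies

# Show the EC policy applied to a specific path, if any.
hdfs ec -getPolicy -path /data/warehouse

# Inspect files and blocks under a suspect path; erasure coded block
# groups are reported in the fsck output.
hdfs fsck /data/warehouse -files -blocks
```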