Known Issues in Ozone

This topic describes known issues and unsupported features for using Ozone in this release of Cloudera Runtime.

CDPD-15268:
Uploading a key using the S3 Multi-part upload API into an Ozone encryption zone (TDE-enabled bucket) is not currently supported. The key upload will fail with an exception.
There is no workaround.
CDPD-15362:
When files and directories stored in Ozone are deleted via the Hadoop filesystem shell -rm command using the o3fs or ofs scheme, they are not moved to the trash even if fs.trash.interval is set to a value greater than 0 on the client. Instead, they are deleted immediately.
There is no workaround.
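To illustrate the behavior, the following sketch assumes a hypothetical volume vol1 and bucket bucket1 (the host/service name in the ofs URI is also a placeholder):

```shell
# Placeholder URI: replace ozone-service, vol1, bucket1, and the key path
# with values from your cluster.
# Even with fs.trash.interval > 0 on the client, the key is deleted
# immediately and no .Trash checkpoint is created for it.
hadoop fs -rm ofs://ozone-service/vol1/bucket1/dir1/file1
```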
CDPD-15330:
In an OM HA cluster, a network-partitioned Ozone Manager (OM) that was the leader before the partition and has not stepped down as leader can serve stale reads.
No workaround is available. However, if a network partition is detected, restarting the Ozone Manager node resolves the issue (even if the network partition persists after the restart).
CDPD-15870:
A lagging Ozone Manager (OM) node can fail to catch up with the leader if the leader OM no longer has the log entries missing from the lagging OM in its cache, even if those logs are present on disk.
Manually copy the Ratis logs that are missing on the lagging OM from another OM. Stop the lagging OM. Copy the missing logs from the leader OM's Ratis storage directory, whose location is defined by the ozone.om.ratis.storage.dir property, into the lagging OM's Ratis storage directory. Restart the lagging OM; it then loads the copied Ratis logs.
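The recovery steps above can be sketched as follows. The host name, storage directory, subdirectory layout, and log segment name are all placeholders; take the actual directory from the ozone.om.ratis.storage.dir property in your configuration:

```shell
# All paths, host names, and segment names below are illustrative only.
RATIS_DIR=/var/lib/hadoop-ozone/om/ratis   # value of ozone.om.ratis.storage.dir

# 1. Stop the lagging OM (for example, through Cloudera Manager).
# 2. Copy the missing Ratis log segment(s) from the leader OM's host;
#    <group-id> stands for the Ratis group subdirectory on your cluster.
scp leader-om-host:"${RATIS_DIR}/<group-id>/current/log_101_200" \
    "${RATIS_DIR}/<group-id>/current/"
# 3. Restart the lagging OM; it loads the copied Ratis logs on startup.
```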
CDPD-15266:
When Ozone Manager (OM) HA is enabled, not all older OM Ratis logs are purged. Similarly, for DataNode, old Ratis logs may not be purged. This can lead to older logs consuming the disk space.
For OM, you must manually delete the OM Ratis logs from the Ratis storage directory location defined by the ozone.om.ratis.storage.dir property. You must only delete the logs older than the already purged logs.
For example, if the OM Ratis log directory contains the logs log_0_100, log_101_200, and log_301_400, then you can delete log_0_100 and log_101_200 as log_201_300 is already purged.
For DataNode, you must manually delete the DataNode Ratis logs from the Ratis storage directory location defined by the dfs.container.ratis.datanode.storage.dir property. You must delete only the logs older than the already purged logs.
For example, if the DataNode Ratis log directory contains the logs log_0_100, log_101_200, and log_301_400, then you can delete log_0_100 and log_101_200 as log_201_300 is already purged.
Cloudera advises you to back up the Ratis logs before deleting them. After the deletion, ensure that the DataNode and the OM come back up and that the pipelines they are connected to are healthy. If there are any exceptions, restore the Ratis logs from the backup.
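As a sketch of the OM cleanup described above (the directory path, group subdirectory, and segment names are illustrative; substitute the actual value of ozone.om.ratis.storage.dir, and back up the logs first):

```shell
# Illustrative paths only; take the real directory from ozone.om.ratis.storage.dir.
RATIS_DIR=/var/lib/hadoop-ozone/om/ratis

# Back up the Ratis logs before deleting anything.
cp -r "${RATIS_DIR}" "/var/backups/om-ratis-$(date +%F)"

# log_201_300 has already been purged, so the older segments are safe to delete;
# <group-id> stands for the Ratis group subdirectory on your cluster.
rm "${RATIS_DIR}/<group-id>/current/log_0_100" \
   "${RATIS_DIR}/<group-id>/current/log_101_200"
```

The same procedure applies to the DataNode, using the directory defined by dfs.container.ratis.datanode.storage.dir instead.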
CDPD-15869:
A lagging Ozone Manager (OM) node in an HA cluster can cause an AccessControlException error. If the lagging OM does not have the client's delegation token and is the first OM contacted by the client, it returns an AccessControlException.
If the lagging Ozone Manager (OM) is still participating in the ring and catches up with the transactions on the leader, the client can execute the request after that. Otherwise, if the lagging OM remains behind, you must stop it so that the client contacts a different OM.
CDPD-15602:
Creating or deleting keys with a trailing forward slash (/) in the name is not supported via the Ozone shell or the S3 REST API. Such keys are internally treated as directories by the Ozone service for compatibility with the Hadoop filesystem interface. This will be supported in a later release of CDP.
You can create or delete such keys via the Hadoop filesystem interface, either programmatically or via the Hadoop filesystem shell. For example, `ozone fs -rmdir <dir>`.
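For example, assuming a hypothetical volume vol1 and bucket bucket1 (the service name in the ofs URI is also a placeholder), a directory-style key can be created and removed through the Hadoop filesystem shell:

```shell
# Placeholder volume/bucket names; adjust the ofs URI to your cluster.
# Create a directory (stored internally as a key with a trailing slash):
ozone fs -mkdir ofs://ozone-service/vol1/bucket1/dir1
# Delete it again:
ozone fs -rmdir ofs://ozone-service/vol1/bucket1/dir1
```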