Known issues in Ozone parcel 718.2.0

You must be aware of the known issues and limitations, the areas of impact, and the workarounds for the Ozone parcel.

Tez Configuration Changes

The following configuration changes must be made so that the services pick up the latest Ozone FS JAR from the Ozone parcel (when it is installed); see the sketch after the list:

CDPD-48540
Set tez.cluster.additional.classpath.prefix to /var/lib/hadoop-hdfs/* (Tez Additional Classpath).
Set tez.user.classpath.first to true (Tez Client Advanced Configuration Snippet (Safety Valve) for tez-conf/tez-site.xml).
CDPD-47605
Set tez.cluster.additional.classpath.prefix to /var/lib/hadoop-hdfs/* (Hive Service Advanced Configuration Snippet (Safety Valve) for hive-site.xml).
Set tez.user.classpath.first to true (Hive Service Advanced Configuration Snippet (Safety Valve) for hive-site.xml).
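
For reference, the two Hive Service safety-valve entries for CDPD-47605 are entered as XML property elements. The following is a minimal sketch of that hive-site.xml snippet; the property names and values are the ones listed above, and the XML wrapping only reflects the standard Hadoop configuration format:

<property>
  <name>tez.cluster.additional.classpath.prefix</name>
  <value>/var/lib/hadoop-hdfs/*</value>
</property>
<property>
  <name>tez.user.classpath.first</name>
  <value>true</value>
</property>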

Restart the TEZ and HIVE_ON_TEZ services as prompted by Cloudera Manager.

Update YARN to use the updated Ozone FS JAR

CDPD-48500: Ozone parcel activation or installation should handle the redeployment of the YARN JARs and the clean-up of the cache.
Perform the following steps:
  1. Log in to the Cloudera Manager UI.
  2. Navigate to Clusters.
  3. Select the YARN service.
  4. Click Actions.
  5. Click Install YARN Service Dependencies.
  6. Click Install YARN MapReduce Framework JARs.
  7. Restart the CDP 7.1.8 cluster.

Other issues

CDPD-56006: If an incorrect hostname or service ID is provided in the Ozone URI, the filesystem client does not fail immediately; instead, it retries until exhaustion, and the default number of retries is too high.
Configure ozone.client.failover.max.attempts to a lower value to avoid long retries, as in the sketch below.
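As an illustration only, the override could be added to the client-side ozone-site.xml as follows; the value 15 here is an arbitrary example, not a recommended setting:

<property>
  <name>ozone.client.failover.max.attempts</name>
  <value>15</value>
</property>
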
CDPD-49137: Sometimes the Ozone Manager (OM) Kerberos token is not updated, and the OM stops being able to communicate with the Storage Container Manager (SCM). When this occurs, writes start to fail.
Restarting the OM or setting hadoop.kerberos.keytab.login.autorenewal.enabled = true through a safety valve fixes the issue; a sketch of the entry follows.
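A minimal sketch of such an entry, assuming it is added as a core-site.xml safety-valve snippet for the Ozone Manager (the exact safety-valve field to use is not specified here):

<property>
  <name>hadoop.kerberos.keytab.login.autorenewal.enabled</name>
  <value>true</value>
</property>
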
CDPD-49808: Spark jobs against Ozone intermittently fail with ERROR spark.SparkContext: [main]: Error initializing SparkContext. java.lang.IllegalStateException: No filter named.
This is an intermittent failure; the job can be retried.
CDPD-50678: Deleting containers that have one or more non-empty replicas on a DataNode can cause the container to be stuck in the deleting state indefinitely. Containers in this state can also block decommission or maintenance operations from completing.
None.

Technical Service Bulletins

TSB 2023-697: Potential Ozone data loss in Ozone Parcel 2 caused by hsync/hflush
A recently identified regression caused by an issue in the implementation of hsync/hflush Application Programming Interfaces (APIs) can result in data loss in Apache Ozone (Ozone) if the client application calls hsync/hflush before closing the file.
Once the file is closed, the blocks of the file will be removed by the Storage Container Manager (SCM) after the container with these blocks gets closed. Reading the file will then fail with an error message similar to the following: org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: Unable to find the block. BlockID : conID: 4 locID: 111677748019201149 bcsId: 382
To avoid potential data loss caused by the issue described above, users on the affected parcel versions must upgrade to OZONE-718.2.2 immediately.
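
For illustration only, the affected write pattern is a client that calls hsync or hflush on an Ozone output stream before the file is closed. The following hypothetical Java sketch uses the standard Hadoop FileSystem API; the ofs:// service ID, volume, bucket, and key are placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HsyncBeforeClose {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder Ozone service ID, volume, bucket, and key.
    Path file = new Path("ofs://ozone1/vol1/bucket1/example.txt");
    FileSystem fs = FileSystem.get(file.toUri(), conf);
    try (FSDataOutputStream out = fs.create(file)) {
      out.writeBytes("partial data\n");
      // Persisting data with hsync/hflush before close is the pattern
      // affected by TSB 2023-697 on the impacted parcel versions.
      out.hsync();
      out.writeBytes("more data\n");
    } // close() happens here; on affected versions the file's blocks could later become unreadable.
  }
}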

Upstream Jira: HDDS-9146 (fix)

Knowledge article: TSB 2023-697: Potential Ozone data loss in Ozone Parcel 2 caused by hsync/hflush