Known issues in Ozone parcel 718.2.2
Review the known issues and limitations in the Ozone parcel, their areas of impact, and the available workarounds.
Tez Configuration Changes
Make the following configuration changes so that the cluster picks up the latest Ozone FS jar from the Ozone parcel (when installed):
- Set tez.cluster.additional.classpath.prefix to /var/lib/hadoop-hdfs/* in Tez Additional Classpath.
- Set tez.user.classpath.first to true in Tez Client Advanced Configuration Snippet (Safety Valve) for tez-conf/tez-site.xml.
- Set tez.cluster.additional.classpath.prefix to /var/lib/hadoop-hdfs/* in Hive Service Advanced Configuration Snippet (Safety Valve) for hive-site.xml.
- Set tez.user.classpath.first to true in Hive Service Advanced Configuration Snippet (Safety Valve) for hive-site.xml.
Restart the TEZ and HIVE_ON_TEZ services as prompted by Cloudera Manager.
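The safety-valve entries above follow the standard Hadoop XML property format. A sketch of the fragment to paste into the Tez and Hive safety valves (property names and values are taken from the list above; the XML wrapping is the standard configuration-snippet format):

```xml
<!-- Pick up the Ozone FS jar from the Ozone parcel -->
<property>
  <name>tez.cluster.additional.classpath.prefix</name>
  <value>/var/lib/hadoop-hdfs/*</value>
</property>
<!-- Ensure the user classpath (including the Ozone FS jar) is searched first -->
<property>
  <name>tez.user.classpath.first</name>
  <value>true</value>
</property>
```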
Update YARN to use the updated Ozone FS jar
- CDPD-48500: Ozone parcel activation or install should handle redeployment of YARN jars and clean-up cache.
- Perform the following steps:
- Log in to the Cloudera Manager UI.
- Navigate to Clusters.
- Select the YARN service.
- Click Actions.
- Click Install YARN Service Dependencies.
- Click YARN MapReduce Framework JARs.
- Restart the CDP 7.1.8 cluster.
- CDPD-56006: If an incorrect hostname or service ID is provided in the Ozone URI, the filesystem client does not fail immediately; it retries until the retry limit is exhausted, and the default retry count is very high.
- Set ozone.client.failover.max.attempts to a lower value to avoid long retry loops.
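For example, the retry limit can be lowered with a client-side configuration entry like the following (the value 5 is illustrative, not a recommendation from this document; choose a limit that fits your environment):

```xml
<!-- Cap Ozone client failover retries so a bad hostname/service ID fails fast -->
<property>
  <name>ozone.client.failover.max.attempts</name>
  <value>5</value>
</property>
```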
- CDPD-49137: Sometimes the Ozone Manager (OM) Kerberos ticket is not renewed, and OM stops being able to communicate with SCM. When this occurs, writes start to fail.
- Restart OM, or set the safety valve hadoop.kerberos.keytab.login.autorenewal.enabled = true, to fix the issue.
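The safety-valve entry above can be written as the following Hadoop configuration fragment (a sketch; apply it through the appropriate Advanced Configuration Snippet for the affected service):

```xml
<!-- Automatically renew the Kerberos keytab login so OM keeps its SCM connection -->
<property>
  <name>hadoop.kerberos.keytab.login.autorenewal.enabled</name>
  <value>true</value>
</property>
```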
- CDPD-49808: Spark jobs against Ozone intermittently fail with the following error:
ERROR spark.SparkContext: [main]: Error initializing SparkContext.java.lang.IllegalStateException: No filter named.
- This failure is intermittent; retry the job.
- CDPD-50678: Deleting containers that have one or more non-empty replicas on a Datanode can leave the container stuck in a deleting state indefinitely. Containers in this state can also block decommission or maintenance operations from completing.
- CDPD-52571/HDDS-8178: CertificateClient and KeyStoresFactory support multiple Sub-CA certificates in the trust chain
- CDPD-35141: Error: Error while compiling statement: FAILED: Execution Error, return code 40000 from org.apache.hadoop.hive.ql.exec.MoveTask. Unable to move source <bucket1> to destination <bucket2> (state=08S01,code=40000) java.sql.SQLException: Error while compiling statement: FAILED: Execution Error, return code 40000 from org.apache.hadoop.hive.ql.exec.MoveTask. Unable to move source <bucket1> to destination <bucket2>.
- This issue can occur when the source and target paths in a Hive query are in different Ozone buckets. Currently, only copying within the same bucket is supported.
- Use the same bucket in both the source and target paths.
- CDPD-60578: [EC] Re-replication fails due to a NullPointerException.