Known issues in Ozone parcel 718.1.0

You must be aware of the known issues and limitations, the areas of impact, and the workarounds for the Ozone parcel.

Tez Configuration Changes

The following configuration changes must be made to pick up the latest Ozone FS jar from the Ozone parcel (when installed); a sample safety valve snippet follows the list:

CDPD-48540
For tez.cluster.additional.classpath.prefix, the value is /var/lib/hadoop-hdfs/* (Tez Additional Classpath)
For tez.user.classpath.first, the value is true (Tez Client Advanced Configuration Snippet (Safety Valve) for tez-conf/tez-site.xml)
CDPD-47605
For tez.cluster.additional.classpath.prefix, the value is /var/lib/hadoop-hdfs/* (Hive Service Advanced Configuration Snippet (Safety Valve) for hive-site.xml)
For tez.user.classpath.first, the value is true (Hive Service Advanced Configuration Snippet (Safety Valve) for hive-site.xml)
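
For reference, when these properties are added through an XML-based safety valve (for example, the tez-site.xml or hive-site.xml snippet), the entries would look similar to the following minimal sketch; the property names and values are taken from the list above:

  <property>
    <name>tez.cluster.additional.classpath.prefix</name>
    <value>/var/lib/hadoop-hdfs/*</value>
  </property>
  <property>
    <name>tez.user.classpath.first</name>
    <value>true</value>
  </property>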

Restart the TEZ and HIVE_ON_TEZ services as prompted by Cloudera Manager.

Update YARN to use the updated Ozone FS jar

CDPD-48500: Ozone parcel activation or installation should handle redeployment of the YARN jars and clean-up of the cache.
Perform the following steps:
  1. Log in to the Cloudera Manager UI.
  2. Navigate to Clusters.
  3. Select the YARN service.
  4. Click Actions.
  5. Click Install YARN Service Dependencies.
  6. Click YARN MapReduce Framework JARs.
  7. Restart the CDP 7.1.8 cluster.

Other issues

SSL handshake fails between Ozone DataNodes if the two DataNodes have their certificates signed by different Ozone Storage Container Managers.

Ozone DataNode certificates are signed by the leader Storage Container Manager. Due to an issue in creating a TrustStore for DataNode-to-DataNode connections, trust cannot be established between two DataNodes if their certificates are signed by different Storage Container Managers. These connections fail to establish and display an SSL Handshake Exception. This affects Pipeline creation and container replication (including EC container reconstruction). The symptoms vary depending on the number of nodes that have differently signed certificates: either these DataNodes do not participate in any Ratis-3 Pipeline, or they form Pipelines exclusively within groups that share the same signer. Over time this can lead to an imbalance in DataNode usage, and it might cause the decommission of a DataNode to get stuck if the data has to be replicated to a node with a certificate that has a different signer.

This problem affects all the 7.1.8 Ozone Parcel releases.

To identify whether the problem is present on a cluster, examine the output of the ozone admin cert list command. Ensure that you define a sufficient number of certificates to be returned with the -c option so that you see all the certificates issued in the system.

If there are different Issuers for the latest DataNode certificates, this indicates the cluster is affected.
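
For example, the check might look like the following; the count passed to -c is an arbitrary value chosen to be large enough to return all issued certificates, so adjust it for your cluster:

  # List issued certificates and compare the Issuer of the latest DataNode certificates
  ozone admin cert list -c 500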

You can avoid the problem by checking the ozone admin scm roles output and verifying that the Primordial node is the current leader node before adding a new DataNode and starting it for the first time. If the leader SCM is a different node, run ozone admin scm transfer to make the Primordial node the leader and put the cluster into the desired state before adding the new DataNode.
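
A minimal sketch of this preventive check and the leader transfer follows. The service ID and node UUID are placeholders, and the exact option names are assumptions that can differ between releases, so verify them with ozone admin scm transfer --help:

  # Check which SCM node is currently the leader
  ozone admin scm roles
  # Transfer leadership to the Primordial SCM node (placeholder service ID and UUID)
  ozone admin scm transfer --service-id=<scm-service-id> -n <primordial-scm-uuid>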

Ensure that all the certificates in the cluster are signed by the same Storage Container Manager node.

Perform the following procedure if the cluster is affected and has DataNode certificates signed by different issuers:

  1. Identify the current leader Storage Container Manager by running the ozone admin scm roles command.
  2. If the current leader did not sign the majority of the DataNode certificates, transfer leadership to the SCM that signed the majority of them by running the ozone admin scm transfer command with the proper Ozone SCM Service ID (set in Cloudera Manager>Ozone>Configuration) and the UUID of the desired leader node.
  3. Stop the minority of DataNodes whose certificates were signed by a different SCM than the majority.
  4. Locate the Datanode Metadata Directory (set in Cloudera Manager>Ozone>Configuration) on the hosts of the stopped DataNodes and move the directory to a backup location, as shown in the example after these steps.
  5. Start the previously stopped DataNodes.
  6. After the restarted DataNodes regenerate their certificates, check that they join Ratis-3 Pipelines.
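
The following is a minimal sketch of step 4; the directory path is a placeholder, so substitute the Datanode Metadata Directory value configured for your cluster and a backup location of your choice:

  # Run on each stopped DataNode host
  mv <datanode-metadata-directory> <datanode-metadata-directory>.bak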

To avoid service disruptions, you can stop the DataNodes one by one instead of all together. Note that a node's downtime can still cause a data outage if there are files with Ratis-1 replication on the cluster and the single replica of such a file resides on the node being restarted. Changing the leader node on an SCM HA-enabled cluster should not disrupt operations.

CDPD-56006: When an incorrect hostname or service ID is provided in the Ozone URI, the filesystem client does not fail immediately; it retries until the retry limit is exhausted, and the default retry count is too high.
Configure ozone.client.failover.max.attempts to a lower value to avoid long retry loops.
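
For example, the client setting could look similar to the following sketch when expressed as an XML property; the value 5 is an arbitrary example, and the configuration file or safety valve to use depends on which client reads it:

  <property>
    <name>ozone.client.failover.max.attempts</name>
    <value>5</value>
  </property>
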
CDPD-49137: Sometimes the OM's Kerberos token is not updated, and the OM stops being able to communicate with SCM. When this occurs, writes start to fail.
Restarting OM or setting the safety valve hadoop.kerberos.keytab.login.autorenewal.enabled = true fixes the issue.
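
A sketch of the safety valve entry as an XML property follows; the property name and value are taken from the workaround above, and the Ozone Manager safety valve to place it in depends on your deployment:

  <property>
    <name>hadoop.kerberos.keytab.login.autorenewal.enabled</name>
    <value>true</value>
  </property>
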
CDPD-49808: Spark jobs against Ozone intermittently fail with ERROR spark.SparkContext: [main]: Error initializing SparkContext. java.lang.IllegalStateException: No filter named.
This is an intermittent failure; retry the job.
CDPD-49918: ECContainerReconstructionThread hits a precondition failure during checksum validation, which causes the reconstruction thread to fail.
None.
CDPD-50678: Deleting containers that have one or more non-empty replicas on a DataNode can cause the container to be stuck in a deleting state indefinitely. Containers in this state can also block decommission or maintenance operations from completing.
None.
CDPD-50665: Due to HDDS-8171, if a DataNode is IN_MAINTENANCE and stopped, and another node that hosts an EC replica with the same index as the one on the IN_MAINTENANCE node is decommissioned, the decommission can get blocked.
Start the DataNode process on the IN_MAINTENANCE node so that its replicas are available again.
CDPD-48932: DataNodes need off-heap memory for direct buffers and RocksDB usage. This can cause the memory footprint to become larger than the configured physical memory on the node.
Remove other memory-intensive services from the node, and make sure the physical memory is sufficient for the planned load and scale.