Step 9: Complete Post-Upgrade steps for upgrades to CDP Private Cloud Base
Steps to perform after upgrading a cluster.
- gRPC TLS configuration for Ozone is supported only on new CDP 7.1.7 clusters and not on clusters upgraded from CDP 7.1.6 to CDP 7.1.7. If you want to enable gRPC TLS on the upgraded CDP 7.1.7 clusters, you must contact Cloudera Support for more information.
- Schema Registry: The Ranger SchemaRegistry Plugin Audit Directory is not created automatically when a cluster is upgraded. This causes the Schema Registry service to encounter an issue. As a result, following an upgrade, you must manually run the command in Cloudera Manager that creates the audit directory.
- In Cloudera Manager, select the Schema Registry service.
- Click .
- Kafka
- Remove the following properties from the Kafka Broker Advanced Configuration Snippet (Safety Valve) for kafka.properties configuration property:
  - inter.broker.protocol.version
  - log.message.format.version
- Save your changes.
- Perform a rolling restart:
- Select the Kafka service.
- Click .
- In the pop-up dialog box, select the options you want and click Rolling Restart.
- Click Close once the command has finished.
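For reference, the entries to delete from the safety valve look like the following. The version values shown are illustrative; yours reflect the pre-upgrade Kafka version.

```
# Remove both of these lines from the Kafka Broker Advanced Configuration
# Snippet (Safety Valve) for kafka.properties (values are illustrative)
inter.broker.protocol.version=2.8
log.message.format.version=2.8
```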
- Streams Messaging Manager: Following a successful upgrade, if you want to use SMM to monitor SRM replications, you must reconnect the two services. This is done by enabling the STREAMS_REPLICATION_MANAGER Service SMM property, which is disabled by default.
- In Cloudera Manager, select the SMM service.
- Go to Configuration.
- Find and enable the STREAMS_REPLICATION_MANAGER Service property.
- Click Save Changes.
- Restart the service.
- Kafka Connect: Upgrading a Kafka Connect cluster can fail with the following ClassNotFoundException:

  Error: ClassNotFoundException exception occurred: com.cloudera.dim.kafka.config.provider.CmAgentConfigProvider

  This is due to the Cloudera Manager agent not being able to properly create the symlinks and alternatives when performing an upgrade. The issue also occurs when a custom plugin.path is set in Cloudera Manager that does not contain the expected plugin JARs. Complete the following steps to resolve the issue:
  - Manually install the missing symlinks or alternatives relevant to kafka-connect by checking cloudera-scm-agent.log.
  - Ensure that the kafka-connect libraries are present in /var/lib/kafka, or change plugin.path to a location where these libraries are present. If required, manually copy the JARs to the location configured in plugin.path.
  - If the issue persists, set plugin.path to /opt/cloudera/parcels/CDH-[***VERSION***]/lib/kafka_connect_ext/libs. This is the location where the kafka-connect libraries are placed when installing parcels.
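Before restarting the service, you can sanity-check that the configured plugin path actually contains connector libraries. The following is a minimal sketch; the PLUGIN_PATH value is an assumption, so substitute the plugin.path value configured in Cloudera Manager.

```shell
# Sketch: verify that the configured plugin.path contains the kafka-connect
# libraries. PLUGIN_PATH is an assumed default; use your configured value.
PLUGIN_PATH="/var/lib/kafka"

if ls "${PLUGIN_PATH}"/*.jar >/dev/null 2>&1; then
  echo "plugin JARs found in ${PLUGIN_PATH}"
else
  echo "no plugin JARs in ${PLUGIN_PATH}; check cloudera-scm-agent.log for symlink errors"
fi
```

If no JARs are listed, inspect cloudera-scm-agent.log for failed symlink or alternatives creation before changing plugin.path.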
- Kafka producer
Kafka producers upgraded to Kafka 3.0.1, 3.1.1, or any later version have idempotence enabled by default. Idempotent producers must have the Idempotent Write permission set on the cluster resource in Ranger. As a result, if you upgraded your producers and use Ranger to provide authorization for the Kafka service, you must do either of the following after upgrading your producers. Otherwise, the producers fail to initialize due to an authorization failure.
- Explicitly disable idempotence for the producers. This can be done by setting enable.idempotence to false.
- Update your policies in Ranger and ensure that producers have Idempotent Write permission on the cluster resource.
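For the first option, the client-side change is a single entry in the producer configuration. The file name below is illustrative; add the entry wherever your producer clients load their configuration.

```
# producer.properties (illustrative file name): opt out of idempotence so the
# producer does not require the Idempotent Write permission in Ranger
enable.idempotence=false
```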
- Streams Replication Manager: During the upgrade, SRM uses the legacy versions of its intra cluster protocol and internal changelog data format to ensure backward compatibility. After a successful upgrade, you must configure SRM to use the new protocol and data format; otherwise, some features will not function.
- In Cloudera Manager, select the Streams Replication Manager service.
- Go to Configuration and clear the following properties:
- Use Legacy Intra Cluster Host Name Format
- Use Legacy Internal Changelog Data Format
- Restart the SRM service.
- Streams Replication Manager: The value set in the Log Format property is cleared during the upgrade. If you customized the log format using this property, you must manually reconfigure its value following the upgrade.
- In Cloudera Manager, select the SRM service.
- Go to Configuration and configure the Log Format property.
- Restart the SRM service.
- Streams Replication Manager: In Cloudera Runtime 7.1.8 and higher, the separator configured with replication.policy.separator in Streams Replication Manager applies to the names of the offsets-syncs and checkpoints internal topics if the default replication policy (DefaultReplicationPolicy) is in use. If you have been using a custom separator, when SRM is restarted after the upgrade, a new set of internal topics is automatically created using the configured separator. These topics, however, are empty and are only repopulated with data after replication restarts. If you want to export translated consumer group offsets after the upgrade but before replication restarts, you must export them from the old topics, as the data is not yet available in the new topics. This is done by specifically instructing the srm-control tool to use the default separator, which is the dot (.). For example:

  srm-control offsets --config [***CONFIG FILE***] --source [SOURCE_CLUSTER] --target [TARGET_CLUSTER] --group [GROUP1] --export > out.csv

  Where the [***CONFIG FILE***] specified in --config contains the following configuration entry:

  replication.policy.separator=.
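The export workflow above can be sketched as follows. The file path and cluster aliases are placeholders, and the srm-control invocation is shown as a comment since it must run on a host where the tool is installed.

```shell
# Sketch: write a config file that forces the default separator so srm-control
# reads the pre-upgrade internal topics. The file path is a placeholder.
cat > /tmp/legacy-separator.properties <<'EOF'
replication.policy.separator=.
EOF

# Then export the translated offsets from the old topics, for example:
#   srm-control offsets --config /tmp/legacy-separator.properties \
#     --source primary --target secondary --group group1 --export > out.csv
```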