Kafka

You must clean up metadata on the broker hosts after migrating the Ambari-managed HDP cluster to CDP Private Cloud Base. If you are upgrading to CDP Private Cloud Base 7.1.7, you can skip step 3, step 4, and step 5.

Cleanup Metadata on Broker Hosts
  1. On each broker host, remove the ${log.dirs}/meta.properties file. For example, if the Kafka log.dirs property points to /grid/0/kafka-logs, the command is

    #mv /grid/0/kafka-logs/meta.properties /tmp

    If the Kafka log.dirs property points to /kafka-logs, the command is

    #mv /kafka-logs/meta.properties /tmp
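If log.dirs lists several directories, each one holds its own meta.properties file. The following is a minimal sketch of automating the move; backup_meta is an illustrative helper (not a Cloudera-provided tool), and the log directory paths shown are assumptions to be replaced with your broker's actual log.dirs value.

```shell
# Hypothetical helper: move meta.properties out of every Kafka log
# directory. Pass the comma-separated value of the broker's log.dirs
# property.
backup_meta() {
  local dirs="$1"
  local dir
  for dir in ${dirs//,/ }; do
    if [ -f "$dir/meta.properties" ]; then
      # Flatten the path into the backup name so files from different
      # log directories do not overwrite each other in /tmp.
      mv "$dir/meta.properties" "/tmp/meta.properties.${dir//\//_}"
    fi
  done
}

# Example (assumed paths): backup_meta "/grid/0/kafka-logs,/grid/1/kafka-logs"
```

Moving the files to /tmp instead of deleting them keeps a backup you can consult later, for example to recover each broker's original broker.id.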

  2. Set the Kafka port value. For more information, see Change Kafka Port Value.
  3. Enable the kerberos.auth.enable property. For more information, see Kafka cluster Kerberos.
  4. In the Ambari Kafka configuration, get the value of the zookeeper.connect property. Take the path that follows the final port number and set it as the Cloudera Manager Kafka zookeeper.chroot configuration. For example, if zookeeper.connect=hostname1:port1,hostname2:port2,hostname3:port3/chroot/path, then the path is /chroot/path. If zookeeper.connect=hostname1:port1,hostname2:port2,hostname3:port3, then the path is /, which is also the default.
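The extraction above can be sketched as a small shell function; chroot_from_connect is a hypothetical helper for illustration, and the connect strings passed to it are the examples from this step, not real hosts.

```shell
# Hypothetical helper: derive the zookeeper.chroot value from a
# zookeeper.connect string.
chroot_from_connect() {
  # The host:port list never contains "/", so everything from the
  # first "/" onward is the chroot path.
  case "$1" in
    */*) printf '/%s\n' "${1#*/}" ;;
    *)   printf '/\n' ;;  # no path suffix: the chroot is the default /
  esac
}

chroot_from_connect "hostname1:port1,hostname2:port2,hostname3:port3/chroot/path"  # prints /chroot/path
chroot_from_connect "hostname1:port1,hostname2:port2,hostname3:port3"              # prints /
```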
  5. Update the broker IDs:
    1. Log in to Cloudera Manager.
    2. Navigate to Clusters.
    3. Select the Kafka service.
    4. Navigate to the Configuration tab.
    5. Search for broker.id and update the IDs for each hostname using the output of the third step of Extract broker ID Procedure 2.
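Each broker's original ID can be read back from the meta.properties file that was moved aside in step 1. The following is a minimal sketch; broker_id_from_meta is an illustrative helper, and the backup path in the example is an assumption based on the mv command in step 1.

```shell
# Hypothetical helper: print the broker.id recorded in a Kafka
# meta.properties file, which is a simple key=value file.
broker_id_from_meta() {
  awk -F= '$1 == "broker.id" { print $2 }' "$1"
}

# Example (assumed backup location from step 1):
#   broker_id_from_meta /tmp/meta.properties
```

Run this against the backed-up file on each host to get the per-hostname values to enter in Cloudera Manager.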
  6. Start the Kafka service.