Migrating a Key Trustee KMS Server Role Instance to a New Host
When selecting a destination host for a Key Trustee KMS role instance, ensure that the host is:
- secure
- isolated from non-administrator access
- maintained as a system with a higher level of isolation and security requirements than other cluster nodes
Assumptions and Requirements
- Complete the steps one node at a time (migrate to the first new node, verify, then repeat the steps to migrate to the second new node, verify, and so on).
- The sequence of restarts indicated throughout the steps is critical to completing the migration without data loss. Do not skip any of the steps.
- As required for any Cloudera Manager cluster that is configured for HA, ZooKeeper must be deployed as a service (it is by default).
- If TLS and Kerberos are configured (as they typically are in production environments), account for their configuration requirements: each new KMS node must be ready with a Java keystore that presents the correct host certificate and a truststore that trusts the Key Trustee Server (see the keytool sketch after this list).
- For this procedure, assume that the existing KMS proxy host instances are named:
- ktkms01.example.com
- ktkms02.example.com
- Assume that the destination host instances are:
- ktkms03.example.com
- ktkms04.example.com
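For the TLS requirement above, the following is a minimal check sketch rather than the documented procedure: it assumes example keystore and truststore paths under /opt/cloudera/security/jks and a placeholder password, so substitute the paths and passwords used in your environment. Run it on each new KMS host (ktkms03.example.com and ktkms04.example.com) to confirm that the keystore presents that host's certificate and that the truststore contains the CA that signed the Key Trustee Server certificate.
# Confirm the keystore holds a certificate for this host (path and password are examples)
keytool -list -v -keystore /opt/cloudera/security/jks/$(hostname -f)-keystore.jks -storepass changeit | grep "Owner:"
# Confirm the truststore includes the Key Trustee Server's signing CA (path and password are examples)
keytool -list -v -keystore /opt/cloudera/security/jks/truststore.jks -storepass changeit | grep "Owner:"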
Migrating a Key Trustee KMS Server Role Instance to a New Host
- Back up the Key Trustee KMS private key and configuration directory. See Backing Up and Restoring Key Trustee Server and Clients for more information.
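As a hedged illustration of that backup, the sketch below archives the default private key directory (/var/lib/kms-keytrustee, the same path used by the rsync command later in this procedure); the archive name and location are examples, and the linked backup topic remains the authoritative procedure.
# Run as root on the existing KMS host (ktkms01.example.com); adjust the archive path as needed
tar zcvf /root/kms-keytrustee-backup-$(date +%Y%m%d).tar.gz /var/lib/kms-keytrustee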
- Before adding the new role instance, see Resource Planning for Data at Rest Encryption for considerations when selecting a host.
- Run the Add Role Instances wizard for the Key Trustee KMS service.
- Click Select hosts and select the checkbox for the host to which you are adding the new Key Management Server proxy service role instance. Click OK and then Continue.
- On the Review Changes page of the wizard, confirm the authorization code, organization name, and settings, and then click Finish.
- Select and start the new KMS instance.
- Verify in the Security configuration interface that a Kerberos HTTP principal has been created for the new host role instance.
For example, in this scenario the existing KMS Kerberos principal is HTTP/ktkms01.example.com@EXAMPLE.COM, and you must verify that the new host principal HTTP/ktkms03.example.com@EXAMPLE.COM has been created before continuing. If you cannot verify the new host principal, click Generate Missing Credentials on the Kerberos Credentials tab to generate it before continuing.
If a custom Kerberos keytab retrieval script is in use for Kerberos integration, have those keytabs ready and ingested before proceeding. Refer to Using a Custom Kerberos Keytab Retrieval Script for details.
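One optional way to double-check the principal from the KDC side is sketched below. It assumes an MIT KDC and an admin principal named admin/admin@EXAMPLE.COM, which may not match your environment; with Active Directory or a custom keytab retrieval script, verify through your directory or script tooling instead.
# Query the KDC for the new host principal (the admin principal name is an example)
kadmin -p admin/admin@EXAMPLE.COM -q "getprinc HTTP/ktkms03.example.com@EXAMPLE.COM"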
- If you do not have a ZooKeeper service in your cluster, add one using the instructions in Adding a Service.
- Synchronize the Key Trustee KMS private key. In this use case, you log in to the original working KMS instance host, ktkms01.example.com, and synchronize the keys from it to the new host to which you are migrating, ktkms03.example.com. Copy the private key over the network by running the following command as a privileged (root) user on the original Key Trustee KMS host:
rsync -zav /var/lib/kms-keytrustee/keytrustee/.keytrustee root@ktkms03.example.com:/var/lib/kms-keytrustee/keytrustee/.
Replace ktkms03.example.com with the hostname of the Key Trustee KMS host to which you are migrating.
- To verify that the Key Trustee KMS private keys synchronized successfully, compare the MD5 hashes of the private keys. On each Key Trustee KMS host, run the following command:
$ md5sum /var/lib/kms-keytrustee/keytrustee/.keytrustee/secring.gpg
If the outputs are different, contact Cloudera Support for assistance. Do not attempt to synchronize existing keys. If you overwrite the private key and do not have a backup, any keys encrypted by that private key are permanently inaccessible, and any data encrypted by those keys is permanently irretrievable. If you are configuring Key Trustee KMS high availability for the first time, continue synchronizing the private keys.
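For example, assuming root SSH access from the original KMS host, both fingerprints can be gathered from one terminal; the two sums must be identical before you continue:
# Run on ktkms01.example.com; compare the local and remote private key fingerprints
md5sum /var/lib/kms-keytrustee/keytrustee/.keytrustee/secring.gpg
ssh root@ktkms03.example.com md5sum /var/lib/kms-keytrustee/keytrustee/.keytrustee/secring.gpg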
- Restart the Key Trustee KMS service. After the restart completes, click Close.
- Restart the cluster. This refreshes all the KMS instances in the cluster, and ensures all stale services are restarted.
- Redeploy the client configuration.
- Run the steps in Validating Hadoop Key Operations. Perform the check multiple times to properly exercise the load-balanced KMS nodes. If a command fails during this test, stop immediately, halt the cluster, and contact Cloudera Support.
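As a rough sketch of such a check (the key name keytrustee_test and the authorized user are assumptions, and Validating Hadoop Key Operations remains the authoritative procedure), run the following as a user permitted to perform key operations, repeating it so that requests reach each KMS instance behind the load balancer:
# Create a test key, confirm it is listed with its metadata, then remove it
hadoop key create keytrustee_test
hadoop key list -metadata
hadoop key delete keytrustee_test
# The delete command prompts for confirmation; answer Y to remove the test key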
- Remove the old KMS instance being migrated: first stop the KMS instance (ktkms01.example.com), and then delete it.
- Restart the cluster. This refreshes all the KMS instances in the cluster, and ensures all stale services are restarted.
- Redeploy the client configuration.
- Re-run the steps in Validating Hadoop Key Operations. Perform the check multiple times to properly exercise the load-balanced KMS nodes. If a command fails during this test, stop immediately and halt the cluster.
- Repeat these steps for any additional KMS node migrations you wish to perform. In the use case shown here, for example, you would repeat the steps to migrate the ktkms02.example.com host to the ktkms04.example.com host.