Migrating a Key Trustee KMS Server Role Instance to a New Host

In some cases (for example, after upgrading your servers) it is desirable to migrate a Key Trustee KMS Server role instance to a new host. This procedure describes how to move a Key Trustee KMS proxy service role instance from an existing cluster host to another cluster host. The security and performance requirements for the KMS proxy call for a system dedicated to the role, not shared with CDP or other services. The KMS proxy represents a service that must be:
  • secure
  • isolated from non-administrator access
  • maintained with a higher level of isolation and security than other cluster nodes

Assumptions and Requirements

The following assumptions and requirements apply during the migration of a Key Trustee KMS server role instance to a new host as described in this procedure:
  • Complete the steps one node at a time (migrate to the first new node, verify, then repeat the steps to migrate to the second new node, verify, and so on).
  • The sequence of restarts indicated throughout the steps is critical to completing the migration without data loss. Do not skip any of the steps.
  • As required for any KMS service configured for HA, ZooKeeper must be deployed as a service (true by default). Refer to “Adding a Service” for details about how to add services.
  • Review the TLS and Kerberos configuration requirements: the new KMS nodes must be ready with a Java keystore and truststore that present the correct host certificates while also trusting the Key Trustee Server. If the custom Kerberos keytab retrieval script is in use for Kerberos integration, have those keytabs ready and ingested before proceeding. Refer to “Using a custom Kerberos keytab retrieval script” for details.
  • This procedure assumes that the existing KMS proxy host instances are named:
    • ktkms01.example.com
    • ktkms02.example.com
  • Assume that the host destination instances are:
    • ktkms03.example.com
    • ktkms04.example.com

Migrating a Key Trustee KMS Server Role Instance to a New Host

  1. Back up the Key Trustee KMS private key and configuration directory. See “Back up Key Trustee Server clients” for more information.
  2. Before adding the new role instance, see “Resource Planning for Data at Rest Encryption” for considerations when selecting a host.
  3. Run the Add Role Instances (“Role Instances”) wizard for the Key Trustee KMS service (Key Trustee KMS service > Actions > Add Role Instances).
  4. Click Select hosts and select the checkbox for the host to which you are adding the new Key Management Server proxy service role instance. Click OK and then Continue.
  5. On the Review Changes page of the wizard, confirm the authorization code, organization name, and settings, and then click Finish.
  6. Select and start the new KMS instance (Actions for Selected > Start).
  7. Verify that a Kerberos HTTP principal has been created for that specific host role instance in the Security configuration interface (Administration > Security > Kerberos Credentials).

    For example, in this scenario the existing KMS Kerberos principal is HTTP/ktkms01.example.com@EXAMPLE.COM, and you must verify the new host principal HTTP/ktkms03.example.com@EXAMPLE.COM has been created before continuing. If you cannot verify the new host principal, then click Generate Missing Credentials on the Kerberos Credentials tab to generate it before continuing.

    If the custom Kerberos keytab retrieval script is in use for Kerberos integration, it is important to have those keytabs ready and ingested before proceeding. Refer to “Using a custom Kerberos keytab retrieval script” for details.
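As a quick sanity check on the new host, you can build the expected principal name and then confirm that the keytab authenticates as that principal. This is a sketch only: the hostname and realm come from this procedure's scenario, and the keytab path is a hypothetical placeholder.

```shell
# Build the expected HTTP principal for the new KMS host (values taken
# from this scenario; substitute your own host and realm).
NEW_HOST="ktkms03.example.com"
REALM="EXAMPLE.COM"
PRINCIPAL="HTTP/${NEW_HOST}@${REALM}"
echo "Expecting principal: ${PRINCIPAL}"

# On the new host, verify the keytab contains that principal and can
# authenticate (MIT Kerberos client tools; keytab path is hypothetical):
#   klist -kt /path/to/kms.keytab
#   kinit -kt /path/to/kms.keytab "${PRINCIPAL}" && klist
```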
  8. Synchronize the Key Trustee KMS private key. In this use case, log in to the original working KMS instance host (ktkms01.example.com) and synchronize the keys from that existing KMS host to the new host to which you are migrating (ktkms03.example.com). Copy the private key over the network by running the following command as a privileged (root) user on the original Key Trustee KMS host:
    rsync -zav /var/lib/kms-keytrustee/keytrustee/.keytrustee root@ktkms03.example.com:/var/lib/kms-keytrustee/keytrustee/.
    Replace the hostname (here we are using ktkms03.example.com) with the hostname of the Key Trustee KMS host to which you are migrating.
  9. To verify that the Key Trustee KMS private keys successfully synchronized, compare the MD5 hash of the private keys. On each Key Trustee KMS host, run the following command:
    md5sum /var/lib/kms-keytrustee/keytrustee/.keytrustee/secring.gpg

    If the outputs are different, contact Cloudera Support for assistance. Do not attempt to synchronize existing keys. If you overwrite the private key and do not have a backup, any keys encrypted by that private key are permanently inaccessible, and any data encrypted by those keys is permanently irretrievable. If you are configuring Key Trustee KMS high availability for the first time, continue synchronizing the private keys.
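The comparison above can be scripted so that a mismatch is impossible to miss. The helper below is a hypothetical sketch; the example usage assumes passwordless root SSH from your terminal to both KMS hosts.

```shell
# Compare the private-key checksums from the existing and new KMS hosts.
KEYFILE=/var/lib/kms-keytrustee/keytrustee/.keytrustee/secring.gpg

hashes_match() {
  # $1, $2: checksum strings; non-empty and identical means success.
  [ -n "$1" ] && [ "$1" = "$2" ]
}

# Example usage (hosts from this procedure's scenario):
# h1=$(ssh root@ktkms01.example.com "md5sum $KEYFILE" | awk '{print $1}')
# h2=$(ssh root@ktkms03.example.com "md5sum $KEYFILE" | awk '{print $1}')
# if hashes_match "$h1" "$h2"; then
#   echo "private keys match"
# else
#   echo "MISMATCH: contact Cloudera Support; do not attempt to synchronize"
# fi
```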

  10. Restart the Key Trustee KMS service (Key Trustee KMS service > Actions > Restart). After the restart completes, click Close.
  11. Restart the cluster (“Starting, Stopping, Refreshing, and Restarting a Cluster”). This refreshes all the KMS instances in the cluster, and ensures all stale services are restarted.
  12. Redeploy the client configuration (Home > Cluster > Deploy Client Configuration).
  13. Run the steps in “Managing Encryption Keys and Zones”. Perform the check multiple times to properly exercise the load-balanced KMS nodes. If a command fails during this test, stop immediately, halt the cluster, and contact Cloudera Support.
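Because the KMS endpoints are load balanced, a single successful command may only prove that one instance works. One way to repeat the check, sketched below with a hypothetical helper, is to loop the same `hadoop key` command several times so that each instance is likely to be exercised:

```shell
# Repeat a verification command so that successive requests rotate
# across the load-balanced KMS instances. ATTEMPTS is an arbitrary
# choice; raise it for clusters with many KMS instances.
ATTEMPTS=5

run_check() {
  # $1: the command line to run on a cluster gateway host.
  for i in $(seq 1 "$ATTEMPTS"); do
    if ! $1; then
      echo "attempt $i failed: halt the cluster and contact Cloudera Support"
      return 1
    fi
  done
  echo "all $ATTEMPTS attempts succeeded"
}

# Example usage:
# run_check "hadoop key list"
```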
  14. Remove the old KMS instance being migrated. First stop the KMS instance ktkms01.example.com (Select KMS instance > Actions for Selected > Stop), and then delete it (Select KMS instance > Actions for Selected > Delete).
    Delete the /var/lib/kms-keytrustee directories on the old KMS instance only after configuring and verifying the new KMS instances, and only after completing the following tasks:
    1. Remove the old KMS instances from the KMS service.
    2. Verify that you can read previously encrypted data only using the new KMS instances.
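Task 2 above can be made concrete by reading back a file that was written into an encryption zone before the migration. The helper below is a sketch: the zone path is hypothetical, and the read command is parameterized (defaulting to `hdfs dfs -cat`) so the logic can be exercised outside a cluster.

```shell
# Confirm a pre-migration file in an encryption zone is still readable,
# i.e. that decryption now works through the new KMS instances alone.
check_readable() {
  # $1: path of a file known to predate the migration.
  local cmd="${READ_CMD:-hdfs dfs -cat}"
  if $cmd "$1" > /dev/null 2>&1; then
    echo "read OK through new KMS instances: $1"
  else
    echo "FAILED to read $1: do not delete the old KMS material"
    return 1
  fi
}

# Example usage (hypothetical encryption-zone path):
# check_readable /enc_zone/known_premigration_file.txt
```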
  15. Restart the cluster (“Starting, Stopping, Refreshing, and Restarting a Cluster”). This refreshes all the KMS instances in the cluster, and ensures all stale services are restarted.
  16. Redeploy the client configuration (Home > Cluster > Deploy Client Configuration).
  17. Re-run the steps in “Managing Encryption Keys and Zones”. Perform the check multiple times to properly exercise the load-balanced KMS nodes. If a command fails during this test, stop immediately and halt the cluster.
  18. Repeat these steps for any additional KMS node migrations you wish to perform. In the use case shown here, for example, you would repeat the steps to migrate the ktkms02.example.com host to the ktkms04.example.com host.