Migrate to a multiple Kudu master configuration

Before migrating to a multi-master Kudu configuration, you need to perform several planning steps, such as deciding on the number of masters and choosing the nodes that will host the new Kudu masters.

The migration procedure does not require stopping all the Kudu processes; they can be restarted without incurring downtime.

  • You must decide how many masters to use.

    The number of masters should be odd, because an even number of masters provides no benefit over having one fewer master: a majority is still required, so the failure tolerance does not improve (for example, both three and four masters tolerate only one failure). Three- or five-master configurations are recommended, as they can tolerate the failure of one or two masters, respectively.

  • You must establish a maintenance window.

    A one-hour maintenance window should be sufficient. During this window the Kudu cluster might become unavailable if a problem occurs during the procedure.

  1. Optional: Configure a DNS alias for the new master.
    The alias can be:
    • a CNAME record: if the machine already has an A record in DNS
    • an A record: if the machine is only known by its IP address
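    For illustration, depending on which case applies, the alias could look like one of the following entries in a BIND-style zone file (the domain example.com, the alias master-2, and the target host are hypothetical):

      ; CNAME alias: the machine node-3.example.com already has an A record
      master-2    IN CNAME  node-3.example.com.
      ; A record: the machine is only known by its IP address
      master-2    IN A      10.0.0.23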
  2. Perform the following preparatory steps for the new master that you are planning to add:
    1. Choose a node in the cluster where there is no running Kudu master yet and which has enough spare CPU and memory capacity to host a new Kudu master.
      The master generates very little load, so it can be colocated with other data services or load-generating processes, though not with another Kudu master from the same configuration.
    2. Choose and record the directories where the new master’s data and WAL will be located.
    3. Choose and record the port the master should use for RPCs.
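    For example, the recorded values correspond to the following Kudu master flags (the paths and port shown here are hypothetical; in Cloudera Manager they are set through the Kudu Master role's configuration fields):

      --fs_wal_dir=/data/kudu/master/wal
      --fs_data_dirs=/data/kudu/master/data
      --rpc_bind_addresses=0.0.0.0:7051    # 7051 is Kudu's default master RPC port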
  3. In Cloudera Manager, add a new Kudu Master role to the selected new master node:
    1. Select the Kudu service.
    2. Select Instances.
    3. Click Add Role Instances.
    4. Click Select hosts under Master × 1.
    5. Select the host node and click OK.
    6. Click Continue.
    7. Review the changes and if everything is correct, click Finish.
    The newly added master role instance is now commissioned but is not yet part of the cluster. To make it part of the cluster, you need to start it.
  4. Start the newly added master role instance.
    1. Select the master role instance.
    2. Go to Actions for Selected > Start.

    Upon starting, the new Kudu master registers with the existing master(s).

  5. Review the log files to make sure no errors have been reported, and click Close.
    Now, the newly added master role instance is a part of the cluster.
  6. To propagate the new master membership configuration, restart all Kudu masters (the old ones and the newly added one).
    This can be done one by one (that is, a rolling restart) or all at once.
  7. After all the Kudu masters are up and running, open the Web UI of the newly added master and click the Masters tab.
    A page with the Live Masters table appears.
  8. Verify the following:
    • The table shows all the Kudu masters: the old ones and the newly added one.
    • There is one master marked as LEADER and the rest are marked as FOLLOWER in the Role column.
  9. In the Live Masters table, in the row of the leader Kudu master, click the link in the UUID column.
    The Web UI of the current leader master appears.
  10. In the Web UI of the current leader master, click the Masters tab.
    A page with the Live Masters table appears.
  11. Verify the following:
    • The table shows all the Kudu masters in the cluster: the old ones and the newly added one.
    • The information in the Registration column is correct for the newly added Kudu master: the entries in the rpc_addresses field match the hostnames and/or IP addresses of the nodes hosting the KUDU_MASTER roles.
    • The information in the Start time column reflects the time of the restart performed in step 6.
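    The same check can be scripted. A minimal sketch using curl, assuming the default Kudu master web UI port 8051 and hypothetical hostnames:

      # Fetch the /masters page from each master; the reported membership
      # should be identical on all of them.
      for m in master-1.example.com master-2.example.com master-3.example.com; do
        curl -s "http://$m:8051/masters"
      done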
  12. If you have Kudu tables that are accessed from Impala and you did not set up DNS aliases, manually update the underlying database that provides storage for the Hive Metastore (HMS):
    1. Connect to the HMS database.
    2. Run an SQL statement similar to the following example:
      UPDATE TABLE_PARAMS
      SET PARAM_VALUE =
        'master-1.example.com,master-2.example.com,master-3.example.com'
      WHERE PARAM_KEY = 'kudu.master_addresses' AND PARAM_VALUE = 'master-1.example.com';
    3. In impala-shell, run the following command: INVALIDATE METADATA;
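    To confirm that the update took effect, a query such as the following can be run; TBL_ID, PARAM_KEY, and PARAM_VALUE are standard columns of the stock TABLE_PARAMS table in the HMS schema:

      SELECT TBL_ID, PARAM_VALUE
      FROM TABLE_PARAMS
      WHERE PARAM_KEY = 'kudu.master_addresses';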
  13. After adding all the desired masters to the cluster, modify the value of the tserver_master_addrs configuration parameter for each tablet server. The new value must be a comma-separated list of masters, where each entry is a string of the form <hostname>:<port>:
    • hostname is the master's hostname
    • port is the master's RPC port number
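    For example, with three hypothetical masters listening on Kudu's default RPC port 7051, the parameter would be set as follows:

      --tserver_master_addrs=master-1.example.com:7051,master-2.example.com:7051,master-3.example.com:7051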
  14. Restart all the tablet servers to pick up the new masters' configuration.
  15. To verify that all masters are working properly, consider performing the following checks:
    • Using a browser, visit each master’s web UI and navigate to the /masters page.

      All the masters should be listed there with one master in the LEADER role and the others in the FOLLOWER role. The contents of /masters on each master should be the same.

    • Run a Kudu system check (ksck) on the cluster using the kudu command line tool, as in the example after this list.

    • If applicable, run a few quick SQL queries against a couple of migrated Kudu tables using impala-shell or Hue.
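    For reference, a ksck invocation could look like the following (hypothetical hostnames; list all master addresses so the tool can reach the cluster even if one master is down):

      kudu cluster ksck master-1.example.com:7051,master-2.example.com:7051,master-3.example.com:7051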