Migrate to a multiple Kudu master configuration
Before migrating to a multi-master Kudu setup, you need to perform several planning steps, such as deciding on the number of masters and choosing the nodes where the new Kudu masters will be added.
The migration procedure does not require stopping all the Kudu processes; they can be restarted without incurring downtime.
- You must decide how many masters to use.
The number of masters should be odd, because an even number of masters does not provide any benefit over having one fewer master. Three- or five-master configurations are recommended, as they can tolerate the failure of one or two masters respectively.
- You must establish a maintenance window.
A one-hour maintenance window should be sufficient. During this time the Kudu cluster might become unavailable if a problem occurs during the procedure.
Configure a DNS alias for the new master.
The alias can be:
- a CNAME record: if the machine already has an A record in DNS
- an A record: if the machine is only known by its IP address
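As an illustration, the two alias options can be written as BIND-style zone records. The host names and IP address below are hypothetical, not taken from the procedure:

```
; Machine that already has an A record in DNS: point a CNAME at it.
kudu-master-2.example.com.   IN  CNAME  node-17.example.com.
; Machine known only by its IP address: give the alias an A record.
kudu-master-3.example.com.   IN  A      10.0.0.12
```

Using an alias rather than the raw hostname makes it possible to later move a master to a different machine without reconfiguring every client and tablet server.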
Perform the following preparatory steps for the new master that you are planning to add:
- Choose a node in the cluster where there is no running Kudu master yet and which has enough spare CPU and memory capacity to host a new Kudu master.
The master generates very little load so it can be collocated with other data services or load-generating processes, though not with another Kudu master from the same configuration.
- Choose and record the directories where the new master’s data and WAL will be located.
- Choose and record the port the master should use for RPCs.
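The recorded directory and port choices correspond to real kudu-master startup flags (`--fs_wal_dir`, `--fs_data_dirs`, and `--rpc_bind_addresses`). The following sketch only prints an example flag line; the paths are hypothetical, and 7051 is Kudu's default master RPC port:

```shell
#!/bin/sh
# Hypothetical choices recorded during the preparatory steps.
WAL_DIR=/data/1/kudu-master/wal
DATA_DIRS=/data/1/kudu-master/data
RPC_PORT=7051   # Kudu's default master RPC port

# Print the flag line a kudu-master process would be started with.
echo "--fs_wal_dir=${WAL_DIR} --fs_data_dirs=${DATA_DIRS} --rpc_bind_addresses=0.0.0.0:${RPC_PORT}"
```

In a Cloudera Manager deployment these values are set through the role's configuration rather than passed on the command line, but they map onto the same flags.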
In Cloudera Manager, add a new Kudu Master role to the selected new master node:
- Select the Kudu service.
- Select Instances.
- Click Add Role Instances.
- Click Select hosts under Master × 1.
- Select the host node and click OK.
- Click Continue.
- Review the changes and if everything is correct, click Finish.
Now, the newly added master role instance is commissioned but not part of the cluster. To make it a part of the cluster, you need to start it.
Start the newly added master role instance.
- Select the master role instance.
- Go to Actions for Selected > Start.
- Review the changes in the log files, and click Close.
Now, the newly added master role instance is a part of the cluster.
- If you have Kudu tables that are accessed from Impala and you did
not set up DNS aliases, update the HMS database manually in the underlying database that
provides the storage for HMS:
- Connect to the HMS database.
- Run an SQL statement similar to the following example:
UPDATE TABLE_PARAMS SET PARAM_VALUE = 'master-1.example.com,master-2.example.com,master-3.example.com' WHERE PARAM_KEY = 'kudu.master_addresses' AND PARAM_VALUE = 'master-1.example.com';
- In impala-shell, run the following command so that Impala picks up the changed metadata:
INVALIDATE METADATA;
- After adding all the desired masters to the cluster, modify the value of the tserver_master_addrs configuration parameter for each tablet server. The new value must be a comma-separated list of masters, where each entry is a string of the form hostname:port:
hostname is the master's hostname
port is the master's RPC port number
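For example, the flag value can be built from the master hostnames and RPC ports. The hostnames below are hypothetical, and 7051 is Kudu's default master RPC port; this is a sketch, not the exact syntax used by your deployment tool:

```shell
#!/bin/sh
# Hypothetical master addresses; 7051 is Kudu's default master RPC port.
MASTER_ADDRS="master-1.example.com:7051,master-2.example.com:7051,master-3.example.com:7051"

# Each tablet server would be (re)started with this flag value.
echo "--tserver_master_addrs=${MASTER_ADDRS}"
```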
- Restart all the tablet servers to pick up the new masters’ configuration.
- To verify that all masters are working properly, consider performing the following sanity checks:
Using a browser, visit each master's web UI and navigate to the /masters page.
All the masters should be listed there, with one master in the LEADER role and the others in the FOLLOWER role. The contents of the /masters page on each master should be the same.
Run a Kudu system check (ksck) on the cluster using the kudu command line tool.
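For example, with the hypothetical master addresses used earlier (kudu cluster ksck is the real subcommand; the guard only makes the sketch runnable on machines where the kudu CLI is absent):

```shell
#!/bin/sh
# Comma-separated list of all master addresses (hypothetical hosts).
MASTERS="master-1.example.com:7051,master-2.example.com:7051,master-3.example.com:7051"

# ksck checks the health of the whole cluster; it needs the kudu CLI
# and a reachable cluster.
if command -v kudu >/dev/null 2>&1; then
  kudu cluster ksck "$MASTERS"
else
  echo "kudu CLI not found; would run: kudu cluster ksck $MASTERS"
fi
```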
- If applicable, run a few quick SQL queries against a couple of migrated Kudu tables using impala-shell or Hue.