Runtime upgrade of the cluster

Learn how to resume the stopped nodes in the cluster before performing a runtime upgrade.

The cluster upgrade might fail if the cluster contains nodes in the Stopped state. Before running the upgrade, either restart these stopped nodes or drop them from the cluster; dropped nodes can be added back after the upgrade.

Run the following command to increase the minimum number of Compute nodes in the cluster to the maximum number of Compute nodes allowed in the cluster. This triggers a restart of the stopped nodes, after which the cluster can be upgraded.

cdp opdb update-database --environment-name jrh16-cod-7216 --database-name jrh13-7216 \
        --auto-scaling-parameters '{"minComputeNodesForDatabase":<MAX_COMPUTE_NODES>}'

Revert minComputeNodesForDatabase to its original value after the upgrade.
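The revert is the same update-database call with the original minimum restored. A minimal sketch, reusing the environment and database names from the example above and assuming the pre-upgrade minimum was 0 (substitute your own value):

```shell
# Restore the pre-upgrade minimum Compute node count.
# The value 0 is a placeholder; use the minimum your database had
# before you raised it for the upgrade.
cdp opdb update-database --environment-name jrh16-cod-7216 --database-name jrh13-7216 \
        --auto-scaling-parameters '{"minComputeNodesForDatabase":0}'
```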

Alternatively, drop all the stopped Compute nodes in the cluster using the following CDP CLI command.

cdp opdb update-database --environment-name <env_name> --database-name <db_name> \
        --auto-scaling-parameters '{"maxComputeNodesForDatabase":<number_of_running_compute_nodes>}'
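Before lowering the maximum, it helps to confirm how many Compute nodes are currently running. A sketch of one way to check, assuming the CDP CLI describe-database subcommand is available in your version (the exact fields in the returned JSON can vary by CDP CLI release):

```shell
# Inspect the database's current state and node details so you can
# count the running Compute nodes before lowering the limit.
cdp opdb describe-database --environment-name <env_name> --database-name <db_name>
```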