Upgrade from 1.5.0 or 1.5.1 to 1.5.2 (ECS)
You can upgrade your existing CDP Private Cloud Data Services 1.5.0 or 1.5.1 to 1.5.2 without performing an uninstall.
- Review the Software Support Matrix for ECS.
As of CDP Private Cloud Data Services 1.5.1, external Control Plane metadata databases are no longer supported. If you are upgrading from CDP Private Cloud Data Services 1.5.0 to 1.5.2 and you were previously using an external Control Plane database, you must run the following
psql commands to create the required databases. You must also ensure that the two new databases are owned by the common database users known to the Control Plane.
CREATE DATABASE "db-cadence"; CREATE DATABASE "db-cadence-visibility";
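Putting the step together in a psql session, a minimal sketch (hyphenated identifiers must be double-quoted in PostgreSQL; cdp_cp_user is a placeholder for the common database user already known to your Control Plane):

```sql
-- Hyphenated database names must be double-quoted in PostgreSQL.
CREATE DATABASE "db-cadence";
CREATE DATABASE "db-cadence-visibility";

-- cdp_cp_user is a placeholder; substitute the common database user
-- known to the Control Plane so both new databases have the right owner.
ALTER DATABASE "db-cadence" OWNER TO cdp_cp_user;
ALTER DATABASE "db-cadence-visibility" OWNER TO cdp_cp_user;
```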
- If you are upgrading from CDP Private Cloud Data Services 1.5.0 to 1.5.2, and you were previously using an external Control Plane database, you must regenerate the DB certificate with SAN before upgrading to CDP Private Cloud Data Services 1.5.2. For more information see Pre-upgrade - Regenerate external DB cert as SAN (if applicable).
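Before regenerating, you may want to confirm whether the current DB certificate actually lacks a SAN. A minimal sketch using openssl (the certificate path is a placeholder; the -ext option requires OpenSSL 1.1.1 or later):

```shell
# Print the subjectAltName extension of a PEM certificate, if present.
# Empty output means the certificate has no SAN and must be regenerated
# before upgrading.
check_san() {
  openssl x509 -in "$1" -noout -ext subjectAltName 2>/dev/null
}

# Example (path is a placeholder):
#   check_san /path/to/db-cert.pem
```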
- The Docker registry that is configured with the cluster must remain the same during the upgrade process. If CDP Private Cloud Data Services 1.5.0 or 1.5.1 was installed using the public Docker registry, CDP Private Cloud Data Services 1.5.2 should also use the public Docker registry, and not be configured to use the embedded Docker registry. If you would like to use a different configuration for the Docker registry, you must perform a new installation of CDP Private Cloud Data Services.
In Cloudera Manager, navigate to CDP Private Cloud Data Services, click the actions icon, then click Update.
On the Getting Started page, you can select the install method: Air Gapped or Internet.
Internet install method
Air Gapped install method
On the Collect Information page, click Continue.
On the Install Parcels page, click Continue.
On the Update Progress page, you can see the progress of your upgrade. Click Continue after the upgrade is complete.
After the upgrade is complete, the Summary page appears. You can now Launch CDP Private Cloud from here.
If you see a Longhorn Health Test message about a degraded Longhorn volume, wait for the cluster repair to complete.
Or you can navigate to the CDP Private Cloud Data Services page and click Open CDP Private Cloud Data Services. CDP Private Cloud Data Services opens in a new window.
- If the upgrade stalls, do the following:
- Check the status of all pods by running the following command on
the ECS server node:
kubectl get pods --all-namespaces
- If there are any pods stuck in the "Terminating" state, force-terminate each one using the following command:
kubectl delete pods <NAME OF THE POD> -n <NAMESPACE> --grace-period=0 --force
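The pod check and force-delete above can be scripted. A minimal sketch, assuming kubectl is configured on the ECS server node; the filter itself is plain awk, shown separately so it can be exercised on sample output:

```shell
# Print "<namespace> <pod>" for every pod in Terminating state.
# Expects `kubectl get pods --all-namespaces` output on stdin; the STATUS
# column is the fourth field in that output.
list_terminating() {
  awk 'NR > 1 && $4 == "Terminating" { print $1, $2 }'
}

# Against a live cluster (assumes kubectl is configured):
#   kubectl get pods --all-namespaces | list_terminating |
#     while read -r ns pod; do
#       kubectl delete pods "$pod" -n "$ns" --grace-period=0 --force
#     done
```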
If the upgrade still does not resume, continue with the remaining steps.
- In the Cloudera Manager Admin Console, go to the ECS service and click Web UI > Storage UI.
The Longhorn dashboard opens.
Check the "In Progress" section of the dashboard to see whether any volumes are stuck in the attaching/detaching state. If a volume is in that state, reboot its host.
- Check the status of all pods by running the following command on the ECS server node:
- You may see the following error message during the Upgrade Cluster > Reapplying all settings > kubectl-patch step:
kubectl rollout status deployment/rke2-ingress-nginx-controller -n kube-system --timeout=5m error: timed out waiting for the condition
If you see this error, do the following:
- Check whether all the Kubernetes nodes are ready for scheduling. Run the following command from the ECS Server node:
kubectl get nodes
You will see output similar to the following:
NAME      STATUS                     ROLES                       AGE    VERSION
<node1>   Ready,SchedulingDisabled   control-plane,etcd,master   103m   v1.21.11+rke2r1
<node2>   Ready                      <none>                      101m   v1.21.11+rke2r1
<node3>   Ready                      <none>                      101m   v1.21.11+rke2r1
<node4>   Ready                      <none>                      101m   v1.21.11+rke2r1
- For any node showing a status of SchedulingDisabled, run the following command from the ECS Server node to re-enable scheduling:
kubectl uncordon <node name>
You will see output similar to the following:
node/<node name> uncordoned
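The node check above can likewise be scripted. A minimal sketch, assuming kubectl is configured on the ECS Server node; the filter is plain awk, so it can be exercised on sample `kubectl get nodes` output:

```shell
# Print the names of nodes whose STATUS includes SchedulingDisabled.
# Expects `kubectl get nodes` output on stdin; STATUS is the second field.
cordoned_nodes() {
  awk 'NR > 1 && $2 ~ /SchedulingDisabled/ { print $1 }'
}

# Against a live cluster (assumes kubectl is configured):
#   kubectl get nodes | cordoned_nodes | xargs -r -n1 kubectl uncordon
```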
- Scale down and scale up the rke2-ingress-nginx-controller pod by running the following command on the ECS Server node:
kubectl delete pod rke2-ingress-nginx-controller-<pod number> -n kube-system
- Resume the upgrade.
- Check whether all the Kubernetes nodes are ready for scheduling. Run the following command from the ECS Server node:
- After upgrading, the Cloudera Manager admin role may be missing the Host Administrators privilege in an upgraded cluster. The cluster administrator should run the following command to manually add this privilege to the role:
ipa role-add-privilege <cmadminrole> --privileges="Host Administrators"
- If you specified a custom certificate, select the ECS cluster in Cloudera Manager, then select Actions > Update Ingress Controller. This command copies the cert.pem and key.pem files from the Cloudera Manager server host to the ECS Management Console host.
- After upgrading, you can enable the unified time zone feature to synchronize the ECS cluster time zone with the Cloudera Manager Base time zone. When upgrading from earlier versions of CDP Private Cloud Data Services to 1.5.2, unified time zone is disabled by default to avoid affecting timestamp-sensitive logic. For more information, see ECS unified time zone.