Removing cluster hosts and Kubernetes nodes

You stop the roles assigned to the hosts that you want to repurpose for the Cloudera Private Cloud installation. Next, you delete the nodes from Kubernetes. Finally, you return to Cloudera Manager and remove the hosts from the CDSW cluster.

If you want to repurpose nodes from CDP Private Cloud Base, you must use a different procedure from the one described below. For more information, see Deleting hosts (CDP Private Cloud).
  1. In Cloudera Manager, go to CDSW service > Status, select the name of each host you want to remove, and click Actions > Stop Roles on Hosts.
    After you click Confirm, you see a success message.
  2. In Instances, select the Docker Daemon and Worker roles assigned to ageorge-cdsw-4.com and ageorge-cdsw-5.com, and click Actions > Delete.
    In Instances, under Hostname, ageorge-cdsw-4.com and ageorge-cdsw-5.com are no longer listed.
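    If you have many roles to remove, the same deletion can be scripted through the Cloudera Manager REST API, which exposes a role deletion endpoint. The following is a minimal sketch, not the documented procedure; the Cloudera Manager host cm-host, the admin:admin credentials, the API version v41, the cluster name Cluster 1, and the service name cdsw are all assumptions for illustration. Role names are generated by Cloudera Manager, so list them first and substitute the real names:
    # List the roles to find the generated names of the Docker Daemon and Worker roles
    # (assumed host, credentials, API version, cluster, and service names):
    curl -u admin:admin "http://cm-host:7180/api/v41/clusters/Cluster%201/services/cdsw/roles"
    # Delete each role found on the hosts being removed (<role-name> is a placeholder):
    curl -u admin:admin -X DELETE "http://cm-host:7180/api/v41/clusters/Cluster%201/services/cdsw/roles/<role-name>"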
  3. Using kubectl commands, delete the nodes from Kubernetes, and then check that the nodes are gone.
    [root@ageorge-cdsw-1 ~]# kubectl delete node ageorge-cdsw-4.com
    node "ageorge-cdsw-4.com" deleted
    [root@ageorge-cdsw-1 ~]# kubectl delete node ageorge-cdsw-5.com
    node "ageorge-cdsw-5.com" deleted
                        
                        
    [root@ageorge-cdsw-1 ~]# kubectl get nodes
    NAME                 STATUS   ROLES    AGE   VERSION
    ageorge-cdsw-1.com   Ready    master   61m   v1.19.15
    ageorge-cdsw-2.com   Ready    <none>   61m   v1.19.15
    ageorge-cdsw-3.com   Ready    <none>   61m   v1.19.15              
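    If pods are still scheduled on a host at this point, kubectl delete node removes the node object without evicting them gracefully; you can cordon and drain the host first. A minimal sketch, assuming the v1.19 client shown above (newer releases rename --delete-local-data to --delete-emptydir-data):
    [root@ageorge-cdsw-1 ~]# kubectl drain ageorge-cdsw-4.com --ignore-daemonsets --delete-local-data
    [root@ageorge-cdsw-1 ~]# kubectl drain ageorge-cdsw-5.com --ignore-daemonsets --delete-local-data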
  4. Log in to each host, ageorge-cdsw-4.com and ageorge-cdsw-5.com, and reset Kubernetes.
    kubeadm reset
    Output looks something like this:
    [reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
    [reset] Are you sure you want to proceed? [y/N]:
  5. Respond y to the prompt, and follow the instructions in the resulting output.
    Output looks something like this:
    [preflight] Running pre-flight checks
    ...
    The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
                            
    The reset process does not reset or clean up iptables rules or IPVS tables.
    If you wish to reset iptables, you must do so manually by using the "iptables" command.
                            
    If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
    to reset your system's IPVS tables.
                            
    The reset process does not clean your kubeconfig files and you must remove them manually.
    Please, check the contents of the $HOME/.kube/config file.                
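    The manual cleanup that the output asks for can be scripted on each reset host. A minimal sketch, assuming iptables rather than IPVS networking and that no other service depends on the existing rules:
    # Remove the leftover CNI configuration:
    rm -rf /etc/cni/net.d
    # Flush the iptables rules left behind by Kubernetes:
    iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
    # Remove the stale kubeconfig:
    rm -f $HOME/.kube/config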
  6. On the master host, check that the deleted Kubernetes nodes are gone.
    kubectl get nodes
    The output looks something like this:
    NAME                 STATUS   ROLES    AGE   VERSION
    ageorge-cdsw-1.com   Ready    master   65m   v1.19.15
    ageorge-cdsw-2.com   Ready    <none>   64m   v1.19.15
    ageorge-cdsw-3.com   Ready    <none>   64m   v1.19.15
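    To script the same check, you can assert that neither deleted node is still registered; the host name pattern below is specific to this example:
    [root@ageorge-cdsw-1 ~]# kubectl get nodes -o name | grep -E 'cdsw-(4|5)' || echo "nodes removed"
    nodes removed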
  7. If the Spark Gateway role is active, go to the SPARK ON YARN service, click Instances, select each Gateway role assigned to the hosts, and click Actions > Delete.
    In this example, the Gateway roles on the ageorge-cdsw-4.com and ageorge-cdsw-5.com hosts are selected.

    Click Delete. The roles assigned to the ageorge-cdsw-4.com and ageorge-cdsw-5.com hosts are deleted.

  8. Delete any other active roles from the hosts.
  9. In All Hosts, select the ageorge-cdsw-4.com and ageorge-cdsw-5.com hosts, and click Actions > Remove from Cluster.
    You are prompted to confirm removing the hosts from the cluster.

    Click Confirm, and a success message appears upon completion.
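    Host removal can also be scripted through the Cloudera Manager REST API. A minimal sketch, with the same assumed Cloudera Manager host, credentials, API version, and cluster name as in the earlier sketch; host IDs are generated identifiers, so look them up first:
    # Find the hostId that corresponds to each host name:
    curl -u admin:admin "http://cm-host:7180/api/v41/hosts"
    # Remove the host from the cluster by hostId (<hostId> is a placeholder):
    curl -u admin:admin -X DELETE "http://cm-host:7180/api/v41/clusters/Cluster%201/hosts/<hostId>"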