Restoring a CDE service

You can restore the Cloudera Data Engineering (CDE) service with its jobs, resources, job run history, and job logs from a backed-up ZIP file.

Before you can restore the CDE service, you must back up the CDE service, expand the resource pool, and then upgrade your Cloudera Data Platform (CDP). You must also validate that the Ozone Gateway is working as expected by performing the steps listed in the Post upgrade - Ozone Gateway validation topic.
  1. If you have exited the terminal session in which the pre-upgrade commands were run for the CDE service being upgraded, you must export these variables before running any docker command.
    export BASE_WORK_DIR=[***HOST_MACHINE_PATH***]
    export BACKUP_OUTPUT_DIR=/home/dex/backup
  2. If you have exited from the ECS Server host, set the following environment variables:
    export PATH=$PATH:/opt/cloudera/parcels/ECS/installer/install/bin/linux/:/opt/cloudera/parcels/ECS/docker
    export KUBECONFIG=~/kubeconfig
    export BASE_WORK_DIR=/opt/backup-restore
    export BACKUP_OUTPUT_DIR=/home/dex/backup
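    Before invoking docker, you can fail fast if any of these variables is missing. The following is a minimal sketch; the helper name check_restore_env is illustrative and not part of the dex-upgrade-utils tooling.

    ```shell
    #!/usr/bin/env bash
    # Sketch of a guard: verify that the variables the restore command relies on
    # are set before running the docker image. The function name is illustrative.
    check_restore_env() {
      local var missing=0
      for var in BASE_WORK_DIR BACKUP_OUTPUT_DIR KUBECONFIG; do
        if [ -z "${!var:-}" ]; then
          echo "missing: $var" >&2
          missing=1
        fi
      done
      return "$missing"
    }
    ```

    Calling check_restore_env before the docker run command surfaces a missing variable immediately instead of letting the restore fail partway with a less obvious error.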
    
  3. Run the dex-upgrade-utils docker image on the ECS Server host to restore the service.
    docker run \
    -v [***KUBECONFIG_FILE_PATH***]:/home/dex/.kube/config:ro \
    -v [***CDP_CREDENTIAL_FILE_PATH***]:/home/dex/.cdp/credentials:ro \
    -v [***CDE-UPGRADE-UTIL.PROPERTIES_FILE_PATH***]:/opt/cde-backup-restore/scripts/backup-restore/cde-upgrade-util.properties:ro \
    -v [***LOCAL_BACKUP_DIRECTORY***]:$BACKUP_OUTPUT_DIR \
    -e KUBECONFIG=/home/dex/.kube/config \
    [***DOCKER_IMAGE_NAME***]:[***DOCKER_IMAGE_VERSION***] restore-service -s [***CDE-CLUSTER-ID***] -f $BACKUP_OUTPUT_DIR/[***BACKUP-ZIP-FILE-NAME***]

    Where -s is the CDE service ID and -f is the path to the backup ZIP file inside the container.

    Example:

    docker run \
    -v $BASE_WORK_DIR/secrets/kubeconfig:/home/dex/.kube/config:ro \
    -v $BASE_WORK_DIR/secrets/credentials:/home/dex/.cdp/credentials:ro \
    -v $BASE_WORK_DIR/cde-upgrade-util.properties:/opt/cde-backup-restore/scripts/backup-restore/cde-upgrade-util.properties:ro \
    -v $BASE_WORK_DIR/backup:$BACKUP_OUTPUT_DIR \
    -e KUBECONFIG=/home/dex/.kube/config \
    docker-private.infra.cloudera.com/cloudera/dex/dex-upgrade-utils:1.20.1-b48 restore-service -s cluster-c2dhkp22 -f $BACKUP_OUTPUT_DIR/cluster-c2dhkp22-2023-03-10T06_00_05.zip
  4. Optional: If you are using an external CDP database that is not accessible from the container that runs the CDE upgrade command, the following SQL statements are displayed in the logs.

    Example:

    2023-05-17 13:02:29,551 [INFO] CDP control plane database is external and not accessible
    2023-05-17 13:02:29,551 [INFO] Please rename the old & new cde service name manually by executing below SQL statement
    2023-05-17 13:02:29,551 [INFO]     update cluster set name = 'cde-base-service-1-19-1' where id = 'cluster-c2dhkp22';
    2023-05-17 13:02:29,551 [INFO]     update cluster set name = 'cde-base-service' where id = 'cluster-92c2fkgb';
    2023-05-17 13:02:29,551 [INFO] Please update the lastupdated time of old cde service in db to extend the expiry interval of db entry for supporting CDE CLI after old CDE service cleanup
    2023-05-17 13:02:29,551 [INFO]     update cluster set lastupdated = '2025-05-05 06:16:37.786199' where id = 'cluster-c2dhkp22';
                    -----------------------------------------------------------------

    You must execute the above SQL statements to complete the restore process.

    If you have closed the terminal or do not have this information, run the following SQL statements and specify the cluster details. Use the cluster ID that you noted when performing the steps listed in the Prerequisites for upgrading CDE Service with endpoint stability section.

    1. Rename the old CDE service.
      update cluster set name = '[***MODIFIED_SERVICE_NAME***]' where id = '[***OLD_CDE_CLUSTER_ID***]';
      Example:
      update cluster set name = 'cde-base-service-1-19-1' where id = 'cluster-c2dhkp22';
    2. Rename the new CDE service to the old CDE service name.
      update cluster set name = '[***OLD_CDE_SERVICE_NAME***]' where id = '[***NEW_CDE_CLUSTER_ID***]';
      Example:
      update cluster set name = 'cde-base-service' where id = 'cluster-92c2fkgb';
    3. Run the following statement so that, after the old CDE service is deleted or disabled, its entry is not cleared from the database for the next two years. Use the same timestamp format, set to two years from the current time.
      update cluster set lastupdated = '[***YYYY-MM-DD HH:MM:SS[.NNN]***]' where id = '[***OLD_CDE_CLUSTER_ID***]';
      Example:
      update cluster set lastupdated = '2025-05-05 06:16:37.786199' where id = 'cluster-c2dhkp22';
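    The two-years-ahead timestamp in the last statement can be generated with GNU date rather than written by hand. A sketch, reusing the example cluster ID from above; substitute your own old CDE cluster ID:

    ```shell
    # Build the expiry UPDATE statement with a timestamp two years from now,
    # in the YYYY-MM-DD HH:MM:SS.NNNNNN format shown above (requires GNU date).
    expiry=$(date -d '+2 years' '+%Y-%m-%d %H:%M:%S.%6N')
    echo "update cluster set lastupdated = '${expiry}' where id = 'cluster-c2dhkp22';"
    ```

    This avoids formatting mistakes in the timestamp, which must match the column's existing format for the expiry extension to take effect.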
  5. After the restore operation completes, validate that the jobs and resources are restored by running the cde job list and cde resource list CLI commands, or by checking the virtual cluster job UI.
    On the Administration page of the CDE UI, the old CDE service appears with a version number appended to its name. For example, if the old CDE service name was cde-sales, after the restore the old CDE service is named something similar to cde-sales-1-19-1.
  6. Optional: After validating that everything is working as expected, you can delete the old CDE service. If you delete the old CDE service, you can shrink the resource pool size back to its initial value, which you expanded in the prerequisite steps. Do not delete the service if you want to roll back to the old service.