Managing HDFS replication policy
After you create an HDFS replication policy in CDP Private Cloud Data Services Replication Manager, you can perform and monitor various tasks related to the policy. You can view the job progress and replication logs, edit the advanced options to optimize a job run, suspend a running job and activate a suspended one, and edit the replication policy as necessary.
When you click Actions for an HDFS replication policy,
the following actions appear:
- Edit: Change the replication policy options as required for non-expired policies that are in an active or suspended state. The replication policy replicates data based on the schedule you choose. You can edit replication policies to better align them with changing requirements. For example, you might want to change the frequency of a policy depending on the size and importance of the data being replicated. Optionally, expand a replication policy on the Replication Policies page to edit the replication policy options, which include frequency (the start time cannot be modified if the policy has already started), queue name, maximum bandwidth, and maximum map slots.
- Rerun: Runs the replication policy.
- Delete: Deletes the replication policy permanently.
- Suspend: Suspends a running replication policy. You can activate the suspended policy when required.
- View Log: Download, copy, or open the log to track the job and to troubleshoot any issues.
- Collect diagnostic bundle: Generates a diagnostic bundle for the replication policy. You can download the bundle as a ZIP file to your machine.
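The queue name, maximum bandwidth, and maximum map slots options correspond to settings of the underlying DistCp job that performs the copy (the Performance reports described later in this topic sample its mappers). As a rough illustration only, a comparable manual DistCp run launched from Python might look like the following sketch; the queue name, limits, and cluster paths are placeholders, not values taken from Replication Manager:

```python
import subprocess

# Illustrative mapping of the editable policy options onto DistCp settings.
# All values and paths below are placeholders.
subprocess.run(
    [
        "hadoop", "distcp",
        "-Dmapreduce.job.queuename=replication",  # queue name
        "-bandwidth", "100",   # maximum bandwidth per map task, in MB/s
        "-m", "20",            # maximum number of simultaneous maps (map slots)
        "-update",             # copy only files that changed at the source
        "hdfs://source-nn:8020/data",
        "hdfs://target-nn:8020/data",
    ],
    check=True,
)
```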
When you expand the policy details, the Job History panel appears.
You can view the following details on the panel:
- Previous jobs, the current job, and one future scheduled job, if any.
- Job details, which include:
  - Started: Timestamp when the job started.
  - Ended: Timestamp when the job ended.
  - Duration: Time taken to complete the job.
  - Progress: Current status of a running job.
  - Expected: Remaining number of files and bytes expected to be copied for a running job.
  - Copied: Number of files and bytes copied for a running or completed job.
  - Failed: Number of files and bytes that failed to be copied for a completed job.
  - Deleted: Number of files deleted for a completed job.
  - Skipped: Number of files and bytes skipped from copying for a running or completed job.
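Because Expected is a remaining count, you can derive a rough completion percentage for a running job from the Copied and Expected values. A minimal sketch; the arithmetic is illustrative, not a Replication Manager API:

```python
def progress_percent(copied_bytes: int, expected_bytes: int) -> float:
    """Rough completion of a running job, where copied_bytes is the
    Copied counter and expected_bytes is the Expected (remaining)
    counter from the Job History panel."""
    total = copied_bytes + expected_bytes
    if total == 0:
        return 100.0  # nothing left to copy
    return 100.0 * copied_bytes / total

# Example: 750 MB copied with 250 MB still expected -> 75.0
print(progress_percent(750_000_000, 250_000_000))
```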
Click Actions to:
- Abort the job.
- Re-run an aborted or failed job.
- View Log for the job. You can download, copy, or open the log to track the job and to troubleshoot any issues.
When you click a job on the Job History panel, the
following tabs appear:
- General: Shows the following job details:
  - Started at timestamp
  - Duration to complete the job
  - HDFS Replication Report to download the job statistics in CSV format
  - Message showing the job status
- Command Details: Shows the steps that Replication Manager ran for the job, along with the timestamp for each step.
You can download the following CSV reports from the HDFS Replication Report field to track the replication jobs and to troubleshoot issues:
- Listing: Lists all the files and directories copied during the replication job.
- Status: Shows the complete status report of each file, indicating whether:
  - an Error occurred and the file was not copied.
  - the file was Deleted.
  - the file was up to date and the replication was Skipped.
- Error Status: Status report of all the copied files with errors. Each entry shows the status, path, and message for the file that failed to copy.
- Skipped Status: Status report of all skipped files. Each entry lists the status, path, and message for the files that were skipped.
- Deleted Status: Status report of all deleted files. Each entry lists the status, path, and message for the files that were deleted.
- Performance: Summary report about the performance of the running replication job, including the last performance sample for each mapper that is working on the job.
- Full Performance: Performance report of the job, including the samples taken for all mappers during the replication job.
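These reports are plain CSV files, so they are easy to post-process for quick triage. The following minimal sketch tallies the per-file outcomes in a downloaded Status report; the column names (Status, Path, Message) follow the field list above but are assumptions, so adjust them to match the headers in your export:

```python
import csv
from collections import Counter

def summarize_status_report(path: str) -> Counter:
    """Count per-file outcomes (e.g. ERROR, SKIPPED, DELETED) in a
    downloaded Status report CSV. Assumes a 'Status' header column."""
    counts: Counter = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["Status"].strip().upper()] += 1
    return counts

summary = summarize_status_report("status_report.csv")
for status, count in sorted(summary.items()):
    print(f"{status}: {count}")  # e.g. ERROR: 3, SKIPPED: 120, DELETED: 7
```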