Managing HDFS replication policy

After you create an HDFS replication policy in Cloudera Private Cloud Data Services Replication Manager, you can perform and monitor various tasks related to the replication policy. You can view the job progress and replication logs, edit the advanced options to optimize a job run, suspend a job and activate a suspended job, and edit the replication policy as necessary.

On the Replication Policies page, you can perform the following actions and tasks on a replication policy and its jobs:
  • When you click Actions for an HDFS replication policy, the following actions appear:
    Edit*: Change the replication policy options as required for non-expired policies that are in an active or suspended state. The replication policy replicates data based on the schedule you choose.
    You can edit replication policies to better align with changing requirements. For example, you might want to change the frequency of a policy depending on the size and importance of the data being replicated.

    Optionally, expand a replication policy on the Replication Policies page to edit the replication policy options, which include frequency (the start time cannot be modified if the policy has already started), queue name, maximum bandwidth, and maximum map slots.

    Rerun: Runs the replication policy.
    Delete: Deletes the replication policy permanently.
    Suspend: Suspends a running replication policy. You can activate the suspended policy when required.
    View Log: Download, copy, or open the log to track the job and to troubleshoot any issues.
    Collect diagnostic bundle: Generates a diagnostic bundle for the replication policy. You can download the bundle as a ZIP file to your machine.
    To view and use replication policies that have an empty name in Replication Manager, you must understand the following behavior:
    • If the Cloudera Manager API version is lower than 51, an existing replication policy with an empty name can be used and updated. However, if you edit the replication policy and provide a name for it on API version 51 or higher, you must ensure that the name conforms to the validation rules.
    • If the Cloudera Manager API version is 51 or higher, you must provide a non-empty, unique name for the replication policy to continue using it, because API version 51 and higher enforces the validation rules on all replication policies.
    To pass the replication policy name validation, ensure that the replication policy name is unique. The name can contain letters, numbers, and the _ / - characters, and must not contain the characters % . ; \ or any character that is not printable ASCII, that is, ASCII characters with codes less than 32 or greater than or equal to 127.
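    The following is a minimal sketch of such a check, assuming the rules exactly as stated above. The existing_names argument is a hypothetical stand-in for the names of the policies already defined in Replication Manager, since uniqueness can only be verified against those.

      # Characters the rules above explicitly forbid in a policy name.
      FORBIDDEN = set('%.;\\')

      def is_valid_policy_name(name: str, existing_names: set) -> bool:
          """Check a proposed replication policy name against the documented rules."""
          if not name:                        # API version 51+ requires a non-empty name
              return False
          if name in existing_names:          # the name must be unique
              return False
          for ch in name:
              code = ord(ch)
              if code < 32 or code >= 127:    # printable ASCII only
                  return False
              if ch in FORBIDDEN:             # %, ., ;, and \ are not allowed
                  return False
          return True

      print(is_valid_policy_name("hdfs_daily-backup/prod", {"weekly_dr"}))  # True
      print(is_valid_policy_name("nightly;copy", set()))                    # False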
  • When you expand the policy details, the Job History panel appears.
    You can view the following details on the panel:
    1. Previous jobs, the current job, and one future scheduled job, if any.
    2. Job details, which include:
      Started: Timestamp when the job started.
      Ended: Timestamp when the job ended.
      Duration: Time taken to complete the job.
      Progress: Current status of a running job.
      Expected: Remaining number of files and bytes that are expected to be copied for a running job.
      Copied: Number of files and bytes copied for a running or completed job.
      Failed: Number of files and bytes that failed to be copied for a completed job.
      Deleted: Number of files deleted for a completed job.
      Skipped: Number of files and bytes skipped from copying for a running or completed job.
      The sketch that follows this list shows one way to combine these counters into a progress summary.
    3. Click Actions to:
      • Abort the job.
      • Re-run an aborted or failed job.
      • View Log for the job. You can download, copy, or open the log to track the job and troubleshoot any issues.
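    The counters listed in the Job History panel can be combined into a quick progress summary. The sketch below is illustrative only: the dictionary keys are hypothetical stand-ins for the Expected, Copied, Failed, and Skipped values shown in the panel, not an actual Replication Manager API payload, so substitute whatever source you read those values from.

      def summarize_job(job: dict) -> str:
          """Build a one-line progress summary from Job History style counters."""
          expected = job.get("expected_files", 0)   # files still expected to be copied
          copied   = job.get("copied_files", 0)     # files copied so far
          failed   = job.get("failed_files", 0)     # files that failed to copy
          skipped  = job.get("skipped_files", 0)    # up-to-date files skipped

          total = expected + copied + failed + skipped
          pct = (copied / total * 100) if total else 0.0
          return f"{copied}/{total} files copied ({pct:.1f}%), {failed} failed, {skipped} skipped"

      print(summarize_job({"expected_files": 40, "copied_files": 150,
                           "failed_files": 2, "skipped_files": 8}))
      # 150/200 files copied (75.0%), 2 failed, 8 skipped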
  • When you click a job on the Job History panel, the following tabs appear:
    General: Shows the following job details:
    • Started at timestamp
    • Duration to complete the job
    • HDFS Replication Report to download the job statistics in CSV format
    • Job status message
    Command Details: Shows the steps that Replication Manager ran for the job, along with the timestamp of each step.
  • You can download the following CSV reports from the General > HDFS Replication Report field to track the replication jobs and to troubleshoot issues (a parsing sketch follows this list):
    Listing: Lists all the files and directories copied during the replication job.
    Status: Shows the complete status of each file, indicating whether an Error occurred and the file was not copied, the file was Deleted, or the file was up to date and the replication was Skipped.
    Error Status: Status report of all copied files with errors. The report lists the status, path, and message for each file that was copied with errors.
    Skipped Status: Status report of all skipped files. The report lists the status, path, and message for each file that was skipped.
    Deleted Status: Status report of all deleted files. The report lists the status, path, and message for each file that was deleted.
    Performance: Summary report about the performance of the running replication job, which includes the last performance sample for each mapper that is working on the replication job.
    Full Performance: Performance report of the job, which includes the samples taken for all mappers during the replication job.
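    Because these reports are plain CSV files, they are easy to post-process once downloaded. The following is a minimal sketch that tallies a downloaded Status report by its status values; the file name and the Status column header are assumptions about the export layout, so adjust them to match the file you actually download from the General > HDFS Replication Report field.

      import csv
      from collections import Counter

      def count_statuses(report_path: str, status_column: str = "Status") -> Counter:
          """Tally rows of a downloaded replication report CSV by their status value."""
          counts = Counter()
          with open(report_path, newline="") as f:
              for row in csv.DictReader(f):
                  counts[row.get(status_column, "UNKNOWN")] += 1
          return counts

      # Example: print how many files were copied, skipped, deleted, or errored.
      for status, n in count_statuses("hdfs_replication_status_report.csv").items():
          print(f"{status}: {n}")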