Configuring replication of HDFS data

You must set up your clusters before you create an HDFS replication policy. You can also use CDP Private Cloud Base Replication Manager to replicate HDFS data to and from S3 or ADLS; however, you cannot replicate data from one S3 or ADLS instance to another using Replication Manager.

To replicate HDFS data to and from S3 or ADLS, you must have the appropriate credentials to access the S3 or ADLS account. Additionally, you must create buckets in S3 or a data lake store in ADLS. Replication Manager backs up file metadata, including extended attributes and ACLs, when you replicate data to cloud storage. Replication Manager supports the following replication scenarios:
  • Replicate to and from Amazon S3 from CDH 5.14+ and Cloudera Manager version 5.13+.

    Replication Manager does not support S3 as a source or destination when S3 is configured to use SSE-KMS.

  • Replicate to and from Microsoft ADLS Gen1 from CDH 5.13+ and Cloudera Manager 5.15, 5.16, 6.1+.
  • Replicate to Microsoft ADLS Gen2 (ABFS) from CDH 5.13+ and Cloudera Manager 6.1+.
  1. Verify that your cluster conforms to one of the supported replication scenarios.
  2. If you are using different Kerberos principals for the source and destination clusters, add the destination principal as a proxy user on the source cluster. For example, if you are using the hdfssrc principal on the source cluster and the hdfsdest principal on the destination cluster, add the following properties to the HDFS service Cluster-wide Advanced Configuration Snippet (Safety Valve) for core-site.xml property on the source cluster:
    <property>
        <name>hadoop.proxyuser.hdfsdest.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hdfsdest.hosts</name>
        <value>*</value>
    </property>

    If the source cluster is managed by a different Cloudera Manager server than the destination cluster, deploy the client configuration and restart all services on the source cluster.

  3. To replicate HDFS data to and from cloud storage, add the credentials required to access the cloud storage in Cloudera Manager.
    1. To add AWS credentials, see How to Configure AWS Credentials.
      Ensure that the following basic permissions are available to provide read-write access to S3 through the S3A connector:
      s3:Get*
      s3:Delete*
      s3:Put*
      s3:ListBucket
      s3:ListBucketMultipartUploads
      s3:AbortMultipartUpload
    2. To add ADLS credentials, perform the following steps:
      1. Click Add AD Service Principal on the Cloudera Manager Admin Console > Administration > External Accounts > Azure Credentials page for the source cluster.
      2. Enter the Name, Client ID, Client Secret Key, and Tenant Identity for the credential in the Add AD Service Principal modal window.
      3. Click Add.
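As an illustration only, the S3 permissions listed in step 3 can be collected into an IAM-style policy document. The following Python sketch builds such a policy; the bucket name my-replication-bucket is a placeholder, not a value from this procedure:

```python
import json

# Hypothetical IAM-style policy granting the S3A connector read-write
# access to one bucket. "my-replication-bucket" is a placeholder name.
BUCKET = "my-replication-bucket"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                # Permissions listed in step 3 for the S3A connector.
                "s3:Get*",
                "s3:Delete*",
                "s3:Put*",
                "s3:ListBucket",
                "s3:ListBucketMultipartUploads",
                "s3:AbortMultipartUpload",
            ],
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",      # bucket-level operations
                f"arn:aws:s3:::{BUCKET}/*",    # object-level operations
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Attach a policy of this shape to the IAM identity whose access key you register as the AWS credential in Cloudera Manager.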
  4. From Cloudera Manager > Replication > Replication Policies page, click Create Replication Policy.
  5. Select HDFS Replication Policy.

    The Create HDFS Replication Policy dialog box appears.

  6. In the General tab, you can configure the following options:
    • Click the Name field and add a unique name for the replication policy.
    • Click the Source field and select the source HDFS service. You can select HDFS services managed by a peer Cloudera Manager Server or local HDFS services (managed by the Cloudera Manager Server for the Admin Console you are logged into).
    • Enter one of the following values in the Source Path field depending on your source cluster:
      • Directory (or file) on the on-premises cluster.
      • s3a://[***bucket name***]/[***path***] path to replicate from Amazon S3.
      • adl://[***accountname***].azuredatalakestore.net/[***path***] path to replicate from ADLS Gen 1.
      • abfs[s]://[***file_system***]@[***account_name***].dfs.core.windows.net/[***path***]/ path to replicate from ADLS Gen 2.

      You can also use a glob path to specify more than one path for replication.

    • Click the Destination field and select the destination HDFS service from the HDFS services managed by the Cloudera Manager Server for the Admin Console you are logged into.
    • Enter one of the following values in the Destination Path field to save the source files:
      • Directory (or file) on the on-premises cluster.
      • s3a://[***bucket name***]/[***path***] path to replicate to Amazon S3.
      • adl://[***accountname***].azuredatalakestore.net/[***path***] path to replicate to ADLS Gen 1.
      • abfs[s]://[***file_system***]@[***account_name***].dfs.core.windows.net/[***path***]/ path to replicate to ADLS Gen 2.
    • Select a Schedule:
      • Immediate - Run the schedule immediately.
      • Once - Run the schedule one time in the future. Set the date and time.
      • Recurring - Run the schedule periodically in the future. Set the date, time, and interval between runs.

        Replication Manager ensures that the same number of seconds elapse between runs. For example, if you choose a Start Time of January 19, 2022 11:06 AM and an Interval of 1 day, Replication Manager runs the replication policy for the first time at the specified time in the timezone the replication policy was created in, and then runs it again exactly 1 day later, that is, after 24 hours or 86,400 seconds.

    • Enter the user to run the replication job in the Run As Username field. By default, this is hdfs. If you want to run the job as a different user, enter the user name here. If you are using Kerberos, you must provide a user name here, and it must be one with an ID greater than 1000. (You can also configure the minimum user ID number with the min.user.id property in the YARN or MapReduce service.) Verify that the user running the job has a home directory, /user/username, owned by username:supergroup in HDFS. This user must have permissions to read from the source directory and write to the destination directory.
      Note the following:
      • The user must not be present in the list of banned users specified with the Banned System Users property in the YARN configuration (go to the YARN service, select the Configuration tab, and search for the property). For security purposes, the hdfs user is banned by default from running YARN containers.
      • The requirement for a user ID greater than 1000 can be overridden by adding the user to the list of allowed users specified with the Allowed System Users property. (Go to the YARN service, select the Configuration tab, and search for the property.)
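The fixed-interval behavior of the Recurring schedule can be sketched with standard datetime arithmetic. This is a minimal illustration using the example start time from the Schedule description above:

```python
from datetime import datetime, timedelta

# Example from the Schedule description: start on January 19, 2022 at
# 11:06 AM with a 1-day interval. Replication Manager schedules runs a
# fixed number of seconds apart, so a 1-day interval is exactly 86,400
# seconds between runs.
start = datetime(2022, 1, 19, 11, 6)
interval = timedelta(days=1)

second_run = start + interval
elapsed_seconds = (second_run - start).total_seconds()

print(second_run)       # 2022-01-20 11:06:00
print(elapsed_seconds)  # 86400.0
```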
  7. Select the Resources tab to configure the following options:
    • Scheduler Pool – (Optional) Enter the name of a resource pool in the field. The value you enter is used by the MapReduce Service you specified when Cloudera Manager executes the MapReduce job for the replication. The job specifies the value using one of these properties:
      • MapReduce – Fair scheduler: mapred.fairscheduler.pool
      • MapReduce – Capacity scheduler: queue.name
      • YARN – mapreduce.job.queuename
    • Maximum Map Slots - The maximum number of map slots (mappers) the replication job can use. The default value is 20.
    • Maximum Bandwidth - The maximum bandwidth each mapper can consume. The default is 100 MB per second.
    • Replication Strategy - Whether file replication tasks should be distributed among the mappers statically or dynamically. (The default is Dynamic.) Static replication distributes file replication tasks among the mappers up front to achieve a uniform distribution based on the file sizes. Dynamic replication distributes file replication tasks in small sets to the mappers, and as each mapper completes its tasks, it dynamically acquires and processes the next unallocated set of tasks.
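Assuming the bandwidth limit applies per second per mapper (as with DistCp's bandwidth throttling), the two Resources limits combine multiplicatively: the aggregate bandwidth a replication job can consume is at most the number of map slots times the per-mapper bandwidth. A quick sanity check with the default values:

```python
# Defaults from the Resources tab: 20 map slots, 100 MB/s per mapper.
max_map_slots = 20
max_bandwidth_mb_per_s = 100

# Worst-case aggregate network consumption of one replication job,
# assuming all mappers run concurrently at their bandwidth cap.
aggregate_mb_per_s = max_map_slots * max_bandwidth_mb_per_s
print(aggregate_mb_per_s)  # 2000
```

Lower either value if replication jobs compete with production workloads for network capacity.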
  8. Select the Advanced Options tab to configure the following options:
    • Add Exclusion - Click the link to exclude one or more paths from the replication. The Regular Expression-Based Path Exclusion field displays, where you can enter a regular expression-based path. When you add an exclusion, include the snapshotted relative path for the regex. For example, to exclude the /user/bdr directory, use the following regular expression, which includes the snapshots for the bdr directory:
      .*/user/\.snapshot/.+/bdr.*

      To exclude top-level directories from replication in a globbed source path, you can specify the relative path for the regex without including .snapshot in the path. For example, to exclude the bdr directory from replication, use the following regular expression:

      .*/user+/bdr.*

      You can add more than one regular expression to exclude.

    • MapReduce Service - The MapReduce or YARN service to use.
    • Log path - An alternate path for the logs.
    • Description - A description of the replication policy.
    • Error Handling - You can select the following options:
      • Skip Checksum Checks - Whether to skip checksum checks on the copied files. If checked, checksums are not validated. Checksums are checked by default.
      • Skip Listing Checksum Checks - Whether to skip the checksum check when comparing two files to determine whether they are the same. If skipped, the file size and last-modified time are used to determine whether files are the same. Skipping the check improves performance during the mapper phase. Note that if you select the Skip Checksum Checks option, this check is also skipped.
      • Abort on Error - Whether to abort the job on an error. If selected, files copied up to that point remain on the destination, but no additional files are copied. Abort on Error is off by default.
      • Abort on Snapshot Diff Failures - By default, if a snapshot diff fails during replication, Replication Manager uses a complete copy to replicate the data. If you select this option, Replication Manager instead aborts the replication when it encounters an error.
    • Preserve - Whether to preserve the block size, replication count, permissions (including ACLs), and extended attributes (XAttrs) as they exist on the source file system, or to use the settings as configured on the destination file system. By default, source system settings are preserved. When Permission is checked and both the source and destination clusters support ACLs, replication preserves ACLs. Otherwise, ACLs are not replicated. When Extended attributes is checked and both the source and destination clusters support extended attributes, replication preserves them. (This option only displays when both the source and destination clusters support extended attributes.) When you preserve attributes on the destination cluster, the HDFS replication factor is also preserved.
    • Delete Policy - Whether files that were deleted on the source should also be deleted from the destination directory. This policy also determines the handling of files in the destination location that are unrelated to the source. Options include:
      • Keep Deleted Files - Retains the destination files even when they no longer exist at the source. (This is the default.)
      • Delete to Trash - If the HDFS trash is enabled, files are moved to the trash folder.
      • Delete Permanently - Uses the least amount of space; use with caution. This option does not delete the files and directories in the top-level directory. This is in line with rsync/Hadoop DistCp behavior.
    • Alerts - Whether to generate alerts for various state changes in the replication workflow. You can alert on failure, on start, on success, or when the replication workflow is aborted.
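The exclusion patterns from the Add Exclusion description can be checked locally before you save the policy. The following sketch uses Python's re module with fullmatch, assuming the filter requires the whole path to match (as Java's Matcher.matches() does); the sample paths are illustrative, not from this procedure:

```python
import re

# The two example exclusion regexes from the Add Exclusion description.
snapshot_pattern = re.compile(r".*/user/\.snapshot/.+/bdr.*")
toplevel_pattern = re.compile(r".*/user+/bdr.*")

# Illustrative HDFS paths (placeholders).
paths = [
    "/user/.snapshot/s20220119/bdr/part-00000",   # snapshotted bdr file
    "/user/.snapshot/s20220119/data/part-00000",  # snapshotted non-bdr file
    "/user/bdr/part-00000",                       # bdr file in a globbed source path
]

# fullmatch requires the entire path to match the pattern.
for p in paths:
    excluded = bool(snapshot_pattern.fullmatch(p)) or bool(toplevel_pattern.fullmatch(p))
    print(p, "excluded" if excluded else "kept")
```

Only the bdr paths match, so they would be excluded while the data path is replicated.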
  9. Click Save Policy.
    The replication task now appears as a row in the Replication Policies table. It can take up to 15 seconds for the task to appear.

    If you selected Immediate in the Schedule field, the replication job begins running when you click Save Policy.

To specify additional replication tasks, select Create > HDFS Replication.