Preparing clusters for Ozone replication policies
You must prepare the clusters, create buckets on the target cluster, and configure additional properties for OBS bucket replication before you create Ozone replication policies.
-
Have you added the source and target clusters on the Management Console > Clusters page?
For more information, see Adding clusters to a Cloudera Private Cloud Data Services deployment.
-
Have you created a bucket on the target cluster of the same type as the bucket on the source cluster from which the replication policy replicates data?
The following sample commands create a volume and an FSO bucket:
ozone sh volume create o3://ozone1/vol1
ozone sh bucket create o3://ozone1/vol1/buck1 --layout FILE_SYSTEM_OPTIMIZED
The following sample command creates an OBS bucket in the s3v volume:
ozone sh bucket create /s3v/buck2 --layout OBJECT_STORE
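To confirm that a bucket was created with the intended layout, you can run the bucket info command. The following is a sketch that assumes the ozone1 service ID used above:
ozone sh bucket info o3://ozone1/vol1/buck1
The JSON output includes the bucket layout, which should show FILE_SYSTEM_OPTIMIZED or OBJECT_STORE.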
-
If the source bucket is an OBS bucket, have you configured the additional properties required for OBS bucket replication?
For more information, see Configuring properties for OBS bucket replication.
- Do you need to replicate data securely? If so, ensure that the SSL/TLS certificate exchange is configured between the two Cloudera Manager instances that manage the source and target clusters. For more information, see Configuring SSL/TLS certificate exchange between two Cloudera Manager instances.
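To spot-check the exchange, you can open a test TLS connection from one Cloudera Manager host to the other. The following sketch assumes the default Cloudera Manager TLS port 7183 and a placeholder hostname:
openssl s_client -connect [***CM_HOST***]:7183 -showcerts </dev/null
The command prints the certificate chain that the remote Cloudera Manager presents; to verify it against a specific truststore, add the -CAfile option with the corresponding PEM file.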
-
Are the following permissions enabled if the source and target clusters are
secure, and Ranger is enabled for Ozone?
The following list shows each resource and the user permissions required for it.
On the source cluster:
- srcVolume, srcVolume/srcBucket, and srcVolume/srcBucket/*: The bucket srcVolume/srcBucket must be owned by srcUser*, or srcUser* must be an Ozone administrator (in order to create snapshots in this bucket).
- /user and /user/[***srcUser***]: Must be readable by srcUser*.
- /user/[***srcUser***]/*: Must be readable/writable by srcUser*. The bucket /user/[***srcUser***] must already exist, or must be creatable by srcUser*. The Ozone service must allow the users om and hive to impersonate srcUser*.
On the destination cluster:
- dstVolume and dstVolume/dstBucket: The bucket dstVolume/dstBucket must be owned by dstUser*, or dstUser* must be an Ozone administrator (in order to create snapshots in this bucket).
- /user and /user/[***dstUser***]: Must be readable by dstUser*.
- dstVolume/dstBucket/* and /user/[***dstUser***]/*: Must be readable/writable by dstUser*. The bucket /user/[***dstUser***] must already exist, or must be creatable by dstUser*. The Ozone service must allow the users om and hive to impersonate dstUser*.
- /user/[***dstUser***]/*: Must also be readable by the yarn user so that YARN can pick up the container configuration for the MapReduce job.
*srcUser is the user that you specify in the Run on Peer as Username field, and dstUser is the user that you specify in the Run as Username field in the Create Ozone replication policy wizard.
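If the /user/[***srcUser***] bucket does not exist, you can create it with the same commands shown earlier. The following is a sketch that assumes the ozone1 service ID and a srcUser named srcuser; run the equivalent commands on the destination cluster for dstUser:
ozone sh volume create o3://ozone1/user
ozone sh bucket create o3://ozone1/user/srcuser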
-
Is Kerberos enabled on both clusters? If so, perform the following steps:
- Configure a user with permissions to access HDFS and Ozone.
- Run the following command to add the user (for example, the user bdr) to the om group on the hosts of the target cluster:
sudo usermod -a -G om bdr
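You can then confirm the membership; for example:
id bdr
The om group should appear in the list of groups in the output.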
-
Is Ranger enabled on the source cluster? If so, you must:
-
complete the following steps on the Ranger UI from the source Cloudera Manager:
- Log in to the Ranger UI from the source Cloudera Manager.
- Click cm_ozone on the Service Manager page.
- Add the user (that you configured in the previous step) to the policies named all - volume, bucket, key, all - volume, and all - volume, bucket, and then set the groups for these policies to public.
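To verify the policy changes without the UI, you can list the cm_ozone policies through the Ranger REST API. This is a sketch that assumes placeholder admin credentials and the default non-TLS Ranger Admin port 6080:
curl -u [***ADMIN_USER***]:[***PASSWORD***] "http://[***RANGER_HOST***]:6080/service/public/v2/api/service/cm_ozone/policy"
The JSON response lists each policy with its users and groups, so you can confirm that the user and the public group were added.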
-
complete the following steps for the Ranger service in the source Cloudera Manager:
- Go to the source Cloudera Manager > Clusters > Ranger service > Configuration tab.
- Locate the Ranger KMS Server with KTS Advanced Configuration Snippet (Safety Valve) for conf/kms-site.xml property.
- Add the following key-value pairs (shown again as an XML snippet after these steps):
- hadoop.kms.proxyuser.om.hosts=*
- hadoop.kms.proxyuser.om.groups=*
- hadoop.kms.proxyuser.om.users=*
- Save the changes.
- Restart the Ranger service for the changes to take effect.
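If you enter the snippet as XML rather than as individual key-value pairs, the three properties above map to entries like the following sketch:
<property>
  <name>hadoop.kms.proxyuser.om.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.kms.proxyuser.om.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.kms.proxyuser.om.users</name>
  <value>*</value>
</property>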
-
complete the following steps on the Ranger UI from the source Cloudera Manager: