Ozone replication policies
Apache Ozone is a scalable, distributed, high-performance object store optimized for big data workloads; it can handle billions of objects of varying sizes. Ozone storage is co-located with HDFS. You can create Ozone replication policies in CDP Private Cloud Base Replication Manager to replicate data in Ozone buckets between CDP Private Cloud Base 7.1.8 or higher clusters using Cloudera Manager 7.7.1 or higher. Ozone supports the following bucket types:
- Object store (OBS) buckets, which are storage buckets where all keys are written into a flat namespace and can be accessed using the S3 interface provided by Ozone.
- File System Optimized (FSO) buckets, which are Hadoop-compatible file system buckets where rename and delete operations on directories are atomic. These buckets can be accessed using FileSystem APIs and S3 interfaces.
- Legacy buckets, which are Ozone buckets created before CDP Private Cloud Base 7.1.8 that use the Ozone File System (ofs) protocol or scheme.
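As an illustration, the bucket layout is chosen at creation time with the Ozone shell. The volume and bucket names below are placeholders, and the commands assume a running Ozone cluster:

```shell
# Create a volume to hold the example buckets (names are illustrative only)
ozone sh volume create /vol1

# OBS bucket: flat key namespace, accessed through Ozone's S3 interface
ozone sh bucket create --layout OBJECT_STORE /vol1/obs-bucket

# FSO bucket: Hadoop-compatible, with atomic directory rename and delete
ozone sh bucket create --layout FILE_SYSTEM_OPTIMIZED /vol1/fso-bucket

# LEGACY layout corresponds to buckets created before CDP Private Cloud Base 7.1.8
ozone sh bucket create --layout LEGACY /vol1/legacy-bucket
```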
You can use Ozone replication policies to replicate or migrate the required Ozone data to another cluster to run load-intensive workloads, back up data, or for backup-restore use cases. Ozone replication policies support replicating data between the following:
- FSO buckets in source and target clusters, using the ofs protocol.
- Legacy buckets in source and target clusters, using the ofs protocol.
- OBS buckets in source and target clusters that support the S3A filesystem, using the S3A scheme or replication protocol.
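The scheme determines how a bucket is addressed during replication. The path forms below are a sketch; the Ozone service ID, volume, bucket, and key names are placeholders:

```shell
# FSO and legacy buckets are addressed with the ofs scheme:
#   ofs://<ozone-service-id>/<volume>/<bucket>/<path>
# for example:
#   ofs://ozone1/vol1/fso-bucket/dir1/key1

# OBS buckets are addressed with the S3A scheme, where the bucket is
# exposed through Ozone's S3 interface:
#   s3a://<bucket>/<key>
# for example:
#   s3a://obs-bucket/key1
```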
How Ozone replication works
Ozone snapshots are enabled for all buckets and volumes. If the incremental replication feature is enabled on both the source and target clusters, you can choose one of the following methods to replicate Ozone data during the Ozone replication policy creation process:
- Full file listing
- By default, Ozone replication policies use the full file listing method, which takes a longer time to replicate data. In this method, the first Ozone replication policy job run is a bootstrap job; that is, all the data in the chosen buckets is replicated. During subsequent replication policy runs, Replication Manager performs the following high-level steps:
- Lists all the files.
- Performs a checksum and metadata check on them to identify the relevant files to copy. This step depends on the advanced options you choose during the replication policy creation process. During this identification process, unchanged files that do not meet the criteria set by the chosen advanced options are skipped.
- Copies the identified files from the source cluster to the target cluster.
- Incremental only
- In this method, the first replication policy job run is a bootstrap job, and subsequent job runs are incremental jobs that use snapshot-diffs to identify the data to copy.
- Incremental with fallback to full file listing
- In this method, the first replication policy job run is a bootstrap job, and subsequent job runs are incremental jobs. However, if the snapshot-diff fails during a replication policy job run, the next job run is a full file listing run. After the full file listing run succeeds, the subsequent runs are incremental runs. This method takes a longer time to replicate data if the replication policy job falls back to the full file listing method.
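The method selection described above can be sketched as a small state machine. This is an illustrative model only, not Replication Manager's actual implementation; `snapshot_diff` is a hypothetical stand-in for Ozone's snapshot-diff computation:

```python
# Sketch of how an "incremental with fallback to full file listing" policy
# chooses the replication method on each job run. Hypothetical model code,
# not Replication Manager's implementation.

class ReplicationPolicy:
    def __init__(self, snapshot_diff):
        # snapshot_diff: callable that computes changed keys between two
        # Ozone snapshots; it raises RuntimeError when the diff fails.
        self.snapshot_diff = snapshot_diff
        self.bootstrapped = False
        self.fallback_pending = False

    def run(self):
        """Run one replication job and return the method that was used."""
        if not self.bootstrapped:
            # The first run is always a bootstrap: replicate all the data
            # in the chosen buckets.
            self.bootstrapped = True
            return "bootstrap"
        if self.fallback_pending:
            # A previous snapshot-diff failed, so this run lists and
            # compares every file. After it succeeds, later runs are
            # incremental again.
            self.fallback_pending = False
            return "full file listing"
        try:
            self.snapshot_diff()  # identify changed keys between snapshots
            return "incremental"
        except RuntimeError:
            # The snapshot-diff failed: the NEXT run falls back to a full
            # file listing run.
            self.fallback_pending = True
            return "incremental (snapshot-diff failed)"
```

Simulating a failure on the second diff shows the fallback sequence: bootstrap, incremental, failed incremental, full file listing, and then incremental again.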