Backing up tables
You can use the KuduBackup Spark job to back up one or more Kudu tables.
When you first run the job for a table, a full backup is run. Additional runs perform
incremental backups, which contain only the rows that have changed since the initial full
backup. A new set of full backups can be forced at any time by passing the
--forceFull flag to the backup job (see the example at the end of this section).
--rootPath: The root path under which backup data is written. It accepts any Spark-compatible path.
--kuduMasterAddresses: A comma-separated list of Kudu master addresses. The default value is localhost.
<table>...: A list of one or more tables that you want to back up.
The following is an example of a KuduBackup job execution which backs up the tables foo and bar to the HDFS directory kudu-backups:
spark-submit --class org.apache.kudu.backup.KuduBackup kudu-backup2_2.11-1.12.0.jar \
  --kuduMasterAddresses master-1-host,master-2-host,master-3-host \
  --rootPath hdfs:///kudu-backups \
  foo bar
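To force a new set of full backups for the same tables, the text above notes that you can pass the --forceFull flag. The following is a minimal sketch that re-runs the same job with that flag appended; the master addresses and root path are the same placeholders used in the example above:

spark-submit --class org.apache.kudu.backup.KuduBackup kudu-backup2_2.11-1.12.0.jar \
  --kuduMasterAddresses master-1-host,master-2-host,master-3-host \
  --rootPath hdfs:///kudu-backups \
  --forceFull \
  foo bar

Subsequent runs that omit the flag return to producing incremental backups.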