# Manifest committer for ABFS and GCS
The Intermediate Manifest committer is a high performance committer for Spark work. It delivers performance on ABFS for real-world queries, and both performance and correctness on GCS. It also works with other filesystems, including HDFS; however, the design is optimized for object stores where listing operations are slow and expensive.
## The v1/v2 file committers
The only committer of work from Spark to Azure ADLS Gen 2 `abfs://` storage that is safe to use is the "v1 file committer."
This is "correct" in that if a task attempt fails, its output is guaranteed not to be included in the final output. The "v2" commit algorithm cannot meet that guarantee, which is why it is no longer the default.
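To make sure the classic committer is using the safe v1 algorithm, the algorithm version can be pinned in the job or cluster configuration. A configuration fragment (the property name is the standard Hadoop option; verify the default in your release):

```xml
<property>
  <name>mapreduce.fileoutputcommitter.algorithm.version</name>
  <value>1</value>
</property>
```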
But the v1 file committer is slow, especially on jobs where deep directory trees of output are used. This is due to the lack of any instrumentation in the `FileOutputCommitter`: stack traces of running jobs generally show `rename()`, though list operations do surface too.
On GCS, neither the v1 nor the v2 algorithm is safe, because the Google Cloud Storage filesystem lacks the atomic directory rename which the v1 algorithm requires.
A further issue is that both Azure and GCS storage may encounter scale issues when deleting directories with many descendants. This can trigger timeouts, because the `FileOutputCommitter` assumes that cleaning up after the job is a fast call to `delete("_temporary", true)`.
## The manifest committer
The Manifest Committer is a higher performance committer for ABFS and GCS storage for jobs that create files across deep directory trees through many tasks. It will also work on `hdfs://` and `file://` URLs, but it is optimized to address listing and renaming performance and throttling issues in cloud storage.
It will not work correctly with S3, because it relies on an atomic rename-no-overwrite operation to commit the manifest file. It will also suffer the performance penalty of copying, rather than moving, all the generated data.
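Why rename-no-overwrite matters can be shown with a small, self-contained sketch. The `commit_manifest` helper below is hypothetical, not Hadoop's code: each task attempt writes its manifest to a temporary file, then tries to publish it with `os.link()`, which on a POSIX filesystem atomically fails if the destination already exists, so exactly one attempt wins. S3 offers no such atomic no-overwrite operation, which is why the committer is unsafe there.

```python
import os
import tempfile

def commit_manifest(attempt_data: bytes, final_path: str) -> bool:
    """Try to commit one task attempt's manifest; return True if it won."""
    # Write the manifest to a temp file in the same directory first.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(final_path))
    with os.fdopen(fd, "wb") as f:
        f.write(attempt_data)
    try:
        # Atomic "rename-no-overwrite" analogue: fails if final_path exists.
        os.link(tmp, final_path)
        return True
    except FileExistsError:
        return False  # another attempt already committed its manifest
    finally:
        os.unlink(tmp)

# Two racing attempts: only the first commit succeeds, and the
# manifest that is read back belongs to the winning attempt.
with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "manifest_00001")
    print(commit_manifest(b"attempt-1", target))  # True
    print(commit_manifest(b"attempt-2", target))  # False
    with open(target, "rb") as f:
        print(f.read())  # b'attempt-1'
```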
Although it will work with MapReduce, there is no handling of multiple job attempts with recovery from previous failed attempts.
This committer uses the extension point introduced for the S3A committers: users can declare a new committer factory for `abfs://` and `gcs://` URLs, and a suitably configured Spark deployment will pick up the new committer.
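As an illustration, a `spark-defaults.conf` fragment binding the manifest committer factories to the two URL schemes might look like the following. The class and option names here are taken from recent Hadoop releases and Spark's cloud integration module; check the documentation of the exact versions you deploy:

```properties
# Bind the committer factory per filesystem scheme (names as in Hadoop 3.3.5+)
spark.hadoop.mapreduce.outputcommitter.factory.scheme.abfs org.apache.hadoop.fs.azurebfs.commit.AzureManifestCommitterFactory
spark.hadoop.mapreduce.outputcommitter.factory.scheme.gs   org.apache.hadoop.mapreduce.lib.output.committer.manifest.ManifestCommitterFactory

# Route Spark SQL / Parquet output through the PathOutputCommitter binding
spark.sql.parquet.output.committer.class org.apache.hadoop.mapreduce.lib.output.BindingPathOutputCommitter
spark.sql.sources.commitProtocolClass org.apache.spark.internal.io.cloud.PathOutputCommitProtocol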