# Working with Azure ADLS Gen2 storage
To switch to the manifest committer, the factory for committers for destinations with
`abfs://` URLs must be switched to the manifest committer factory, either
for the application or for the entire cluster.
```xml
<property>
  <name>mapreduce.outputcommitter.factory.scheme.abfs</name>
  <value>org.apache.hadoop.fs.azurebfs.commit.AzureManifestCommitterFactory</value>
</property>
```
This allows ADLS Gen2-specific performance and consistency logic to be used from within the committer. In particular:
- The Etag header can be collected in listings and used in the job commit phase.
- IO rename operations are rate limited.
- Recovery is attempted when throttling triggers rename failures.
Warning: This committer is not compatible with older Azure storage services (WASB or ADLS Gen 1).
The core set of Azure-optimized options becomes:
```xml
<property>
  <name>mapreduce.outputcommitter.factory.scheme.abfs</name>
  <value>org.apache.hadoop.fs.azurebfs.commit.AzureManifestCommitterFactory</value>
</property>

<property>
  <name>spark.hadoop.fs.azure.io.rate.limit</name>
  <value>10000</value>
</property>
```
## Full set of ABFS options for Spark
```
spark.hadoop.mapreduce.outputcommitter.factory.scheme.abfs org.apache.hadoop.fs.azurebfs.commit.AzureManifestCommitterFactory
spark.hadoop.fs.azure.io.rate.limit 10000
spark.sql.parquet.output.committer.class org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter
spark.sql.sources.commitProtocolClass org.apache.spark.internal.io.cloud.PathOutputCommitProtocol
spark.hadoop.mapreduce.manifest.committer.summary.report.directory (optional: URI of a directory for job summaries)
```
## Experimental: ABFS Rename Rate Limiting
To avoid triggering store throttling and backoff delays, as well as other
throttling-related failure conditions, file renames during job commit are
throttled through a "rate limiter" which limits the number of rename operations
per second which a single instance of the ABFS FileSystem client may issue.
| Option | Meaning |
|--------|---------|
| `fs.azure.io.rate.limit` | Rate limit in operations/second for IO operations. |
Set the option to `0` to remove all rate limiting.

The default value is 10000, which is the default IO capacity of an ADLS storage account.
```xml
<property>
  <name>fs.azure.io.rate.limit</name>
  <value>10000</value>
  <description>maximum number of renames attempted per second</description>
</property>
```
This capacity is set at the level of the filesystem client, so it is not shared
across all processes within a single application, let alone other applications
sharing the same storage account.
It will, however, be shared by all jobs committed by the same Spark driver, as
these jobs share a single filesystem connector.
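As a mental model, the per-client limiter behaves like a token bucket: each rename must acquire a permit, and time spent waiting for permits is what later surfaces in the job statistics. The sketch below is illustrative only; the class name and structure are assumptions for explanation, not the ABFS client's actual implementation.

```python
import threading
import time

class RenameRateLimiter:
    """Token-bucket sketch of a per-client rename rate limiter.

    Illustrative only: the real ABFS client has its own limiter; this just
    models "N operations per second, per filesystem client instance".
    """

    def __init__(self, ops_per_second):
        self.capacity = float(ops_per_second)
        self.tokens = float(ops_per_second)   # allow an initial burst
        self.rate = float(ops_per_second)     # refill rate, tokens/second
        self.last = time.monotonic()
        self.lock = threading.Lock()

    def acquire(self):
        """Block until a permit is available; return seconds spent waiting."""
        start = time.monotonic()
        while True:
            with self.lock:
                now = time.monotonic()
                # refill tokens for the time elapsed since the last call
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= 1.0:
                    self.tokens -= 1.0
                    return time.monotonic() - start
            time.sleep(0.01)  # back off briefly before retrying
```

With `ops_per_second=10000` this mirrors the default capacity; in this model, every job committed by the same driver draws permits from the same bucket.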
If rate limiting is imposed, the statistic
`store_io_rate_limited` will report the time taken to acquire permits for
rename operations.
If server-side throttling took place, signs of this can be seen in:
- The store service's logs and their throttling status codes (usually 503 or 500).
- The job statistic `commit_file_rename_recovered`. This statistic indicates that ADLS throttling manifested as rename failures, which were then recovered from in the committer.
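The recovery behaviour behind that statistic can be pictured as retry-with-backoff around the rename call. This is an illustrative sketch only: `ThrottledError` and `rename_with_recovery` are hypothetical names for explanation, not the committer's real API.

```python
import time

class ThrottledError(Exception):
    """Stand-in for a 503/500 throttling response from the store."""

def rename_with_recovery(rename, retries=3, backoff=0.01):
    """Attempt a rename; on a throttling failure, back off and retry.

    Returns (succeeded, recovered): `recovered` is True when at least one
    retry was needed, mirroring what commit_file_rename_recovered counts.
    """
    for attempt in range(retries + 1):
        try:
            rename()
            return True, attempt > 0
        except ThrottledError:
            if attempt == retries:
                raise  # recovery exhausted: surface the failure
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
```

A rename that succeeds on the first try reports `recovered=False`; one that fails with throttling and later succeeds reports `recovered=True`, which is the situation the statistic counts.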
If these are seen, or other applications running at the same time experience
throttling or throttling-triggered problems, consider reducing the value of
`fs.azure.io.rate.limit`, and/or requesting a higher IO capacity from Microsoft.
Important: if you do get extra capacity from Microsoft and you want to use it to
speed up job commits, increase the value of `fs.azure.io.rate.limit`, either
across the cluster or specifically for those jobs to which you wish to allocate extra priority.
This is still a work in progress; it may be expanded to support all IO operations performed by a single filesystem instance.