Configuring dynamic resource allocation
This section describes how to configure dynamic resource allocation for Apache Spark.
When the dynamic resource allocation feature is enabled, an application's use of executors is dynamically adjusted based on workload. This means that an application can relinquish resources when they are no longer needed and request them again later when demand increases. This feature is particularly useful if multiple applications share resources in your Spark cluster.
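Dynamic allocation is controlled by the spark.dynamicAllocation.* properties and relies on a shuffle service so that shuffle files remain available after an executor is removed. The following sketch shows the core entries as they might appear in spark-defaults.conf; the property names are standard Spark properties, but the values are illustrative assumptions, not your cluster's shipped defaults:

    # Enable dynamic allocation and the external shuffle service it relies on
    spark.dynamicAllocation.enabled                true
    spark.shuffle.service.enabled                  true
    # Illustrative bounds on the executor pool (tune for your workload)
    spark.dynamicAllocation.minExecutors           1
    spark.dynamicAllocation.maxExecutors           20
    # Release an executor after it has been idle this long (illustrative value)
    spark.dynamicAllocation.executorIdleTimeout    60s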
You can configure dynamic resource allocation at either the cluster or the job level:
- Cluster level: Dynamic resource allocation is enabled by default. The associated shuffle service starts automatically.
- Job level: You can customize dynamic resource allocation settings on a per-job basis.
Cluster configuration applies by default; job settings, where specified, override it, as in the following example.
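For example, a job-level override can be supplied on the spark-submit command line with --conf. The property names below are standard Spark dynamic allocation properties; the values, class name, and jar path are placeholders for illustration:

    # Per-job settings take precedence over the cluster-level configuration
    spark-submit \
      --conf spark.dynamicAllocation.minExecutors=2 \
      --conf spark.dynamicAllocation.maxExecutors=10 \
      --class com.example.MyApp \
      my-app.jar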
The following subsections describe each configuration approach, followed by a list of dynamic resource allocation properties.