Dynamic Resource Allocation Properties
The following tables describe basic and optional dynamic resource allocation properties. For additional details, see the Apache Spark dynamic resource allocation documentation.
Table 3.1. Dynamic Resource Allocation Properties
Property Name | Value | Meaning
---|---|---
spark.dynamicAllocation.enabled | Default is true for the Spark Thrift server, and false for Spark jobs. | Specifies whether to use dynamic resource allocation, which scales the number of executors registered for an application up and down based on workload. Note that this feature is currently available only in YARN mode.
spark.shuffle.service.enabled | true | Enables the external shuffle service, which preserves shuffle files written by executors so that the executors can be safely removed. This property must be set to true when spark.dynamicAllocation.enabled is true.
spark.dynamicAllocation.initialExecutors | Default is spark.dynamicAllocation.minExecutors | The initial number of executors to run if dynamic resource allocation is enabled. This value must be greater than or equal to the value of spark.dynamicAllocation.minExecutors.
spark.dynamicAllocation.maxExecutors | Default is infinity | Specifies the upper bound for the number of executors if dynamic resource allocation is enabled.
spark.dynamicAllocation.minExecutors | Default is 0 | Specifies the lower bound for the number of executors if dynamic resource allocation is enabled.
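The properties in Table 3.1 are typically set together in spark-defaults.conf. A minimal sketch follows; the executor counts are illustrative values, not recommendations:

```properties
# Enable dynamic resource allocation (currently available only in YARN mode).
spark.dynamicAllocation.enabled           true
# Required when dynamic allocation is enabled: preserves shuffle files
# so that executors can be removed safely.
spark.shuffle.service.enabled             true
# Illustrative bounds; initialExecutors must be >= minExecutors.
spark.dynamicAllocation.minExecutors      1
spark.dynamicAllocation.initialExecutors  2
spark.dynamicAllocation.maxExecutors      10
```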
Table 3.2. Optional Dynamic Resource Allocation Properties
Property Name | Value | Meaning
---|---|---
spark.dynamicAllocation.executorIdleTimeout | Default is 60 seconds (60s) | If dynamic resource allocation is enabled and an executor has been idle for more than this time, the executor is removed.
spark.dynamicAllocation.cachedExecutorIdleTimeout | Default is infinity | If dynamic resource allocation is enabled and an executor with cached data blocks has been idle for more than this time, the executor is removed.
spark.dynamicAllocation.schedulerBacklogTimeout | Default is 1 second (1s) | If dynamic resource allocation is enabled and there have been pending tasks backlogged for more than this time, new executors are requested.
spark.dynamicAllocation.sustainedSchedulerBacklogTimeout | Default is spark.dynamicAllocation.schedulerBacklogTimeout | Same as spark.dynamicAllocation.schedulerBacklogTimeout, but used only for subsequent executor requests.
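The optional timeout properties in Table 3.2 can be tuned in the same way. A sketch, again with illustrative values rather than recommendations:

```properties
# Remove an executor after 2 minutes of idleness (default is 60s).
spark.dynamicAllocation.executorIdleTimeout         120s
# Also remove executors holding cached data after 10 minutes
# (default is infinity, i.e. never remove them).
spark.dynamicAllocation.cachedExecutorIdleTimeout   600s
# Request new executors once tasks have been backlogged this long.
spark.dynamicAllocation.schedulerBacklogTimeout     1s
```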