Property:
yarn.nodemanager.container-executor.class
Value:
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor
Example:
<property>
<name>yarn.nodemanager.container-executor.class</name>
<value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
Property:
yarn.nodemanager.linux-container-executor.group
Value:
hadoop
Example:
<property>
<name>yarn.nodemanager.linux-container-executor.group</name>
<value>hadoop</value>
</property>
Property:
yarn.nodemanager.linux-container-executor.resources-handler.class
Value:
org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler
Example:
<property>
<name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
<value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
</property>
Property:
yarn.nodemanager.linux-container-executor.cgroups.hierarchy
Value:
/hadoop-yarn
Example:
<property>
<name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name>
<value>/hadoop-yarn</value>
</property>
Property:
yarn.nodemanager.linux-container-executor.cgroups.mount
Value:
false
Example:
<property>
<name>yarn.nodemanager.linux-container-executor.cgroups.mount</name>
<value>false</value>
</property>
Property:
yarn.nodemanager.linux-container-executor.cgroups.mount-path
Value:
/sys/fs/cgroup
Example:
<property>
<name>yarn.nodemanager.linux-container-executor.cgroups.mount-path</name>
<value>/sys/fs/cgroup</value>
</property>
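If cgroups are not already mounted on the NodeManager hosts, you can have the NodeManager mount them itself by enabling the mount property and pointing the mount path at the desired location. The following sketch assumes the default /sys/fs/cgroup location shown above:
<property>
<name>yarn.nodemanager.linux-container-executor.cgroups.mount</name>
<!-- illustrative: set to true only if cgroups are not already mounted by the operating system -->
<value>true</value>
</property>
<property>
<name>yarn.nodemanager.linux-container-executor.cgroups.mount-path</name>
<value>/sys/fs/cgroup</value>
</property>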
Set the Percentage of CPU used by YARN
Set the percentage of CPU that can be allocated for YARN containers. In most
cases, the default value of 100% should be used. If another CPU-intensive process
must run on the same nodes, you can lower the percentage of CPU allocated to YARN
to free up resources for that process, as shown in the illustrative example
below.
Property:
yarn.nodemanager.resource.percentage-physical-cpu-limit
Value:
100
Example:
<property>
<name>yarn.nodemanager.resource.percentage-physical-cpu-limit</name>
<value>100</value>
</property>
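For example, if a CPU-intensive process must share the nodes with YARN, you might lower the limit to 75 so that YARN containers use at most three quarters of each node's CPU. The value 75 is illustrative; choose a value that leaves enough CPU for the other process, and see the Important note later in this section regarding values less than 100:
<property>
<name>yarn.nodemanager.resource.percentage-physical-cpu-limit</name>
<!-- illustrative value: leaves roughly 25% of node CPU for non-YARN processes -->
<value>75</value>
</property>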
Set Flexible or Strict CPU Limits
When CPU scheduling and cgroups are enabled, container CPU usage is constrained,
but by default these limits are flexible: if spare CPU cycles are available,
containers are allowed to exceed the CPU limits set for them. With flexible
limits, the amount of CPU available to a container can vary with cluster usage,
that is, with the amount of CPU available in the cluster at any given time.
You can use cgroups to enforce strict limits on CPU usage. When strict limits are
enabled, each container receives only the amount of CPU it requests, so a
CPU-bound job receives the same amount of cluster resources every time it runs.
Strict limits are not enabled by default (the property is set to false).
Property:
yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage
Value:
false
Example:
<property>
<name>yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage</name>
<value>false</value>
</property>
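To enable strict limits instead, set the value to true; review the Important note below before doing so:
<property>
<name>yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage</name>
<!-- enable strict CPU limits only after the scale testing described in the Important note below -->
<value>true</value>
</property>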
Note: Irrespective of whether this property is true or false, at no point will
total container CPU usage exceed the limit set in
yarn.nodemanager.resource.percentage-physical-cpu-limit.
Important: CPU resource isolation leverages advanced features in the Linux
kernel. At this time, setting
yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage to true
is not recommended due to known kernel panics. In addition, with some kernels,
setting yarn.nodemanager.resource.percentage-physical-cpu-limit to a value less
than 100 can result in kernel panics. If you require either of these features,
you must perform scale testing to determine if the in-use kernel and workloads
are stable. As a starting point, Linux kernel version 4.8.1 works with these
features. However, testing the features with the desired workloads is very
important.