Managing Data Operating System

Enable Cgroups

On an Ambari cluster, you can enable CPU Scheduling to enable cgroups. On a non-Ambari cluster, you must configure certain properties in yarn-site.xml on the ResourceManager and NodeManager hosts to enable cgroups.

cgroups is a Linux kernel feature that is supported on the following Linux operating systems:

  • CentOS 6.9, 7.3

  • RHEL 6.9, 7.3

  • SUSE 12

  • Ubuntu 16

At this time there is no cgroups equivalent for Windows. cgroups are not enabled by default on HDP, and they require that the HDP cluster be Kerberos enabled.

Important

The yarn.nodemanager.linux-container-executor.cgroups.mount property must be set to false. Setting this value to true is not currently supported.

Enable cgroups

The following commands must be run on every reboot of the NodeManager hosts to set up the cgroup hierarchy. Note that operating systems use different mount points for the cgroup interface. Replace /sys/fs/cgroup with your operating system equivalent.

mkdir -p /sys/fs/cgroup/cpu/hadoop-yarn
chown -R yarn /sys/fs/cgroup/cpu/hadoop-yarn
mkdir -p /sys/fs/cgroup/memory/hadoop-yarn
chown -R yarn /sys/fs/cgroup/memory/hadoop-yarn
mkdir -p /sys/fs/cgroup/blkio/hadoop-yarn
chown -R yarn /sys/fs/cgroup/blkio/hadoop-yarn
mkdir -p /sys/fs/cgroup/net_cls/hadoop-yarn
chown -R yarn /sys/fs/cgroup/net_cls/hadoop-yarn
mkdir -p /sys/fs/cgroup/devices/hadoop-yarn
chown -R yarn /sys/fs/cgroup/devices/hadoop-yarn
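
Because the hierarchy created above does not persist across reboots, you may want to wrap these commands in a small script that your init system runs at boot. The following helper script and systemd unit are an illustrative sketch only; the script path, the unit name, and the NodeManager service name in the Before= line are assumptions that you should adapt to your environment. If your operating system does not use systemd, invoke the script from its boot mechanism (for example, rc.local) instead.

Example helper script, /usr/local/sbin/setup-yarn-cgroups.sh (make it executable with chmod +x):

#!/bin/bash
# Create the YARN cgroup hierarchy for each controller and give ownership to the yarn user.
# Replace /sys/fs/cgroup with your operating system equivalent.
for controller in cpu memory blkio net_cls devices; do
  mkdir -p /sys/fs/cgroup/${controller}/hadoop-yarn
  chown -R yarn /sys/fs/cgroup/${controller}/hadoop-yarn
done

Example systemd unit, /etc/systemd/system/yarn-cgroups.service:

[Unit]
Description=Create the YARN cgroup hierarchy
# The NodeManager unit name below is an assumption; substitute the name used on your hosts.
Before=hadoop-yarn-nodemanager.service

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/setup-yarn-cgroups.sh

[Install]
WantedBy=multi-user.target

Enable the unit with systemctl enable yarn-cgroups.service so that the hierarchy is recreated on every reboot.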

  • To enable cgroups on an Ambari cluster, select YARN > Configs on the Ambari dashboard, then click CPU Isolation under CPU. Click Save, then restart all cluster components that require a restart. cgroups should be enabled along with CPU Scheduling.
  • On a non-Ambari cluster, set the following properties in the /etc/hadoop/conf/yarn-site.xml file on the ResourceManager and NodeManager hosts.

    Property: yarn.nodemanager.container-executor.class

    Value: org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor

    Example:

    <property>
     <name>yarn.nodemanager.container-executor.class</name>
     <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
    </property>

    Property: yarn.nodemanager.linux-container-executor.group

    Value: hadoop

    Example:

    <property>
     <name>yarn.nodemanager.linux-container-executor.group</name>
     <value>hadoop</value>
    </property>
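
    Note that the LinuxContainerExecutor also reads its own configuration file, typically /etc/hadoop/conf/container-executor.cfg on HDP, and the group defined there must match the value of yarn.nodemanager.linux-container-executor.group. The following is a minimal sketch of that file; the banned.users, min.user.id, and allowed.system.users entries are illustrative, so use values appropriate for your cluster:

    yarn.nodemanager.linux-container-executor.group=hadoop
    banned.users=hdfs,yarn,mapred,bin
    min.user.id=1000
    allowed.system.users=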

    Property: yarn.nodemanager.linux-container-executor.resources-handler.class

    Value: org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler

    Example:

    <property>
     <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
     <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
    </property>

    Property: yarn.nodemanager.linux-container-executor.cgroups.hierarchy

    Value: /hadoop-yarn

    Example:

    <property>
     <name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name>
     <value>/hadoop-yarn</value>
    </property>

    Property: yarn.nodemanager.linux-container-executor.cgroups.mount

    Value: false

    Example:

    <property>
     <name>yarn.nodemanager.linux-container-executor.cgroups.mount</name>
     <value>false</value>
    </property>

    Property: yarn.nodemanager.linux-container-executor.cgroups.mount-path

    Value: /sys/fs/cgroup

    Example:

    <property>
     <name>yarn.nodemanager.linux-container-executor.cgroups.mount-path</name>
     <value>/sys/fs/cgroup</value>
    </property>
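
    After you restart the NodeManager with these settings, you can optionally verify that running containers are placed under the configured hierarchy. The commands below are an illustrative check only and assume the /sys/fs/cgroup mount path used above:

    # Each running container should appear as a subdirectory named after its container ID.
    ls /sys/fs/cgroup/cpu/hadoop-yarn
    # The tasks file of a container cgroup lists the process IDs running in that container.
    cat /sys/fs/cgroup/cpu/hadoop-yarn/container_*/tasks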

    Set the Percentage of CPU used by YARN

    Set the percentage of CPU that can be allocated for YARN containers. In most cases, the default value of 100% should be used. If you have another process that needs to run on a node that also requires CPU resources, you can lower the percentage of CPU allocated to YARN to free up resources for the other process.

    Property: yarn.nodemanager.resource.percentage-physical-cpu-limit

    Value: 100

    Example:

    <property>
     <name>yarn.nodemanager.resource.percentage-physical-cpu-limit</name>
     <value>100</value>
    </property>
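
    For example, if another process on the node needs roughly 20% of the CPU, you could lower the YARN share to 80%. The value below is illustrative; choose it based on the actual needs of the co-located process, and see the Important note at the end of this topic before setting a value below 100 on a production cluster.

    <property>
     <name>yarn.nodemanager.resource.percentage-physical-cpu-limit</name>
     <value>80</value>
    </property>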

    Set Flexible or Strict CPU limits

    With CPU scheduling and cgroups enabled, CPU usage is constrained, but by default these constraints are flexible limits. If spare CPU cycles are available, containers are allowed to exceed the CPU limits set for them. With flexible limits, the amount of CPU resources available for containers can vary based on cluster usage, that is, the amount of CPU available in the cluster at any given time.

    You can use cgroups to set strict limits on CPU usage. When strict limits are enabled, each process receives only the amount of CPU resources it requests. With strict limits, a CPU process will receive the same amount of cluster resources every time it runs.

    Strict limits are not enabled (set to false) by default.

    Property: yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage

    Value: false

    Example:

    <property>
     <name>yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage</name>
     <value>false</value>
    </property>

    Note

    Irrespective of whether this property is true or false, at no point will total container CPU usage exceed the limit set in yarn.nodemanager.resource.percentage-physical-cpu-limit.

    Important
    CPU resource isolation leverages advanced features in the Linux kernel. At this time, setting yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage to true is not recommended due to known kernel panics. In addition, with some kernels, setting yarn.nodemanager.resource.percentage-physical-cpu-limit to a value less than 100 can result in kernel panics. If you require either of these features, you must perform scale testing to determine if the in-use kernel and workloads are stable. As a starting point, Linux kernel version 4.8.1 works with these features. However, testing the features with the desired workloads is very important.