Process memory

The memory on each node is explicitly allocated among the various Hadoop processes, so the maximum memory footprint of each process is known in advance. This predictability reduces the chance of Hadoop processes inadvertently running out of memory and paging to disk, which in turn causes severe performance degradation.
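As an illustration of this model, per-process memory ceilings are declared explicitly in the Hadoop configuration files. The sketch below assumes a YARN-based (Hadoop 2 or later) deployment; the property names are standard MapReduce settings, but the values are placeholder assumptions, not recommendations, and must be sized for the actual hardware and workload.

    <!-- mapred-site.xml (sketch): fixed per-task container sizes. Because each
         container's maximum memory is declared up front, aggregate memory use
         on a node is predictable. Values here are illustrative only. -->
    <property>
      <name>mapreduce.map.memory.mb</name>
      <value>2048</value>    <!-- memory ceiling for each map task container -->
    </property>
    <property>
      <name>mapreduce.reduce.memory.mb</name>
      <value>4096</value>    <!-- memory ceiling for each reduce task container -->
    </property>
    <property>
      <!-- JVM heap must fit inside the container; roughly 80% of the container
           size is a common rule of thumb -->
      <name>mapreduce.map.java.opts</name>
      <value>-Xmx1638m</value>
    </property>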

A minimum of 4 GB of memory should be reserved on all nodes for the operating system and other non-Hadoop use. This amount may need to be higher if additional non-Hadoop applications run on the cluster nodes, such as third-party active monitoring and alerting tools.
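For example, on a hypothetical worker node with 64 GB of RAM, reserving 4 GB for the operating system leaves 60 GB (61440 MB) for Hadoop. In a YARN deployment, that remainder is what the NodeManager advertises to the ResourceManager for container allocation; the property below is the standard setting, but the value is an assumption tied to this example.

    <!-- yarn-site.xml (sketch): total RAM minus the OS reserve.
         64 GB - 4 GB = 60 GB = 61440 MB (example figures only). -->
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>61440</value>
    </property>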

Memory requirements and allocation for individual Hadoop components are covered in detail elsewhere in this document; see Kernel and OS Tuning for more information.