Running Spark Jobs on an HDP Cluster

Execution errors might occur when Spark jobs are submitted if the resource threshold values set for YARN are too low.

To run Spark jobs on an HDP cluster, apply the following changes:
  1. From Ambari, go to Views and open the YARN Queue Manager view.
  2. Choose an existing queue or add a new one. You can configure the root queue, the default queue, or individual queues.
  3. Under the Resources section, increase the value in the Maximum AM Resource field so that the queue has enough capacity available for application masters. For example, you can set it to 80%. The equivalent capacity-scheduler.xml properties are shown in the sketch after these steps.
    Child queues can inherit this setting from the parent queue.
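If you manage the configuration outside the YARN Queue Manager view, the Maximum AM Resource field maps to the maximum-am-resource-percent setting in capacity-scheduler.xml. The following is a minimal sketch; the 0.8 value (80%) and the root.default queue path are assumptions that you should adapt to your own queue layout.

  <property>
    <!-- Cluster-wide cap on the share of resources that application masters may use (80%). -->
    <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
    <value>0.8</value>
  </property>
  <property>
    <!-- Per-queue override; "root.default" is a placeholder queue path. -->
    <name>yarn.scheduler.capacity.root.default.maximum-am-resource-percent</name>
    <value>0.8</value>
  </property>

If you edit capacity-scheduler.xml directly rather than through Ambari, refresh the queues afterwards (for example, with yarn rmadmin -refreshQueues) so that the new limit takes effect.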