Troubleshooting missing or unreported memory efficiency metrics

This topic describes possible causes and solutions when memory efficiency metrics are not reported for a Spark application.

Spark version

You must use Spark version 3.3.0 or later to see data for Spark jobs on the memory wastage widget.
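
As a quick check, you can print the running Spark version from a PySpark session. The following is a minimal sketch that assumes you can open a session against the cluster in question:

    # Print the Spark version the session is running on.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    print(spark.version)  # should print 3.3.0 or later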

Missing configuration

If you still cannot see data for Spark jobs on the memory wastage widget after upgrading to Spark 3.3.0 or later, ensure that spark.executor.processTreeMetrics.enabled is set to true. For more information, see the Enabling Spark configuration for memory consumption analysis documentation.
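
For example, you can set the property when building a PySpark session, as in the minimal sketch below; the application name is illustrative only:

    # Enable process tree metrics so per-process memory usage is collected.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("memory-metrics-example")  # hypothetical application name
        .config("spark.executor.processTreeMetrics.enabled", "true")
        .getOrCreate()
    )

You can also pass the same property with --conf on spark-submit or set it in spark-defaults.conf so that it applies when the executors start.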

Spark jobs with short-running tasks

Memory usage metrics for Spark applications with short-running tasks may not be captured because the default sampling interval is 10 seconds. Tasks can start and complete between consecutive samples, resulting in incomplete metric data.

Solution: You can resolve this by explicitly setting the spark.executor.metrics.pollingInterval property to a smaller value, such as 5000 (5 seconds, since the value is in milliseconds). By default, this property inherits its value from spark.executor.heartbeatInterval, which is 10 seconds.
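
For example, the following sketch lowers the polling interval when building a PySpark session; again, the application name is illustrative only:

    # Poll executor metrics every 5 seconds instead of the default
    # (the heartbeat interval, 10 seconds).
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("short-task-metrics-example")  # hypothetical application name
        .config("spark.executor.metrics.pollingInterval", "5000")  # milliseconds
        .getOrCreate()
    )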

Missing proc filesystem

The /proc filesystem, which is required to collect process memory usage metrics, is missing.

Solution: Enable the /proc filesystem on your cluster nodes. For more information, see the /proc Filesystem documentation.
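
As a rough, illustrative check that /proc is visible from inside executor tasks (rather than only on the driver), you can run a sketch such as the following; it samples one trivial task per default partition and is not a substitute for inspecting each node:

    # Check from inside executor tasks that the /proc filesystem is mounted,
    # since process memory metrics are read from it.
    import os
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    sc = spark.sparkContext

    has_proc = (
        sc.parallelize(range(sc.defaultParallelism), sc.defaultParallelism)
        .map(lambda _: os.path.isdir("/proc/self"))
        .collect()
    )
    print("/proc visible in all sampled tasks:", all(has_proc))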