Troubleshooting Failed Jobs
Use these steps to troubleshoot jobs that failed to complete on your cluster. Examples from a Spark engine show how to investigate and troubleshoot the root cause of a failed job.
In a supported browser, log in to the web UI:
- In the web browser URL field, enter the URL that you were given by your system administrator and press Enter.
- When the Log in page opens, enter your user name and password.
- Click Log in.
On the Clusters page, do one of the following:
- In the Search field, enter the name of the cluster whose workloads you want to analyze.
- From the Cluster Name column, locate and click on the name of the cluster whose workloads you want to analyze.
- From the time-range list in the Cluster Summary page, select a time period that meets your requirements.
In the Usage Analysis chart widget, note which engines display Failed jobs. Then, from the Trend widget, select the tab of an engine whose failed jobs you want to analyze and click its Total value. The engine's Jobs page opens.
- From the Health Check list, select Failed to Finish, which filters the list to display only jobs that did not complete.
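The same filter can also be applied outside the UI. As a sketch, Spark's monitoring REST API exposes per-application job status at `/api/v1/applications/<app-id>/jobs`; the snippet below filters a parsed response down to failed jobs, mirroring the Failed to Finish filter. The job IDs and names in the sample response are made up for illustration.

```python
import json

# Hypothetical response from Spark's monitoring REST API endpoint
# /api/v1/applications/<app-id>/jobs (the job names and IDs here are
# invented for illustration).
sample_response = json.loads("""
[
  {"jobId": 0, "name": "save at WriteJob.scala:42", "status": "FAILED"},
  {"jobId": 1, "name": "count at Report.scala:10", "status": "SUCCEEDED"}
]
""")

def failed_jobs(jobs):
    """Mirror the UI's 'Failed to Finish' filter: keep only failed jobs."""
    return [job for job in jobs if job["status"] == "FAILED"]

for job in failed_jobs(sample_response):
    print(f"Job {job['jobId']} failed: {job['name']}")
```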
To view more details about why a job failed to complete, select the job's name from the Job column. The job's page opens, displaying information about the selected job and where the failure occurred.
In the Failures section, in the Diagnostic Information column, click +More.
The Diagnostic Information dialog box opens, which provides more details about why the job was aborted. In the following example, the job was aborted while writing rows because of an out-of-bounds Java exception:
- Click Close to close the dialog box.
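To make the diagnostic message more concrete, here is a minimal Python analogue of that failure mode (not the engine's actual code): a row writer that assumes a fixed number of fields hits an out-of-bounds error when a malformed row arrives, just as the Java `ArrayIndexOutOfBoundsException` did while writing rows.

```python
# Illustration only: a writer that assumes every row has three fields
# raises an out-of-bounds error (IndexError, Python's analogue of the
# Java ArrayIndexOutOfBoundsException) when a short row arrives.
rows = [
    ("alice", 34, "ops"),
    ("bob", 29),          # malformed row: only two fields
]

def write_row(row):
    # Accessing a fixed index on a short row triggers the same class
    # of error reported in the Diagnostic Information dialog box.
    return f"{row[0]},{row[1]},{row[2]}"

try:
    for row in rows:
        write_row(row)
except IndexError as exc:
    print(f"Job aborted while writing rows: {exc}")
```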
To display more information about the stage where the job failed (in this case, the Stage-2 process), click the stage's link in the Failing from column. Alternatively, select the Execution Details tab and then click the failed stage.
In the following example, the Summary panel shows that Task 0 was attempted 4 times:
To display more information about all the failed attempts, click the Failed task value in the Summary panel.
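The 4 attempts shown in the Summary panel reflect Spark's retry behavior: a failing task is retried up to `spark.task.maxFailures` total attempts (4 by default) before the stage, and with it the job, is aborted. The sketch below simulates that loop; the task function and error text are made up for illustration.

```python
# Sketch of why the Summary panel can show 4 attempts for one task:
# Spark allows up to spark.task.maxFailures (default 4) attempts
# before aborting the stage.
MAX_FAILURES = 4  # corresponds to spark.task.maxFailures

def run_with_retries(task, max_failures=MAX_FAILURES):
    """Run `task` until it succeeds or the attempt limit is reached."""
    attempts = []
    for attempt in range(max_failures):
        try:
            return task(), attempts
        except Exception as exc:
            attempts.append(f"attempt {attempt}: {exc}")
    raise RuntimeError(f"Task aborted after {len(attempts)} failed attempts")

def always_fails():
    # Stand-in for the failing task from the example.
    raise ValueError("java.lang.ArrayIndexOutOfBoundsException")

try:
    run_with_retries(always_fails)
except RuntimeError as err:
    print(err)  # Task aborted after 4 failed attempts
```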
In the following example, the job aborted while Task 0 was writing rows. To understand what triggered the SparkException error message and to troubleshoot the root cause further, open the associated log file by clicking Full error log.
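When reading a long full error log, a useful habit is to scan for the chained `Caused by:` lines that Java stack traces carry; the last one is usually the root cause. The sketch below extracts it from a made-up log excerpt modeled on the example's failure.

```python
# Java stack traces chain exceptions with "Caused by:" lines; the last
# one is usually the root cause. This log excerpt is invented for
# illustration, modeled on the failure described in the example.
full_error_log = """\
org.apache.spark.SparkException: Job aborted due to stage failure
    at org.apache.spark.scheduler.DAGScheduler...
Caused by: org.apache.spark.SparkException: Task failed while writing rows
    at org.apache.spark.sql.execution...
Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
    at MyRowWriter.write...
"""

def root_cause(log_text):
    """Return the last 'Caused by:' line, or the first line if none exist."""
    causes = [line for line in log_text.splitlines()
              if line.startswith("Caused by:")]
    return causes[-1] if causes else log_text.splitlines()[0]

print(root_cause(full_error_log))
# Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
```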