Managing Jobs and Pipelines in Cloudera Data Science Workbench
Cloudera Data Science Workbench allows you to automate analytics workloads with a built-in job and pipeline scheduling system that supports real-time monitoring, job history, and email alerts. A job automates the action of launching an engine, running a script, and tracking the results, all in one batch process. Jobs are created within the purview of a single project and can be configured to run on a recurring schedule. You can customize the engine environment for a job, set up email alerts for successful or failed job runs, and email the output of the job to yourself or a colleague.
As data science projects mature beyond ad hoc scripts, you might want to break them up into multiple steps. For example, a project may include one or more data acquisition and data cleansing steps, followed by a data analytics step. For such projects, Cloudera Data Science Workbench allows you to schedule multiple jobs to run one after another in what is called a pipeline, where each job is dependent on the output of the one preceding it.
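For example, the script that a job runs can be an ordinary project file. The following is a minimal, purely illustrative sketch of a data cleansing step; the file names and the use of pandas are assumptions for this sketch, not part of the product:

```python
# Illustrative cleansing script a job might run on a schedule.
# The input/output paths and the use of pandas are assumptions for this sketch.
import pandas as pd

raw = pd.read_csv("data/raw_measurements.csv")    # hypothetical project file
clean = raw.dropna().drop_duplicates()            # simple cleansing step
clean.to_csv("data/clean_measurements.csv", index=False)

# Anything printed here appears in the job's console log, which can be
# attached to the emailed job report.
print(f"Dropped {len(raw) - len(clean)} rows; wrote {len(clean)} clean rows.")
```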
Creating a Job
- Navigate to the project for which you want to create a job.
- On the left-hand sidebar, click Jobs.
- Click New Job.
- Enter a Name for the job.
- Select a script to execute for this job by clicking on the folder icon. You will be able to select a script from a list of files that are already part of the project. To upload more files to the project, see Managing Project Files.
- Depending on the code you are running, select an Engine Kernel for the job from one of the following options: Python 2, Python 3, R, or Scala.
- Select a Schedule for the job runs from one of the following options.
- Manual - Select this option if you plan to run the job manually each time.
- Recurring - Select this option if you want the job to run in a recurring pattern every X minutes, or on an hourly, daily, weekly or monthly schedule.
- Dependent - Use this option when you are building a pipeline of jobs to run in a predefined sequence. From a dropdown list of existing jobs in this project, select the job that this one should depend on. Once you have configured a dependency, this job will run only after the preceding job in the pipeline has completed a successful run.
- Select an Engine Profile to specify the number of cores and memory available for each session.
- Enter an optional timeout value in minutes.
- Click Set environment variables if you want to set any values that override the project's environment variables (a sketch of how a script reads these variables follows this list).
- Specify a list of Job Report Recipients who should receive email notifications with detailed job reports on job success, failure, or timeout. You can send these reports to yourself, your team (if the project was created under a team account), or any other external email addresses.
- Add any Attachments such as the console log to the job reports that will be emailed.
- Click Create Job.
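For reference, an environment variable set in the job's Set environment variables dialog (or at the project level) is available to the script at run time like any other environment variable. A minimal sketch, assuming a hypothetical variable named RUN_MODE:

```python
# Sketch: reading a job-level environment variable inside the job script.
# RUN_MODE is a hypothetical name; substitute whatever variables you define.
import os

run_mode = os.environ.get("RUN_MODE", "full")  # fall back to "full" if unset
print(f"Running job in {run_mode} mode")
```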
Starting with version 1.1.x, you can use the Jobs API to schedule jobs from third-party workflow tools. For details, see Cloudera Data Science Workbench Jobs API.
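As a rough illustration, an external workflow tool could trigger a job run with a single HTTP call. The host name, project path, job ID, endpoint pattern, and authentication shown below are assumptions for this sketch; consult the Cloudera Data Science Workbench Jobs API documentation for the exact URL format and request body:

```python
# Hedged sketch of starting a job from an external workflow tool.
# The URL pattern and basic-auth usage below are assumptions; verify them
# against the Cloudera Data Science Workbench Jobs API documentation.
import requests

CDSW_HOST = "http://cdsw.example.com"   # placeholder domain
API_KEY = "your-api-key"                # placeholder user API key
JOB_URL = f"{CDSW_HOST}/api/v1/projects/alice/customer-churn/jobs/42/start"

resp = requests.post(JOB_URL, auth=(API_KEY, ""), json={})
resp.raise_for_status()
print("Job start request returned:", resp.status_code)
```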
Creating a Pipeline
The Jobs overview presents a list of all existing jobs created for a project, along with a dependency graph that displays any pipelines you've created. Job dependencies do not need to be configured at the time of job creation; pipelines can be created after the fact by modifying the jobs to establish dependencies between them. From the Jobs overview, you can modify the settings of a job, access the history of all job runs, and view the session output for individual job runs. The following steps use two example jobs, Data Acquisition and Data Analytics, to make the Data Analytics job run only after the Data Acquisition job completes successfully (a sketch of how the dependent job might pick up the upstream job's output follows the steps).
- Navigate to the project where the Data Acquisition and Data Analytics jobs were created.
- Click Jobs.
- From the list of jobs, select Data Analytics.
- Click the Settings tab.
- Click on the Schedule dropdown and select Dependent. Select Data Acquisition from the dropdown list of existing jobs in the project.
- Click Update Job.
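Because both jobs run in the same project, the dependent job can read whatever files the upstream job wrote. A purely illustrative sketch of a Data Analytics script consuming output from the Data Acquisition job (the file name and the use of pandas are assumptions):

```python
# Illustrative Data Analytics step that consumes a file written earlier in
# the pipeline by the Data Acquisition job. The path is hypothetical.
import pandas as pd

acquired = pd.read_csv("data/clean_measurements.csv")  # written by the upstream job
print(acquired.describe())  # appears in this run's session output / console log
```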
Viewing Job History
- Navigate to the project where the job was created.
- Click Jobs.
- Select the relevant job.
- Click the History tab. You will see a list of all the job runs with some basic information such as who created the job, run duration, and status. Click individual runs to see the session output for each run.