Creating a Job

This topic describes how to automate analytics workloads with a built-in job and pipeline scheduling system that supports real-time monitoring, job history, and email alerts.

A job automates the action of launching an engine, running a script, and tracking the results, all in one batch process. Jobs are created within the purview of a single project and can be configured to run on a recurring schedule. You can customize the engine environment for a job, set up email alerts for successful or failed job runs, and email the output of the job to yourself or a colleague.

When you create a job, you select a script to run as part of the job and define a schedule for when the job should run. Optionally, you can configure a job to depend on another existing job, creating a pipeline of tasks to be completed in sequence. Note that the script files and any other job dependencies must exist within the scope of the same project.

  1. Navigate to the project for which you want to create a job.
  2. On the left-hand sidebar, click Jobs.
  3. Click New Job.
  4. Enter a Name for the job.
  5. In Run Job as, if the job should run as a service account, select Service Account and choose the account from the dropdown menu.
  6. In Script, select a script to run for this job by clicking on the folder icon. You will be able to select a script from a list of files that are already part of the project. To upload more files to the project, see Managing Project Files.
  7. In Arguments, enter arguments to provide to the script.

    Use the environment variable JOB_ARGUMENTS to access the arguments in a runtime-agnostic way.

    For non-PBJ Runtimes, the contents of this field are available as standard command line arguments. (A sketch showing how a script can read these values appears after this procedure.)

  8. Depending on the code you are running, select an Engine Kernel for the job from one of the following options: Python 3.

    The resources you can choose are dependent on the default engine you have chosen: ML Runtimes or Legacy Engines. For ML Runtimes, you can also choose a Kernel Edition and Version.

  9. Select a Schedule for the job runs from one of the following options.
    • Manual - Select this option if you plan to run the job manually each time.
    • Recurring - Select this option if you want the job to run in a recurring pattern every X minutes, or on an hourly, daily, weekly or monthly schedule. Set the recurrence interval with the drop-down buttons.

      As an alternative, select Use a cron expression to enter a Unix-style cron expression to set the interval. The expression must have five fields, specifying the minute, hour, day of month, month, and day of week; for example, 0 6 * * 1 runs the job at 06:00 every Monday. If Use a cron expression is deselected, the schedule indicated in the drop-down settings takes effect.

    • Dependent - Use this option when you are building a pipeline of jobs to run in a predefined sequence. From a dropdown list of existing jobs in this project, select the job that this one should depend on. Once you have configured a dependency, this job will run only after the preceding job in the pipeline has completed a successful run.
  10. Select a Resource Profile to specify the number of cores and memory available for each session.
  11. Enter an optional timeout value in minutes.
  12. Click Set environment variables if you want to set any values to override the overall project environment variables.
  13. Specify a list of Job Report Recipients to whom you can send email notifications with detailed job reports for job success, failure, or timeout. You can send these reports to yourself, your team (if the project was created under a team account), or any other external email addresses.
  14. Add any Attachments, such as the console log, to the job reports that will be emailed.
  15. Click Create Job.

    You can use the API v2 to schedule jobs from third-party workflow tools. For details, see Using the Jobs API as well as the CML APIv2 tab.
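
Inside the job's script, the arguments from the Arguments field and any job-level environment variables can be read at run time. The following is a minimal sketch; MY_ENV_KEY is a hypothetical variable name used only for illustration.

import os
import sys

# Arguments entered in the job's Arguments field, exposed in a runtime-agnostic way
args_string = os.environ.get("JOB_ARGUMENTS", "")

# On non-PBJ Runtimes the same arguments also arrive as standard command line arguments
cli_args = sys.argv[1:]

# Job-level environment variables override the project-level values
my_value = os.environ.get("MY_ENV_KEY")  # hypothetical variable name, for illustration only

print(args_string, cli_args, my_value)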

To create a job using the API, adapt the following example:

import cmlapi

job_body = cmlapi.CreateJobRequest()
      
# name and script
job_body.name = "my job name"
job_body.script = "pi.py"
      
# arguments
job_body.arguments = "arg1 arg2 \"all arg 3\""
      
# engine kernel
job_body.kernel = "python3" # or "r", or "scala"
      
# schedule
# manual by default
# for recurring/cron:
job_body.schedule = "* * * * 5" # or some valid cron string

# for a dependent job, set parent_job_id instead of schedule (don't set both):
# job_body.parent_job_id = "abcd-1234-abcd-1234"
      
# resource profile (cpu and memory can be floating point for partial)
job_body.cpu = 1 # one cpu vcore
job_body.memory = 1 # one GB memory
job_body.nvidia_gpu = 1 # one nvidia gpu, cannot do partial gpus
      
# timeout
job_body.timeout = 300 # this is in seconds
      
# environment
job_body.environment = {"MY_ENV_KEY": "MY_ENV_VAL", "MY_SECOND_ENV_KEY": "MY_SECOND_ENV_VAL"}
      
# attachment
job_body.attachments = ["report/1.txt", "report/2.txt"] # will attach /home/cdsw/report/1.txt and /home/cdsw/report/2.txt to emails
      
# After setting the parameters above, create the job:
client = cmlapi.default_client("host", "api key")
created_job = client.create_job(job_body, project_id="id of project to create job in")
# the response contains the new job's metadata, including its ID
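
Once the job exists, you can also start a run of it from the same client. The following is a minimal sketch that continues the example above; it assumes the client exposes create_job_run with a CreateJobRunRequest body (confirm the exact call in the Jobs API reference for your release).

# Optionally trigger a run of the new job right away (assumes create_job_run is available).
run_body = cmlapi.CreateJobRunRequest()
job_run = client.create_job_run(run_body, project_id="id of project to create job in", job_id=created_job.id)
print(job_run.status)  # the returned object describes the run, for example its current status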

For more examples of commands related to jobs, see Using the Jobs API.
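
For instance, to review a job's run history from code, something along the lines of the sketch below should work; it reuses client and created_job from the example above and assumes the client provides a list_job_runs call whose response carries a job_runs list (check the Jobs API reference to confirm).

# List past runs of the job to review its history (assumes list_job_runs is available).
runs = client.list_job_runs(project_id="id of project to create job in", job_id=created_job.id)
for run in runs.job_runs:
    print(run.id, run.status)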