Managing SQL jobs
You can manage your SQL jobs and Flink jobs in the Streaming SQL Console or through the REST API using the API Explorer.
Loading SQL queries to SQL Editor
You can reuse an already submitted SQL query from the Console page or the History page of Streaming SQL Console.
Loading jobs to SQL Editor
You can reload previously executed jobs, regardless of whether they succeeded, to the SQL Editor, make further changes, and execute them again.
- On the Console page, click the SQL Jobs button, and click a specific job in the list.
- On the SQL Jobs page, click the button next to the job status, and select Load Job.
The details of the job are reloaded to the SQL Editor with the latest saved configuration and Materialized Views.
Stopping SQL jobs
Because a SQL Stream job processes unbounded streaming data, it does not finish on its own; you need to stop the job rather than let it run indefinitely. You can stop a running job either on the Console page or the SQL Jobs page.
- Stopping job from Console page
- If the job is currently loaded to the SQL Editor, click the Stop button to terminate the job.
- Stopping job from SQL Jobs page
- On the SQL Jobs page, click the button next to the job status, and select Stop.
Restarting SQL jobs
You can restart an already running job when it is active in the SQL Editor. You can also restart a stopped job by executing it again either from the Console page or the SQL Jobs page.
Restarting a running SQL job
If the job is currently loaded to the SQL Editor, click the Restart button to restart the job.

Executing a stopped SQL job
- Executing stopped job from Console page
- Click the SQL Jobs button to open the SQL jobs list. Without selecting the stopped job, click the button next to the job status, and select Execute.
- Executing job from SQL Jobs page
- On the SQL Jobs page, click the button next to the job status, and select Execute.
Editing SQL jobs
You can edit every detail of a SQL job after stopping it and reloading it to the SQL Editor on the Console page.
- Load the stopped job to the SQL Editor either from the Console or SQL Jobs page.
- Click Job Settings to edit any configuration of the selected SQL job.
You can modify the job settings, Materialized Views and SQL query when editing a job.
- Click Save.
- Click Execute.
Deleting SQL jobs
You can delete running and stopped jobs from the Console and SQL Jobs pages. A deleted job no longer appears in the Streaming SQL Console.
- Deleting job from Console page
- Click the SQL Jobs button to open the SQL jobs list. Without selecting the job, click the button next to the job status, and select Delete.
- Deleting job from SQL Jobs page
- On the SQL Jobs page, click the button next to the job status, and select Delete Job.
Configuring SQL job settings
If you need to further customize your SQL Stream job, you can configure advanced settings: the job restart method and timing, the number of threads for parallelism, sampling behavior, exactly-once processing, and restoring from a savepoint.


- Job parallelism (threads)
- The number of threads to start to process the job. Each thread consumes a slot on the cluster. When Job Parallelism is set to 1, the job consumes the least resources. If the data provider supports parallel reads, increasing the parallelism can raise the maximum throughput. For example, when using Kafka as a data provider, setting the parallelism equal to the number of partitions of the topic can be a starting point for performance tuning (see the sketch after this list).
- Sample Count
- The number of sample entries shown under the Results tab. To have an unlimited number of sample entries, set the Sample Count value to 0.
- Sample Window Size
- The number of sample entries to keep under the Results tab. To have an unlimited number of sample entries, set the Sample Window Size value to 0.
- Sample Behavior
- You have the following options to choose the behavior of the sampled data under the Results tab:
- Sample all messages
- Sample one message every second
- Sample one message every five seconds
- Restore From Savepoint
- You can enable or disable restoring a SQL job from a Flink savepoint after stopping it. The savepoint is saved under hdfs:///user/flink/savepoints by default.
- Enable Checkpointing
- You can enable or disable checkpointing for a SQL job. By default, checkpointing is enabled (see the second sketch after this list for how these settings relate to Flink's configuration options).
- Checkpoint Mode
- The checkpointing mode to use. You can choose between At Least Once and Exactly Once.
- Checkpoint Interval
- The time in milliseconds between checkpointing attempts.
- Checkpoint Timeout
- The maximum time in milliseconds after which an in-progress checkpointing attempt is timed out.
- Tolerable Checkpoint Failures
- The number of failed checkpointing attempts that are tolerated before the job is aborted.
- Failure Restart Strategy
- The restart strategy to use when checkpointing fails. You can choose between enabling and disabling auto recovery. By default, auto recovery is enabled.
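As a quick illustration of the Kafka tuning guidance above, the following minimal Python sketch looks up the partition count of a topic to suggest a starting Job Parallelism value. It assumes the kafka-python client is installed; the broker address and topic name are placeholders.

```python
# Minimal sketch: derive a starting Job Parallelism value from the
# partition count of a Kafka topic. Assumes the kafka-python package;
# the broker address and topic name below are placeholders.
from kafka import KafkaConsumer

consumer = KafkaConsumer(bootstrap_servers="broker-1:9092")
partitions = consumer.partitions_for_topic("transactions")  # set of partition ids, or None
consumer.close()

if partitions:
    # Matching parallelism to the partition count is only a starting
    # point; measure throughput and adjust from there.
    print(f"Suggested Job Parallelism: {len(partitions)}")
else:
    print("Topic not found or metadata unavailable")
```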
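For reference, the checkpointing settings above conceptually correspond to standard Flink configuration options. The mapping below is a sketch based on Flink's documented configuration keys, not on SSB internals; SSB manages these values for you through Job Settings, so you normally never edit them by hand.

```python
# Sketch: the Job Settings fields above expressed as the standard Flink
# configuration keys they conceptually map to (assumed mapping; the
# example values are placeholders).
checkpointing = {
    "execution.checkpointing.mode": "EXACTLY_ONCE",       # Checkpoint Mode (or AT_LEAST_ONCE)
    "execution.checkpointing.interval": "60000",          # Checkpoint Interval, in milliseconds
    "execution.checkpointing.timeout": "600000",          # Checkpoint Timeout, in milliseconds
    "execution.checkpointing.tolerable-failed-checkpoints": "3",  # Tolerable Checkpoint Failures
}
for key, value in checkpointing.items():
    print(f"{key}: {value}")
```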
Using SQL Stream Builder REST API
You can use the REST API to monitor, manage, and configure SQL Stream jobs with the GET, POST, and DELETE HTTP methods. You can call the SQL Stream Builder (SSB) REST API from the command line, import the endpoints into REST API tools, or access them through the Swagger UI.
- GET to query information about the specified endpoint
- POST to create resources for the specified endpoint
- DELETE to remove objects from the specified endpoint
The REST API Reference document contains the available endpoints for SQL Stream Builder.
You can also reach the SSB REST API reference document from the main menu of Streaming SQL Console using the API Explorer.
- SQL Operations: You can execute and analyze SQL queries.
- Session Operations: You can manage and reset the SSB session.
- Sampling Operations: You can configure the sampling behavior and retrieve sampling results.
- Job Operations: You can create and stop jobs, and also retrieve job information.
- Flink Session Cluster Operations: You can manage and reset the Flink session.
- Flink Job Management: You can run Flink applications.
- Artifact Management: You can add and delete JAR files and configuration files, and also retrieve information about them.
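As an illustration, the following minimal Python sketch calls a Job Operations endpoint with the requests library. The host, port, credentials, and endpoint paths are placeholders, not confirmed SSB paths: consult the API Explorer (Swagger UI) or the REST API Reference document for the exact paths and payloads of your SSB version.

```python
# Minimal sketch of calling the SSB REST API with Python's requests
# library. Host, port, credentials, and endpoint paths below are
# placeholders; verify the real paths in the API Explorer.
import requests

BASE_URL = "http://ssb-host.example.com:18121"   # placeholder SSB host and port
AUTH = ("ssb_user", "ssb_password")              # placeholder credentials

# GET: query information about jobs (hypothetical endpoint path).
response = requests.get(f"{BASE_URL}/api/v1/jobs", auth=AUTH, timeout=30)
response.raise_for_status()
print(response.json())

# POST: stop a job by id (hypothetical endpoint path), for example:
# requests.post(f"{BASE_URL}/api/v1/jobs/{job_id}/stop", auth=AUTH, timeout=30)
```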