You can use this Getting Started use case to get familiar with the simplest form
of running a SQL Stream job.
The Getting Started guide covers the basic steps of running a SQL Stream job. When
executing the job, you do not need to select a sink, as the results are displayed in
the browser. SQL Stream Builder provisions a job on your cluster to run the SQL
queries. You can select the Logs tab to review the status of the SQL job. As data is
returned, it appears in the Results tab.
As the Getting Started guide uses the Stateful Tutorial as an
example, you need to create a Kafka topic named transaction.log.1 in
Streams Messaging Manager, and submit the Kafka Data Generator job to generate data to
the source topic. For more information, see the Stateful Tutorial.
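The examples in this guide assume the transaction record structure of the Stateful
Tutorial. A record generated to the transaction.log.1 topic might look similar to the
following sketch; the exact field names and values depend on the Kafka Data Generator
configuration:
{"transactionId": 1001, "ts": 1617805800000, "itemId": "item_5", "quantity": 3}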
Navigate to the Streaming SQL Console.
Navigate to Management Console > Environments, and select the environment where you have created your
cluster.
Select the Streaming Analytics cluster from the list of
Data Hub clusters.
Select Streaming SQL Console from the list of
services.
The Streaming SQL Console opens in a new window.
Click Data Providers from the main menu.
Register a Kafka Provider.
Click Console from the main menu.
Click the Tables tab.
Add a Kafka table.
Name the table transactions.
Select the registered Kafka cluster.
Select transaction.log.1 as the Kafka topic.
Select JSON as the Data Format.
Click Detect Schema.
SSB detects the schema and displays it in the Schema Definition field, similar to
the example shown after these steps.
Click Save Changes.
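SSB displays the detected schema in Avro format. Assuming the transaction record
structure described earlier, the content of the Schema Definition field might look
similar to the following sketch; the actual field names and types depend on the
sampled data:
{
  "type": "record",
  "name": "transactions_schema",
  "fields": [
    {"name": "transactionId", "type": "long"},
    {"name": "ts", "type": "long"},
    {"name": "itemId", "type": "string"},
    {"name": "quantity", "type": "long"}
  ]
}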
Click the Compose tab.
Provide a name for the SQL job in the SQL Job Name text box.
Add the following SQL statement to the SQL window:
SELECT * FROM transactions
Click Execute.
You can see the generated output in the Results
tab.
Click Stop to stop the previous query.
Add the following SQL statement to the SQL window:
-- Top-N query: rank rows by quantity and keep the first four
SELECT itemId, quantity
FROM (
  SELECT itemId, quantity,
    ROW_NUMBER() OVER (
      ORDER BY quantity) AS rownum
  FROM transactions)
WHERE rownum <= 4
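This statement follows the Top-N pattern of Flink SQL: the inner query assigns a rank
to every row with the ROW_NUMBER() function, and the outer query keeps only the rows
ranked four or lower. As a minimal sketch of a common variation, assuming you want to
track the four largest quantities for each item rather than a single global ranking,
you can add PARTITION BY and DESC to the OVER clause:
SELECT itemId, quantity
FROM (
  SELECT itemId, quantity,
    ROW_NUMBER() OVER (
      PARTITION BY itemId
      ORDER BY quantity DESC) AS rownum
  FROM transactions)
WHERE rownum <= 4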