Using Tables in SQL Stream jobs
The core abstraction for Streaming SQL is the Table, which represents both the inputs and outputs of queries. SQL Stream Builder tables extend the tables used in Flink SQL to give users additional flexibility.
A Table is a logical definition of a data source that includes the location and connection parameters, a schema, and any required context-specific configuration parameters. In most cases, tables can be used for both reading and writing data. You can create and manage tables manually, or they can be loaded automatically from one of the catalogs registered on the Data Providers page.
In SELECT queries, the FROM clause defines the table sources, which can be multiple tables at the same time in the case of JOIN or more complex queries.
SELECT lat, lon
FROM airplanes -- the name of the virtual table source
WHERE icao <> 0;
When you execute a query, the results go to the Sink Table that you selected in the SQL window. This allows you to create aggregations, filters, joins, and so on, and then route the results to another table. The schema of the results is determined by the query at the time you run it.
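Results can also be routed to a sink explicitly with an INSERT INTO statement in Flink SQL. The following is a minimal sketch that reuses the airplanes source table from the earlier example and assumes a sink table named airplane_positions has already been created with a matching schema.

-- Minimal sketch: route filtered results into an assumed sink table.
-- 'airplane_positions' is a hypothetical sink table that must already exist.
INSERT INTO airplane_positions
SELECT lat, lon
FROM airplanes -- the virtual table source from the earlier example
WHERE icao <> 0;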
Supported tables in SSB
SSB supports different table types to ease development and provide access to a wide range of data sources. There are two main categories of tables: user-defined tables and catalog tables. User-defined tables must be added manually, while catalog tables become accessible automatically after you register a catalog provider.
- User-defined tables
- The user-defined tables must be added manually using the Add Tables wizard of the Streaming SQL Console.
- Kafka Table
Apache Kafka tables represent data contained in a single Kafka topic in JSON or AVRO format. They can be defined using the Streaming SQL Console wizard. For more advanced use cases, Kafka tables can be substituted by Flink DDL tables.
- Flink DDL Table
Flink DDL tables represent tables created using the standard Flink SQL CREATE TABLE/CREATE VIEW syntax. This offers full flexibility in defining new or derived tables and views. You can either provide the syntax directly in the Flink DDL window or use one of the predefined DDL templates; a minimal Kafka-backed example is sketched after this list.
- Webhooks
Webhooks can only be used as sink tables for SQL queries. The result of your SQL query is sent to a specified webhook.
- Catalog tables
- The catalog tables are automatically imported based on the catalog type you have registered on the Data Providers page. The following catalogs are supported in SSB:
- Schema Registry
- Kudu
- Hive
- Custom catalogs
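As a reference for the Flink DDL table type mentioned above, a Kafka-backed table definition might look like the sketch below. This is only an illustration: the table name, topic name, broker address, and schema are assumptions and should be replaced with your own values.

-- Minimal sketch of a Flink DDL table backed by a Kafka topic.
-- Table name, topic, broker address, and columns are hypothetical.
CREATE TABLE airplanes_ddl (
  icao INT,
  lat DOUBLE,
  lon DOUBLE,
  event_time TIMESTAMP(3),
  WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
) WITH (
  'connector' = 'kafka',                                  -- standard Flink Kafka connector
  'topic' = 'airplanes',                                  -- assumed topic name
  'properties.bootstrap.servers' = 'kafka-broker:9092',   -- assumed broker address
  'scan.startup.mode' = 'earliest-offset',
  'format' = 'json'                                       -- JSON-encoded messages
);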