To create Kafka tables in SQL Stream Builder (SSB) you need to register Kafka as a
Data Source, using the Streaming SQL Console.
A default Local Kafka is added to the SSB data sources
during installation, using a Kafka service within the same cluster as SSB. This Local
Kafka data source cannot be updated or deleted, as it is used in the Streaming SQL
Console for sampling results and cleaning up sample topics. To add your own,
customizable Kafka data source instead, follow the steps in this task.
Make sure that you have the right permissions set in Ranger.
Navigate to the Streaming SQL Console.
Go to your cluster in Cloudera Manager.
Select SQL Stream Builder from the list of services.
Click SQLStreamBuilder Console.
The Streaming SQL Console opens in a new window.
Open a project from the Projects page of Streaming SQL
Console.
Select an existing project from the list by clicking the
Open or Switch button.
Create a new project by clicking the New Project
button.
Import a project by clicking the Import button.
You are redirected to the Explorer view of the project.
Open Data Sources from the
Explorer view.
Click the menu icon next to Kafka.
Select New Kafka Source.
The Kafka Source window appears.
Add a Name to your Kafka provider.
Add the broker host name(s) to Brokers.
You need to copy the Kafka broker name(s) from Cloudera Manager.
Go to your cluster in Cloudera Manager.
Click Kafka from the list of services.
Click Instances.
Copy the hostname of the Kafka broker(s) you want to use.
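The Brokers field expects a comma-separated list of host:port pairs. For example, with two brokers listening on the default Kafka port (the hostnames below are placeholders):

```
kafka-broker-1.example.com:9092,kafka-broker-2.example.com:9092
```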
Choose your authentication method:

SSL authentication
By default, auto-discovery of the CDP TrustStore is enabled. Auto-discovery of the
CDP TrustStore can be used for Kafka sources that are located in the same
CDP Private Cloud Base cluster as SSB. This means that the default TrustStore path
is used for authentication, which can be customized in Cloudera Manager using the
local.kafka.truststore.location and local.kafka.truststore.password parameters.
If you disable auto-discovery, you need to provide the following configurations:
Kafka TrustStore path
Kafka TrustStore Password
You also have the option to provide the Kafka KeyStore path and KeyStore Password
that belong to the Kafka source.
Click Validate.
Click Create after validation is successful.

SSL and SASL authentication
Configure the TrustStore, and optionally the KeyStore, as described for SSL
authentication.
Choose a SASL Mechanism.
Click Validate.
Click Create after validation is successful.

SASL authentication
Choose a SASL Mechanism.
Provide the Username for SASL.
Provide the Password for SASL.
Click Validate.
Click Create after validation is successful.
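The TrustStore, KeyStore, and SASL fields in this form correspond to standard Kafka client security properties. The following sketch illustrates the mapping; the paths, passwords, and mechanism are placeholders, and the actual values depend on your cluster setup:

```
# Illustrative Kafka client security settings (placeholder values)
security.protocol=SASL_SSL
ssl.truststore.location=/path/to/kafka.truststore.jks
ssl.truststore.password=<truststore-password>
ssl.keystore.location=/path/to/kafka.keystore.jks
ssl.keystore.password=<keystore-password>
sasl.mechanism=PLAIN
```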
You have registered Kafka as a data source and can now add Kafka as a table in
your SQL queries. Existing Kafka topics can be selected when adding Kafka as a
table.

After registering the Kafka data source, you can edit,
duplicate, and delete it from the Streaming SQL Console:
Open Data Sources from the Explorer
view.
Click the menu icon next to Kafka.
Select Manage.
The Kafka Sources
tab opens where the registered Kafka providers are listed. You have the
following options to manage the Kafka sources:
Click on one of the existing Kafka providers to edit its
configurations.
Click the delete icon to remove the Kafka
provider.
Click the duplicate icon to duplicate the Kafka provider
with its configurations.
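Once a Kafka data source is registered, its topics can be exposed as tables. Because SSB runs on Flink SQL, the resulting table definition is conceptually similar to the following sketch; the topic name, columns, and broker hostname are hypothetical, and in practice the Streaming SQL Console generates an equivalent definition for you:

```sql
-- Hypothetical Kafka-backed table; SSB generates an equivalent definition
CREATE TABLE orders (
  order_id STRING,
  amount DOUBLE,
  order_time TIMESTAMP(3)
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',
  'properties.bootstrap.servers' = 'kafka-broker-1.example.com:9092',
  'format' = 'json',
  'scan.startup.mode' = 'earliest-offset'
);
```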