Integrating Schema Registry with Flink and Cloudera SQL Stream Builder
When using Kafka with Schema Registry, you can integrate them with Flink for large-scale data stream processing, and with Cloudera SQL Stream Builder to import schemas as virtual tables and query streams of data through SQL.
When Kafka is the source and sink for your application, you can use Schema Registry to register and retrieve schema information for the different Kafka topics. You must add the Schema Registry dependency to your Flink project and attach the appropriate schema object to your Kafka topics. Additionally, you can add Schema Registry as a catalog to Cloudera SQL Stream Builder to import existing schemas for use with SQL queries.
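For the Flink side, the Schema Registry dependency is typically declared in your project's build file. The following is a minimal Maven sketch; the artifact coordinates and version property shown here are assumptions, so check the connector name and version shipped with your distribution.

```xml
<!-- Hypothetical coordinates: verify the exact artifact name and
     version of the Schema Registry connector in your distribution. -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-avro-cloudera-registry</artifactId>
  <version>${flink.version}</version>
</dependency>
```

With the dependency on the classpath, your Flink job can build Kafka sources and sinks whose serializers and deserializers fetch schemas from Schema Registry instead of embedding them in every record.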
You can integrate Flink with Kafka and Schema Registry in your project to process data streams at scale and deliver real-time analytical insights about your processed data. For more information about how to integrate Flink with Schema Registry, see Schema Registry with Flink.
You can integrate Schema Registry as a catalog with Cloudera SQL Stream Builder to import schemas as virtual tables, which you can then use to create SQL queries on streams of data. For more information about how to integrate Cloudera SQL Stream Builder with Schema Registry, see Adding Catalogs.
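Once the catalog is added, each registered schema is exposed as a virtual table that you can query directly. The sketch below illustrates this with Flink SQL; the catalog name (`registry`), database, table, and column names are hypothetical placeholders for whatever your registered schemas actually define.

```sql
-- Hypothetical example: "registry" is the catalog name chosen when adding
-- Schema Registry to SQL Stream Builder; the table corresponds to a
-- schema registered for a Kafka topic.
SELECT sensor_id, temperature
FROM `registry`.`default_database`.`sensor_readings`
WHERE temperature > 100;
```

Because the table definition is imported from the registry, you do not need to declare the columns by hand; the query runs continuously against the underlying Kafka topic as new records arrive.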