Using Apache Flink
Running a simple Flink application
Application development
Flink application structure
Source, operator and sink in DataStream API
Flink application example
Testing and validating Flink applications
Flink project template
Configuring Flink applications
Setting parallelism and max parallelism
Configuring Flink application resources
Configuring state backend
Enabling checkpoints for Flink applications
Configuring PyFlink applications
DataStream connectors
HBase sink with Flink
Creating and configuring the HBaseSinkFunction
Kafka with Flink
Schema Registry with Flink
Kafka Metrics Reporter
Kudu with Flink
Iceberg with Flink
File systems
Job lifecycle
Setting up Python for PyFlink
Running a Flink job
Using Flink CLI
Enabling savepoints for Flink applications
Monitoring
Enabling Flink DEBUG logging
Flink Dashboard
Streams Messaging Manager integration
SQL and Table API
SQL and Table API supported features
DataStream API interoperability
Converting DataStreams to Tables
Converting Tables to DataStreams
Supported data types
SQL catalogs for Flink
Hive catalog
Kudu catalog
Schema Registry catalog
SQL connectors for Flink
Kafka connector
Data types for Kafka connector
JSON format
CSV format
Avro format
Supported basic data types
Schema Registry formats
SQL Statements in Flink
CREATE Statements
DROP Statements
ALTER Statements
INSERT Statements
SQL Queries in Flink
Governance
Atlas entities in Flink metadata collection
Creating Atlas entity type definitions for Flink
Verifying metadata collection
Migrating Flink jobs
Migrating Flink jobs without state
Migrating stateful Flink jobs
Updating Flink job dependencies
Reference
Flink Terminology
Cloudera Flink Tutorials