Known issues and limitations
Learn about the known issues in Flink and Cloudera SQL Stream Builder in Cloudera Streaming Analytics 1.16.0, their impact on functionality, and the available workarounds.
Cloudera SQL Stream Builder
- CSA-5873 - Cloudera SQL Stream Builder environment variables cannot be used through the V2 API
- Environment variables are not being added to the session after activation.
- CSA-5743 - Materialized View created from UDFs throws Invalid request error
- Newly created UDFs are not recognized by the Materialized View widget when analyzing the SQL operation. Clicking the widget in Cloudera SQL Stream Builder displays a `SQL validation failed. Cannot instantiate user-defined function` error if the UDF has not been added to the user session (by starting a job that uses the UDF).
- CSA-5733 - Chart or diagram type dashboard widgets do not work when the label field is the same as the data field
- When creating a diagram type widget in Cloudera SQL Stream Builder, setting the label and data fields to the same value causes the graph to disappear.
- CSA-5732 - Materialized View widget does not fetch data when authenticated through SPNEGO
- When using Cloudera SQL Stream Builder with SPNEGO authentication, creating a Materialized View widget fails with a Data Source Error.
Flink
In Cloudera Streaming Analytics, the following SQL API features are in preview:
- `MATCH_RECOGNIZE`
- Top-N
- Stream-Table join (without rowtime input)
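As an illustration of the first preview feature, a minimal `MATCH_RECOGNIZE` query might look like the following sketch. The `orders` table and its `user_id`, `order_time`, and `price` columns are hypothetical, and `order_time` is assumed to be declared as a rowtime attribute:

```sql
SELECT *
FROM orders
    MATCH_RECOGNIZE (
        PARTITION BY user_id
        ORDER BY order_time
        MEASURES
            A.price AS first_price,
            B.price AS higher_price
        ONE ROW PER MATCH
        AFTER MATCH SKIP PAST LAST ROW
        -- match any order followed by a more expensive one
        PATTERN (A B)
        DEFINE
            B AS B.price > A.price
    ) AS T;
```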
- DataStream conversion limitations
  - Converting between Tables and POJO DataStreams is currently not supported in Cloudera Streaming Analytics.
  - Object arrays are not supported for Tuple conversion.
  - The `java.time` class conversions for Tuple DataStreams are only supported by using explicit `TypeInformation`: `LegacyInstantTypeInfo`, `LocalTimeTypeInfo.getInfoFor(LocalDate/LocalDateTime/LocalTime.class)`.
  - Only `java.sql.Timestamp` is supported for rowtime conversion, `java.time.LocalDateTime` is not supported.
- Kudu catalog limitations
  - `CREATE TABLE`
    - Primary keys can only be set by the `kudu.primary-key-columns` property. Using the `PRIMARY KEY` constraint is not yet possible.
    - Range partitioning is not supported.
  - When getting a table through the catalog, `NOT NULL` and `PRIMARY KEY` constraints are ignored. All columns are described as being nullable, and not being primary keys.
  - Kudu tables cannot be altered through the catalog other than simply renaming them.
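For example, a primary key has to be declared through the `kudu.primary-key-columns` table property rather than a `PRIMARY KEY` constraint. A minimal sketch, assuming a hypothetical `users` table with an `id` column (the `kudu.hash-columns` property is included because Kudu tables also need a partitioning scheme):

```sql
-- In the Kudu catalog: primary key set via property, not a constraint
CREATE TABLE users (
  id INT,
  name STRING
) WITH (
  'kudu.primary-key-columns' = 'id',
  'kudu.hash-columns' = 'id'
);
```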
- Schema Registry catalog limitations
  - Currently, the Schema Registry catalog/format only supports reading messages with the latest enabled schema for any given Kafka topic at the time when the SQL query was compiled.
  - No time-column and watermark support for Registry tables.
  - No `CREATE TABLE` support. Schemas have to be registered directly in the Schema Registry to be accessible through the catalog.
  - The catalog is read-only. It does not support table deletions or modifications.
  - By default, it is assumed that Kafka message values contain the schema ID as a prefix, because this is the default behaviour for the Schema Registry Kafka producer format. To consume messages with the schema written in the message header, the following property must be set for the Registry client: `store.schema.version.id.in.header: true`.
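As a sketch, the property would appear among the Schema Registry client properties as a plain key-value entry; where exactly the client properties are supplied depends on how the catalog is configured in your deployment:

```
# Registry client property: read the schema version ID from the
# Kafka message header instead of the value prefix
store.schema.version.id.in.header: true
```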
