Known issues in Cloudera Streaming Analytics
Learn about the known issues in Cloudera Streaming Analytics clusters, their impact on functionality, and the available workarounds.
Cloudera SQL Stream Builder
- CSA-4858 - Kerberos encryption type detection does not always work correctly for Cloudera SQL Stream Builder

  Cloudera SQL Stream Builder detects no supported encryption types even though there is a list of allowed encryption types in the `krb5.conf` file. This causes an error when generating keytabs from the principal and password pair.
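  For reference, allowed encryption types are typically listed in the `[libdefaults]` section of `krb5.conf`. A minimal sketch with illustrative values only; the encryption types permitted in your environment may differ:

  ```ini
  [libdefaults]
    # Illustrative values only; use the encryption types
    # allowed in your environment.
    permitted_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
    default_tgs_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
    default_tkt_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
  ```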
Flink
In Cloudera Streaming Analytics, the following SQL API features are in preview:
- Match recognize
- Top-N
- Stream-Table join (without rowtime input)
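For illustration, a minimal Top-N sketch (the `product_sales` table and its columns are hypothetical): the inner query ranks rows per category with `ROW_NUMBER()`, and the outer filter keeps the three highest-selling products in each category.

```sql
-- Hypothetical source table: product_sales(category, product, sales).
SELECT category, product, sales
FROM (
  SELECT
    category,
    product,
    sales,
    ROW_NUMBER() OVER (
      PARTITION BY category
      ORDER BY sales DESC
    ) AS row_num
  FROM product_sales
)
WHERE row_num <= 3;
```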
- CSA-5525 - Illegal join reordering in Flink optimizer

  The Flink optimizer's join reordering might violate clauses (for example, `FOR SYSTEM_TIME AS OF`) that are supported only on a specific side of a `JOIN` operation, resulting in an error. Example error message:
  `Caused by: org.apache.flink.table.api.TableException: Temporal table join only support apply FOR SYSTEM_TIME AS OF on the right table`
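  For reference, a minimal sketch of a temporal join that satisfies the constraint named in the error (the `orders` and `currency_rates` tables are hypothetical); `FOR SYSTEM_TIME AS OF` is applied to the right side of the join:

  ```sql
  -- Hypothetical tables: orders (left side) and currency_rates
  -- (right side, a versioned table keyed by currency).
  SELECT
    o.order_id,
    o.price * r.rate AS converted_price
  FROM orders AS o
  JOIN currency_rates FOR SYSTEM_TIME AS OF o.order_time AS r
    ON o.currency = r.currency;
  ```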
- Third-party dependencies upgraded in Cloudera on cloud might cause Flink jobs to fail

  After upgrading Cloudera on cloud, Flink jobs might fail due to upgraded third-party dependencies. For example, this could happen with `awssdk`, which has been updated to version 2.23.10 in Cloudera on cloud version 7.2.18.
- DataStream conversion limitations
  - Converting between Tables and POJO DataStreams is currently not supported in Cloudera Streaming Analytics.
  - Object arrays are not supported for Tuple conversion.
  - The `java.time` class conversions for Tuple DataStreams are only supported by using explicit `TypeInformation`: `LegacyInstantTypeInfo`, `LocalTimeTypeInfo.getInfoFor(LocalDate/LocalDateTime/LocalTime.class)`.
  - Only `java.sql.Timestamp` is supported for rowtime conversion; `java.time.LocalDateTime` is not supported.
- Kudu catalog limitations
  - `CREATE TABLE` (see the sketch after this list)
    - Primary keys can only be set by the `kudu.primary-key-columns` property. Using the `PRIMARY KEY` constraint is not yet possible.
    - Range partitioning is not supported.
  - When getting a table through the catalog, `NOT NULL` and `PRIMARY KEY` constraints are ignored. All columns are described as being nullable, and not being primary keys.
  - Kudu tables cannot be altered through the catalog other than simply renaming them.
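  A minimal `CREATE TABLE` sketch under these constraints (the table, columns, and key are hypothetical, and any additional connector options your deployment needs are omitted); the primary key is declared through the `kudu.primary-key-columns` property instead of a `PRIMARY KEY` constraint:

  ```sql
  -- Hypothetical table created through the Kudu catalog.
  CREATE TABLE users (
    id BIGINT,
    name STRING
  ) WITH (
    -- Primary keys can only be set with this property:
    'kudu.primary-key-columns' = 'id'
  );
  ```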
- Schema Registry catalog limitations
  - Currently, the Schema Registry catalog / format only supports reading messages with the latest enabled schema for any given Kafka topic at the time when the SQL query was compiled.
  - No time-column and watermark support for Registry tables.
  - No `CREATE TABLE` support. Schemas have to be registered directly in the `SchemaRegistry` to be accessible through the catalog.
  - The catalog is read-only. It does not support table deletions or modifications.
  - By default, it is assumed that Kafka message values contain the schema ID as a prefix, because this is the default behavior for the `SchemaRegistry` Kafka producer format. To consume messages with the schema written in the header, the following property must be set for the Registry client: `store.schema.version.id.in.header: true`.
