Known issues and limitations
Learn about the known issues in Flink and Cloudera SQL Stream Builder, the impact or changes to the functionality, and the workaround in Cloudera Streaming Analytics 1.14.0.
Cloudera SQL Stream Builder
- CSA-5747 - Upgrade causes attribute conversion failure when there are data sources containing secret properties
- Due to the secret property encryption for data sources introduced in Cloudera Streaming Analytics 1.14.0, after upgrading from a version without the encryption (1.13.x or lower), data sources with secret properties may prevent Cloudera Streaming Analytics from starting, or cause the Cloudera SQL Stream Builder Explorer to fail to load resources properly.
- CSA-5564 - Unlocking keytab may fail in Cloudera SQL Stream Builder
- When the user's credential salt in FreeIPA contains a double quote, unlocking the keytab in Cloudera SQL Stream Builder fails with an Invalid credentials error.
- ENGESC-23078 - Job not found after successful job creation
- After successfully creating a job in Cloudera SQL Stream Builder, the SQL job is not found due to tables having empty values. This issue is indicated with the following error message in the log files:
java.lang.IllegalArgumentException: argument "content" is null
The issue only applies when upgrading from a Cloudera Streaming Analytics version lower than 1.9.0.
- CSA-4858 - Kerberos encryption type detection does not always work correctly for Cloudera SQL Stream Builder
- Cloudera SQL Stream Builder detects no supported encryption types even though there is a list of allowed encryption types in the krb5.conf file. This causes an error when generating keytabs from the principal and password pair.
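For illustration, such a list of allowed encryption types typically appears in the libdefaults section of krb5.conf; the values below are placeholders and your environment may differ:
```
[libdefaults]
  permitted_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
```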
- Auto discovery is not supported for Apache Knox
- You need to manually configure Knox with Cloudera SQL Stream Builder to enable Knox authentication. Complete the configuration based on the Cloudera Base on premises version you use. For more information, see the Enabling Knox authentication for Cloudera SQL Stream Builder documentation.
Flink
- CSA-5525 - Illegal join reordering in Flink optimizer
- The Flink optimizer's reordering might violate certain clauses (for example, FOR SYSTEM_TIME AS OF) that are supported only on a specific side of a join operation, resulting in an error. Example error message:
Caused by: org.apache.flink.table.api.TableException: Temporal table join only support apply FOR SYSTEM_TIME AS OF on the right table
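For reference, Flink only accepts the clause on the right input of the join. A minimal sketch, in which the orders and rates tables and their columns are illustrative assumptions:
```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class TemporalJoinSketch {
    public static void main(String[] args) {
        TableEnvironment tableEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // orders and rates are hypothetical tables assumed to be registered
        // through a catalog. FOR SYSTEM_TIME AS OF is applied to the right
        // input of the join, the only placement Flink supports.
        tableEnv.executeSql(
                "SELECT o.order_id, o.price * r.rate AS converted_price "
                        + "FROM orders AS o "
                        + "JOIN rates FOR SYSTEM_TIME AS OF o.order_time AS r "
                        + "ON o.currency = r.currency");
    }
}
```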
- DataStream conversion limitations
- Converting between Tables and POJO DataStreams is currently not supported in Cloudera Streaming Analytics.
- Object arrays are not supported for Tuple conversion.
- The java.time class conversions for Tuple DataStreams are only supported by using explicit TypeInformation: LegacyInstantTypeInfo, LocalTimeTypeInfo.getInfoFor(LocalDate/LocalDateTime/LocalTime.class). See the sketch after this list.
- Only java.sql.Timestamp is supported for rowtime conversion, java.time.LocalDateTime is not supported.
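A minimal sketch of the explicit TypeInformation approach for the java.time conversions; it uses the legacy toAppendStream conversion API, and the inline query stands in for a real source:
```java
import java.time.LocalDateTime;

import org.apache.flink.api.common.typeinfo.LocalTimeTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class TupleConversionSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // A small inline table standing in for a real source.
        Table events = tableEnv.sqlQuery(
                "SELECT CAST(1 AS BIGINT) AS id, TIMESTAMP '2024-01-01 10:00:00' AS ts");

        // Spell out the TypeInformation explicitly: implicit extraction of the
        // java.time classes is not supported for Tuple DataStreams.
        TypeInformation<Tuple2<Long, LocalDateTime>> typeInfo =
                Types.TUPLE(Types.LONG, LocalTimeTypeInfo.getInfoFor(LocalDateTime.class));

        DataStream<Tuple2<Long, LocalDateTime>> stream =
                tableEnv.toAppendStream(events, typeInfo);

        stream.print();
        env.execute("tuple-conversion-sketch");
    }
}
```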
- Kudu catalog limitations
- CREATE TABLE
  - Primary keys can only be set by the kudu.primary-key-columns property. Using the PRIMARY KEY constraint is not yet possible (see the sketch after this list).
  - Range partitioning is not supported.
- When getting a table through the catalog, NOT NULL and PRIMARY KEY constraints are ignored. All columns are described as being nullable, and not being primary keys.
- Kudu tables cannot be altered through the catalog other than simply renaming them.
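A minimal sketch of declaring the key through the property; the table name, its columns, and the assumption that a Kudu catalog is already registered and selected are all illustrative:
```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class KuduCreateTableSketch {
    public static void main(String[] args) {
        TableEnvironment tableEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Assumes a Kudu catalog registered elsewhere has been selected with
        // USE CATALOG; the key is declared through kudu.primary-key-columns
        // instead of a PRIMARY KEY constraint.
        tableEnv.executeSql(
                "CREATE TABLE users ("
                        + "  id BIGINT,"
                        + "  name STRING"
                        + ") WITH ("
                        + "  'kudu.primary-key-columns' = 'id'"
                        + ")");
    }
}
```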
- Schema Registry catalog limitations
- Currently, the Schema Registry catalog / format only supports reading messages with the latest enabled schema for any given Kafka topic at the time when the SQL query was compiled.
- No time-column and watermark support for Registry tables.
- No CREATE TABLE support. Schemas have to be registered directly in the SchemaRegistry to be accessible through the catalog.
- The catalog is read-only. It does not support table deletions or modifications.
- By default, it is assumed that Kafka message values contain the schema id as a prefix, because this is the default behaviour for the SchemaRegistry Kafka producer format. To consume messages with the schema written in the header, the following property must be set for the Registry client: store.schema.version.id.in.header: true.
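A sketch of setting that flag, assuming the property map is handed to whatever constructs the Schema Registry client in your setup; the registry URL is a made-up placeholder:
```java
import java.util.HashMap;
import java.util.Map;

public class RegistryClientConfigSketch {
    public static void main(String[] args) {
        Map<String, Object> registryConfig = new HashMap<>();
        // Placeholder endpoint; replace with your Schema Registry URL.
        registryConfig.put("schema.registry.url", "http://registry-host:7788/api/v1");
        // Read the schema version id from the Kafka message header instead of
        // expecting it as a prefix of the message value.
        registryConfig.put("store.schema.version.id.in.header", "true");
    }
}
```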