Known Issues in Streaming Analytics
Learn about the known issues in Streaming Analytics clusters, their impact or changes to functionality, and possible workarounds.
SQL Stream Builder
- FLINK-18027: ROW value constructor cannot deal with complex expressions
- When querying data from a table or a view with a ROW() function, an exception is thrown due to a Calcite parsing issue. For example, the following query returns an error:
  CREATE VIEW example AS SELECT col1, ROW(col2) FROM table;
  SELECT * FROM example;
- Uploading connector files fails
- When trying to upload a new connector JAR with a file size of more than 1 MB, the upload process fails with an error.
- Upgrading Streaming Analytics cluster to 7.2.15
- Due to missing information in the database of SQL Stream Builder, upgrading the Streaming Analytics clusters to 7.2.15 is not possible.
- CSA-3742: Catalogs are not working due to expired Kerberos TGT
- When SSB runs for longer than the lifetime of the Kerberos Ticket Granting Ticket (TGT), authentication with the catalog services fails and the catalogs stop working.
- CSA-2016: Deleting table from other teams
- There is a limitation when using the Streaming SQL Console for deleting tables. It is not possible to delete a table that belongs to another team using the Delete button on the User Interface.
- CSA-1454: Timezone settings can cause unexpected behavior in Kafka tables
- You must consider the timezone settings of your environment when using timestamps in a Kafka table, as they can affect the results of your queries. When a timestamp in a query is produced with from_unixtime, it returns results based on the system timezone. If the timezone is not set to UTC+0, the timestamps in the query results will shift in time and will not be correct.
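The result of from_unixtime follows the Flink session time zone. As a minimal sketch — assuming your Flink version exposes the table.local-time-zone option and that it can be set for the SSB job — pinning the session time zone to UTC keeps from_unixtime results consistent:

  -- Assumption: table.local-time-zone is available and settable for the job.
  SET 'table.local-time-zone' = 'UTC';
  -- from_unixtime now formats epoch seconds in UTC instead of the system timezone.
  SELECT from_unixtime(event_epoch) AS event_ts   -- hypothetical column name
  FROM kafka_events;                               -- hypothetical table name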
- CSA-1231: Big numbers are incorrectly represented on the Streaming SQL Console UI
- The issue impacts the following scenarios in Streaming SQL Console:
- When having integers bigger than 2^53-1 among your values, the Input transformations and User Defined Functions are considered unsafe and produce incorrect results as these numbers lose precision during parsing.
- When having integers bigger than 2^53-1 among your values, sampling to the Streaming SQL Console UI produces incorrect results as these numbers lose precision during parsing.
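The 2^53-1 limit corresponds to the largest integer that can be represented exactly as a JavaScript number in the browser UI. A possible mitigation for sampling — an assumption, not an official fix — is to cast such columns to STRING before they reach the Console, so the digits are displayed as text rather than parsed as numbers:

  -- Hypothetical table and column names; CAST preserves all digits for display.
  SELECT id, CAST(large_counter AS STRING) AS large_counter_str
  FROM transactions;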
Flink
- FLINK-18027: ROW value constructor cannot deal with complex expressions
- When querying data from a table or a view with a ROW() function, an exception is thrown due to a Calcite parsing issue. For example, the following query returns an error:
  CREATE VIEW example AS SELECT col1, ROW(col2) FROM table;
  SELECT * FROM example;
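If the nested ROW type is not strictly required downstream, a possible way to avoid the failure — an assumption, not a documented fix — is to leave the ROW() constructor out of the view and select the columns individually; the table name source_table below is hypothetical:

  -- Hypothetical rewrite that avoids the ROW() constructor entirely.
  CREATE VIEW example AS SELECT col1, col2 FROM source_table;
  SELECT * FROM example;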
In Cloudera Streaming Analytics, the following SQL API features are in preview:
- Match recognize
- Top-N
- Stream-Table join (without rowtime input)
- DataStream conversion limitations
- Converting between Tables and POJO DataStreams is currently not supported in CSA.
- Object arrays are not supported for Tuple conversion.
- The java.time class conversions for Tuple DataStreams are only supported by using explicit TypeInformation: LegacyInstantTypeInfo, LocalTimeTypeInfo.getInfoFor(LocalDate/LocalDateTime/LocalTime.class).
- Only java.sql.Timestamp is supported for rowtime conversion, java.time.LocalDateTime is not supported.
- Kudu catalog limitations
- CREATE TABLE
  - Primary keys can only be set by the kudu.primary-key-columns property. Using the PRIMARY KEY constraint is not yet possible (see the sketch after this list).
  - Range partitioning is not supported.
- When getting a table through the catalog, NOT NULL and PRIMARY KEY constraints are ignored. All columns are described as being nullable, and not being primary keys.
- Kudu tables cannot be altered through the catalog other than simply renaming them.
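To illustrate how the primary key is declared through the property rather than a PRIMARY KEY constraint, here is a minimal sketch; apart from kudu.primary-key-columns, the connector identifier and the other property names follow the Bahir Flink Kudu connector conventions and are assumptions that may differ in your CSA version:

  -- Minimal sketch; table, column, and master address values are hypothetical.
  CREATE TABLE users (
    id BIGINT,
    name STRING
  ) WITH (
    'connector.type' = 'kudu',                 -- assumed connector identifier
    'kudu.masters' = 'kudu-master-host:7051',  -- assumed property name
    'kudu.table' = 'users',                    -- assumed property name
    'kudu.hash-columns' = 'id',                -- assumed property name
    'kudu.primary-key-columns' = 'id'          -- primary key set via property, not a PRIMARY KEY constraint
  );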
- Schema Registry catalog limitations
- Currently, the Schema Registry catalog / format only supports reading messages with the latest enabled schema for any given Kafka topic at the time when the SQL query was compiled.
- No time-column and watermark support for Registry tables.
- No CREATE TABLE support. Schemas have to be registered directly in the SchemaRegistry to be accessible through the catalog.
- The catalog is read-only. It does not support table deletions or modifications.
- By default, it is assumed that Kafka message values contain the schema id as a prefix, because this is the default behaviour for the SchemaRegistry Kafka producer format. To consume messages with the schema written in the header, the following property must be set for the Registry client: store.schema.version.id.in.header: true.
