Known issues and limitations
Learn about the known issues in Flink and SQL Stream Builder, the impact or changes to the functionality, and the workaround in Cloudera Streaming Analytics 1.6.0.
SQL Stream Builder
- CSA-2551: Dynamic filters do not work with values greater than a character type
- The dynamic filtering feature cannot be used for a Materialized View when the provided parameter value is greater than what the character type of the column can hold.
- CSA-2547: Vulnerability issue with user impersonation
- With SPNEGO authentication, adding the doAs=other_user parameter makes it possible to impersonate other users, because the call is proxied to the Streaming SQL Engine as the ssb principal.
- CSA-2538: Error when saving Materialized View configuration
- Due to a data type mismatch for retention_interval_ms between the console and admin databases, Materialized View configurations with retention times greater than 2147483 seconds (the largest value that fits a 32-bit signed integer when expressed in milliseconds) cannot be saved.
- CSA-2529: Cannot set consumer groups for Kafka tables
- Queries fail when a consumer group is added in the Kafka table settings.
- CSA-2528: Improvement for Materialized View table names
- The automatically created names of Materialized View tables are not expressive enough to easily work with.
- Db2 CDC connector is not available from Connectors and Templates
- The Db2 Change Data Capture (CDC) connector is not yet available on the Streaming SQL Console under Templates and Connectors. This does not limit the use of the Db2 connector.
- You can use the following Db2 CDC example as a reference to create a table:

```sql
CREATE TABLE db2_cdc_source (
  `column_name` INT,
  `column_name` STRING
) WITH (
  'connector' = 'db2-cdc',
  'hostname' = '...',
  'port' = '...',
  'username' = '...',
  'password' = '...',
  'database-name' = '...',
  'schema-name' = '...',
  'table-name' = '...'
);
```
- CSA-2016: Deleting tables of other teams
- There is a limitation when using the Streaming SQL Console for deleting tables: a table that belongs to another team cannot be deleted using the Delete button on the User Interface.
- Use the DROP TABLE statement from the SQL window.
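As a minimal sketch of the workaround, the table can be dropped from the SQL window like this (the table name is a hypothetical placeholder):

```sql
-- `other_team_table` stands in for the table that cannot be
-- deleted through the User Interface.
DROP TABLE `other_team_table`;
```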
- CSA-1985: DROP TABLE limitation when using Webhook tables
- DROP TABLE cannot be executed against Webhook type tables. The following error message is displayed when trying to delete a Webhook table using the SQL window:
Table with identifier 'xyz' does not exist.
- Use the Delete button on the Streaming SQL Console.
- CSA-1454: Timezone settings can cause unexpected behavior in Kafka tables
- You must consider the timezone settings of your environment when using timestamps in a Kafka table, as they can affect the results of your query. When the timestamp in a query is identified with from_unixtime, the results are based on the timezone of the system. If the timezone is not set to UTC+0, the timestamps of the query results shift in time and will not be correct.
- Change your local timezone settings to UTC+0.
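As a complementary sketch in Flink SQL (the session option below is standard Flink; the epoch value is arbitrary), the session time zone can also be pinned so that from_unixtime behaves consistently regardless of the system settings:

```sql
-- Pin the session time zone to UTC so FROM_UNIXTIME returns
-- consistent results regardless of the system time zone:
SET 'table.local-time-zone' = 'UTC';

-- FROM_UNIXTIME formats an epoch-seconds value in the session time zone;
-- with UTC this yields '1970-01-01 00:00:00'.
SELECT FROM_UNIXTIME(0);
```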
- CSA-1232: Big numbers are incorrectly represented on the Streaming SQL Console UI
- The issue impacts the following scenarios in Streaming SQL:
- When your values include integers greater than 2^53-1, Input transformations and User Defined Functions are considered unsafe and produce incorrect results, as these numbers lose precision during parsing.
- When your values include integers greater than 2^53-1, sampling to the Streaming SQL Console UI produces incorrect results, as these numbers lose precision during parsing.
Flink
In Cloudera Streaming Analytics, the following SQL API features are in preview:
- Match recognize
- Stream-Table join (without rowtime input)
- DataStream conversion limitations
- Converting between Tables and POJO DataStreams is currently not supported in CSA.
- Object arrays are not supported for Tuple conversion.
- java.time class conversions for Tuple DataStreams are only supported by using explicit TypeInformation.
- Only java.sql.Timestamp is supported for rowtime conversion; java.time.LocalDateTime is not supported.
- Kudu catalog limitations
- Primary keys can only be set by the kudu.primary-key-columns property. Using the PRIMARY KEY constraint is not yet possible.
- Range partitioning is not supported.
- When getting a table through the catalog, PRIMARY KEY constraints are ignored. All columns are described as nullable and as not being primary keys.
- Kudu tables cannot be altered through the catalog other than simply renaming them.
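For illustration, a minimal sketch of setting the primary key through the kudu.primary-key-columns property mentioned above. The property names follow the Bahir Kudu connector for Flink; the table name, master address, and columns are hypothetical placeholders:

```sql
-- Primary key declared via the connector property, not a
-- PRIMARY KEY constraint (which is not yet supported):
CREATE TABLE users_kudu (
  id INT,
  name STRING
) WITH (
  'connector.type' = 'kudu',
  'kudu.masters' = 'localhost:7051',
  'kudu.table' = 'users',
  'kudu.hash-columns' = 'id',
  'kudu.primary-key-columns' = 'id'
);
```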
- Schema Registry catalog limitations
- Currently, the Schema Registry catalog / format only supports reading messages with the latest enabled schema for any given Kafka topic at the time when the SQL query was compiled.
- No time-column and watermark support for Registry tables.
- No CREATE TABLE support. Schemas have to be registered directly in the SchemaRegistry to be accessible through the catalog.
- The catalog is read-only. It does not support table deletions or modifications.
- By default, it is assumed that Kafka message values contain the schema ID as a prefix, because this is the default behavior of the SchemaRegistry Kafka producer format. To consume messages with the schema written in the header, the following property must be set for the Registry client: