Known issues and limitations
Learn about the known issues in Flink and SQL Stream Builder, the impact or changes to the functionality, and the workaround in Cloudera Streaming Analytics 1.6.2.
SQL Stream Builder
- FLINK-18027: ROW value constructor cannot deal with complex expressions
- When querying data from a table or a view with a ROW() function, an exception is thrown due to a Calcite parsing issue. For example, the following query will return an error:
CREATE VIEW example AS SELECT col1, ROW(col2) FROM table;
SELECT * FROM example;
- Cannot access API Explorer
- The API Explorer page of the SSB REST API cannot be accessed when using Apache Knox as the authentication method. This issue is not present when using SPNEGO authentication.
- Uploading connector files fails
- When trying to upload a new connector JAR with a file size of more than 1 MB, the upload process fails with an error.
- CSA-3537: Catalogs are deleted after Streaming SQL Engine restart
- When the Streaming SQL Engine instance is restarted on the cluster, the previously registered catalogs are removed.
- CSA-3464: Overwriting new changes for SQL Job with edit
- A previous version of the job settings is applied when the SQL query of an existing job is changed after reloading it to the Compose page, so the new changes are overwritten. The issue can be experienced when clicking the Edit Job button on the SQL Jobs page.
- CSA-3308: Changing Primary Key for Materialized View without recreating table causes job failure
- The SQL job fails due to a schema mismatch when the primary key of a Materialized View is changed directly and the SQL job is restarted. When the primary key of a Materialized View is changed without deleting the existing data, the SQL job silently keeps retrying and overloads the PostgreSQL database.
- CSA-2729: DLQ topic is filled with unexpected results
- Some messages that are left in the buffer can be sent to the Dead Letter Queue (DLQ) topic after handling a deserialization failure for Kafka tables.
- CSA-2016: Deleting table from other teams
- There is a limitation when using the Streaming SQL Console for deleting tables. It is not possible to delete a table that belongs to another team using the Delete button on the User Interface.
- CSA-1454: Timezone settings can cause unexpected behavior in Kafka tables
- You must consider the timezone settings of your environment when using timestamps in a Kafka table, as they can affect the results of your query. When the timestamp in a query is identified with from_unixtime, it returns the results based on the timezone of the system. If the timezone is not set to UTC+0, the timestamp of the query results will shift in time and will not be correct.
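A minimal sketch of keeping results in UTC, assuming the session time zone can be set for the job through the table.local-time-zone option; the kafka_events table and event_ts column are placeholders:
-- Pin the session time zone to UTC so FROM_UNIXTIME results do not shift
-- with the system time zone. Table and column names are placeholders.
SET 'table.local-time-zone' = 'UTC';

SELECT FROM_UNIXTIME(event_ts) AS event_time
FROM kafka_events;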
- CSA-1231: Big numbers are incorrectly represented on the Streaming SQL Console UI
- The issue impacts the following scenarios in Streaming SQL Console:
- When having integers bigger than 2^53-1 among your values, the Input Transformations and User Defined Functions are considered unsafe and produce incorrect results, as these numbers will lose precision during parsing.
- When having integers bigger than 2^53-1 among your values, sampling to the Streaming SQL Console UI produces incorrect results, as these numbers will lose precision during parsing.
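As an illustration of the precision loss, 2^53+1 = 9007199254740993 cannot be represented exactly as a 64-bit floating point number, so it may be displayed as 9007199254740992; the query below is a hypothetical example:
-- Hypothetical example: a BIGINT just above 2^53 loses precision when parsed
-- as a double, so the Console may display 9007199254740992 instead.
SELECT big_value FROM (VALUES (CAST(9007199254740993 AS BIGINT))) AS t(big_value);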
Flink
- FLINK-18027: ROW value constructor cannot deal with complex expressions
- When querying data from a table or a view with a ROW() function, an exception is thrown due to a Calcite parsing issue. For example, the following query will return an error:
CREATE VIEW example AS SELECT col1, ROW(col2) FROM table;
SELECT * FROM example;
- FLINK-27441: Scrollbar is missing for particular UI elements on Flink Dashboard
- The Angular version bump introduced a bug that causes the scrollbar on the Flink Dashboard to be invisible.
In Cloudera Streaming Analytics, the following SQL API features are in preview:
- Match recognize
- Top-N
- Stream-Table join (without rowtime input)
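For reference, a Top-N query in Flink SQL is typically expressed with ROW_NUMBER() over a partition; the sketch below uses a hypothetical sales table and columns:
-- Sketch of a Top-N query (preview feature): top 3 amounts per category.
-- The sales table and its columns are placeholders.
SELECT category, amount
FROM (
  SELECT category, amount,
         ROW_NUMBER() OVER (PARTITION BY category ORDER BY amount DESC) AS row_num
  FROM sales
)
WHERE row_num <= 3;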
- DataStream conversion limitations
- Converting between Tables and POJO DataStreams is currently not supported in CSA.
- Object arrays are not supported for Tuple conversion.
- The java.time class conversions for Tuple DataStreams are only supported by using explicit TypeInformation: LegacyInstantTypeInfo, LocalTimeTypeInfo.getInfoFor(LocalDate/LocalDateTime/LocalTime.class).
- Only java.sql.Timestamp is supported for rowtime conversion, java.time.LocalDateTime is not supported.
- Kudu catalog limitations
- CREATE TABLE
  - Primary keys can only be set by the kudu.primary-key-columns property. Using the PRIMARY KEY constraint is not yet possible. See the sketch after this list.
  - Range partitioning is not supported.
- When getting a table through the catalog, NOT NULL and PRIMARY KEY constraints are ignored. All columns are described as being nullable, and not being primary keys.
- Kudu tables cannot be altered through the catalog other than simply renaming them.
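A minimal sketch of declaring the primary key through the connector property when creating a table through the Kudu catalog; the table name, columns, and property values are placeholders and may differ in your environment:
-- Sketch: primary key declared with kudu.primary-key-columns instead of a
-- PRIMARY KEY constraint. Table, columns, and property values are placeholders.
CREATE TABLE users_kudu (
  id BIGINT,
  name STRING
) WITH (
  'kudu.hash-columns' = 'id',
  'kudu.primary-key-columns' = 'id'
);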
- Schema Registry catalog limitations
- Currently, the Schema Registry catalog / format only supports reading messages with the latest enabled schema for any given Kafka topic at the time when the SQL query was compiled.
- No time-column and watermark support for Registry tables.
- No CREATE TABLE support. Schemas have to be registered directly in the SchemaRegistry to be accessible through the catalog.
- The catalog is read-only. It does not support table deletions or modifications.
- By default, it is assumed that Kafka message values contain the schema id as a prefix, because this is the default behaviour for the SchemaRegistry Kafka producer format. To consume messages with the schema written in the header, the following property must be set for the Registry client: store.schema.version.id.in.header: true.