Known Issues in Streaming Analytics
Learn about the known issues in Streaming Analytics clusters, the impact or changes to the functionality, and the workarounds.
SQL Stream Builder
7.2.16.1
- FLINK-18027: ROW value constructor cannot deal with complex expressions
- When querying data from a table or a view with a ROW() function, an exception is thrown due to a Calcite parsing issue. For example, the following query will return an error:
  CREATE VIEW example AS SELECT col1, ROW(col2) FROM table;
  SELECT * FROM example;
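  As a self-contained sketch of the same failure, assuming a hypothetical source table created with the datagen connector (table and column names are illustrative):
  CREATE TABLE source_table (col1 INT, col2 STRING) WITH ('connector' = 'datagen');
  CREATE VIEW example AS SELECT col1, ROW(col2) FROM source_table;
  SELECT * FROM example;  -- fails with the Calcite parsing exception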
- CSA-4464: CSA parcel is built with an interim CDP build
- The CSA parcel is built using an interim CDP build and not with a build that corresponds to a release version. This can cause errors with components that have a dependency on Flink.
- CSA-4412: Cannot delete Materialized View endpoint when using dynamic parameters
- Materialized View endpoints cannot be deleted if dynamic parameters were set for them.
7.2.16.0
- CSA-4413: Missing AWS library
- Due to the missing AWS library, SQL Stream Builder and Flink cannot be used in a RAZ-enabled environment.
- CSA-4412: Cannot delete Materialized View endpoint when using dynamic parameters
- Materialized View endpoints cannot be deleted if dynamic parameters were set for them.
- CSA-4390: Missing flink-shaded-zookeeper-3 artifacts
- The flink-shaded-zookeeper-3 artifacts are missing from the public artifact repository.
- CSA-4378: Validating Kafka Data Source fails without security settings
- Kafka Data Source validation fails in a secure environment if the truststore details are not provided.
- CSA-4377: Improve masking information of credentials
- Credential information might be exposed on different configuration pages.
- CSA-4372: Allow Deletion is not synchronized with project
- When a project is configured with the Allow Deletion setting and exported to a Git repository, the Allow Deletion setting is not synchronized.
- CSA-4363: Production mode does not use clean session variables
- When setting a variable in production mode, the job fails with an error because variables set in production mode are not used, regardless of resetting the value of the variable.
- CSA-4360: Activating an environment does not reload catalogs
- When activating an environment file in which variables are used for registered catalogs, the catalogs do not show up until they are revalidated or registered again.
- CSA-4356: Read-only jobs can be deleted
- Running and read-only jobs can be deleted even by a member of a project who does not have permission to stop and restart them.
- CSA-4355: Virtual Table tab keeps loading for views
- The Virtual Table tab keeps loading indefinitely when opening a created view.
- CSA-4354: Kudu lookup join fails
- When using a Kudu lookup join and adding a WHERE clause with a string, the SSB job fails.
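  A sketch of the failing pattern, assuming hypothetical tables and columns; the lookup join syntax follows standard Flink SQL, and kudu_customers stands in for a table resolved through the Kudu catalog:
  SELECT o.order_id, c.name
  FROM orders AS o
  JOIN kudu_customers FOR SYSTEM_TIME AS OF o.proc_time AS c
    ON o.customer_id = c.id
  WHERE c.name = 'example-customer';  -- string predicate in the WHERE clause triggers the failure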
- CSA-4353: Editing DDL Kafka tables opens wizard
- When editing a Kafka table that was created using DDL, the configuration window appears as if the table was created using the Kafka Table wizard. This results in showing incorrect table information.
- CSA-4352: Schema is not shown in table view
- If a virtual table is opened in the table view, the schema is not shown and the information is displayed incorrectly.
- CSA-4337: Materialized View link throws error
- After importing a project and running the existing SQL job,
the Materialized View query link does not work and the following error is
shown:
"No data available yet. Verify that your Materialized View is configured, and the streaming job is running, and refresh page."
- CSA-3886: Session timeout configuration is not enforced
- When setting the SSB Session Timeout configuration in Cloudera Manager, the set time is not enforced and SSB continues to run indefinitely.
- CSA-3867: UDF case sensitivity
- Due to a case sensitivity mismatch, User-Defined Functions (UDFs) can be created with an uppercase name and called with a lowercase name, but not vice versa. This results in errors when executing jobs, as the UDFs are saved differently in the database.
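  For example, assuming a hypothetical UDF and table (both names are illustrative):
  -- UDF registered as MY_FUNCTION: calling it in lowercase works.
  SELECT my_function(col1) FROM sample_table;
  -- UDF registered as my_function: calling it in uppercase fails when the job executes.
  SELECT MY_FUNCTION(col1) FROM sample_table;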
- CSA-3498: SSB fails with IllegalStateException
- SSB fails to work and throws an IllegalStateException when running a job due to a sample WebSocket session timeout.
Flink
7.2.16.0, 7.2.16.1
- FLINK-18027: ROW value constructor cannot deal with complex expressions
- When querying data from a table or a view with a ROW() function, an exception is thrown due to a Calcite parsing issue. For example, the following query will return an error:
  CREATE VIEW example AS SELECT col1, ROW(col2) FROM table;
  SELECT * FROM example;
In Cloudera Streaming Analytics, the following SQL API features are in preview:
- Match recognize
- Top-N
- Stream-Table join (without rowtime input)
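As an illustration of the Top-N preview feature, such a query follows the standard Flink SQL ROW_NUMBER() pattern (the sales table and its columns are hypothetical):
  SELECT product_id, revenue
  FROM (
    SELECT product_id, revenue,
           ROW_NUMBER() OVER (ORDER BY revenue DESC) AS row_num
    FROM sales
  )
  WHERE row_num <= 3;  -- keep only the top three rows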
- DataStream conversion limitations
- Converting between Tables and POJO DataStreams is currently not supported in CSA.
- Object arrays are not supported for Tuple conversion.
- The java.time class conversions for Tuple DataStreams are only supported by using explicit TypeInformation: LegacyInstantTypeInfo, LocalTimeTypeInfo.getInfoFor(LocalDate/LocalDateTime/LocalTime.class).
- Only java.sql.Timestamp is supported for rowtime conversion, java.time.LocalDateTime is not supported.
- Kudu catalog limitations
- CREATE TABLE
  - Primary keys can only be set by the kudu.primary-key-columns property. Using the PRIMARY KEY constraint is not yet possible (see the sketch after this list).
  - Range partitioning is not supported.
- When getting a table through the catalog, NOT NULL and PRIMARY KEY constraints are ignored. All columns are described as being nullable, and not being primary keys.
- Kudu tables cannot be altered through the catalog other than simply renaming them.
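A minimal sketch of creating a Kudu-backed table with the primary key declared through the kudu.primary-key-columns property. The table, columns, and connector option names other than kudu.primary-key-columns are assumptions for illustration and depend on the Kudu connector version in use:
  CREATE TABLE users (          -- hypothetical table and columns
    id INT,
    name STRING
  ) WITH (
    'connector' = 'kudu',                      -- assumed connector identifier
    'kudu.masters' = 'kudu-master-host:7051',  -- assumed option name for the Kudu masters
    'kudu.table' = 'users',                    -- assumed option name for the target table
    'kudu.primary-key-columns' = 'id'          -- primary key set via property, not a PRIMARY KEY constraint
  );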
- Schema Registry catalog limitations
- Currently, the Schema Registry catalog / format only supports reading messages with the latest enabled schema for any given Kafka topic at the time when the SQL query was compiled.
- No time-column and watermark support for Registry tables.
- No CREATE TABLE support. Schemas have to be registered directly in the SchemaRegistry to be accessible through the catalog.
- The catalog is read-only. It does not support table deletions or modifications.
- By default, it is assumed that Kafka message values contain the schema ID as a prefix, because this is the default behaviour for the SchemaRegistry Kafka producer format. To consume messages with the schema written in the header, the following property must be set for the Registry client: store.schema.version.id.in.header: true.