Known issues in Streaming Analytics
Learn about the known issues in Streaming Analytics clusters, their impact or changes to the functionality, and any available workarounds.
- CSA-4464: CSA parcel is built with an interim CDP build
- The CSA parcel is built using an interim CDP build and not with a build that corresponds to a release version. This can cause errors with components that have a dependency on Flink.
SQL Stream Builder
- CSA-5138 - SQL job submissions with UDF JARs fail when checkpointing is enabled
- Due to the handling of ClassLoaders for custom JARs, uploading any Java UDFs with checkpoints enabled will cause the SQL job to fail with the following error:
  ERROR com.cloudera.ssb.sqlio.service.SqlExecutorService: Error while submitting streaming job
  org.apache.flink.util.FlinkRuntimeException: org.apache.flink.api.common.InvalidProgramException: Table program cannot be compiled.
  Once the SQL job fails, the session on Streaming SQL Console must be reset before resubmitting the job without checkpointing.
- CSA-4858 - Kerberos encryption type detection does not always work correctly for SSB
- SSB detects no supported encryption types even though there is a list of allowed encryption types in the krb5.conf file. This causes an error when generating keytabs from the principal and password pair.
- CSA-4800 - ToString of Job can cause stack overflow
- The jobLogItems can cause stack overflow errors when toString is called.
- CSA-4799 - Table Metadata is not saved when job is run via sql/execute
- The SqlExecutorService.persistIfNeededAndExecute does not save the table metadata before executing the SQL job, therefore the data cleaner of the Materialized View Engine does not clean up the data based on the retention settings.
- CSA-4699 - Keytab upload starts failing in SSB after some time, requiring a restart
- The /tmp/ssb_keytab_work_dir is removed after a period of time and SSB can no longer create keytabs as the directory does not exist anymore.
- CSA-4650: Inconsistent sidebar collapse behavior
- The sidebar is collapsed inconsistently on the homepage of Streaming SQL Console when opening a project.
- Limitations when configuring widgets
- The following widget configuration options are not available
for certain widgets on Streaming SQL Console:
- Gauge visualization type: Expand on hover, Unit
- Donut visualization type: Expand on hover, Title
- Pie visualization type: Expand on hover
Flink
In Cloudera Streaming Analytics, the following SQL API features are in preview:
- Match recognize
- Top-N
- Stream-Table join (without rowtime input)
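For context, the Top-N feature listed above refers to the ROW_NUMBER() pattern of Flink SQL, shown in the minimal sketch below. The ShopSales table, its columns, and the datagen source are assumptions made only for the example, and because the feature is in preview, its behaviour may change between versions.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class TopNPreviewSketch {
    public static void main(String[] args) {
        TableEnvironment tableEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Hypothetical source table backed by the built-in datagen connector.
        tableEnv.executeSql(
                "CREATE TABLE ShopSales (" +
                "  product STRING, " +
                "  category STRING, " +
                "  sales BIGINT " +
                ") WITH (" +
                "  'connector' = 'datagen', " +
                "  'number-of-rows' = '100' " +
                ")");

        // Top-N pattern (in preview): keep the three highest 'sales' rows per category.
        tableEnv.executeSql(
                "SELECT product, category, sales FROM (" +
                "  SELECT *, ROW_NUMBER() OVER (" +
                "    PARTITION BY category ORDER BY sales DESC) AS row_num " +
                "  FROM ShopSales " +
                ") WHERE row_num <= 3").print();
    }
}
```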
- DataStream conversion limitations
- Converting between Tables and POJO DataStreams is currently not supported in CSA.
- Object arrays are not supported for Tuple conversion.
- The java.time class conversions for Tuple DataStreams are only supported by using explicit TypeInformation: LegacyInstantTypeInfo, LocalTimeTypeInfo.getInfoFor(LocalDate/LocalDateTime/LocalTime.class).
- Only java.sql.Timestamp is supported for rowtime conversion, java.time.LocalDateTime is not supported (a conversion sketch follows this list).
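To make the Tuple-related limitations concrete, the sketch below converts a Tuple DataStream that carries a java.sql.Timestamp into a Table and back using explicit TypeInformation (Types.SQL_TIMESTAMP). The field names and sample values are assumptions made for the example, and it relies on the legacy fromDataStream/toAppendStream bridge methods, so the exact behaviour may vary with the Flink version shipped in CSA.

```java
import java.sql.Timestamp;

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

import static org.apache.flink.table.api.Expressions.$;

public class TupleConversionSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // Illustrative Tuple stream: (event name, event time as java.sql.Timestamp).
        DataStream<Tuple2<String, Timestamp>> events = env.fromElements(
                Tuple2.of("click", new Timestamp(1_700_000_000_000L)),
                Tuple2.of("view", new Timestamp(1_700_000_060_000L)));

        // Tuple DataStream to Table, renaming the fields positionally.
        Table table = tableEnv.fromDataStream(events, $("name"), $("ts"));

        // Table back to a Tuple DataStream: the target type is spelled out as explicit
        // TypeInformation, using java.sql.Timestamp (Types.SQL_TIMESTAMP) for the time
        // field; POJO targets and object arrays are not supported in CSA.
        TypeInformation<Tuple2<String, Timestamp>> tupleInfo =
                Types.TUPLE(Types.STRING, Types.SQL_TIMESTAMP);
        DataStream<Tuple2<String, Timestamp>> result = tableEnv.toAppendStream(table, tupleInfo);

        result.print();
        env.execute("Tuple conversion sketch");
    }
}
```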
- Kudu catalog limitations
- CREATE TABLE
  - Primary keys can only be set by the kudu.primary-key-columns property. Using the PRIMARY KEY constraint is not yet possible (see the sketch after this list).
  - Range partitioning is not supported.
- When getting a table through the catalog, NOT NULL and PRIMARY KEY constraints are ignored. All columns are described as being nullable, and not being primary keys.
- Kudu tables cannot be altered through the catalog other than simply renaming them.
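The following sketch illustrates the first limitation: with a Kudu catalog selected, the key columns are declared through the kudu.primary-key-columns property in the WITH clause instead of a PRIMARY KEY constraint. Only the kudu.primary-key-columns property comes from the limitation above; the catalog name, table definition, and the kudu.hash-columns option are assumptions made for the example.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class KuduCreateTableSketch {
    public static void main(String[] args) {
        TableEnvironment tableEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Assumes a Kudu catalog named "kudu" has already been registered,
        // for example through the catalog configuration of the deployment.
        tableEnv.useCatalog("kudu");

        // The key columns are set with the kudu.primary-key-columns property;
        // a PRIMARY KEY constraint in the column list is not yet possible.
        // 'kudu.hash-columns' is an assumed, illustrative partitioning option.
        tableEnv.executeSql(
                "CREATE TABLE orders (" +
                "  order_id BIGINT, " +
                "  customer STRING, " +
                "  amount DOUBLE " +
                ") WITH (" +
                "  'kudu.primary-key-columns' = 'order_id', " +
                "  'kudu.hash-columns' = 'order_id' " +
                ")");
    }
}
```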
- Schema Registry catalog limitations
- Currently, the Schema Registry catalog / format only supports reading messages with the latest enabled schema for any given Kafka topic at the time when the SQL query was compiled.
- No time-column and watermark support for Registry tables.
- No CREATE TABLE support. Schemas have to be registered directly in the SchemaRegistry to be accessible through the catalog.
- The catalog is read-only. It does not support table deletions or modifications.
- By default, it is assumed that Kafka message values contain the schema id as a prefix, because this is the default behaviour for the SchemaRegistry Kafka producer format. To consume messages with schema written in the header, the following property must be set for the Registry client: store.schema.version.id.in.header: true (see the sketch after this list).
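The sketch below shows the Registry client property mentioned above as a plain configuration map. Only the store.schema.version.id.in.header key comes from this limitation; the registry URL key and value are assumptions, and how the map is passed to the catalog or format depends on how the Registry client is configured in the deployment.

```java
import java.util.HashMap;
import java.util.Map;

public class RegistryClientConfigSketch {
    public static void main(String[] args) {
        // Client configuration for topics whose messages carry the schema version id
        // in the Kafka record header instead of as a prefix of the message value.
        Map<String, Object> registryClientConf = new HashMap<>();
        registryClientConf.put("store.schema.version.id.in.header", "true");

        // Assumed, illustrative client setting; adjust to the environment.
        registryClientConf.put("schema.registry.url", "http://schemaregistry-host:7788/api/v1");

        registryClientConf.forEach((key, value) -> System.out.println(key + " = " + value));
    }
}
```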
