Known issues in Streaming Analytics

Learn about the known issues in Streaming Analytics clusters, their impact or changes to functionality, and the available workarounds.

SQL Stream Builder

CSA-5138 - SQL job submissions with UDF JARs fail when checkpointing is enabled
Due to the way ClassLoaders are handled for custom JARs, uploading any Java UDF while checkpointing is enabled causes the SQL job to fail with the following error:
ERROR com.cloudera.ssb.sqlio.service.SqlExecutorService: Error while submitting streaming job
org.apache.flink.util.FlinkRuntimeException: org.apache.flink.api.common.InvalidProgramException: Table program cannot be compiled.
Once the SQL job fails, the session on Streaming SQL Console must be reset before the job can be resubmitted without checkpointing.
Workaround: None
CSA-4858 - Kerberos encryption type detection does not always work correctly for SSB
SSB detects no supported encryption types even though the krb5.conf file lists allowed encryption types. This causes an error when generating keytabs from the principal and password pair.
Workaround:
  1. Run ktutil on your cluster.
  2. Create a new keytab with the following commands:
    addent -password -p <username> -k 1 -e aes256-cts
    wkt /tmp/new_keytab.keytab
  3. Upload the new keytab on Streaming SQL Console.

Flink

In Cloudera Streaming Analytics, the following SQL API features are in preview:
  • Match recognize
  • Top-N (see the sketch after this list)
  • Stream-Table join (without rowtime input)
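As an illustration of one of these preview features, the following is a minimal, self-contained sketch of a Top-N query submitted through the Java Table API. The table name ShopSales, its columns, and the datagen source are made-up placeholders used for illustration only.

  import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
  import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

  public class TopNPreviewSketch {
      public static void main(String[] args) {
          StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
          StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

          // Hypothetical source table backed by the datagen connector; replace it
          // with your own Kafka or other connector definition.
          tableEnv.executeSql(
              "CREATE TABLE ShopSales (" +
              "  product STRING," +
              "  category STRING," +
              "  sales BIGINT" +
              ") WITH (" +
              "  'connector' = 'datagen'," +
              "  'rows-per-second' = '5'" +
              ")");

          // Top-N (preview): keep the five products with the highest sales per category.
          tableEnv.executeSql(
              "SELECT product, category, sales FROM (" +
              "  SELECT *, ROW_NUMBER() OVER (" +
              "    PARTITION BY category ORDER BY sales DESC) AS row_num" +
              "  FROM ShopSales" +
              ") WHERE row_num <= 5").print();
      }
  }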
INSIGHT-6486 - Third-party dependencies upgraded in CDP Public Cloud might cause Flink jobs to fail
After upgrading CDP Public Cloud, Flink jobs might fail due to upgraded third-party dependencies. For example, this can happen with awssdk, which was updated to version 2.23.10 in CDP Public Cloud version 7.2.18.
Workaround: Verify your application's dependency versions against the Cloudera-supported versions before upgrading to a newer version of CDP Public Cloud. For more information, see Updating Flink job dependencies.
DataStream conversion limitations
  • Converting between Tables and POJO DataStreams is currently not supported in CSA.
  • Object arrays are not supported for Tuple conversion.
  • The java.time class conversions for Tuple DataStreams are only supported by using explicit TypeInformation: LegacyInstantTypeInfo, LocalTimeTypeInfo.getInfoFor(LocalDate/LocalDateTime/LocalTime.class).
  • Only java.sql.Timestamp is supported for rowtime conversion; java.time.LocalDateTime is not supported (see the sketch after this list).
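The following minimal sketch stays within these limitations: it uses a Tuple DataStream instead of POJOs, java.sql.Timestamp instead of java.time.LocalDateTime, and explicit TypeInformation for the conversions. The stream contents and field names are illustrative only.

  import org.apache.flink.api.common.typeinfo.TypeInformation;
  import org.apache.flink.api.common.typeinfo.Types;
  import org.apache.flink.api.java.tuple.Tuple2;
  import org.apache.flink.streaming.api.datastream.DataStream;
  import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
  import org.apache.flink.table.api.Table;
  import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

  public class TupleConversionSketch {
      public static void main(String[] args) throws Exception {
          StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
          StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

          // Explicit TypeInformation for the Tuple stream; java.sql.Timestamp is used
          // because java.time.LocalDateTime is not supported for rowtime conversion.
          TypeInformation<Tuple2<String, java.sql.Timestamp>> tupleInfo =
              Types.TUPLE(Types.STRING, Types.SQL_TIMESTAMP);

          DataStream<Tuple2<String, java.sql.Timestamp>> events = env
              .fromElements(
                  Tuple2.of("order-1", new java.sql.Timestamp(System.currentTimeMillis())),
                  Tuple2.of("order-2", new java.sql.Timestamp(System.currentTimeMillis())))
              .returns(tupleInfo);

          // Tuple streams (as opposed to POJO streams) can be converted to a Table.
          Table orders = tableEnv.fromDataStream(events).as("id", "ts");

          // Convert back to an append-only Tuple stream using the same TypeInformation.
          DataStream<Tuple2<String, java.sql.Timestamp>> converted =
              tableEnv.toAppendStream(orders, tupleInfo);

          converted.print();
          env.execute("tuple-conversion-sketch");
      }
  }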
Kudu catalog limitations
  • CREATE TABLE
    • Primary keys can only be set by the kudu.primary-key-columns property. Using the PRIMARY KEY constraint is not yet possible (see the sketch after this list).
    • Range partitioning is not supported.
  • When getting a table through the catalog, NOT NULL and PRIMARY KEY constraints are ignored. All columns are reported as nullable and not part of the primary key.
  • Kudu tables cannot be altered through the catalog other than simply renaming them.
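Below is a minimal sketch of creating a table through the Kudu catalog with the primary key declared via table properties. Only kudu.primary-key-columns is taken from the limitation above; the KuduCatalog class and the kudu.hash-columns option come from the Flink Kudu connector and are assumptions that may differ by CSA version, and the master address and table definition are placeholders.

  import org.apache.flink.connectors.kudu.table.KuduCatalog;
  import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
  import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

  public class KuduCatalogSketch {
      public static void main(String[] args) {
          StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
          StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

          // Register and use the Kudu catalog; the master address is a placeholder.
          tableEnv.registerCatalog("kudu", new KuduCatalog("kudu-master-host:7051"));
          tableEnv.useCatalog("kudu");

          // The primary key is declared through the kudu.primary-key-columns property,
          // since the PRIMARY KEY constraint is not yet supported. Range partitioning
          // is not available either, so hash partitioning is used here (assumed option).
          tableEnv.executeSql(
              "CREATE TABLE orders (" +
              "  id INT," +
              "  description STRING" +
              ") WITH (" +
              "  'kudu.primary-key-columns' = 'id'," +
              "  'kudu.hash-columns' = 'id'" +
              ")");
      }
  }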
Schema Registry catalog limitations
  • Currently, the Schema Registry catalog and format only support reading messages with the latest enabled schema for any given Kafka topic at the time the SQL query is compiled.
  • There is no time column or watermark support for Registry tables.
  • No CREATE TABLE support. Schemas have to be registered directly in the SchemaRegistry to be accessible through the catalog.
  • The catalog is read-only. It does not support table deletions or modifications.
  • By default, it is assumed that Kafka message values contain the schema id as a prefix, because this is the default behavior of the SchemaRegistry Kafka producer format. To consume messages with the schema id written in the message header, the following property must be set for the Registry client (see the sketch after this list): store.schema.version.id.in.header: true.
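The following sketch only shows how such a client property map could be assembled; the store.schema.version.id.in.header key is taken from the note above, while how the map is passed to the Schema Registry catalog or format depends on your CSA version and setup.

  import java.util.HashMap;
  import java.util.Map;

  public class RegistryClientPropertiesSketch {
      public static void main(String[] args) {
          // Properties for the Schema Registry client used by the catalog or format.
          Map<String, String> registryClientProperties = new HashMap<>();

          // Read the schema version id from the Kafka message header instead of
          // expecting it as a prefix of the message value.
          registryClientProperties.put("store.schema.version.id.in.header", "true");

          registryClientProperties.forEach((key, value) ->
              System.out.println(key + " = " + value));
      }
  }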