Known issues and limitations

Learn about the known issues in Flink and SQL Stream Builder, their impact on functionality, and the available workarounds in Cloudera Streaming Analytics 1.12.0.

CSA-4464: CSA parcel is built with an interim CDP build
The CSA parcel is built using an interim CDP build rather than a build that corresponds to a release version. This can cause errors with Flink components that have dependencies on CDP.
If a Flink component transitively depends on a CDP-related module that is not publicly accessible, you can exclude that dependency. If your project also depends on the excluded module, add the publicly available version of the dependency to Flink. For example, this can happen with kafka-clients, which is pulled in by flink-connector-kafka:
...
<!-- Exclude the CDP-specific kafka-clients pulled in transitively by flink-connector-kafka -->
<dependency>
	<groupId>org.apache.flink</groupId>
	<artifactId>flink-connector-kafka</artifactId>
	<version>${flink.version}</version>
	<exclusions>
		<exclusion>
			<groupId>org.apache.kafka</groupId>
			<artifactId>kafka-clients</artifactId>
		</exclusion>
	</exclusions>
</dependency>

<!-- Add back the publicly available kafka-clients version -->
<dependency>
	<groupId>org.apache.kafka</groupId>
	<artifactId>kafka-clients</artifactId>
	<version>3.1.2.7.2.16.0-287</version>
</dependency>
...
For more information about the supported dependency versions, see the Maven dependencies in the Flink documentation.

SQL Stream Builder

ENGESC-23078 - Job not found after successful job creation
After successfully creating a job in SSB, the SQL job cannot be found due to empty values in the jobs table. This issue is indicated by the following error message in the log files:
java.lang.IllegalArgumentException: argument "content" is null
The issue only applies when upgrading from a CSA version lower than 1.9.0.
Update the empty values to the 'null' string in the mv_config and checkpoint_config fields, as shown in the following example:
UPDATE jobs SET mv_config = 'null' WHERE mv_config IS NULL;
UPDATE jobs SET checkpoint_config = 'null' WHERE checkpoint_config IS NULL;
CSA-4858 - Kerberos encryption type detection does not always work correctly for SSB
SSB detects no supported encryption types even though the krb5.conf file contains a list of allowed encryption types. This causes an error when generating keytabs from the principal and password pair. As a workaround, generate the keytab manually:
  1. Run ktutil on your cluster.
  2. Add a keytab entry and write it to a file with the following commands:
    addent -password -p <username> -k 1 -e aes256-cts
    wkt /tmp/new_keytab.keytab
  3. Upload the new keytab to Streaming SQL Console.
SSB service fails when using Active Directory (AD) Kerberos authentication
If you use AD Kerberos for authentication and the Load Balancer URL is not provided, the SQL Stream Builder (SSB) service can fail. The issue is caused by the keytab generation: when Cloudera Manager generates the keytab, it requires the AD principals for the Load Balancer host, and with no host specified for the Load Balancer, the SSB service cannot be started by Cloudera Manager. This issue occurs even when the Load Balancer role is not deployed or used with SSB.
Fill out the Load Balancer URL parameter in Cloudera Manager even if you do not use a Load Balancer with SSB. For more information, see the Enabling High Availability for SSB documentation.

Flink

In Cloudera Streaming Analytics, the following SQL API features are in preview (a Top-N sketch follows this list):
  • Match recognize
  • Top-N
  • Stream-Table join (without rowtime input)
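Top-N, for example, can be tried with a standard ROW_NUMBER() query. The following is a minimal sketch only; the table and column names (sales, category, item, amount) are assumptions, not part of this release:
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class TopNSketch {
    public static void main(String[] args) {
        TableEnvironment tableEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // "sales" is a hypothetical table with columns category, item, amount.
        // Top-N: keep the three highest amounts per category.
        Table top3 = tableEnv.sqlQuery(
                "SELECT category, item, amount FROM ("
                        + " SELECT *, ROW_NUMBER() OVER ("
                        + "   PARTITION BY category ORDER BY amount DESC) AS row_num"
                        + " FROM sales"
                        + ") WHERE row_num <= 3");
    }
}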
DataStream conversion limitations
  • Converting between Tables and POJO DataStreams is currently not supported in CSA.
  • Object arrays are not supported for Tuple conversion.
  • The java.time class conversions for Tuple DataStreams are only supported by using explicit TypeInformation: LegacyInstantTypeInfo, LocalTimeTypeInfo.getInfoFor(LocalDate/LocalDateTime/LocalTime.class).
  • Only java.sql.Timestamp is supported for rowtime conversion; java.time.LocalDateTime is not supported. See the sketch after this list.
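The following minimal sketch illustrates the last two limitations: it converts a Table with a rowtime attribute to a Tuple DataStream using explicit TypeInformation, mapping the rowtime column to java.sql.Timestamp. The table and column names (orders, order_time, item) are assumptions:
import java.sql.Timestamp;

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class RowtimeConversionSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // "orders" is a hypothetical table with a rowtime column "order_time"
        // and a STRING column "item".
        Table result = tableEnv.sqlQuery("SELECT order_time, item FROM orders");

        // Tuple (not POJO) conversion with explicit TypeInformation;
        // the rowtime column maps to java.sql.Timestamp, not LocalDateTime.
        DataStream<Tuple2<Timestamp, String>> stream = tableEnv.toAppendStream(
                result, Types.TUPLE(Types.SQL_TIMESTAMP, Types.STRING));

        stream.print();
        env.execute("rowtime-conversion-sketch");
    }
}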
Kudu catalog limitations
  • CREATE TABLE
    • Primary keys can only be set by the kudu.primary-key-columns property. Using the PRIMARY KEY constraint is not yet possible (see the sketch after this list).
    • Range partitioning is not supported.
  • When getting a table through the catalog, NOT NULL and PRIMARY KEY constraints are ignored. All columns are described as being nullable, and not being primary keys.
  • Kudu tables cannot be altered through the catalog other than simply renaming them.
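For example, a table with a primary key could be created through the Kudu catalog roughly as follows. This is a sketch only: the catalog name (kudu), the database, table, and column names, and the kudu.hash-columns property are assumptions:
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class KuduCatalogSketch {
    public static void main(String[] args) {
        TableEnvironment tableEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Assumes a Kudu catalog has already been registered as "kudu".
        // The primary key is set through the kudu.primary-key-columns
        // property; a PRIMARY KEY constraint in the DDL is not yet possible.
        tableEnv.executeSql(
                "CREATE TABLE kudu.default_database.users ("
                        + "  id INT,"
                        + "  name STRING"
                        + ") WITH ("
                        + "  'kudu.primary-key-columns' = 'id',"
                        + "  'kudu.hash-columns' = 'id'" // assumed partitioning property
                        + ")");
    }
}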
Schema Registry catalog limitations
  • Currently, the Schema Registry catalog and format only support reading messages with the latest enabled schema for any given Kafka topic at the time the SQL query was compiled.
  • No time-column and watermark support for Registry tables.
  • No CREATE TABLE support. Schemas have to be registered directly in the SchemaRegistry to be accessible through the catalog.
  • The catalog is read-only. It does not support table deletions or modifications.
  • By default, it is assumed that Kafka message values contain the schema id as a prefix, because this is the default behaviour of the SchemaRegistry Kafka producer format. To consume messages with the schema id written in the header, the following property must be set for the Registry client: store.schema.version.id.in.header: true.
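A minimal sketch of that client configuration, assuming the properties are passed as a simple map when the catalog is registered (the schema.registry.url value is a placeholder):
import java.util.HashMap;
import java.util.Map;

public class RegistryClientConfigSketch {
    public static Map<String, String> registryClientProperties() {
        Map<String, String> props = new HashMap<>();
        // Placeholder URL for the Schema Registry server.
        props.put("schema.registry.url", "http://registry-host:7788/api/v1");
        // Read the schema version id from the Kafka message header
        // instead of expecting it as a prefix of the message value.
        props.put("store.schema.version.id.in.header", "true");
        return props;
    }
}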