Release Notes

Known issues and limitations

Learn about the known issues in Flink and Cloudera SQL Stream Builder, their impact on functionality, and the available workarounds in Cloudera Streaming Analytics 1.15.0.

CSA-5733 - Chart or diagram type dashboard widgets do not work when the label field is the same as the data field
When creating a diagram type widget in Cloudera SQL Stream Builder, setting the label and data fields to the same value causes the graph to disappear.
Workaround: None.
CSA-5732 - MV widget not fetching MV data when authenticated via SPNEGO
When using Cloudera SQL Stream Builder with SPNEGO authentication, creating a Materialized View widget fails with a Data Source Error.
Workaround: None. Users are advised to authenticate through KNOX instead.
KNOX SSEDispatch does not work if HA is enabled for the service
By default, Cloudera SQL Stream Builder has an HA proxy configuration applied in KNOX, which is applied even when only one Cloudera SQL Stream Builder role instance exists. This breaks the asynchronous behavior of job sampling, because the HA configuration uses a regular KNOX dispatch implementation instead of SSEDispatch.

Two workaround options exist:

  1. Execute the following command on every KNOX node:

    find /opt/cloudera/parcels/CDH/lib/knox/data/services -type f -exec fgrep -l 'classname="org.apache.knox.gateway.sse.SSEDispatch"' {} \; | \
    xargs sed -i.bak 's/classname="org.apache.knox.gateway.sse.SSEDispatch"/classname="org.apache.knox.gateway.sse.SSEDispatch" ha-classname="org.apache.knox.gateway.sse.SSEDispatch"/'
  2. Modify the SSB-SSE-UI service.xml on every KNOX instance via the KNOX Admin UI to have the following content:

    <service name="ssb-sse-ui" role="SSB-SSE-UI" version="1.13.0">
       <metadata>
          <type>UI</type>
          <context>/ssb-sse-ui</context>
          <shortDesc>SQL Stream Builder UI</shortDesc>
          <description>Cloudera's SQL Stream Builder is an IDE and manager tool for Flink SQL jobs.</description>
       </metadata>
       <dispatch classname="org.apache.knox.gateway.dispatch.ConfigurableDispatch" use-two-way-ssl="false">
          <param>
             <name>responseExcludeHeaders</name>
             <value>CONTENT-LENGTH,WWW-AUTHENTICATE</value>
          </param>
          <param>
             <name>httpclient.connectionTimeout</name>
             <value>5m</value>
          </param>
          <param>
             <name>httpclient.socketTimeout</name>
             <value>5m</value>
          </param>
       </dispatch>
       <routes>
          <route path="/ssb-sse-ui/"/>
          <route path="/ssb-sse-ui/**"/>
          <route path="/ssb-sse-ui/**?**"/>
          <route path="/ssb-sse-ui/swagger/**">
             <rewrite apply="SSB-SSE-UI/filter/outbound/swagger/body" to="response.body"/>
          </route>
          <route path="/ssb-sse-ui/**/event-stream/**">
             <rewrite apply="SSB-SSE-UI/ssb-sse-ui/inbound/event-stream" to="request.url"/>
          </route>
       </routes>
    </service>
CSA-5747 - Upgrade causes attribute conversion failure when there are data sources containing secret properties
Due to the secret property encryption for data sources introduced in Cloudera Streaming Analytics 1.14.0, upgrading from a version without the encryption (1.13.x or lower) may cause data sources with secret properties to prevent Cloudera Streaming Analytics from starting, or the Explorer of Cloudera SQL Stream Builder to fail to load resources properly.
Workaround:
  1. Connect to the configured Cloudera SQL Stream Builder admin database. The database information can be found in Cloudera Manager > SQL Stream Builder Service > Configuration, filtering for database.
  2. In the database, select the records of the data_sources table.
  3. Remove all data_sources entries that have properties containing keys matching the following keywords: secret, password, pwd, credentials, token, user-info, user.info (see the sketch after this list).
  4. Verify that Cloudera SQL Stream Builder loads the resources properly.
  5. Re-register the previously deleted data sources.
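The following SQL is a minimal sketch of steps 2 and 3 above. It assumes the property keys are visible in a hypothetical properties column of the data_sources table; verify the actual schema of your admin database before deleting anything.

-- Inspect the registered data sources first (step 2).
SELECT * FROM data_sources;

-- Hypothetical cleanup (step 3): the "properties" column is an
-- assumption, adjust it to the actual schema of your admin database.
DELETE FROM data_sources
WHERE properties LIKE '%secret%'
   OR properties LIKE '%password%'
   OR properties LIKE '%pwd%'
   OR properties LIKE '%credentials%'
   OR properties LIKE '%token%'
   OR properties LIKE '%user-info%'
   OR properties LIKE '%user.info%';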
ENGESC-23078 - Job not found after successful job creation
After successfully creating a job in Cloudera SQL Stream Builder, the SQL job cannot be found, because some database tables contain empty values. This issue is indicated by the following error message in the log files:
java.lang.IllegalArgumentException: argument "content" is null
The issue only applies when upgrading from a CSA version lower than 1.9.0.
Workaround: Update the empty values to the 'null' string in the mv_config and checkpoint_config fields, as shown in the following example:
UPDATE jobs SET mv_config = 'null' WHERE mv_config IS NULL;
UPDATE jobs SET checkpoint_config = 'null' WHERE checkpoint_config IS NULL;
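To confirm that the update took effect, a query along the following lines can list any remaining affected rows; the selected id column is an assumption, so substitute any identifying column of the jobs table.

-- Sketch: list jobs that still have empty config values.
SELECT id FROM jobs WHERE mv_config IS NULL OR checkpoint_config IS NULL;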
CSA-4858 - Kerberos encryption type detection does not always work correctly for Cloudera SQL Stream Builder
SSB detects no supported encryption types even though there is a list of allowed encryption types in the krb5.conf file. This causes an error when generating keytabs from the principal and password pair.
Workaround:
  1. Run ktutil on your cluster.
  2. In the ktutil prompt, add a keytab entry with an allowed encryption type, then write it to a new keytab file:
    addent -password -p [***USERNAME***] -k 1 -e aes256-cts
    wkt /tmp/new_keytab.keytab
    quit
  3. Upload the new keytab to Cloudera SQL Stream Builder.
Auto discovery is not supported for Apache Knox
You need to manually configure Knox with Cloudera SQL Stream Builder to enable Knox authentication. Complete the configuration based on the Cloudera Base on premises version you use. For more information, see the Enabling Knox authentication for Cloudera SQL Stream Builder documentation.
In Cloudera Streaming Analytics, the following SQL API features are in preview:
  • Match recognize
  • Top-N
  • Stream-Table join (without rowtime input)
DataStream conversion limitations
  • Converting between Tables and POJO DataStreams is currently not supported in Cloudera Streaming Analytics.
  • Object arrays are not supported for Tuple conversion.
  • The java.time class conversions for Tuple DataStreams are only supported by using explicit TypeInformation: LegacyInstantTypeInfo, LocalTimeTypeInfo.getInfoFor(LocalDate/LocalDateTime/LocalTime.class).
  • Only java.sql.Timestamp is supported for rowtime conversion; java.time.LocalDateTime is not supported.
Kudu catalog limitations
  • CREATE TABLE
    • Primary keys can only be set by the kudu.primary-key-columns property; using the PRIMARY KEY constraint is not yet possible (see the sketch after this list).
    • Range partitioning is not supported.
  • When getting a table through the catalog, NOT NULL and PRIMARY KEY constraints are ignored. All columns are described as being nullable, and not being primary keys.
  • Kudu tables cannot be altered through the catalog other than simply renaming them.
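As an illustration of the CREATE TABLE limitation, the key columns are passed through the table property instead of a constraint. The following Flink SQL sketch uses hypothetical table and column names, and the exact set of required connector properties depends on your Kudu catalog configuration:

-- Sketch: set the Kudu primary key via the table property,
-- since a PRIMARY KEY constraint is not yet supported.
CREATE TABLE users (
  id INT,
  name STRING
) WITH (
  'kudu.primary-key-columns' = 'id'
);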
Schema Registry catalog limitations
  • Currently, the Schema Registry catalog / format only supports reading messages with the latest enabled schema for any given Kafka topic at the time when the SQL query was compiled.
  • No time-column or watermark support for Registry tables.
  • No CREATE TABLE support. Schemas have to be registered directly in the SchemaRegistry to be accessible through the catalog.
  • The catalog is read-only. It does not support table deletions or modifications.
  • By default, it is assumed that Kafka message values contain the schema ID as a prefix, because this is the default behavior for the SchemaRegistry Kafka producer format. To consume messages with the schema written in the header, the following property must be set for the Registry client: store.schema.version.id.in.header: true.
