Known issues

Review the list of known issues in Cloudera Flow Management (CFM).

Known issues in CFM 2.1.6

Review the list of known issues in Cloudera Flow Management (CFM) 2.1.6.

Flow deletion fails for imported flows with funnels

When attempting to delete a flow that was imported from open-source Apache NiFi into Cloudera Flow Management, you may encounter an error if the flow contains funnels. The issue is triggered during the two-stage commit process, resulting in a failure to complete the deletion.

The NiFi UI displays a generic error, and backend logs indicate a failure in the two-stage commit process during funnel deletion.

  • NiFi UI error message:

    Node <hostname>:8443 is unable to fulfill this request due to: An unexpected error has occurred.

  • Log message in nifi-app.log:

    Received a status of 500 for request DELETE /nifi-api/funnels/<UUID> when performing first stage of two-stage commit.

Background:

This is a flow design issue. NiFi supports flexible flow design, which allows you to create many different combinations of components and connections. In some cases, such as flows that include funnels with loops or complex connection paths, this flexibility can lead to problems during deletion. The platform does not block such designs, but some of them can cause issues with how components are deleted. When this happens, NiFi returns a generic error message that does not provide actionable feedback.

To successfully delete the affected flow:
  1. Insert two LogAttribute processors into the flow.
  2. Redirect any connections currently routed through the problematic funnels to the new processors.
  3. Delete the original funnels and connections.
  4. Proceed with deleting the flow.

LDAP authentication error caused by special characters in passwords

After upgrading to CFM 2.1.7 SP1, 2.1.7, 2.1.6 SP1, or 2.1.6, you may encounter the following error if you use LDAP:

org.springframework.ldap.AuthenticationException: LDAP: error code 49

The issue is caused by the use of special characters in LDAP passwords, which are configured under the "Manager Password" property.

To resolve this issue, upgrade to CFM 2.1.7.1001.

If you prefer to upgrade to a CFM 2.1.6 version, a new HOTFIX build is needed to address this issue. Contact Cloudera Support for assistance.

Truststore changes with Ranger Plugin causing TLS handshake errors

When using the Ranger plugin, the default truststore is changed from cacerts to the AutoTLS truststore (cm-auto-global_truststore.jks). Because the AutoTLS truststore contains only internal CA certificates and not the public root certificates, TLS handshakes with endpoints signed by common public CAs can fail, causing service outages.

Add the required certificates manually to the Cloudera Manager truststore.

  1. Open Cloudera Manager and navigate to Administration > Security > Update Auto-TLS Truststore.

  2. Import the certificates in PEM format.

Snowflake - NoSuchMethodError

Due to the issue documented in NIFI-11905, you may encounter the following error when using Snowflake components:

java.lang.NoSuchMethodError: 'net.snowflake.client.jdbc.telemetry.Telemetry net.snowflake.client.jdbc.telemetry.TelemetryClient.createSessionlessTelemetry(net.snowflake.client.jdbc.internal.apache.http.impl.client.CloseableHttpClient, java.lang.String)'
    at net.snowflake.ingest.connection.TelemetryService.<init>(TelemetryService.java:68)

This issue has been resolved with the introduction of NIFI-12126, which will be available in CFM 2.1.6 SP1. In the interim, Cloudera recommends using the NARs from the CFM 2.1.5 SP1 release:

Per Process Group Logging

The Per Process Group logging feature is currently not working even when you specify a log suffix in the configuration of a process group. As a result, you may not observe the expected logging behavior.

No fix or workaround is available until a Service Pack is released for CFM 2.1.6. However, if you encounter this problem, contact Cloudera to request a fix.

Configuration of java.arg.7

A property has been added for defining java.arg.7 to provide the ability to override the default location of the temporary directory used by the JDK. By default, this value is empty in Cloudera Manager. If you already use this argument number for another purpose, change your setting to a different, unused argument number (or use letters instead: java.arg.mycustomargument). Not changing the argument can impact functionality after upgrades and migrations.
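
As an illustration, java.arg.N entries are rendered into NiFi's bootstrap configuration and passed to the JVM one by one, so the new property can point the JDK temporary directory at a dedicated volume. The path, the argument name java.arg.mycustomargument, and the -Dmy.custom.flag system property below are example values, not recommendations:

```
# bootstrap.conf fragment (example path)
java.arg.7=-Djava.io.tmpdir=/data/nifi/tmp

# If java.arg.7 was already used for a custom setting, move that setting to an
# unused number or a named argument, for example:
java.arg.mycustomargument=-Dmy.custom.flag=true
```
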

JDK error

JDK 8 version u252 is supported. Any lower version may result in this error when NiFi starts:

SHA512withRSAandMGF1 Signature not available

When using Java 8, only versions u252 and above are supported.

JDK limitation

JDK 8u271, JDK 8u281, and JDK 8u291 may cause socket leak issues in NiFi due to JDK-8245417 and JDK-8256818. Verify the build version of your JDK. Later builds are fixed as described in JDK-8256818.

When using Java 8, only versions u252 and above are supported.
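
The affected update numbers can be spotted by parsing the version string reported by the JVM. The sketch below hard-codes a sample string instead of calling the live java binary, and it only checks the update number; because later builds of these updates contain the fix, also compare your build suffix (for example 1.8.0_281-b09) against JDK-8256818:

```shell
# In practice, obtain the string from the running JDK:
#   ver=$(java -version 2>&1 | awk -F '"' '/version/ {print $2}')
ver="1.8.0_281"            # sample value for illustration

update="${ver##*_}"        # "281" - the update number after the underscore
case "$update" in
  271|281|291) echo "affected: 8u$update may leak sockets (JDK-8245417/JDK-8256818)" ;;
  *)           echo "ok: 8u$update is not in the affected list" ;;
esac
```
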
Kudu Client

There is an issue in the Kudu client preventing the creation of new tables using the NiFi processors. The table needs to exist before NiFi tries to push data into it. You may see this error when the issue arises:

Caused by: org.apache.kudu.client.NonRecoverableException: failed to wait for Hive Metastore notification log listener to catch up: failed to retrieve notification log events: failed to open Hive Metastore connection: SASL(-15): mechanism too weak for this user

Verify that the necessary table exists in Kudu.

NiFi Node Connection test failures

In CFM 2.1.3, Cloudera Manager includes a new health check feature. The health check alerts users if a NiFi instance is running but disconnected from the NiFi cluster. For this health check to be successful, you must update a Ranger policy. There is a known issue where the NiFi service is running but the NiFi nodes report Bad Health due to the NiFi Node Connection test.

Update the policy:
  1. From the Ranger UI, access the Controller policy for the NiFi service.
  2. Verify that the nifi group is set in the policy.
  3. Add the nifi user to the policy with READ permissions.

NiFi UI Performance considerations
A known issue in Chrome 92.x causes significant slowness in the NiFi UI and may lead to high CPU consumption.

For more information, see the Chrome Known Issues documentation at 1235045.

Use another version of Chrome or a different browser.

SSHJ version change and key negotiation issue with old SSH servers

The ListSFTP and PutSFTP processors fail when using the legacy ssh-rsa algorithm for authentication, with the following error:

UserAuthException: Exhausted available authentication methods

Set the Key Algorithms Allowed property in PutSFTP to ssh-rsa.

KeyStoreException: placeholder not found

After an upgrade, NiFi may fail to start with the following error:

WARN org.apache.nifi.web.server.JettyServer: Failed to start web server... shutting down.
java.security.KeyStoreException: placeholder not found

The error is caused by missing configuration for the type of the keystore and truststore files.

  1. Go to Cloudera Manager > NiFi service > Configuration.
  2. Add the following properties to the NiFi Node Advanced Configuration Snippet (Safety Valve) for staging/nifi.properties.xml:

    nifi.security.keystoreType=<value>
    nifi.security.truststoreType=<value>

    Where <value> must be PKCS12, JKS, or BCFKS. JKS is the preferred type; BCFKS and PKCS12 files are loaded with the BouncyCastle provider.

  3. Restart NiFi.
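
If you are unsure which type an existing store file is, its leading bytes give a strong hint: JKS files begin with the magic number FEEDFEED, while PKCS12 (and other DER-encoded) files typically begin with 3082. The helper below is an illustrative heuristic, not a NiFi or Cloudera tool, demonstrated on a fabricated four-byte file:

```shell
# Guess a keystore's type from its magic bytes (heuristic, not authoritative).
detect_store_type() {
  magic=$(od -An -tx1 -N4 "$1" | tr -d ' \n')
  case "$magic" in
    feedfeed) echo JKS ;;      # JKS magic number 0xFEEDFEED
    3082*)    echo PKCS12 ;;   # ASN.1 SEQUENCE header common to DER/PKCS12
    *)        echo UNKNOWN ;;
  esac
}

# Demo on a fabricated file carrying the JKS magic bytes:
printf '\xfe\xed\xfe\xed' > /tmp/demo-store.bin
detect_store_type /tmp/demo-store.bin    # prints: JKS
```

Because the heuristic cannot distinguish PKCS12 from other DER-encoded stores such as BCFKS, treat its answer as a starting point and confirm against how the store was created.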

InferAvroSchema may fail when inferring schema for JSON data

In Apache NiFi 1.17, the dependency on Apache Avro was upgraded to 1.11.0. However, the InferAvroSchema processor depends on the hadoop-libraries NAR, which supplies an older Avro version, causing a NoSuchMethodError exception. Having well-defined schemas ensures consistent behavior, allows for proper schema versioning, and prevents downstream systems from generating errors because of unexpected schema changes. Besides, schema inference may not always be 100% accurate and can be an expensive operation in terms of performance.

Use the ExtractRecordSchema processor to infer the schema of your data with an appropriate reader and add the schema as a FlowFile attribute.

Known issues in CFM 2.1.6 SP1

Review the list of known issues in Cloudera Flow Management (CFM) 2.1.6 SP1.

LDAP authentication error caused by special characters in passwords

After upgrading to CFM 2.1.7 SP1, 2.1.7, 2.1.6 SP1, or 2.1.6, you may encounter the following error if you use LDAP:

org.springframework.ldap.AuthenticationException: LDAP: error code 49

The issue is caused by the use of special characters in LDAP passwords, which are configured under the "Manager Password" property.

To resolve this issue, upgrade to CFM 2.1.7.1001.

If you prefer to upgrade to a CFM 2.1.6 version, a new HOTFIX build is needed to address this issue. Contact Cloudera Support for assistance.