Known issues in Cloudera Flow Management

Review the list of known issues in Cloudera Flow Management.

Known issues in Cloudera Flow Management 2.1.7.2000


JSON processors become invalid after upgrade

After upgrading to Cloudera Flow Management 2.1.7 from 2.1.6, JSON processors or controller services may become invalid because they contain more than one max-string-length property.

Error message:
'max-string-length' validated against '20MB' is invalid because 'max-string-length' is not a supported property or has no Validator associated with it.
For a few JSON-type processors or controller services:

Open the NiFi UI and manually delete the extra max-string-length property from each affected processor or controller service.

For a larger number of affected JSON-type processors or controller services:
Follow these steps to update the configuration directly:
  1. Stop the NiFi service.
  2. Back up the current flow.xml.gz and flow.json.gz files.
    mv /var/lib/nifi/conf/flow.xml.gz /var/lib/nifi/conf/flow.xml.gz.bkp
    cp /var/lib/nifi/conf/flow.json.gz /var/lib/nifi/conf/flow.json.gz.bkp
  3. Extract the flow.json.gz file.
    gzip -d /var/lib/nifi/conf/flow.json.gz
    
  4. Run the following command to remove the duplicate property.
    sed 's/"max-string-length":"20 MB",/ /g' /var/lib/nifi/conf/flow.json > /tmp/flow.json
  5. Compress the edited file.
    gzip /tmp/flow.json
  6. Move the updated flow.json.gz back into the configuration directory.
    mv /tmp/flow.json.gz /var/lib/nifi/conf/flow.json.gz
  7. Start the NiFi service.
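To see what the sed expression in step 4 does before running it against a production flow, you can try it on a hypothetical sample fragment. The sketch below fabricates a small JSON file (the file names and content are illustration only, not a real flow.json): the duplicate property followed by a trailing comma is removed, while the final occurrence is kept.

```shell
# Minimal sketch: demonstrate the step-4 sed expression on a fabricated
# sample fragment containing a duplicated "max-string-length" property.
printf '{"properties":{"max-string-length":"20 MB","max-string-length":"20 MB"}}' > /tmp/sample-flow.json

# Count occurrences before the fix
before=$(grep -o '"max-string-length":"20 MB"' /tmp/sample-flow.json | wc -l)

# Same substitution as step 4: the duplicate followed by a comma is removed,
# the last occurrence (no trailing comma) is kept.
sed 's/"max-string-length":"20 MB",/ /g' /tmp/sample-flow.json > /tmp/sample-flow-fixed.json
after=$(grep -o '"max-string-length":"20 MB"' /tmp/sample-flow-fixed.json | wc -l)

echo "occurrences before=$before after=$after"
```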
CFM-4200: Hive3QL slowness

You may experience slower record processing on EC2 clusters when using the Hive3QL processor in Cloudera Private Cloud Base 7.1.9 Service Pack 2.

Unused NiFi configuration values
The following NiFi configuration values are no longer in use. They are still visible in the UI, but they are obsolete and have no effect on functionality.
  • nifi.nar.hotfix.provider.file.list.identifier
  • nifi.nar.hotfix.provider.location.identifier
  • nifi.nar.hotfix.provider.last.modification.identifier
  • nifi.nar.hotfix.provider.directory.identifier
  • nifi.nar.hotfix.provider.date.time.format
  • nifi.nar.hotfix.provider.proxy.user
  • nifi.nar.hotfix.provider.proxy.password
  • nifi.nar.hotfix.provider.proxy.server
  • nifi.nar.hotfix.provider.proxy.server.port
  • nifi.nar.hotfix.provider.connect.timeout
  • nifi.nar.hotfix.provider.read.timeout
  • nifi.nar.hotfix.provider.nar.location
  • nifi.nar.hotfix.provider.poll.interval
  • nifi.nar.hotfix.provider.implementation
  • nifi.nar.hotfix.provider.user.name
  • nifi.nar.hotfix.provider.password
  • nifi.nar.hotfix.provider.base.url
  • nifi.nar.hotfix.provider.required.version
  • nifi.nar.hotfix.provider.enabled

Known issues in Cloudera Flow Management 2.1.7.1000

JSON processors become invalid after upgrade

After upgrading to Cloudera Flow Management 2.1.7 from 2.1.6, JSON processors or controller services may become invalid because they contain more than one max-string-length property.

Error message:
'max-string-length' validated against '20MB' is invalid because 'max-string-length' is not a supported property or has no Validator associated with it.
For a few JSON-type processors or controller services:

Open the NiFi UI and manually delete the extra max-string-length property from each affected processor or controller service.

For a larger number of affected JSON-type processors or controller services:
Follow these steps to update the configuration directly:
  1. Stop the NiFi service.
  2. Back up the current flow.xml.gz and flow.json.gz files.
    mv /var/lib/nifi/conf/flow.xml.gz /var/lib/nifi/conf/flow.xml.gz.bkp
    cp /var/lib/nifi/conf/flow.json.gz /var/lib/nifi/conf/flow.json.gz.bkp
  3. Extract the flow.json.gz file.
    gzip -d /var/lib/nifi/conf/flow.json.gz
    
  4. Run the following command to remove the duplicate property.
    sed 's/"max-string-length":"20 MB",/ /g' /var/lib/nifi/conf/flow.json > /tmp/flow.json
  5. Compress the edited file.
    gzip /tmp/flow.json
  6. Move the updated flow.json.gz back into the configuration directory.
    mv /tmp/flow.json.gz /var/lib/nifi/conf/flow.json.gz
  7. Start the NiFi service.
CFM-3870: QueryAirtableTable processor is no longer working

The use of API keys for authentication in Airtable has been deprecated. As a result, the QueryAirtableTable processor no longer functions with API keys.

To ensure continued functionality, generate a Personal Access Token (PAT) in Airtable and replace the API key in the "API Key" property of the QueryAirtableTable processor for authentication.

CFM-4200: Hive3QL slowness

You may experience slower record processing on EC2 clusters when using the Hive3QL processor in Cloudera Private Cloud Base 7.1.9 Service Pack 1.

LDAP authentication error caused by special characters in passwords
After upgrading to Cloudera Flow Management 2.1.7 SP1, 2.1.7, 2.1.6 SP1, or 2.1.6, you may encounter the following error if you use LDAP:
org.springframework.ldap.AuthenticationException: LDAP: error code 49
The issue is caused by the use of special characters in LDAP passwords, which are configured under the "Manager Password" property.

To resolve this issue, upgrade to Cloudera Flow Management 2.1.7.1001.

If you prefer to upgrade to a Cloudera Flow Management 2.1.6 version instead, a new hotfix build is needed to address this issue. Contact Cloudera Support for assistance.

Limitation in JDK 17 support

Cloudera Flow Management 2.1.7.1000 (Service Pack 1) supports JDK 17, but with specific configuration requirements. To ensure proper functionality and avoid potential JDK 17 compatibility issues, add the following lines to the bootstrap.conf file (located in the NiFi Node Advanced Configuration Snippet for staging/bootstrap.conf.xml in Cloudera Manager):

java.arg.add-opens.java.lang=--add-opens=java.base/java.lang=ALL-UNNAMED
java.arg.add-opens.java.nio=--add-opens=java.base/java.nio=ALL-UNNAMED
java.arg.add-opens.java.net=--add-opens=java.base/java.net=ALL-UNNAMED
Unused NiFi configuration values
The following NiFi configuration values are no longer in use. They are still visible in the UI, but they are obsolete and have no effect on functionality.
  • nifi.nar.hotfix.provider.file.list.identifier
  • nifi.nar.hotfix.provider.location.identifier
  • nifi.nar.hotfix.provider.last.modification.identifier
  • nifi.nar.hotfix.provider.directory.identifier
  • nifi.nar.hotfix.provider.date.time.format
  • nifi.nar.hotfix.provider.proxy.user
  • nifi.nar.hotfix.provider.proxy.password
  • nifi.nar.hotfix.provider.proxy.server
  • nifi.nar.hotfix.provider.proxy.server.port
  • nifi.nar.hotfix.provider.connect.timeout
  • nifi.nar.hotfix.provider.read.timeout
  • nifi.nar.hotfix.provider.nar.location
  • nifi.nar.hotfix.provider.poll.interval
  • nifi.nar.hotfix.provider.implementation
  • nifi.nar.hotfix.provider.user.name
  • nifi.nar.hotfix.provider.password
  • nifi.nar.hotfix.provider.base.url
  • nifi.nar.hotfix.provider.required.version
  • nifi.nar.hotfix.provider.enabled

Known issues in Cloudera Flow Management 2.1.7

LDAP authentication error caused by special characters in passwords
After upgrading to Cloudera Flow Management 2.1.7 SP1, 2.1.7, 2.1.6 SP1, or 2.1.6, you may encounter the following error if you use LDAP:
org.springframework.ldap.AuthenticationException: LDAP: error code 49
The issue is caused by the use of special characters in LDAP passwords, which are configured under the "Manager Password" property.

To resolve this issue, upgrade to Cloudera Flow Management 2.1.7.1001.

If you prefer to upgrade to a Cloudera Flow Management 2.1.6 version instead, a new hotfix build is needed to address this issue. Contact Cloudera Support for assistance.

Truststore changes with Ranger Plugin causing TLS handshake errors
When using the Ranger plugin, the default truststore changes from cacerts to the Auto-TLS truststore (cm-auto-global_truststore.jks). Because the Auto-TLS truststore contains only internal CA certificates and not the public root certificates, connections to endpoints signed by common CAs may fail with TLS handshake errors, causing service outages.

Add the required certificates manually to the Cloudera Manager truststore.

  1. Open Cloudera Manager and navigate to Administration > Security > Update Auto-TLS Truststore.
  2. Import the certificates in PEM format.
Configuration of java.arg.7
A property has been added for defining java.arg.7, which lets you override the default location of the temporary directory used by the JDK. By default, this value is empty in Cloudera Manager. If you already use this argument number for another purpose, change your custom setting to a different, unused argument number (or use letters instead: java.arg.mycustomargument). Leaving the conflict in place can break functionality after upgrades or migrations.
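For example, a bootstrap.conf entry that overrides the JDK temporary directory could look like the following (the path shown is a hypothetical example, not a required location):

```
# Hypothetical example: point the JDK temporary directory at a dedicated path
java.arg.7=-Djava.io.tmpdir=/var/lib/nifi/tmp
```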
JDK error
JDK 8 is supported from version u252. Any lower version may result in the following error when NiFi starts:
SHA512withRSAandMGF1 Signature not available
When using Java 8, only versions u252 and above are supported.
JDK limitation
JDK 8u271, JDK 8u281, and JDK 8u291 may cause socket leak issues in NiFi due to JDK-8245417 and JDK-8256818. Verify the build version of your JDK; later builds are fixed, as described in JDK-8256818.
When using Java 8, only versions u252 and above are supported.
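To check whether a JDK 8 build falls in the affected update range, you can parse the version string. The sketch below hardcodes a sample version string for illustration; in practice, substitute the first line of the output of java -version.

```shell
# Parse a JDK 8 version string and flag the socket-leak-affected range
# (8u271 through 8u291). The sample string is hardcoded for illustration;
# in practice use: ver=$(java -version 2>&1 | head -n 1)
ver='java version "1.8.0_281"'
update=$(echo "$ver" | sed 's/.*_\([0-9]*\)".*/\1/')
if [ "$update" -ge 271 ] && [ "$update" -le 291 ]; then
  echo "update $update: affected by JDK-8245417/JDK-8256818 socket leak"
else
  echo "update $update: not in the affected range"
fi
```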
Kudu Client

There is an issue in the Kudu client preventing the creation of new tables using the NiFi processors. The table needs to exist before NiFi tries to push data into it. You may see the following error when this issue arises:

Caused by: org.apache.kudu.client.NonRecoverableException: failed to wait for Hive Metastore notification log listener to catch up: failed to retrieve notification log events: failed to open Hive Metastore connection: SASL(-15): mechanism too weak for this user
Verify the necessary table exists in Kudu.
NiFi Node Connection test failures
In Cloudera Flow Management 2.1.3, Cloudera Manager includes a new health check feature. The health check alerts users if a NiFi instance is running but disconnected from the NiFi cluster. For this health check to succeed, you must update a Ranger policy. There is a known issue where the NiFi service is running but the NiFi Node(s) report Bad Health due to the NiFi Node Connection test.
Update the policy:
  1. From the Ranger UI, access the Controller policy for the NiFi service.
  2. Verify the nifi group is set in the policy.
  3. Add the nifi user to the policy with READ permissions.
NiFi UI Performance considerations
A known issue in Chrome 92.x causes significant slowness in the NiFi UI and may lead to high CPU consumption.

For more information, see Chromium issue 1235045 in the Chrome Known Issues documentation.

Use another version of Chrome or a different browser.
SSHJ version change and key negotiation issue with old SSH servers
ListSFTP and PutSFTP processors fail when using the legacy ssh-rsa algorithm for authentication with the following error:
UserAuthException: Exhausted available authentication methods
Set the Key Algorithms Allowed property in PutSFTP to ssh-rsa.
KeyStoreException: placeholder not found
After an upgrade, NiFi may fail to start with the following error:
WARN org.apache.nifi.web.server.JettyServer: Failed to start web server... shutting down.
java.security.KeyStoreException: placeholder not found

The error is caused by missing configuration for the type of the keystore and truststore files.

  1. Go to Cloudera Manager > NiFi service > Configuration.
  2. Add the following properties to the NiFi Node Advanced Configuration Snippet (Safety Valve) for staging/nifi.properties.xml:
    nifi.security.keystoreType=**[value]**
    nifi.security.truststoreType=**[value]**

    Where value must be PKCS12, JKS, or BCFKS. JKS is the preferred type; BCFKS and PKCS12 files are loaded with the BouncyCastle provider.

  3. Restart NiFi.
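If you are unsure which type an existing keystore file is, one quick heuristic is to inspect its magic bytes: JKS files begin with FEEDFEED, while PKCS12 files are ASN.1 DER and typically begin with 0x30 0x82 (note that BCFKS is also DER-encoded, so this heuristic cannot distinguish it from PKCS12). The sketch below fabricates a four-byte stand-in file to demonstrate the check; point the keystore variable at your real file instead.

```shell
# Heuristic keystore-type check via magic bytes. JKS files begin with
# FEEDFEED; PKCS12 (ASN.1 DER) files typically begin with 0x30 0x82.
# The file below is a fabricated stand-in; substitute your actual keystore path.
keystore=/tmp/demo-keystore.jks
printf '\376\355\376\355' > "$keystore"   # fake JKS magic, for demonstration only
magic=$(od -An -tx1 -N4 "$keystore" | tr -d ' \n')
case "$magic" in
  feedfeed) kstype=JKS ;;
  3082*)    kstype=PKCS12 ;;
  *)        kstype=unknown ;;
esac
echo "detected type: $kstype"
```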
InferAvroSchema may fail when inferring schema for JSON data
In Apache NiFi 1.17, the dependency on Apache Avro was upgraded to 1.11.0. However, the InferAvroSchema processor depends on the hadoop-libraries NAR, which the Avro version comes from, causing a NoSuchMethodError exception. Having well-defined schemas ensures consistent behavior, allows for proper schema versioning, and prevents downstream systems from generating errors because of unexpected schema changes. Besides, schema inference may not always be 100% accurate and can be an expensive operation in terms of performance.

Use the ExtractRecordSchema processor to infer the schema of your data with an appropriate reader and add the schema as a FlowFile attribute.