Known issues in Flow Management
Learn about the known issues and limitations in Flow Management clusters, their impact on functionality, and any available workarounds to mitigate these issues.
7.3.1.400
NiFi 1.28.1 with Cloudera Flow Management 2.2.9.400
There are no known issues in this release.
NiFi 2.3.0 with Cloudera Flow Management 4.2.1.400
- NiFi service fails to start after upgrading to 7.3.1.400 from an earlier 7.3.1 version due to missing flow.json.gz file
When upgrading a Data Hub cluster from version 7.3.1.0, 7.3.1.100, 7.3.1.200, or 7.3.1.300 to 7.3.1.400 with NiFi 2, the upgrade process may fail, leaving NiFi instances in an unhealthy state and preventing the NiFi service from starting.
The issue occurs when only a flow.xml.gz file is present in the affected clusters, as the upgrade process expects a flow.json.gz file, the default format in NiFi 2.
This causes the post-upgrade validation script to fail with the following error:
ERROR: please provide correct path to flow.json.gz file, current one is empty or invalid: "/hadoopfs/fs1/working-dir/flow.json.gz"
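To tell whether a cluster is affected before upgrading, you can check which flow definition format is present on each NiFi node. A minimal sketch, assuming the default working directory shown in the error message above:

```shell
#!/bin/sh
# Working directory taken from the validation error above; adjust the
# filesystem index (fs1) if your node layout differs.
WORKING_DIR="/hadoopfs/fs1/working-dir"

if [ -f "${WORKING_DIR}/flow.json.gz" ]; then
  echo "flow.json.gz present: upgrade validation should pass"
elif [ -f "${WORKING_DIR}/flow.xml.gz" ]; then
  echo "only flow.xml.gz present: this cluster is affected"
else
  echo "no flow definition found in ${WORKING_DIR}"
fi
```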
7.3.1.0
NiFi 1.28.1 with Cloudera Flow Management 2.2.9
- CFM-4331: HBase 1.1.2 components incompatible with JDK17
HBase 1.1.2 components are not compatible with JDK 17.
- Unused NiFi configuration values
The following NiFi configuration values are no longer in use. They are still visible in the UI, but they are obsolete and have no effect on functionality.
nifi.nar.hotfix.provider.file.list.identifier
nifi.nar.hotfix.provider.location.identifier
nifi.nar.hotfix.provider.last.modification.identifier
nifi.nar.hotfix.provider.directory.identifier
nifi.nar.hotfix.provider.date.time.format
nifi.nar.hotfix.provider.proxy.user
nifi.nar.hotfix.provider.proxy.password
nifi.nar.hotfix.provider.proxy.server
nifi.nar.hotfix.provider.proxy.server.port
nifi.nar.hotfix.provider.connect.timeout
nifi.nar.hotfix.provider.read.timeout
nifi.nar.hotfix.provider.nar.location
nifi.nar.hotfix.provider.poll.interval
nifi.nar.hotfix.provider.implementation
nifi.nar.hotfix.provider.user.name
nifi.nar.hotfix.provider.password
nifi.nar.hotfix.provider.base.url
nifi.nar.hotfix.provider.required.version
nifi.nar.hotfix.provider.enabled
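These entries can simply be ignored, but if you want to see which of them are still set on a node, a quick check against nifi.properties can help. The path below is an assumption; on managed clusters the active copy lives under the Cloudera Manager process directory.

```shell
#!/bin/sh
# Hypothetical location of nifi.properties; adjust for your deployment.
NIFI_PROPERTIES="/etc/nifi/conf/nifi.properties"

# Print any obsolete hotfix-provider entries still present; they are
# harmless and have no effect on functionality.
grep '^nifi\.nar\.hotfix\.provider\.' "${NIFI_PROPERTIES}" 2>/dev/null \
  || echo "no obsolete hotfix provider entries found"
```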
- Unable to view NiFi or NiFi Registry user interface after upgrade due to authorization provider change
After upgrading Flow Management Data Hub clusters to Cloudera on cloud 7.3.1 (or 7.2.18), you may encounter an issue where the NiFi or NiFi Registry user interface is inaccessible, displaying the following error:
Unable to view the user interface
In versions prior to 7.2.18, NiFi group authorization relied on the host’s SSSD configuration to synchronize groups using the SHELL user group provider. Starting in Cloudera on cloud 7.2.18, the SHELL user group provider is deprecated, and newly deployed clusters default to the LDAP user group provider. The impacted components are NiFi and NiFi Registry.
- PutIcebergCDC processor error: Unable to specify server’s Kerberos Principal name
When using the PutIcebergCDC processor, you may encounter an error if the Hadoop Configuration Resources property specified for the Catalog Service only includes the standard Hadoop configuration files from the Cloudera environment (/etc/hadoop/conf/core-site.xml, /etc/hadoop/conf/ssl-client.xml, and /etc/hive/conf/hive-site.xml). The error message states:
Failed to specify server’s Kerberos principal name.
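In Hadoop clients generally, this error tends to appear when the HDFS Kerberos principal settings cannot be resolved from the supplied configuration files. As an illustration only (not a workaround confirmed by this release note), a Hadoop Configuration Resources value that also includes hdfs-site.xml, where dfs.namenode.kerberos.principal is normally defined, would look like:

```
/etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml,/etc/hadoop/conf/ssl-client.xml,/etc/hive/conf/hive-site.xml
```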
- Incomplete Ranger policy for NiFi metrics in Cloudera Manager
Cloudera Manager does not accurately reflect NiFi metrics for the NiFi service due to incomplete Flow NiFi access policies in Ranger. The required 'nifi' group is not included in the access policies, resulting in restricted access to the metrics data.
- InferAvroSchema may fail when inferring schema for JSON data
In Apache NiFi 1.17, the dependency on Apache Avro was upgraded to 1.11.0. However, the InferAvroSchema processor takes its Avro dependency from the hadoop-libraries NAR, whose Avro version does not match, causing a NoSuchMethodError exception.
NiFi 2.0.0 with Cloudera Flow Management 4.2.1
- Processors using OpenAI library may not work
When using Flow Management clusters, several processors relying on the OpenAI library are not functional due to compatibility issues caused by OpenAI API changes. The affected processors use an outdated OpenAI library version that is no longer supported. The impacted processors are:
- PutChroma
- QueryChroma
- PromptChatGPT
- PutOpenSearchVector
- QueryOpenSearchVector
- PutPinecone
- QueryPinecone
- PutQdrant
- QueryQdrant
These processors require an updated OpenAI library version (1.56.2 or later) to function correctly.
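You can confirm which OpenAI client version is available to the Python environment NiFi uses. A sketch, assuming python3 is the interpreter configured for NiFi (substitute the one set in the nifi.python.command property):

```shell
#!/bin/sh
# Query the installed openai package version via the interpreter's own
# package metadata; prints a notice if the package is absent.
python3 - <<'EOF'
import importlib.metadata

try:
    version = importlib.metadata.version("openai")
    print(f"openai {version} installed (1.56.2 or later required)")
except importlib.metadata.PackageNotFoundError:
    print("openai package not installed in this environment")
EOF
```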
- Invalid Python version
Due to the invalid Python version defined for the NiFi service, Python API based processors (such as PromptChatGPT, QueryPinecone, and so on) remain invalid because the NiFi service is unable to download the associated dependencies. The issue can be resolved by changing the version in the nifi.python.command property.
- PutIcebergCDC processor error: Unable to specify server’s Kerberos Principal name
When using the PutIcebergCDC processor, you may encounter an error if the Hadoop Configuration Resources property specified for the Catalog Service only includes the standard Hadoop configuration files from the Cloudera environment (/etc/hadoop/conf/core-site.xml, /etc/hadoop/conf/ssl-client.xml, and /etc/hive/conf/hive-site.xml). The error message states:
Failed to specify server’s Kerberos principal name.
- NiFi service fails to start after upgrading from 7.3.1.0 to a higher version due to missing flow.json.gz file
When upgrading a Data Hub cluster from version 7.3.1.0 to 7.3.1.100, 7.3.1.200, or 7.3.1.300 with NiFi 2, the upgrade process may fail, leaving NiFi instances in an unhealthy state and preventing the NiFi service from starting.
The issue occurs because the upgrade process expects a flow.json.gz file (the default for NiFi 2), but the affected clusters only contain a flow.xml.gz file. This mismatch causes the post-upgrade validation script to fail with the following error:
ERROR: please provide correct path to flow.json.gz file, current one is empty or invalid: "/hadoopfs/fs1/working-dir/flow.json.gz"