Known Issues in Flow Management

Learn about the known issues and limitations in Flow Management clusters in this release, their impact on functionality, and the available workarounds.
Incomplete Ranger policy for NiFi metrics in Cloudera Manager
For Cloudera Manager to properly reflect the NiFi metrics for the NiFi service, the Flow NiFi access policies in Ranger need to be updated to include the "nifi" group.
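The policy change can also be scripted. The sketch below is a minimal illustration, assuming the policy JSON has the `policyItems`/`groups` shape that Ranger's public REST API returns; verify the actual shape against your Ranger instance before using it.

```python
import json

def add_group_to_policy(policy: dict, group: str = "nifi") -> dict:
    """Add a group to every policyItem of a Ranger policy document.

    The policy dict shape here is an assumption based on Ranger's
    public REST API JSON; adjust field names to match your deployment.
    """
    updated = json.loads(json.dumps(policy))  # deep copy, leave input intact
    for item in updated.get("policyItems", []):
        # setdefault handles items that have no "groups" key yet
        if group not in item.setdefault("groups", []):
            item["groups"].append(group)
    return updated
```

The updated document would then be sent back to Ranger (for example with a PUT request) to apply the change.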
KafkaRecordSink puts multiple records in one message
Instead of sending one Kafka message per record, KafkaRecordSink sends all the records as a single Kafka message containing an array of records.

For more information, see NIFI-8326.

There is no workaround for this issue.
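Although there is no fix on the NiFi side, downstream consumers can defensively unpack messages that may carry an array. A minimal sketch, assuming the records are JSON and the function name is hypothetical:

```python
import json

def unpack_kafka_value(value: bytes) -> list:
    """Normalize a KafkaRecordSink message value to a list of records.

    Because of NIFI-8326, one message may contain a JSON array of
    records rather than a single record; handle both cases.
    """
    payload = json.loads(value.decode("utf-8"))
    if isinstance(payload, list):
        # Several records were batched into this one message.
        return payload
    return [payload]
```

A consumer would call this on each message value and iterate over the returned list, so the batching behavior becomes transparent.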
Kudu client preventing the creation of new tables using NiFi processors
An issue in the Kudu client prevents the creation of new tables using NiFi processors. The table must exist before NiFi tries to push data into it. You may see the following error when this issue arises:
Caused by: org.apache.kudu.client.NonRecoverableException: failed to wait for Hive Metastore notification log listener to catch up: failed to retrieve notification log events: failed to open Hive Metastore connection: SASL(-15): mechanism too weak for this user

For more information, see KUDU-3297.

There is no workaround for this issue.
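Since the table must exist before NiFi writes to it, you can pre-create it outside NiFi, for example by running an Impala DDL statement. The helper below is an illustrative sketch (the table and column names are hypothetical) that builds such a statement; run the resulting SQL through your usual Impala client.

```python
def kudu_create_table_ddl(table: str, columns: dict,
                          primary_key: list, buckets: int = 4) -> str:
    """Build an Impala CREATE TABLE statement for a Kudu-backed table.

    Column names and types are caller-supplied; primary key columns
    must come first in the column list, per Kudu's requirements.
    """
    cols = ",\n  ".join(f"{name} {ctype}" for name, ctype in columns.items())
    pk = ", ".join(primary_key)
    return (
        f"CREATE TABLE IF NOT EXISTS {table} (\n"
        f"  {cols},\n"
        f"  PRIMARY KEY ({pk})\n"
        f") PARTITION BY HASH ({pk}) PARTITIONS {buckets}\n"
        f"STORED AS KUDU"
    )

# Hypothetical example table for illustration only
ddl = kudu_create_table_ddl("events",
                            {"id": "BIGINT", "payload": "STRING"},
                            ["id"])
```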
NiFi Atlas reporting task does not work after data lake upgrade from light to medium

After you upgrade your data lake from light to medium scale, the data lake machine hostname and IP address change. Because the Atlas reporting task uses the Atlas and Kafka server hostnames, the outdated hostnames prevent NiFi from reporting into Atlas after the upgrade.

Update the configuration of the ReportLineageToAtlas reporting task:
  1. Open the Global menu on the NiFi UI.
  2. Click Controller settings.
  3. Select the Reporting tasks tab in the dialog box.
  4. Stop the ReportLineageToAtlas reporting task and update the configuration:
    • Replace the hostname value in the Atlas Urls configuration with the new Atlas hostname.
    • Replace the hostnames value in the Kafka Bootstrap servers configuration with the new Kafka bootstrap server hostnames.
  5. Start the ReportLineageToAtlas reporting task.
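The steps above can also be scripted against the NiFi REST API (`PUT /nifi-api/reporting-tasks/{id}`). The sketch below only builds the request body; the exact property keys of the ReportLineageToAtlas task are an assumption here, so confirm them by first fetching the task with a GET request.

```python
def reporting_task_update(task_id: str, revision_version: int,
                          client_id: str, atlas_urls: str,
                          bootstrap_servers: str) -> dict:
    """Build the PUT body for /nifi-api/reporting-tasks/{task_id}.

    The property keys below are assumed names for the task's
    'Atlas URLs' and 'Kafka Bootstrap Servers' descriptors; verify
    them against a GET on the same endpoint before sending.
    """
    return {
        "revision": {"version": revision_version, "clientId": client_id},
        "component": {
            "id": task_id,
            "properties": {
                "atlas-urls": atlas_urls,
                "bootstrap.servers": bootstrap_servers,
            },
        },
    }
```

The task must be stopped before the update and started again afterwards, matching steps 4 and 5 above.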
InferAvroSchema may fail when inferring schema for JSON data
In Apache NiFi 1.17, the dependency on Apache Avro was upgraded to 1.11.0. However, the InferAvroSchema processor depends on the hadoop-libraries NAR, which provides a different Avro version, causing a NoSuchMethodError exception. Having well-defined schemas ensures consistent behavior, allows proper schema versioning, and prevents downstream systems from generating errors because of unexpected schema changes. Besides, schema inference may not always be 100% accurate and can be an expensive operation in terms of performance.
As a workaround, use the ConvertRecord processor and have the Record Writer write the schema as a FlowFile attribute.
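To illustrate the workaround's effect: with an explicitly defined schema, a Record Writer configured to write the schema into the `avro.schema` FlowFile attribute produces FlowFiles like the sketch below. The schema and function here are hypothetical examples, not NiFi code.

```python
import json

# An explicit Avro schema, defined once instead of inferred per FlowFile
USER_SCHEMA = {
    "type": "record",
    "name": "User",
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "name", "type": "string"},
    ],
}

def with_schema_attribute(attributes: dict) -> dict:
    """Mimic a Record Writer that writes the schema into the
    'avro.schema' FlowFile attribute, so downstream Record Readers
    can resolve the schema from the attribute instead of inferring it."""
    updated = dict(attributes)
    updated["avro.schema"] = json.dumps(USER_SCHEMA)
    return updated
```

Downstream processors can then use a Record Reader with the "Use 'Schema Text' Property" or schema-attribute access strategy, avoiding inference entirely.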