Known issues in Cloudera Flow Management

Review the list of known issues in Cloudera Flow Management.

Known issues in Cloudera Flow Management 2.1.7.2000

Known issues

JSON processors become invalid after upgrade

After upgrading to Cloudera Flow Management 2.1.7 from 2.1.6, JSON processors or controller services may become invalid because they contain more than one max-string-length property.

Error message:
'max-string-length' validated against '20MB' is invalid because 'max-string-length' is not a supported property or has no Validator associated with it.
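To estimate how many components carry the duplicate property before choosing an approach, you can count its occurrences in the flow definition (a sketch; the path assumes the default NiFi configuration directory):

```shell
# Count occurrences of the max-string-length property in the flow definition.
# The path below is the default location; adjust it for your deployment.
zcat /var/lib/nifi/conf/flow.json.gz | grep -o '"max-string-length"' | wc -l
```

A high count suggests that editing the flow definition directly is faster than fixing each component in the NiFi UI.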
If only a few JSON-type processors or controller services are affected:

Open the NiFi UI and manually delete the extra max-string-length property from each affected processor or controller service.

For a larger number of affected JSON-type processors or controller services:
Follow these steps to update the configuration directly:
  1. Stop the NiFi service.
  2. Back up the current flow.xml.gz and flow.json.gz files.
    mv /var/lib/nifi/conf/flow.xml.gz /var/lib/nifi/conf/flow.xml.gz.bkp
    cp /var/lib/nifi/conf/flow.json.gz /var/lib/nifi/conf/flow.json.gz.bkp
  3. Extract the flow.json.gz file.
    gzip -d /var/lib/nifi/conf/flow.json.gz
    
  4. Run the following command to remove the duplicate property.
    sed 's/"max-string-length":"20 MB",/ /g' /var/lib/nifi/conf/flow.json > /tmp/flow.json
  5. Compress the edited file.
    gzip /tmp/flow.json
  6. Replace the updated flow.json.gz back into the configuration directory.
    mv /tmp/flow.json.gz /var/lib/nifi/conf/flow.json.gz
  7. Start the NiFi service.
CFM-4200: Hive3QL slowness

You may experience a performance issue due to slower record processing on EC2 clusters when using the Hive3QL processor in Cloudera Private Cloud Base 7.1.9 Service Pack 2.

Unused NiFi configuration values
The following NiFi configuration values are no longer in use. They are still visible in the UI, but they are obsolete and have no effect on functionality.
  • nifi.nar.hotfix.provider.file.list.identifier
  • nifi.nar.hotfix.provider.location.identifier
  • nifi.nar.hotfix.provider.last.modification.identifier
  • nifi.nar.hotfix.provider.directory.identifier
  • nifi.nar.hotfix.provider.date.time.format
  • nifi.nar.hotfix.provider.proxy.user
  • nifi.nar.hotfix.provider.proxy.password
  • nifi.nar.hotfix.provider.proxy.server
  • nifi.nar.hotfix.provider.proxy.server.port
  • nifi.nar.hotfix.provider.connect.timeout
  • nifi.nar.hotfix.provider.read.timeout
  • nifi.nar.hotfix.provider.nar.location
  • nifi.nar.hotfix.provider.poll.interval
  • nifi.nar.hotfix.provider.implementation
  • nifi.nar.hotfix.provider.user.name
  • nifi.nar.hotfix.provider.password
  • nifi.nar.hotfix.provider.base.url
  • nifi.nar.hotfix.provider.required.version
  • nifi.nar.hotfix.provider.enabled

Known issues in Cloudera Flow Management 2.1.7.1000

Known issues

JSON processors become invalid after upgrade

After upgrading to Cloudera Flow Management 2.1.7 from 2.1.6, JSON processors or controller services may become invalid because they contain more than one max-string-length property.

Error message:
'max-string-length' validated against '20MB' is invalid because 'max-string-length' is not a supported property or has no Validator associated with it.
If only a few JSON-type processors or controller services are affected:

Open the NiFi UI and manually delete the extra max-string-length property from each affected processor or controller service.

For a larger number of affected JSON-type processors or controller services:
Follow these steps to update the configuration directly:
  1. Stop the NiFi service.
  2. Back up the current flow.xml.gz and flow.json.gz files.
    mv /var/lib/nifi/conf/flow.xml.gz /var/lib/nifi/conf/flow.xml.gz.bkp
    cp /var/lib/nifi/conf/flow.json.gz /var/lib/nifi/conf/flow.json.gz.bkp
  3. Extract the flow.json.gz file.
    gzip -d /var/lib/nifi/conf/flow.json.gz
    
  4. Run the following command to remove the duplicate property.
    sed 's/"max-string-length":"20 MB",/ /g' /var/lib/nifi/conf/flow.json > /tmp/flow.json
  5. Compress the edited file.
    gzip /tmp/flow.json
  6. Replace the updated flow.json.gz back into the configuration directory.
    mv /tmp/flow.json.gz /var/lib/nifi/conf/flow.json.gz
  7. Start the NiFi service.
CFM-3870: QueryAirtableTable processor is no longer working

The use of API keys for authentication in Airtable has been deprecated. As a result, the QueryAirtableTable processor no longer functions with API keys.

To ensure continued functionality, generate a Personal Access Token (PAT) in Airtable and replace the API key in the "API Key" property of the QueryAirtableTable processor for authentication.

CFM-4200: Hive3QL slowness

You may experience a performance issue due to slower record processing on EC2 clusters when using the Hive3QL processor in Cloudera Private Cloud Base 7.1.9 Service Pack 1.

LDAP authentication error caused by special characters in passwords
After upgrading to Cloudera Flow Management 2.1.7 SP1, 2.1.7, 2.1.6 SP1 or 2.1.6 versions, you may encounter the following error if you use LDAP:
org.springframework.ldap.AuthenticationException: LDAP: error code 49
The issue is caused by the use of special characters in LDAP passwords, which are configured under the "Manager Password" property.

To resolve this issue, upgrade to Cloudera Flow Management 2.1.7.1001.

If you prefer to upgrade to a Cloudera Flow Management 2.1.6 version, a new HOTFIX build is needed to address this issue. Contact Cloudera Support for assistance.

Limitation in JDK 17 support

Cloudera Flow Management 2.1.7.1000 (Service Pack 1) supports JDK 17, but with specific configuration requirements. To ensure proper functionality and avoid any potential issues with JDK 17 compatibility, add the following lines to the bootstrap.conf file (located in the NiFi Node Advanced Configuration Snippet for staging/bootstrap.conf.xml in Cloudera Manager):

java.arg.add-opens.java.lang=--add-opens=java.base/java.lang=ALL-UNNAMED
java.arg.add-opens.java.nio=--add-opens=java.base/java.nio=ALL-UNNAMED
java.arg.add-opens.java.net=--add-opens=java.base/java.net=ALL-UNNAMED
Unused NiFi configuration values
The following NiFi configuration values are no longer in use. They are still visible in the UI, but they are obsolete and have no effect on functionality.
  • nifi.nar.hotfix.provider.file.list.identifier
  • nifi.nar.hotfix.provider.location.identifier
  • nifi.nar.hotfix.provider.last.modification.identifier
  • nifi.nar.hotfix.provider.directory.identifier
  • nifi.nar.hotfix.provider.date.time.format
  • nifi.nar.hotfix.provider.proxy.user
  • nifi.nar.hotfix.provider.proxy.password
  • nifi.nar.hotfix.provider.proxy.server
  • nifi.nar.hotfix.provider.proxy.server.port
  • nifi.nar.hotfix.provider.connect.timeout
  • nifi.nar.hotfix.provider.read.timeout
  • nifi.nar.hotfix.provider.nar.location
  • nifi.nar.hotfix.provider.poll.interval
  • nifi.nar.hotfix.provider.implementation
  • nifi.nar.hotfix.provider.user.name
  • nifi.nar.hotfix.provider.password
  • nifi.nar.hotfix.provider.base.url
  • nifi.nar.hotfix.provider.required.version
  • nifi.nar.hotfix.provider.enabled

CVEs not fixed

The following Common Vulnerabilities and Exposures (CVEs) remain unresolved in Cloudera Flow Management 2.1.7.1000.

CVE-2022-31159: Partial Path Traversal in com.amazonaws:aws-java-sdk-s3

The AWS SDK for Java enables Java developers to work with Amazon Web Services. A partial-path traversal issue exists within the `downloadDirectory` method in the AWS S3 TransferManager component of the AWS SDK for Java v1 prior to version 1.12.261. Applications using the SDK control the `destinationDirectory` argument, but S3 object keys are determined by the application that uploaded the objects. The `downloadDirectory` method allows the caller to pass a filesystem object in the object key but contained an issue in the validation logic for the key name. A knowledgeable actor could bypass the validation logic by including a UNIX double-dot in the bucket key. Under certain conditions, this could permit them to retrieve a directory from their S3 bucket that is one level up in the filesystem from their working directory. This issue's scope is limited to directories whose name prefix matches the destinationDirectory. For example, for destination directory `/tmp/foo`, the actor can cause a download to `/tmp/foo-bar`, but not `/tmp/bar`. If `com.amazonaws.services.s3.transfer.TransferManager::downloadDirectory` is used to download an untrusted bucket's contents, the contents of that bucket can be written outside of the intended destination directory. Version 1.12.261 contains a patch for this issue. As a workaround, when calling `com.amazonaws.services.s3.transfer.TransferManager::downloadDirectory`, pass a `KeyFilter` that forbids `S3ObjectSummary` objects whose `getKey` method returns a string containing the substring `..`.

CVE-2019-11358

jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.

CVE-2020-11022: Potential XSS vulnerability in jQuery

In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.

CVE-2020-11023: Potential XSS vulnerability in jQuery

In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.

CVE-2024-21634: Ion Java StackOverflow vulnerability

Amazon Ion is a Java implementation of the Ion data notation. Prior to version 1.10.5, a potential denial-of-service issue exists in `ion-java` for applications that use `ion-java` to deserialize Ion text encoded data, or deserialize Ion text or binary encoded data into the `IonValue` model and then invoke certain `IonValue` methods on that in-memory representation. An actor could craft Ion data that, when loaded by the affected application and/or processed using the `IonValue` model, results in a `StackOverflowError` originating from the `ion-java` library. The patch is included in `ion-java` 1.10.5. As a workaround, do not load data which originated from an untrusted source or that could have been tampered with.

CVEs excluded based on the NiFi exclusion list

CVE-2024-35255: Azure Identity Libraries And Microsoft Authentication Library Elevation Of Privilege Vulnerability

Azure Identity Libraries and Microsoft Authentication Library Elevation of Privilege Vulnerability

Reason: Transitive dep for msal4j 1.16.0, no newer version available.

CVE-2018-14335

An issue was discovered in H2 1.4.197. Insecure handling of permissions in the backup function allows attackers to read sensitive files (outside of their permissions) via a symlink to a fake database file.

Reason: No suggested resolution available yet.

CVE-2022-3171: Memory handling vulnerability in ProtocolBuffers Java core and lite

A parsing issue with binary data in protobuf-java core and lite versions prior to 3.21.7, 3.20.3, 3.19.6 and 3.16.3 can lead to a denial of service attack. Inputs containing multiple instances of non-repeated embedded messages with repeated or unknown fields causes objects to be converted back-n-forth between mutable and immutable forms, resulting in potentially long garbage collection pauses. We recommend updating to the versions mentioned above.

Reason: This is protobuf shaded by Hadoop / dependency comes from hive-exec 3.1.3000.7.1.9.1000-103.

CVE-2022-3509: Parsing issue in protobuf textformat

A parsing issue similar to CVE-2022-3171, but with textformat in protobuf-java core and lite versions prior to 3.21.7, 3.20.3, 3.19.6 and 3.16.3 can lead to a denial of service attack. Inputs containing multiple instances of non-repeated embedded messages with repeated or unknown fields causes objects to be converted back-n-forth between mutable and immutable forms, resulting in potentially long garbage collection pauses. We recommend updating to the versions mentioned above.

Reason: This is protobuf shaded by Hadoop / dependency comes from hive-exec 3.1.3000.7.1.9.1000-103.

CVE-2021-22569: Denial of Service of protobuf-java parsing procedure

An issue in protobuf-java allowed the interleaving of com.google.protobuf.UnknownFieldSet fields in such a way that would be processed out of order. A small malicious payload can occupy the parser for several minutes by creating large numbers of short-lived objects that cause frequent, repeated pauses. We recommend upgrading libraries beyond the vulnerable versions.

Reason: This is protobuf shaded by Hadoop / dependency comes from hive-exec 3.1.3000.7.1.9.1000-103.

CVE-2024-36114: Decompressors Can Crash The JVM And Leak Memory Content In Aircompressor

Aircompressor is a library with ports of the Snappy, LZO, LZ4, and Zstandard compression algorithms to Java. All decompressor implementations of Aircompressor (LZ4, LZO, Snappy, Zstandard) can crash the JVM for certain input, and in some cases also leak the content of other memory of the Java process (which could contain sensitive information). When decompressing certain data, the decompressors try to access memory outside the bounds of the given byte arrays or byte buffers. Because Aircompressor uses the JDK class `sun.misc.Unsafe` to speed up memory access, no additional bounds checks are performed and this has similar security consequences as out-of-bounds access in C or C++, namely it can lead to non-deterministic behavior or crash the JVM. Users should update to Aircompressor 0.27 or newer where these issues have been fixed. When decompressing data from untrusted users, this can be exploited for a denial-of-service attack by crashing the JVM, or to leak other sensitive information from the Java process. There are no known workarounds for this issue.

Reason: Dependency comes from hive-exec 3.1.3000.7.1.9.1000-103.

CVE-2023-36479: Jetty vulnerable to errant command quoting in CGI Servlet

Eclipse Jetty Canonical Repository is the canonical repository for the Jetty project. Users of the CgiServlet with a very specific command structure may have the wrong command executed. If a user sends a request to a org.eclipse.jetty.servlets.CGI Servlet for a binary with a space in its name, the servlet will escape the command by wrapping it in quotation marks. This wrapped command, plus an optional command prefix, will then be executed through a call to Runtime.exec. If the original binary name provided by the user contains a quotation mark followed by a space, the resulting command line will contain multiple tokens instead of one. This issue was patched in version 9.4.52, 10.0.16, 11.0.16 and 12.0.0-beta2.

Reason: Jetty 10 requires Java 11

CVE-2021-42392

The org.h2.util.JdbcUtils.getConnection method of the H2 database takes as parameters the class name of the driver and URL of the database. An attacker may pass a JNDI driver name and a URL leading to a LDAP or RMI servers, causing remote code execution. This can be exploited through various attack vectors, most notably through the H2 Console which leads to unauthenticated remote code execution.

Reason: h2-database-v14 uses this version. We recommend removing the NAR when it is not needed.

CVE-2022-23221

H2 Console before 2.1.210 allows remote attackers to execute arbitrary code via a jdbc:h2:mem JDBC URL containing the IGNORE_UNKNOWN_SETTINGS=TRUE;FORBID_CREATION=FALSE;INIT=RUNSCRIPT substring, a different vulnerability than CVE-2021-42392.

Reason: h2-database-v14 uses this version. We recommend removing the NAR when it is not needed.

CVE-2021-23463: XML External Entity (XXE) Injection

The package com.h2database:h2 from 1.4.198 and before 2.0.202 are vulnerable to XML External Entity (XXE) Injection via the org.h2.jdbc.JdbcSQLXML class object, when it receives parsed string data from org.h2.jdbc.JdbcResultSet.getSQLXML() method. If it executes the getSource() method when the parameter is DOMSource.class it will trigger the vulnerability.

Reason: h2-database-v14 uses this version. We recommend removing the NAR when it is not needed.

CVE-2022-45868

The web-based admin console in H2 Database Engine before 2.2.220 can be started via the CLI with the argument -webAdminPassword, which allows the user to specify the password in cleartext for the web admin console. Consequently, a local user (or an attacker that has obtained local access through some means) would be able to discover the password by listing processes and their arguments. NOTE: the vendor states "This is not a vulnerability of H2 Console ... Passwords should never be passed on the command line and every qualified DBA or system administrator is expected to know that." Nonetheless, the issue was fixed in 2.2.220.

Reason: h2-database-v14 uses this version. We recommend removing the NAR when it is not needed.

CVE-2018-1000840

Processing Foundation Processing version 3.4 and earlier contains a XML External Entity (XXE) vulnerability in loadXML() function that can result in An attacker can read arbitrary files and exfiltrate their contents via HTTP requests. This attack appear to be exploitable via The victim must use Processing to parse a crafted XML document.

Reason: nifi-xml-processing is flagged as vulnerable; there is no clear solution, and this might be a false positive.

CVE-2023-48795

The SSH transport protocol with certain OpenSSH extensions, found in OpenSSH before 9.6 and other products, allows remote attackers to bypass integrity checks such that some packets are omitted (from the extension negotiation message), and a client and server may consequently end up with a connection for which some security features have been downgraded or disabled, aka a Terrapin attack. This occurs because the SSH Binary Packet Protocol (BPP), implemented by these extensions, mishandles the handshake phase and mishandles use of sequence numbers. For example, there is an effective attack against SSH's use of ChaCha20-Poly1305 (and CBC with Encrypt-then-MAC). The bypass occurs in chacha20-poly1305@openssh.com and (if CBC is used) the -etm@openssh.com MAC algorithms. This also affects Maverick Synergy Java SSH API before 3.1.0-SNAPSHOT, Dropbear through 2022.83, Ssh before 5.1.1 in Erlang/OTP, PuTTY before 0.80, AsyncSSH before 2.14.2, golang.org/x/crypto before 0.17.0, libssh before 0.10.6, libssh2 through 1.11.0, Thorn Tech SFTP Gateway before 3.4.6, Tera Term before 5.1, Paramiko before 3.4.0, jsch before 0.2.15, SFTPGo before 2.5.6, Netgate pfSense Plus through 23.09.1, Netgate pfSense CE through 2.7.2, HPN-SSH through 18.2.0, ProFTPD before 1.3.8b (and before 1.3.9rc2), ORYX CycloneSSH before 2.3.4, NetSarang XShell 7 before Build 0144, CrushFTP before 10.6.0, ConnectBot SSH library before 2.2.22, Apache MINA sshd through 2.11.0, sshj through 0.37.0, TinySSH through 20230101, trilead-ssh2 6401, LANCOM LCOS and LANconfig, FileZilla before 3.66.4, Nova before 11.8, PKIX-SSH before 14.4, SecureCRT before 9.4.3, Transmit5 before 5.10.4, Win32-OpenSSH before 9.5.0.0p1-Beta, WinSCP before 6.2.2, Bitvise SSH Server before 9.32, Bitvise SSH Client before 9.33, KiTTY through 0.76.1.13, the net-ssh gem 7.2.0 for Ruby, the mscdex ssh2 module before 1.15.0 for Node.js, the thrussh library before 0.35.1 for Rust, and the Russh crate before 0.40.2 for Rust.

Reason: This is in NiFi Registry Web API

CVE-2018-17196

In Apache Kafka versions between 0.11.0.0 and 2.1.0, it is possible to manually craft a Produce request which bypasses transaction/idempotent ACL validation. Only authenticated clients with Write permission on the respective topics are able to exploit this vulnerability. Users should upgrade to 2.1.1 or later where this vulnerability has been fixed.

Reason: Not fixed; we recommend customers use nifi-kafka-2-6-nar*

CVE-2024-39901: OpenSearch Observability Does Not Properly Restrict Access To Private Tenant Resources

OpenSearch Observability is collection of plugins and applications that visualize data-driven events. An issue in the OpenSearch observability plugins allows unintended access to private tenant resources like notebooks. The system did not properly check if the user was the resource author when accessing resources in a private tenant, leading to potential data being revealed. The patches are included in OpenSearch 2.14.

Reason: CDP 7.1.9 SP1 related dependency, can’t be fixed in NiFi

CVE-2023-39017

quartz-jobs 2.3.2 and below was discovered to contain a code injection vulnerability in the component org.quartz.jobs.ee.jms.SendQueueMessageJob.execute. This vulnerability is exploited via passing an unchecked argument. NOTE: this is disputed by multiple parties because it is not plausible that untrusted user input would reach the code location where injection must occur.

Reason: RC1 is still the latest available version.

CVE-2024-29025: Netty HttpPostRequestDecoder Can OOM

Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. The `HttpPostRequestDecoder` can be tricked to accumulate data. While the decoder can store items on the disk if configured so, there are no limits to the number of fields the form can have, an attacher can send a chunked post consisting of many small fields that will be accumulated in the `bodyListHttpData` list. The decoder cumulates bytes in the `undecodedChunk` buffer until it can decode a field, this field can cumulate data without limits. This vulnerability is fixed in 4.1.108.Final.

Reason: CDP issue. Can't fix it in NiFi.

CVE-2023-51437: Apache Pulsar: Timing Attack In SASL Token Signature Verification

Observable timing discrepancy vulnerability in Apache Pulsar SASL Authentication Provider can allow an attacker to forge a SASL Role Token that will pass signature verification. Users are recommended to upgrade to version 2.11.3, 3.0.2, or 3.1.1 which fixes the issue. Users should also consider updating the configured secret in the `saslJaasServerRoleTokenSignerSecretPath` file. Any component matching an above version running the SASL Authentication Provider is affected. That includes the Pulsar Broker, Proxy, Websocket Proxy, or Function Worker. 2.11 Pulsar users should upgrade to at least 2.11.3. 3.0 Pulsar users should upgrade to at least 3.0.2. 3.1 Pulsar users should upgrade to at least 3.1.1. Any users running Pulsar 2.8, 2.9, 2.10, and earlier should upgrade to one of the above patched versions. For additional details on this attack vector, please refer to https://codahale.com/a-lesson-in-timing-attacks/ .

Reason: Upgrading to a higher nifi-pulsar-nar version is not possible; 1.20.0 requires Java 17 and 2.0 requires Java 21.

CVE-2024-21490

This affects versions of the package angular from 1.3.0. A regular expression used to split the value of the ng-srcset directive is vulnerable to super-linear runtime due to backtracking. With large carefully-crafted input, this can result in catastrophic backtracking and cause a denial of service. **Note:** This package is EOL and will not receive any updates to address this issue. Users should migrate to [@angular/core](https://www.npmjs.com/package/@angular/core).

Reason: An AngularJS update is not possible; this framework is no longer supported and will be replaced by Angular 2+ in NiFi 2.x.

CVE-2023-50386: Apache Solr: Backup/Restore APIs allow for deployment of executables in malicious ConfigSets

Improper Control of Dynamically-Managed Code Resources, Unrestricted Upload of File with Dangerous Type, Inclusion of Functionality from Untrusted Control Sphere vulnerability in Apache Solr.This issue affects Apache Solr: from 6.0.0 through 8.11.2, from 9.0.0 before 9.4.1. In the affected versions, Solr ConfigSets accepted Java jar and class files to be uploaded through the ConfigSets API. When backing up Solr Collections, these configSet files would be saved to disk when using the LocalFileSystemRepository (the default for backups). If the backup was saved to a directory that Solr uses in its ClassPath/ClassLoaders, then the jar and class files would be available to use with any ConfigSet, trusted or untrusted. When Solr is run in a secure way (Authorization enabled), as is strongly suggested, this vulnerability is limited to extending the Backup permissions with the ability to add libraries. Users are recommended to upgrade to version 8.11.3 or 9.4.1, which fix the issue. In these versions, the following protections have been added: * Users are no longer able to upload files to a configSet that could be executed via a Java ClassLoader. * The Backup API restricts saving backups to directories that are used in the ClassLoader.

Reason: Dependency coming from CDP

CVE-2023-50291: Apache Solr: System Property redaction logic inconsistency can lead to leaked passwords

Insufficiently Protected Credentials vulnerability in Apache Solr. This issue affects Apache Solr: from 6.0.0 through 8.11.2, from 9.0.0 before 9.3.0. One of the two endpoints that publishes the Solr process' Java system properties, /admin/info/properties, was only setup to hide system properties that had "password" contained in the name. There are a number of sensitive system properties, such as "basicauth" and "aws.secretKey" do not contain "password", thus their values were published via the "/admin/info/properties" endpoint. This endpoint populates the list of System Properties on the home screen of the Solr Admin page, making the exposed credentials visible in the UI. This /admin/info/properties endpoint is protected under the "config-read" permission. Therefore, Solr Clouds with Authorization enabled will only be vulnerable through logged-in users that have the "config-read" permission. Users are recommended to upgrade to version 9.3.0 or 8.11.3, which fixes the issue. A single option now controls hiding Java system property for all endpoints, "-Dsolr.hiddenSysProps". By default all known sensitive properties are hidden (including "-Dbasicauth"), as well as any property with a name containing "secret" or "password". Users who cannot upgrade can also use the following Java system property to fix the issue: '-Dsolr.redaction.system.pattern=.*(password|secret|basicauth).*'

Reason: Dependency coming from CDP

CVE-2023-50292: Apache Solr: Solr Schema Designer blindly "trusts" all configsets, possibly leading to RCE by unauthenticated users

Incorrect Permission Assignment for Critical Resource, Improper Control of Dynamically-Managed Code Resources vulnerability in Apache Solr. This issue affects Apache Solr: from 8.10.0 through 8.11.2, from 9.0.0 before 9.3.0. The Schema Designer was introduced to allow users to more easily configure and test new Schemas and configSets. However, when the feature was created, the "trust" (authentication) of these configSets was not considered. External library loading is only available to configSets that are "trusted" (created by authenticated users), thus non-authenticated users are unable to perform Remote Code Execution. Since the Schema Designer loaded configSets without taking their "trust" into account, configSets that were created by unauthenticated users were allowed to load external libraries when used in the Schema Designer. Users are recommended to upgrade to version 9.3.0, which fixes the issue.

Reason: Dependency coming from CDP

CVE-2023-50298: Apache Solr: Solr can expose ZooKeeper credentials via Streaming Expressions

Exposure of Sensitive Information to an Unauthorized Actor vulnerability in Apache Solr.This issue affects Apache Solr: from 6.0.0 through 8.11.2, from 9.0.0 before 9.4.1. Solr Streaming Expressions allows users to extract data from other Solr Clouds, using a "zkHost" parameter. When original SolrCloud is setup to use ZooKeeper credentials and ACLs, they will be sent to whatever "zkHost" the user provides. An attacker could setup a server to mock ZooKeeper, that accepts ZooKeeper requests with credentials and ACLs and extracts the sensitive information, then send a streaming expression using the mock server's address in "zkHost". Streaming Expressions are exposed via the "/streaming" handler, with "read" permissions. Users are recommended to upgrade to version 8.11.3 or 9.4.1, which fix the issue. From these versions on, only zkHost values that have the same server address (regardless of chroot), will use the given ZooKeeper credentials and ACLs when connecting.

Reason: Dependency coming from CDP

Known issues in Cloudera Flow Management 2.1.7

Known issues

LDAP authentication error caused by special characters in passwords
After upgrading to Cloudera Flow Management 2.1.7 SP1, 2.1.7, 2.1.6 SP1 or 2.1.6 versions, you may encounter the following error if you use LDAP:
org.springframework.ldap.AuthenticationException: LDAP: error code 49
The issue is caused by the use of special characters in LDAP passwords, which are configured under the "Manager Password" property.

To resolve this issue, upgrade to Cloudera Flow Management 2.1.7.1001.

If you prefer to upgrade to a Cloudera Flow Management 2.1.6 version, a new HOTFIX build is needed to address this issue. Contact Cloudera Support for assistance.

Truststore changes with Ranger Plugin causing TLS handshake errors
When using the Ranger plugin, the default truststore changes from cacerts to the AutoTLS truststore (cm-auto-global_truststore.jks). Because the AutoTLS truststore contains only internal CA certificates and not the public root certificates, connections to endpoints signed by common CAs may fail with TLS handshake errors, causing service outages.

Add the required certificates manually to the Cloudera Manager truststore.

  1. Open Cloudera Manager and navigate to Administration > Security > Update Auto-TLS Truststore.
  2. Import the certificates in PEM format.
Configuration of java.arg.7
A property has been added for defining java.arg.7, which provides the ability to override the default location of the temporary directory used by the JDK. By default, this value is empty in Cloudera Manager. If you already use this argument for another purpose, change it to a different, unused argument number (or use letters instead, for example java.arg.mycustomargument). Not changing the argument can impact functionality after upgrades or migrations.
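For example, to point the JDK temporary directory at a dedicated location, the argument could be set as follows (a sketch; the directory path is an example and must exist and be writable by the NiFi user):

```
java.arg.7=-Djava.io.tmpdir=/var/lib/nifi/tmp
```

java.io.tmpdir is the standard JVM system property controlling the temporary directory.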
JDK error
JDK 8 version u252 is supported. Any lower version may result in the following error when NiFi starts:
SHA512withRSAandMGF1 Signature not available
When using Java 8, only versions u252 and above are supported.
JDK limitation
JDK 8u271, JDK 8u281, and JDK 8u291 may cause socket leak issues in NiFi due to JDK-8245417 and JDK-8256818. Verify the build version of your JDK. Later builds are fixed as described in JDK-8256818.
When using Java 8, only versions u252 and above are supported.
Kudu Client

There is an issue in the Kudu client preventing the creation of new tables using the NiFi processors. The table needs to exist before NiFi tries to push data into it. You may see the following error when this issue arises:

Caused by: org.apache.kudu.client.NonRecoverableException: failed to wait for Hive Metastore notification log listener to catch up: failed to retrieve notification log events: failed to open Hive Metastore connection: SASL(-15): mechanism too weak for this user
Verify the necessary table exists in Kudu.
NiFi Node Connection test failures
In Cloudera Flow Management 2.1.3, Cloudera Manager includes a new health check feature. The health check alerts users if a NiFi instance is running but disconnected from the NiFi cluster. For this health check to be successful, you must update a Ranger policy. There is a known issue when the NiFi service is running but the NiFi Node(s) report Bad Health due to the NiFi Node Connection test.
Update the policy:
  1. From the Ranger UI, access the Controller policy for the NiFi service.
  2. Verify that the nifi group is set in the policy.
  3. Add the nifi user to the policy with READ permissions.
NiFi UI Performance considerations
A known issue in Chrome 92.x causes significant slowness in the NiFi UI and may lead to high CPU consumption.

For more information, see the Chrome Known Issues documentation at 1235045.

Use another version of Chrome or a different browser.
SSHJ version change and key negotiation issue with old SSH servers
The ListSFTP and PutSFTP processors fail with the following error when using the legacy ssh-rsa algorithm for authentication:
UserAuthException: Exhausted available authentication methods
Set the Key Algorithms Allowed property in PutSFTP to ssh-rsa.
KeyStoreException: placeholder not found
After an upgrade, NiFi may fail to start with the following error:
WARN org.apache.nifi.web.server.JettyServer: Failed to start web server... shutting down.
java.security.KeyStoreException: placeholder not found

The error is caused by missing configuration for the type of the keystore and truststore files.

  1. Go to Cloudera Manager -> NiFi service -> Configuration.
  2. Add the following properties to the NiFi Node Advanced Configuration Snippet (Safety Valve) for staging/nifi.properties.xml:
    nifi.security.keystoreType=[value]
    nifi.security.truststoreType=[value]

    Where [value] must be PKCS12, JKS, or BCFKS. JKS is the preferred type; BCFKS and PKCS12 files are loaded with the BouncyCastle provider.

  3. Restart NiFi.
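If you are unsure which type an existing keystore file is, one way to check (a hedged sketch; on a real node you would inspect your actual keystore path) is to look at the file's leading bytes: JKS keystores begin with the magic bytes fe ed fe ed, while PKCS12 keystores are DER-encoded and begin with 30.

```shell
# Demo on a scratch file: write the JKS magic header, then inspect it.
# On a real system, run the head/od line against your keystore path instead.
printf '\376\355\376\355' > /tmp/demo-keystore
# JKS files begin with fe ed fe ed; PKCS12 (DER) files begin with 30.
head -c 4 /tmp/demo-keystore | od -An -tx1
```

This only identifies the container format; the keytool utility shipped with the JDK can also report the keystore type in more detail.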
InferAvroSchema may fail when inferring schema for JSON data
In Apache NiFi 1.17, the dependency on Apache Avro was upgraded to 1.11.0. However, the InferAvroSchema processor depends on the hadoop-libraries NAR, from which the Avro version comes, causing a NoSuchMethodError exception. Having well-defined schemas ensures consistent behavior, allows for proper schema versioning, and prevents downstream systems from generating errors because of unexpected schema changes. Besides, schema inference may not always be 100% accurate and can be an expensive operation in terms of performance.

Use the ExtractRecordSchema processor to infer the schema of your data with an appropriate reader and add the schema as a FlowFile attribute.

CVEs not fixed

The following Common Vulnerabilities and Exposures (CVEs) remain unresolved in Cloudera Flow Management 2.1.7.

CVE-2020-36518

jackson-databind before 2.13.0 allows a Java StackOverflow exception and denial of service via a large depth of nested objects.

Reason: com.cloudera:jwtprovider-knox:jar:shaded contains jackson-databind:2.10.5.1, and the dependency cannot be excluded upstream because it uses a downstream-specific package ('com.cloudera').

CVE-2021-46877

jackson-databind 2.10.x through 2.12.x before 2.12.6 and 2.13.x before 2.13.1 allows attackers to cause a denial of service (2 GB transient heap usage per read) in uncommon situations involving JsonNode JDK serialization.

Reason: com.cloudera:jwtprovider-knox:jar:shaded contains jackson-databind:2.10.5.1, and the dependency cannot be excluded upstream because it uses a downstream-specific package ('com.cloudera').

CVE-2022-42003

In FasterXML jackson-databind before versions 2.13.4.1 and 2.12.17.1, resource exhaustion can occur because of a lack of a check in primitive value deserializers to avoid deep wrapper array nesting, when the UNWRAP_SINGLE_VALUE_ARRAYS feature is enabled.

Reason: com.cloudera:jwtprovider-knox:jar:shaded contains jackson-databind:2.10.5.1, and the dependency cannot be excluded upstream because it uses a downstream-specific package ('com.cloudera').

CVE-2022-42004

In FasterXML jackson-databind before 2.13.4, resource exhaustion can occur because of a lack of a check in BeanDeserializer._deserializeFromArray to prevent use of deeply nested arrays. An application is vulnerable only with certain customized choices for deserialization.

Reason: com.cloudera:jwtprovider-knox:jar:shaded contains jackson-databind:2.10.5.1, and the dependency cannot be excluded upstream because it uses a downstream-specific package ('com.cloudera').

CVE-2021-23463: XML External Entity (XXE) Injection

The package com.h2database:h2 from 1.4.198 and before 2.0.202 are vulnerable to XML External Entity (XXE) Injection via the org.h2.jdbc.JdbcSQLXML class object, when it receives parsed string data from org.h2.jdbc.JdbcResultSet.getSQLXML() method. If it executes the getSource() method when the parameter is DOMSource.class it will trigger the vulnerability.

Reason: The h2-database-v14 package uses this vulnerable version. Cloudera recommends removing the NAR when not needed.

CVE-2021-42392

The org.h2.util.JdbcUtils.getConnection method of the H2 database takes as parameters the class name of the driver and URL of the database. An attacker may pass a JNDI driver name and a URL leading to LDAP or RMI servers, causing remote code execution. This can be exploited through various attack vectors, most notably through the H2 Console which leads to unauthenticated remote code execution.

Reason: The h2-database-v14 package uses this vulnerable version. Cloudera recommends removing the NAR when not needed.

CVE-2022-23221

H2 Console before 2.1.210 allows remote attackers to execute arbitrary code via a jdbc:h2:mem JDBC URL containing the IGNORE_UNKNOWN_SETTINGS=TRUE;FORBID_CREATION=FALSE;INIT=RUNSCRIPT substring, a different vulnerability than CVE-2021-42392.

Reason: The h2-database-v14 package uses this vulnerable version. Cloudera recommends removing the NAR when not needed.

CVE-2022-45868

The web-based admin console in H2 Database Engine before 2.2.220 can be started via the CLI with the argument -webAdminPassword, which allows the user to specify the password in cleartext for the web admin console. Consequently, a local user (or an attacker that has obtained local access through some means) would be able to discover the password by listing processes and their arguments. NOTE: the vendor states "This is not a vulnerability of H2 Console ... Passwords should never be passed on the command line and every qualified DBA or system administrator is expected to know that." Nonetheless, the issue was fixed in 2.2.220.

Reason: The h2-database-v14 package uses this vulnerable version. Cloudera recommends removing the NAR when not needed.

CVE-2018-14335

An issue was discovered in H2 1.4.197. Insecure handling of permissions in the backup function allows attackers to read sensitive files (outside of their permissions) via a symlink to a fake database file.

Reason: No suggested resolution is available yet.

CVE-2023-36415: Azure Identity SDK Remote Code Execution Vulnerability

Azure Identity SDK Remote Code Execution Vulnerability

Reason: No suggested resolution is available yet.

CVE-2020-8908: Temp directory permission issue in Guava

A temp directory creation vulnerability exists in all versions of Guava, allowing an attacker with access to the machine to potentially access data in a temporary directory created by the Guava API com.google.common.io.Files.createTempDir(). By default, on unix-like systems, the created directory is world-readable (readable by an attacker with access to the system). The method in question has been marked @Deprecated in versions 30.0 and later and should not be used. For Android developers, we recommend choosing a temporary directory API provided by Android, such as context.getCacheDir(). For other Java developers, we recommend migrating to the Java 7 API java.nio.file.Files.createTempDirectory() which explicitly configures permissions of 700, or configuring the Java runtime's java.io.tmpdir system property to point to a location whose permissions are appropriately configured.

Reason: The gcs-connector found in the POM file is shaded by Ranger.

CVE-2023-2976: Use of temporary directory for file creation in `FileBackedOutputStream` in Guava

Use of Java's default temporary directory for file creation in `FileBackedOutputStream` in Google Guava versions 1.0 to 31.1 on Unix systems and Android Ice Cream Sandwich allows other users and apps on the machine with access to the default Java temporary directory to be able to access the files created by the class. Even though the security vulnerability is fixed in version 32.0.0, we recommend using version 32.0.1 as version 32.0.0 breaks some functionality under Windows.

Reason: The gcs-connector found in the POM file is shaded by Ranger.

CVE-2021-22569: Denial of Service of protobuf-java parsing procedure

An issue in protobuf-java allowed the interleaving of com.google.protobuf.UnknownFieldSet fields in such a way that would be processed out of order. A small malicious payload can occupy the parser for several minutes by creating large numbers of short-lived objects that cause frequent, repeated pauses. We recommend upgrading libraries beyond the vulnerable versions.

Reason: This is protobuf shaded by Hadoop.

CVE-2022-3171: Memory handling vulnerability in ProtocolBuffers Java core and lite

A parsing issue with binary data in protobuf-java core and lite versions prior to 3.21.7, 3.20.3, 3.19.6 and 3.16.3 can lead to a denial of service attack. Inputs containing multiple instances of non-repeated embedded messages with repeated or unknown fields cause objects to be converted back-n-forth between mutable and immutable forms, resulting in potentially long garbage collection pauses. We recommend updating to the versions mentioned above.

Reason: This is protobuf shaded by Hadoop.

CVE-2022-3509: Parsing issue in protobuf textformat

A parsing issue similar to CVE-2022-3171, but with textformat in protobuf-java core and lite versions prior to 3.21.7, 3.20.3, 3.19.6 and 3.16.3 can lead to a denial of service attack. Inputs containing multiple instances of non-repeated embedded messages with repeated or unknown fields cause objects to be converted back-n-forth between mutable and immutable forms, resulting in potentially long garbage collection pauses. We recommend updating to the versions mentioned above.

Reason: This is protobuf shaded by Hadoop.

CVE-2019-11358

jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.

Reason: JQuery version upgrade is needed in the UI.

CVE-2020-11022: Potential XSS vulnerability in jQuery

In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.

Reason: JQuery version upgrade is needed in the UI.

CVE-2020-11023: Potential XSS vulnerability in jQuery

In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.

Reason: JQuery version upgrade is needed in the UI.

CVE-2021-29425: Possible limited path traversal vulnerabily in Apache Commons IO

In Apache Commons IO before 2.7, When invoking the method FileNameUtils.normalize with an improper input string, like "//../foo", or "\\..\foo", the result would be the same value, thus possibly providing access to files in the parent directory, but not further above (thus "limited" path traversal), if the calling code would use the result to construct a path value.

Reason: jwtprovider-knox found in the POM file is shaded by Ranger.

CVE-2023-1370: Stack exhaustion in json-smart leads to denial of service when parsing malformed JSON

Json-smart (https://netplex.github.io/json-smart/) is a performance focused JSON processor lib. When reaching a '[' or '{' character in the JSON input, the code parses an array or an object respectively. It was discovered that the code does not have any limit to the nesting of such arrays or objects. Since the parsing of nested arrays and objects is done recursively, nesting too many of them can cause stack exhaustion (stack overflow) and crash the software.

Reason: jwtprovider-knox found in the POM file is shaded by Ranger.

CVE-2018-17196

In Apache Kafka versions between 0.11.0.0 and 2.1.0, it is possible to manually craft a Produce request which bypasses transaction/idempotent ACL validation. Only authenticated clients with Write permission on the respective topics are able to exploit this vulnerability. Users should upgrade to 2.1.1 or later where this vulnerability has been fixed.

Reason: Not fixed. Cloudera recommends using nifi-kafka-2-6-nar* instead.

CVE-2023-48795

The SSH transport protocol with certain OpenSSH extensions, found in OpenSSH before 9.6 and other products, allows remote attackers to bypass integrity checks such that some packets are omitted (from the extension negotiation message), and a client and server may consequently end up with a connection for which some security features have been downgraded or disabled, aka a Terrapin attack. This occurs because the SSH Binary Packet Protocol (BPP), implemented by these extensions, mishandles the handshake phase and mishandles use of sequence numbers. For example, there is an effective attack against SSH's use of ChaCha20-Poly1305 (and CBC with Encrypt-then-MAC). The bypass occurs in chacha20-poly1305@openssh.com and (if CBC is used) the -etm@openssh.com MAC algorithms. This also affects Maverick Synergy Java SSH API before 3.1.0-SNAPSHOT, Dropbear through 2022.83, Ssh before 5.1.1 in Erlang/OTP, PuTTY before 0.80, AsyncSSH before 2.14.2, golang.org/x/crypto before 0.17.0, libssh before 0.10.6, libssh2 through 1.11.0, Thorn Tech SFTP Gateway before 3.4.6, Tera Term before 5.1, Paramiko before 3.4.0, jsch before 0.2.15, SFTPGo before 2.5.6, Netgate pfSense Plus through 23.09.1, Netgate pfSense CE through 2.7.2, HPN-SSH through 18.2.0, ProFTPD before 1.3.8b (and before 1.3.9rc2), ORYX CycloneSSH before 2.3.4, NetSarang XShell 7 before Build 0144, CrushFTP before 10.6.0, ConnectBot SSH library before 2.2.22, Apache MINA sshd through 2.11.0, sshj through 0.37.0, TinySSH through 20230101, trilead-ssh2 6401, LANCOM LCOS and LANconfig, FileZilla before 3.66.4, Nova before 11.8, PKIX-SSH before 14.4, SecureCRT before 9.4.3, Transmit5 before 5.10.4, Win32-OpenSSH before 9.5.0.0p1-Beta, WinSCP before 6.2.2, Bitvise SSH Server before 9.32, Bitvise SSH Client before 9.33, KiTTY through 0.76.1.13, the net-ssh gem 7.2.0 for Ruby, the mscdex ssh2 module before 1.15.0 for Node.js, the thrussh library before 0.35.1 for Rust, and the Russh crate before 0.40.2 for Rust.

Reason: This is in the NiFi Registry Web API.

CVE-2023-36479: Jetty vulnerable to errant command quoting in CGI Servlet

Eclipse Jetty Canonical Repository is the canonical repository for the Jetty project. Users of the CgiServlet with a very specific command structure may have the wrong command executed. If a user sends a request to a org.eclipse.jetty.servlets.CGI Servlet for a binary with a space in its name, the servlet will escape the command by wrapping it in quotation marks. This wrapped command, plus an optional command prefix, will then be executed through a call to Runtime.exec. If the original binary name provided by the user contains a quotation mark followed by a space, the resulting command line will contain multiple tokens instead of one. This issue was patched in version 9.4.52, 10.0.16, 11.0.16 and 12.0.0-beta2.

Reason: Jetty 10 requires Java 11.

CVE-2018-1000840

Processing Foundation Processing version 3.4 and earlier contains an XML External Entity (XXE) vulnerability in the loadXML() function that allows an attacker to read arbitrary files and exfiltrate their contents via HTTP requests. The attack is exploitable when the victim uses Processing to parse a crafted XML document.

Reason: nifi-xml-processing is marked as a vulnerability, with no clear solution.

CVE-2020-5408: Dictionary attack with Spring Security queryable text encryptor

Spring Security versions 5.3.x prior to 5.3.2, 5.2.x prior to 5.2.4, 5.1.x prior to 5.1.10, 5.0.x prior to 5.0.16 and 4.2.x prior to 4.2.16 use a fixed null initialization vector with CBC Mode in the implementation of the queryable text encryptor. A malicious user with access to the data that has been encrypted using such an encryptor may be able to derive the unencrypted values using a dictionary attack.

Reason: The solution suggests downgrading the current version of the spring-security-crypto dependency, which is currently not feasible.

CVEs excluded based on the NiFi exclusion list

CVE-2023-4759: Improper handling of case insensitive filesystems in Eclipse JGit allows arbitrary file write

Arbitrary File Overwrite in Eclipse JGit <= 6.6.0 In Eclipse JGit, all versions <= 6.6.0.202305301015-r, a symbolic link present in a specially crafted git repository can be used to write a file to locations outside the working tree when this repository is cloned with JGit to a case-insensitive filesystem, or when a checkout from a clone of such a repository is performed on a case-insensitive filesystem. This can happen on checkout (DirCacheCheckout), merge (ResolveMerger via its WorkingTreeUpdater), pull (PullCommand using merge), and when applying a patch (PatchApplier). This can be exploited for remote code execution (RCE), for instance if the file written outside the working tree is a git filter that gets executed on a subsequent git command. The issue occurs only on case-insensitive file systems, like the default file systems on Windows and macOS. The user performing the clone or checkout must have the rights to create symbolic links for the problem to occur, and symbolic links must be enabled in the git configuration. Setting the git configuration option core.symlinks = false before checking out avoids the problem. The issue was fixed in Eclipse JGit version 6.6.1.202309021850-r and 6.7.0.202309050840-r, available via Maven Central https://repo1.maven.org/maven2/org/eclipse/jgit/ and repo.eclipse.org https://repo.eclipse.org/content/repositories/jgit-releases/ . A backport is available in 5.13.3 starting from 5.13.3.202401111512-r. The JGit maintainers would like to thank RyotaK for finding and reporting this issue.

CVE-2024-21634: Ion Java StackOverflow vulnerability

Amazon Ion is a Java implementation of the Ion data notation. Prior to version 1.10.5, a potential denial-of-service issue exists in `ion-java` for applications that use `ion-java` to deserialize Ion text encoded data, or deserialize Ion text or binary encoded data into the `IonValue` model and then invoke certain `IonValue` methods on that in-memory representation. An actor could craft Ion data that, when loaded by the affected application and/or processed using the `IonValue` model, results in a `StackOverflowError` originating from the `ion-java` library. The patch is included in `ion-java` 1.10.5. As a workaround, do not load data which originated from an untrusted source or that could have been tampered with.