Known issues

Review the list of known issues in Cloudera Flow Management (CFM).

Known issues in CFM 2.1.6

Review the list of known issues in Cloudera Flow Management (CFM) 2.1.6.

LDAP authentication error caused by special characters in passwords
After upgrading to CFM 2.1.7 SP1, 2.1.7, 2.1.6 SP1, or 2.1.6, you may encounter the following error if you use LDAP:
org.springframework.ldap.AuthenticationException: LDAP: error code 49
The issue is caused by the use of special characters in LDAP passwords, which are configured under the "Manager Password" property.
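For reference, this property ends up in NiFi's login-identity-providers.xml, which Cloudera Manager generates from the NiFi LDAP settings. A minimal sketch with illustrative values only:

    <provider>
        <identifier>ldap-provider</identifier>
        <class>org.apache.nifi.ldap.LdapProvider</class>
        <property name="Manager DN">cn=admin,dc=example,dc=com</property>
        <!-- Passwords with special characters like these can trigger error code 49 -->
        <property name="Manager Password">p@ss{w0rd}!</property>
    </provider>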

To resolve this issue, upgrade to CFM 2.1.7.1001-2.

If you prefer to stay on a CFM 2.1.6 version, a new hotfix build is needed to address this issue. Contact Cloudera Support for assistance.

Truststore changes with Ranger Plugin causing TLS handshake errors

When the Ranger plugin is used, the default truststore changes from cacerts to the AutoTLS truststore (cm-auto-global_truststore.jks). Because the AutoTLS truststore contains only internal CA certificates and not the public root certificates, connections to endpoints signed by common public CAs may fail with TLS handshake errors, causing service outages.

Add the required certificates manually to the Cloudera Manager truststore:

  1. Open Cloudera Manager and navigate to Administration > Security > Update Auto-TLS Truststore.

  2. Import the certificates in PEM format.
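The certificates must be available as PEM files before they can be imported in step 2. If the missing public root CA is present in your JDK's default cacerts store, one way to export it is with keytool. A minimal sketch, assuming the default JDK 8 paths and the standard changeit password (the alias is illustrative; the cacerts location varies by JDK version):

    # Find the alias of the certificate you need
    keytool -list -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit
    # Export it in PEM format (-rfc) so it can be imported in step 2
    keytool -exportcert -rfc -alias digicertglobalrootca \
        -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit \
        -file digicert-root.pem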

Snowflake - NoSuchMethodError
Due to the issue documented in NIFI-11905, you may encounter the following error when using Snowflake components:
java.lang.NoSuchMethodError: 'net.snowflake.client.jdbc.telemetry.Telemetry net.snowflake.client.jdbc.telemetry.TelemetryClient.createSessionlessTelemetry(net.snowflake.client.jdbc.internal.apache.http.impl.client.CloseableHttpClient, java.lang.String)'
    at net.snowflake.ingest.connection.TelemetryService.<init>(TelemetryService.java:68)
This issue has been resolved with the introduction of NIFI-12126, which will be available in CFM 2.1.6 SP1. In the interim, Cloudera recommends using the NARs from the CFM 2.1.5 SP1 release.
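A minimal sketch of swapping in the older NARs on each NiFi node; the library path and NAR file names are assumptions that depend on your installation and on the exact CFM 2.1.5 SP1 artifacts:

    # All paths and file names below are illustrative
    cd /opt/cloudera/parcels/CFM/NIFI/lib        # NiFi library directory (assumption)
    mkdir -p /tmp/nar-backup
    mv nifi-snowflake-*.nar /tmp/nar-backup/     # set aside the CFM 2.1.6 Snowflake NARs
    cp /path/to/cfm-2.1.5-sp1/nifi-snowflake-*.nar .
    # Restart the NiFi service from Cloudera Manager afterwards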
Per Process Group Logging
The Per Process Group logging feature is currently not working even when you specify a log suffix in the configuration of a process group. As a result, you may not observe the expected logging behavior.
No fix or workaround is available until a Service Pack is released for CFM 2.1.6. If you encounter this problem, contact Cloudera Support to request a fix.
Configuration of java.arg.7
A property has been added for defining java.arg.7, which provides the ability to override the default location of the temporary directory used by the JDK. By default, this value is empty in Cloudera Manager. If you already use this argument number for another purpose, change it to a different, unused argument number (or use letters instead: java.arg.mycustomargument). Not changing the argument can impact functionality after upgrades and migrations.
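For illustration, a sketch of what such an entry looks like in the NiFi bootstrap configuration (the directory value is an assumption; choose a path appropriate for your environment):

    # Override the JDK temporary directory through java.arg.7
    java.arg.7=-Djava.io.tmpdir=/var/tmp/nifi
    # If java.arg.7 is already taken by a custom argument, move that argument
    # to an unused number or to a named argument instead:
    java.arg.mycustomargument=-Dmy.custom.flag=true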
JDK error
JDK 8 version u252 is supported. Any lower version may result in this error when NiFi starts:
SHA512withRSAandMGF1 Signature not available
When using Java 8, only versions u252 and above are supported.
JDK limitation
JDK 8u271, JDK 8u281, and JDK 8u291 may cause socket leak issues in NiFi due to JDK-8245417 and JDK-8256818. Verify the build version of your JDK. Later builds are fixed as described in JDK-8256818.
When using Java 8, only versions u252 and above are supported.
Kudu Client

There is an issue in the Kudu client that prevents the creation of new tables using the NiFi processors. The table must exist before NiFi tries to push data into it. You may see the following error when this issue arises:

Caused by: org.apache.kudu.client.NonRecoverableException: failed to wait for Hive Metastore notification log listener to catch up: failed to retrieve notification log events: failed to open Hive Metastore connection: SASL(-15): mechanism too weak for this user
Verify that the necessary table exists in Kudu.
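One way to create the table ahead of time is with Impala DDL, for example from impala-shell. A minimal sketch (table name, columns, and partitioning are assumptions):

    -- Create the Kudu table before NiFi writes to it (illustrative schema)
    CREATE TABLE my_kudu_table (
      id BIGINT,
      payload STRING,
      PRIMARY KEY (id)
    )
    PARTITION BY HASH (id) PARTITIONS 4
    STORED AS KUDU;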
NiFi Node Connection test failures
In CFM 2.1.3, Cloudera Manager introduced a new health check feature that alerts users if a NiFi instance is running but disconnected from the NiFi cluster. For this health check to succeed, you must update a Ranger policy. Otherwise, there is a known issue where the NiFi service is running but the NiFi node(s) report Bad Health due to the NiFi Node Connection test.
Update the policy:
  1. From the Ranger UI, access the Controller policy for the NiFi service.
  2. Verify the nifi group is set in the policy.
  3. Add the nifi user to the policy with READ permissions.
NiFi UI Performance considerations
A known issue in Chrome 92.x causes significant slowness in the NiFi UI and may lead to high CPU consumption.

For more information, see the Chrome Known Issues documentation at 1235045.

Use another version of Chrome or a different browser.
SSHJ version change and key negotiation issue with old SSH servers
ListSFTP and PutSFTP processors fail when using the legacy ssh-rsa algorithm for authentication with the following error:
UserAuthException: Exhausted available authentication methods
Set the Key Algorithms Allowed property in PutSFTP to ssh-rsa.
KeyStoreException: placeholder not found
After an upgrade, NiFi may fail to start with the following error:
WARN org.apache.nifi.web.server.JettyServer: Failed to start web server... shutting down.
java.security.KeyStoreException: placeholder not found

The error is caused by missing configuration for the type of the keystore and truststore files.

  1. Go to Cloudera Manager > NiFi service > Configuration.
  2. Add the following properties to the NiFi Node Advanced Configuration Snippet (Safety Valve) for staging/nifi.properties.xml:
    nifi.security.keystoreType=[value]
    nifi.security.truststoreType=[value]

    Where [value] must be PKCS12, JKS, or BCFKS. JKS is the preferred type; BCFKS and PKCS12 files are loaded with the BouncyCastle provider.

  3. Restart NiFi.
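As a concrete example for step 2, if both stores are JKS files, the snippet would contain:

    nifi.security.keystoreType=JKS
    nifi.security.truststoreType=JKS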
InferAvroSchema may fail when inferring schema for JSON data
In Apache NiFi 1.17, the dependency on Apache Avro was upgraded to 1.11.0. However, the InferAvroSchema processor depends on the hadoop-libraries NAR, from which its Avro version comes, causing a NoSuchMethodError exception. Having well-defined schemas ensures consistent behavior, allows for proper schema versioning, and prevents downstream systems from generating errors because of unexpected schema changes. Besides, schema inference may not always be 100% accurate and can be an expensive operation in terms of performance.

Use the ExtractRecordSchema processor to infer the schema of your data with an appropriate reader and add the schema as a FlowFile attribute.
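A minimal flow sketch, assuming JSON input (the processors named are standard NiFi components; check the ExtractRecordSchema documentation in your release for the exact attribute name it writes):

    source processor
      -> ExtractRecordSchema (Record Reader = JsonTreeReader)
      -> downstream record processors whose Record Writer reads the schema
         from the FlowFile attribute set by ExtractRecordSchema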

Known issues in CFM 2.1.6 SP1

Review the list of known issues in Cloudera Flow Management (CFM) 2.1.6 SP1.

Known issues

LDAP authentication error caused by special characters in passwords
After upgrading to CFM 2.1.7 SP1, 2.1.7, 2.1.6 SP1, or 2.1.6, you may encounter the following error if you use LDAP:
org.springframework.ldap.AuthenticationException: LDAP: error code 49
The issue is caused by the use of special characters in LDAP passwords, which are configured under the "Manager Password" property.

To resolve this issue, upgrade to CFM 2.1.7.1001-2.

If you prefer to stay on a CFM 2.1.6 version, a new hotfix build is needed to address this issue. Contact Cloudera Support for assistance.

CVEs not fixed

A fix is required on the HBase and/or Hive side; alternatively, the use of a different version of the affected component is recommended:

CVE-2014-0114

Apache Commons BeanUtils, as distributed in lib/commons-beanutils-1.8.0.jar in Apache Struts 1.x through 1.3.10 and in other products requiring commons-beanutils through 1.9.2, does not suppress the class property, which allows remote attackers to "manipulate" the ClassLoader and execute arbitrary code via the class parameter, as demonstrated by the passing of this parameter to the getClass method of the ActionForm object in Struts 1.

CVE-2014-3643

jersey: XXE via parameter entities not disabled by the jersey SAX parser

CVE-2017-9735

Jetty through 9.4.x is prone to a timing channel in util/security/Password.java, which makes it easier for remote attackers to obtain access by observing elapsed times before rejection of incorrect passwords.

CVE-2018-1320

Apache Thrift Java client library versions 0.5.0 through 0.11.0 can bypass SASL negotiation isComplete validation in the org.apache.thrift.transport.TSaslTransport class. An assert used to determine if the SASL handshake had successfully completed could be disabled in production settings making the validation incomplete.

CVE-2020-27216

In Eclipse Jetty versions 1.0 through 9.4.32.v20200930, 10.0.0.alpha1 through 10.0.0.beta2, and 11.0.0.alpha1 through 11.0.0.beta2, on Unix-like systems, the system's temporary directory is shared between all users on that system. A collocated user can observe the process of creating a temporary subdirectory in the shared temporary directory and race to complete the creation of the temporary subdirectory. If the attacker wins the race, they will have read and write permission to the subdirectory used to unpack web applications, including their WEB-INF/lib jar files and JSP files. If any code is ever executed out of this temporary directory, this can lead to a local privilege escalation vulnerability.

CVE-2022-37865

Apache Ivy allows creating/overwriting any file on the system

With Apache Ivy 2.4.0 an optional packaging attribute has been introduced that allows artifacts to be unpacked on the fly if they used pack200 or zip packaging. For artifacts using the "zip", "jar" or "war" packaging Ivy prior to 2.5.1 does not verify the target path when extracting the archive. An archive containing absolute paths or paths that try to traverse "upwards" using ".." sequences can then write files to any location on the local file system that the user executing Ivy has write access to. Ivy users of version 2.4.0 to 2.5.0 should upgrade to Ivy 2.5.1.

CVE-2022-37866: Apache Ivy allows path traversal in the presence of a malicious repository

When Apache Ivy downloads artifacts from a repository it stores them in the local file system based on a user-supplied "pattern" that may include placeholders for artifacts coordinates like the organization, module or version. If said coordinates contain "../" sequences - which are valid characters for Ivy coordinates in general - it is possible the artifacts are stored outside of Ivy's local cache or repository or can overwrite different artifacts inside of the local cache. In order to exploit this vulnerability an attacker needs collaboration by the remote repository as Ivy will issue http requests containing ".." sequences and a "normal" repository will not interpret them as part of the artifact coordinates. Users of Apache Ivy 2.0.0 to 2.5.1 should upgrade to Ivy 2.5.1.

CVE-2023-34610

An issue was discovered in json-io through 4.14.0 that allows attackers to cause a denial of service or other unspecified impacts via a crafted object that uses cyclic dependencies.

Not fixed for other reasons:

CVE-2018-17196

In Apache Kafka versions between 0.11.0.0 and 2.1.0, it is possible to manually craft a Produce request which bypasses transaction/idempotent ACL validation. Only authenticated clients with Write permission on the respective topics are able to exploit this vulnerability. Users should upgrade to 2.1.1 or later where this vulnerability has been fixed.

Reason: Cloudera recommends using nifi-kafka-2-6-nar*.

CVE-2018-20225

An issue was discovered in pip (all versions) because it installs the version with the highest version number, even if the user had intended to obtain a private package from a private index. This only affects use of the --extra-index-url option, and exploitation requires that the package does not already exist in the public index (and thus the attacker can put the package there with an arbitrary version number). NOTE: it has been reported that this is intended functionality and the user is responsible for using --extra-index-url securely

Reason: The 2.7.3 version in use is the latest available from jython-standalone.

CVE-2019-20916

The pip package before 19.2 for Python allows Directory Traversal when a URL is given in an install command, because a Content-Disposition header can have ../ in a filename, as demonstrated by overwriting the /root/.ssh/authorized_keys file. This occurs in _download_http_url in _internal/download.py.

Reason: The 2.7.3 version in use is the latest available from jython-standalone.

CVE-2020-9040

Couchbase Server Java SDK before 2.7.1.1 allows a potential attacker to forge an SSL certificate and pose as the intended peer. An attacker can leverage this flaw by crafting a cryptographically valid certificate that will be accepted by Java SDK's Netty component due to missing hostname verification.

Reason: Cloudera recommends removing nifi-couchbase*.nar.

CVE-2022-42889: Apache Commons Text prior to 1.10.0 allows RCE when applied to untrusted input due to insecure interpolation defaults

Apache Commons Text performs variable interpolation, allowing properties to be dynamically evaluated and expanded. The standard format for interpolation is "${prefix:name}", where "prefix" is used to locate an instance of org.apache.commons.text.lookup.StringLookup that performs the interpolation. Starting with version 1.5 and continuing through 1.9, the set of default Lookup instances included interpolators that could result in arbitrary code execution or contact with remote servers. These lookups are: - "script" - execute expressions using the JVM script execution engine (javax.script) - "dns" - resolve dns records - "url" - load values from urls, including from remote servers Applications using the interpolation defaults in the affected versions may be vulnerable to remote code execution or unintentional contact with remote servers if untrusted configuration values are used. Users are recommended to upgrade to Apache Commons Text 1.10.0, which disables the problematic interpolators by default.

Reason: The dependency is not fixed in nifi-kite-processors; it is recommended to remove the nifi-kite-processors NAR.

CVE-2022-46337: Apache Derby: LDAP injection vulnerability in authenticator

A cleverly devised username might bypass LDAP authentication checks. In LDAP-authenticated Derby installations, this could let an attacker fill up the disk by creating junk Derby databases. In LDAP-authenticated Derby installations, this could also allow the attacker to execute malware which was visible to and executable by the account which booted the Derby server. In LDAP-protected databases which weren't also protected by SQL GRANT/REVOKE authorization, this vulnerability could also let an attacker view and corrupt sensitive data and run sensitive database functions and procedures. Mitigation: Users should upgrade to Java 21 and Derby 10.17.1.0. Alternatively, users who wish to remain on older Java versions should build their own Derby distribution from one of the release families to which the fix was backported: 10.16, 10.15, and 10.14. Those are the releases which correspond, respectively, with Java LTS versions 17, 11, and 8.

Reason: The requested versions of org.apache.derby require Java 9 or higher.

CVE-2023-24998: Apache Commons FileUpload, Apache Tomcat: FileUpload DoS with excessive parts

Apache Commons FileUpload before 1.5 does not limit the number of request parts to be processed resulting in the possibility of an attacker triggering a DoS with a malicious upload or series of uploads. Note that, like all of the file upload limits, the new configuration option (FileUploadBase#setFileCountMax) is not enabled by default and must be explicitly configured.

Reason: It is not possible to change atlas.version to anything other than 2.1.0.7.1.7.1000-141. The dependency was excluded from the atlas jar, and the required dependency version was added.

CVE-2023-34062

In Reactor Netty HTTP Server, versions 1.1.x prior to 1.1.13 and versions 1.0.x prior to 1.0.39, a malicious user can send a request using a specially crafted URL that can lead to a directory traversal attack. Specifically, an application is vulnerable if Reactor Netty HTTP Server is configured to serve static resources.

Reason: The reactor-netty-http issue was addressed in NIFI-12393; only the relevant part of the change was applied.

CVE-2023-36478

HTTP/2 HPACK integer overflow and buffer allocation

Eclipse Jetty provides a web server and servlet container. In versions 11.0.0 through 11.0.15, 10.0.0 through 10.0.15, and 9.0.0 through 9.4.52, an integer overflow in `MetaDataBuilder.checkSize` allows for HTTP/2 HPACK header values to exceed their size limit. `MetaDataBuilder.java` determines if a header name or value exceeds the size limit, and throws an exception if the limit is exceeded. However, when length is very large and huffman is true, the multiplication by 4 in line 295 will overflow, and length will become negative. `(_size+length)` will now be negative, and the check on line 296 will not be triggered. Furthermore, `MetaDataBuilder.checkSize` allows for user-entered HPACK header value sizes to be negative, potentially leading to a very large buffer allocation later on when the user-entered size is multiplied by 2. This means that if a user provides a negative length value (or, more precisely, a length value which, when multiplied by the 4/3 fudge factor, is negative), and this length value is a very large positive number when multiplied by 2, then the user can cause a very large buffer to be allocated on the server. Users of HTTP/2 can be impacted by a remote denial of service attack. The issue has been fixed in versions 11.0.16, 10.0.16, and 9.4.53. There are no known workarounds.

Reason: The latest 9.x version is used; Jetty versions 10 and above require Java 9 or higher.

CVE-2023-39410: Apache Avro Java SDK: Memory when deserializing untrusted data in Avro Java SDK

When deserializing untrusted or corrupted data, it is possible for a reader to consume memory beyond the allowed constraints and thus lead to out of memory on the system. This issue affects Java applications using Apache Avro Java SDK up to and including 1.11.2. Users should update to apache-avro version 1.11.3 which addresses this issue.

Reason: The dependency is not fixed in nifi-kite-processors and nifi-hive-1-1; it is recommended to remove those processors or to use nifi-hive3.nar.

CVE-2023-46604: Apache ActiveMQ, Apache ActiveMQ Legacy OpenWire Module: Unbounded deserialization causes ActiveMQ to be vulnerable to a remote code execution (RCE) attack

The Java OpenWire protocol marshaller is vulnerable to Remote Code Execution. This vulnerability may allow a remote attacker with network access to either a Java-based OpenWire broker or client to run arbitrary shell commands by manipulating serialized class types in the OpenWire protocol to cause either the client or the broker (respectively) to instantiate any class on the classpath. Users are recommended to upgrade both brokers and clients to version 5.15.16, 5.16.7, 5.17.6, or 5.18.3 which fixes this issue.

Reason: Versions from 5.17.x require Java 11, and it is not possible to upgrade to 6.x.

CVE-2023-6378: Logback "receiver" DOS vulnerability

A serialization vulnerability in the logback receiver component part of logback version 1.4.11 allows an attacker to mount a Denial-Of-Service attack by sending poisoned data.

Reason: Versions from 1.4.x require Java 11.

CVEs excluded based on the NiFi exclusion list

You can find the exclusion list here.

CVE-2023-25194: Apache Kafka Connect API: Possible RCE/Denial of service attack via SASL JAAS JndiLoginModule configuration using Kafka Connect

A possible security vulnerability has been identified in Apache Kafka Connect API. This requires access to a Kafka Connect worker, and the ability to create/modify connectors on it with an arbitrary Kafka client SASL JAAS config and a SASL-based security protocol, which has been possible on Kafka Connect clusters since Apache Kafka Connect 2.3.0. When configuring the connector via the Kafka Connect REST API, an authenticated operator can set the `sasl.jaas.config` property for any of the connector's Kafka clients to "com.sun.security.auth.module.JndiLoginModule", which can be done via the `producer.override.sasl.jaas.config`, `consumer.override.sasl.jaas.config`, or `admin.override.sasl.jaas.config` properties. This will allow the server to connect to the attacker's LDAP server and deserialize the LDAP response, which the attacker can use to execute java deserialization gadget chains on the Kafka connect server. Attackers can cause unrestricted deserialization of untrusted data, or an RCE vulnerability, when there are gadgets in the classpath. Since Apache Kafka 3.0.0, users are allowed to specify these properties in connector configurations for Kafka Connect clusters running with out-of-the-box configurations. Before Apache Kafka 3.0.0, users may not specify these properties unless the Kafka Connect cluster has been reconfigured with a connector client override policy that permits them. Since Apache Kafka 3.4.0, a system property ("-Dorg.apache.kafka.disallowed.login.modules") has been added to disable the problematic login modules usage in SASL JAAS configuration. Also, by default, "com.sun.security.auth.module.JndiLoginModule" is disabled in Apache Kafka Connect 3.4.0. All Kafka Connect users are advised to validate connector configurations and only allow trusted JNDI configurations. Also examine connector dependencies for vulnerable versions and either upgrade the connectors, upgrade that specific dependency, or remove the connectors as remediation options. Finally, in addition to leveraging the "org.apache.kafka.disallowed.login.modules" system property, Kafka Connect users can also implement their own connector client config override policy, which can be used to control which Kafka client properties can be overridden directly in a connector config and which cannot.

CVE-2023-4759: Improper handling of case insensitive filesystems in Eclipse JGit allows arbitrary file write

Arbitrary File Overwrite in Eclipse JGit <= 6.6.0 In Eclipse JGit, all versions <= 6.6.0.202305301015-r, a symbolic link present in a specially crafted git repository can be used to write a file to locations outside the working tree when this repository is cloned with JGit to a case-insensitive filesystem, or when a checkout from a clone of such a repository is performed on a case-insensitive filesystem. This can happen on checkout (DirCacheCheckout), merge (ResolveMerger via its WorkingTreeUpdater), pull (PullCommand using merge), and when applying a patch (PatchApplier). This can be exploited for remote code execution (RCE), for instance if the file written outside the working tree is a git filter that gets executed on a subsequent git command. The issue occurs only on case-insensitive filesystems, like the default file systems on Windows and macOS. The user performing the clone or checkout must have the rights to create symbolic links for the problem to occur, and symbolic links must be enabled in the git configuration. Setting the git configuration option core.symlinks = false before checking out avoids the problem. The issue was fixed in Eclipse JGit version 6.6.1.202309021850-r and 6.7.0.202309050840-r, available via Maven Central https://repo1.maven.org/maven2/org/eclipse/jgit/ and repo.eclipse.org https://repo.eclipse.org/content/repositories/jgit-releases/ . A backport is available in 5.13.3 starting from 5.13.3.202401111512-r. The JGit maintainers would like to thank RyotaK for finding and reporting this issue.

CVE-2024-21634: Ion Java StackOverflow vulnerability

Amazon Ion is a Java implementation of the Ion data notation. Prior to version 1.10.5, a potential denial-of-service issue exists in `ion-java` for applications that use `ion-java` to deserialize Ion text encoded data, or deserialize Ion text or binary encoded data into the `IonValue` model and then invoke certain `IonValue` methods on that in-memory representation. An actor could craft Ion data that, when loaded by the affected application and/or processed using the `IonValue` model, results in a `StackOverflowError` originating from the `ion-java` library. The patch is included in `ion-java` 1.10.5. As a workaround, do not load data which originated from an untrusted source or that could have been tampered with.