Chapter 1. Hortonworks DataFlow 3.1.0 Release Notes
This document provides you with the latest information about the HDF 3.1.0 release and its product documentation.
Component Support
HDF 3.1.0 includes the following components:
Apache Ambari 2.6.1
Apache Kafka 1.0.0
Apache NiFi 1.5.0
NiFi Registry 0.1.0
Apache Ranger 0.7.0
Apache Storm 1.1.1
Apache ZooKeeper 3.4.6
Apache MiNiFi Java Agent 0.4.0
Apache MiNiFi C++ 0.4.0
Hortonworks Schema Registry 0.5.0
Hortonworks Streaming Analytics Manager 0.6.0
Component Availability in HDF
Previous HDF releases shipped with the following component versions.
HDF Release | NiFi | Storm | Kafka | ZooKeeper | Ambari | Ranger | MiNiFi Java Agent | MiNiFi C++ | Streaming Analytics Manager | Schema Registry | NiFi Registry |
---|---|---|---|---|---|---|---|---|---|---|---|
HDF 3.1.0 | 1.5.0 | 1.1.1 | 1.0.0 | 3.4.6 | 2.6.1 | 0.7.0 | 0.4.0 | 0.4.0 | 0.6.0 | 0.5.0 | 0.1.0 |
HDF 3.0.3 (IBM Power System only) | 1.2.0 | 1.1.0 | 0.10.2.1 | 3.4.6 | 2.6.0 | 0.7.0 | 0.2.0 | TP | 0.5.0 | 0.3.0 | N/A |
HDF 3.0.2 | 1.2.0 | 1.1.0 | 0.10.2.1 | 3.4.6 | 2.6.0 | 0.7.0 | 0.2.0 | TP | 0.5.0 | 0.3.0 | N/A |
HDF 3.0.1 | 1.2.0 | 1.1.0 | 0.10.2.1 | 3.4.6 | 2.5.1 | 0.7.0 | 0.2.0 | TP | 0.5.0 | 0.3.0 | N/A |
HDF 3.0.0 | 1.2.0 | 1.1.0 | 0.10.2.1 | 3.4.6 | 2.5.1 | 0.7.0 | 0.2.0 | TP | 0.5.0 | 0.3.0 | N/A |
HDF 2.1.4 | 1.1.0 | 1.0.2 | 0.10.1 | 3.4.6 | 2.4.2.0 | 0.6.2 | 0.1.0 | TP | N/A | N/A | N/A |
HDF 2.1.2 | 1.1.0 | 1.0.2 | 0.10.1 | 3.4.6 | 2.4.2.0 | 0.6.2 | 0.1.0 | TP | N/A | N/A | N/A |
HDF 2.1.1 | 1.1.0 | 1.0.2 | 0.10.1 | 3.4.6 | 2.4.2.0 | 0.6.2 | 0.1.0 | TP | N/A | N/A | N/A |
HDF 2.1.0 | 1.1.0 | 1.0.2 | 0.10.1 | 3.4.6 | 2.4.2.0 | 0.6.2 | 0.1.0 | TP | N/A | N/A | N/A |
HDF 2.0.2 | 1.0.0 | 1.0.1 | 0.10.0.1 | 3.4.6 | 2.4.1.0 | 0.6.0 | 0.0.1 | TP | N/A | N/A | N/A |
HDF 2.0.1 | 1.0.0 | 1.0.1 | 0.10.0.1 | 3.4.6 | 2.4.1.0 | 0.6.0 | 0.0.1 | TP | N/A | N/A | N/A |
HDF 2.0.0 | 1.0.0 | 1.0.1 | 0.10.0.1 | 3.4.6 | 2.4.0.1 | 0.6.0 | 0.0.1 | TP | N/A | N/A | N/A |
HDF 1.2.1 | 0.6.1 | 0.10.0 | 0.9.0.1 | 3.4.6 | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
HDF 1.2.0.1 | 0.6.1 | 0.10.0 | 0.9.0.1 | 3.4.6 | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
HDF 1.2.0 | 0.6.0 | 0.10.0 | 0.9.0.1 | 3.4.6 | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
HDF 1.1.0 | 0.4.0 | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
HDF 1.0 | 0.3.0 | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
What's New in HDF 3.1.0
HDF 3.1.0 is a minor release that includes the following improvements and bug fixes.
What's new in Flow Management
Improved data flow management using Apache NiFi Registry
Apache NiFi Registry, a new Apache sub-project now included within HDF Enterprise Management Services, facilitates the development, management, and portability of data flows. Core to its functionality is the ability to abstract data flow schemas and programs so that users can track and monitor data flow changes at a more granular level. Data flow schemas are stored in a shared repository that allows for easy global sharing as well as versioning of schemas.
Building on this, the export and import of data flows allow easy porting and smooth migration of data flows from one environment to another. This functionality significantly improves the storage, control, and management of versioned flows, shortening the software development life cycle and accelerating application deployment for faster time to value.
Deployment improvements
Deploying NiFi in edge environments using MiNiFi C++
Containerized NiFi deployment using Docker
NiFi and MiNiFi Java Agent are available for deployment on Windows platforms
iOS/Android MiNiFi libraries available for deployment on mobile devices
What's new in Streaming Analytics and Schema Registry
Kafka 1.0 Support with full integration with HDF Services
Apache Kafka 1.0 provides important new features, including more stringent message-processing semantics with support for message headers and transactions, performance improvements, and advanced security options.
Apache Ambari support for Kafka 1.0 - Install, configure, manage, upgrade, monitor, and secure Kafka 1.0 clusters with Ambari.
Apache Ranger support for Kafka 1.0 - Manage access control policies (ACLs) using resource or tag-based security for Kafka 1.0 clusters.
New NiFi and SAM processors for Kafka 1.0 - New processors in NiFi and Streaming Analytics Manager support Kafka 1.0 features including message headers and transactions.
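Conceptually, a Kafka 1.0 record carries headers as an ordered list of key/byte-value pairs that travel alongside the record key and value. The sketch below is an illustrative Python model of that record shape only (the real clients are the Java `org.apache.kafka.clients` APIs; the `Record` class and its fields here are hypothetical):

```python
from dataclasses import dataclass, field

# Illustrative model of a Kafka 1.0 record: headers are an ordered list of
# (name, bytes) pairs carried separately from the key and value.
@dataclass
class Record:
    topic: str
    key: bytes
    value: bytes
    headers: list = field(default_factory=list)  # [(str, bytes), ...]

    def header(self, name):
        # Kafka permits duplicate header keys; return the most recent value,
        # similar in spirit to Headers.lastHeader() in the Java client.
        for k, v in reversed(self.headers):
            if k == name:
                return v
        return None

record = Record("clicks", b"user-42", b'{"page": "/home"}',
                headers=[("trace-id", b"abc123"), ("schema-id", b"7")])
print(record.header("trace-id"))  # b'abc123'
```

Headers of this shape are what the new Kafka 1.0 processors in NiFi and SAM can read and write without touching the message payload.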
SAM Test Mode and New SAM Operations Module
SAM continues to make developers' and DevOps teams' jobs easier with two new capabilities: SAM Test Mode and the new SAM Operations Module.
SAM "Test Mode" allows developers to test SAM apps by mocking sources with test data, enabling the creation of unit tests for SAM apps that integrate into continuous integration and delivery (CI/CD) environments.
The new SAM Operations module provides DevOps tooling with rich visualization to monitor and troubleshoot app performance/failure issues.
SAM Extensibility Improvements
Developers can now build and register custom sources and sinks integrated with Schema Registry
Developers can use existing Storm code and wrap it with the SAM SDK and register it with SAM.
Schema Registry’s New Schema Version Lifecycle Management
Developers and platform teams can update and manage schema states including archive, disable and build custom states.
Developers and governance teams can branch schema versions and provide workflow lifecycle actions including fork, start review, finish review, enable and merge.
Oracle 11/12 Support for SAM and Schema Registry backend stores
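The schema version lifecycle described above can be pictured as a small state machine. The sketch below is a hypothetical Python model of such states and transitions, for illustration only; the state and action names are assumptions, not Schema Registry's actual API:

```python
# Hypothetical schema-version lifecycle as a state machine. State and
# transition names are illustrative, not Schema Registry's actual API.
TRANSITIONS = {
    "INITIATED": {"start_review": "IN_REVIEW", "enable": "ENABLED"},
    "IN_REVIEW": {"finish_review": "REVIEWED"},
    "REVIEWED":  {"enable": "ENABLED"},
    "ENABLED":   {"disable": "DISABLED", "archive": "ARCHIVED"},
    "DISABLED":  {"enable": "ENABLED", "archive": "ARCHIVED"},
    "ARCHIVED":  {},  # terminal state
}

class SchemaVersion:
    def __init__(self):
        self.state = "INITIATED"

    def apply(self, action):
        allowed = TRANSITIONS[self.state]
        if action not in allowed:
            raise ValueError(f"cannot {action} from {self.state}")
        self.state = allowed[action]
        return self.state

v = SchemaVersion()
v.apply("start_review")
v.apply("finish_review")
v.apply("enable")
print(v.state)  # ENABLED
```

Modeling the lifecycle this way is what makes review workflows enforceable: an action such as enable is only legal from states where review has completed.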
What's new in Platform Integration
Increased operational efficiencies through automation:
Improvements to Apache Ambari and Apache Ranger automate a significant portion of the processes for managing Apache NiFi resources. Users can now easily add a new NiFi node to an existing cluster without manually updating node information, and can quickly define group-based policies for NiFi resources. Eliminating these repetitive manual processes streamlines operations and makes better use of people's time.
Integration with Atlas, SmartSense, and Knox
When HDF is co-located with Hortonworks Data Platform (HDP), HDF can now integrate with Apache Atlas for governance, Hortonworks SmartSense, and Apache Knox for security, providing better manageability of and access to data and toolsets across the platform. This integration creates opportunities for:
Comprehensive cross-component lineage view at dataset level with Atlas
Easier data collection process with SmartSense for proactive support and troubleshooting
Convenience of single sign-on capability with Knox as a standard security gateway
The integration also allows Atlas to obtain meta information of NiFi data flows to enable holistic data governance across data at-rest and in-motion for compliance.
Unsupported Features
Some features exist within HDF 3.1.0, but Hortonworks does not currently support these capabilities.
Technical Preview Features
The following features are available within HDF 3.1.0 but are not ready for production deployment. Hortonworks encourages you to explore these technical preview features in non-production environments and provide feedback on your experiences through the Hortonworks Community Forums.
Table 1.1. Technical Previews
Component | Feature |
---|---|
MiNiFi | Android/iOS MiNiFi libraries for mobile integration |
Community Driven Features
The following features are developed and tested by the Hortonworks community but are not officially supported by Hortonworks. These features are excluded for a variety of reasons, including insufficient reliability or incomplete test case coverage, declaration of non-production readiness by the community at large, and feature deviation from Hortonworks best practices. Do not use these features in your production environments.
Community Driven Kafka features
Kafka Connect
Kafka Streams
Community Driven NiFi Tools and Services
Embedded ZooKeeper
Sensitive key migration toolkit
Community Driven NiFi Processors
AttributeRollingWindow
AWSCredentialsProviderControllerService
CompareFuzzyHash
ConsumeAzureEventHub
ConsumeEWS
ConsumeIMAP
ConsumeKafka_0_11
ConsumeKafkaRecord_0_11
ConsumePOP3
ConvertExcelToCSVProcessor
CountText
DebugFlow
DeleteDynamoDB
DeleteGCSObject
DeleteHDFS
DeleteMongo
DeleteRethinkDB
ExecuteFlumeSink
ExecuteFlumeSource
ExecuteSparkInteractive
ExtractCCDAAttributes
ExtractEmailAttachments
ExtractEmailHeaders
ExtractMediaMetadata
ExtractTNEFAttachments
FetchAzureBlobStorage
FetchGCSObject
FuzzyHashContent
GetDynamoDB
GetHDFSEvents
GetRethinkDB
GetSNMP
InvokeGRPC
ISPEnrichIP
InferAvroSchema
ListenBeats
ListenGRPC
ListenLumberjack
ListenSMTP
ListAzureBlobStorage
ListGCSBucket
ListS3
LogMessage
ModifyBytes
MoveHDFS
PublishKafka_0_11
PublishKafkaRecord_0_11
PutKudu
PutMongoRecord
PutRethinkDB
OrcFormatConversion
PutAzureBlobStorage
PutDynamoDB
PutGCSObject
PutIgniteCache
PutKinesisFirehose
PutKinesisStream
PutLambda
PutSlack
PutTCP
PutUDP
QueryDNS
SetSNMP
SpringContextProcessor
StoreInKiteDataset
Note: HDF 3.1.x does not support Hive 2. As a result, NiFi Processors working with Hive (PutHiveQL, PutHiveStreaming, SelectHiveQL) and the NiFi Controller Service HiveConnectionPool may not support Hive 2.
Community Driven NiFi Controller Services
AWSCredentialsProviderControllerService
ConfluentSchemaRegistry
GCPCredentialsControllerService
GraphiteMetricReporterService
IPLookupService
JettyWebSocketClient
JettyWebSocketServer
LivySessionController
MongoDBControllerService
MongoDBLookupService
PropertiesFileLookupService
RedisConnectionPoolService
RedisDistributedMapCacheClientService
SimpleCsvFileLookupService
SimpleKeyValueLookupService
XMLFileLookupService
Community Driven NiFi Reporting Tasks
DataDogReportingTask
MetricsReportingTask
SiteToSiteBulletinReportingTask
SiteToSiteStatusReportingTask
StandardGangliaReporter
Unsupported Customizations
Hortonworks cannot guarantee that default NiFi processors are compatible with proprietary protocol implementations or proprietary interface extensions. For example, we support interfaces like JMS and JDBC that are built around standards, specifications, or open protocols. But we do not support customizations of those interfaces, or proprietary extensions built on top of those interfaces.
Deprecated Technologies
This section points out any technology from previous releases that has been deprecated or removed from this release (operating systems, Java versions, databases, product features). Use this section as a guide for your implementation plans.
- Deprecated
Technology that Hortonworks is removing in a future release. Deprecated items are supported until they are removed; deprecation gives you time to plan for removal.
- Removed
Technology that Hortonworks has removed from production and is no longer supported.
Table 1.2. Deprecated Operating Systems
Operating System | Release Deprecated | Release Removed |
---|---|---|
Ubuntu 12 | HDF 3.0.0 | HDF 3.0.0 |
Debian 6 | HDF 2.1.2 | HDF 3.0.0 |
Table 1.3. Deprecated NiFi Processors
Processor | Release Deprecated |
---|---|
ConvertCSVToAvro | HDF 3.0.0 |
ConvertJSONToAvro | HDF 3.0.0 |
GetKafka | HDF 2.0.0 |
PutKafka | HDF 2.0.0 |
EvaluateRegularExpression | HDF 1.0.0 |
Table 1.4. Deprecated Kafka APIs
API | Release Deprecated | Use Instead |
---|---|---|
kafka.producer.Producer | HDF 3.1.0 | org.apache.kafka.clients.producer.KafkaProducer |
kafka.consumer.SimpleConsumer | HDF 3.1.0 | org.apache.kafka.clients.consumer.KafkaConsumer |
SecurityProtocol.PLAINTEXTSASL | HDF 3.1.0 | SASL_PLAINTEXT |
HDF Repository Locations
Use the following table to identify the HDF 3.1.0 repository location for your operating system and operational objectives. HDF 3.1.0 supports the following operating systems:
Table 1.5. RHEL/Oracle Linux/CentOS 6 HDF repository & additional download locations
OS | Format | Download location |
---|---|---|
RHEL/Oracle Linux/CentOS 6 (64-bit) | HDF Build number | 3.1.0.0-564 |
HDF Base URL | http://public-repo-1.hortonworks.com/HDF/centos6/3.x/updates/3.1.0.0 | |
HDF Repo | http://public-repo-1.hortonworks.com/HDF/centos6/3.x/updates/3.1.0.0/hdf.repo | |
RPM tarball | http://public-repo-1.hortonworks.com/HDF/centos6/3.x/updates/3.1.0.0/HDF-3.1.0.0-centos6-rpm.tar.gz | |
Tars tarball | | |
HDF Management Pack | | |
MiNiFi C++ | http://public-repo-1.hortonworks.com/HDF/centos6/3.x/updates/3.1.0.0/tars/nifi-minifi-cpp/nifi-minifi-cpp-0.4.0-bin.tar.gz | |
HDP and Ambari Repositories | ||
Ambari | http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.6.1.0/ambari.repo | |
HDP | http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.4.0/hdp.repo | |
OS Agnostic Downloads | ||
NiFi only | ||
NiFi Toolkit | ||
Docker Hub | ||
NiFi Registry | ||
MiNiFi Java Agent | ||
MiNiFi Toolkit | ||
iOS/Android Libraries |
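For reference, the `hdf.repo` file downloaded from the HDF Repo URL is a standard yum repository definition along the following lines. This is an illustrative reconstruction (the exact fields may differ); always install the file served at the Repo URL rather than hand-writing it:

```ini
# Illustrative contents only; the authoritative file is the hdf.repo
# served at the HDF Repo URL in the table above.
#VERSION_NUMBER=3.1.0.0-564
[HDF-3.1.0.0]
name=HDF Version - HDF-3.1.0.0
baseurl=http://public-repo-1.hortonworks.com/HDF/centos6/3.x/updates/3.1.0.0
gpgcheck=1
enabled=1
priority=1
```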
Table 1.6. RHEL/Oracle Linux/CentOS 7 HDF repository & additional download locations
Table 1.7. SLES 11 SP3/SP4 HDF repository & additional download locations
OS | Format | Download location |
---|---|---|
SUSE Enterprise Linux 11 SP3, SP4 | HDF Build Number | 3.1.0.0-564 |
HDF Base URL | http://public-repo-1.hortonworks.com/HDF/suse11sp3/3.x/updates/3.1.0.0 | |
Repo | http://public-repo-1.hortonworks.com/HDF/suse11sp3/3.x/updates/3.1.0.0/hdf.repo | |
RPM tarball | http://public-repo-1.hortonworks.com/HDF/suse11sp3/3.x/updates/3.1.0.0/HDF-3.1.0.0-suse11sp3-rpm.tar.gz | |
Tars tarball | http://public-repo-1.hortonworks.com/HDF/suse11sp3/3.x/updates/3.1.0.0/HDF-3.1.0.0-suse11sp3-tars-tarball.tar.gz | |
HDF Management Pack | http://public-repo-1.hortonworks.com/HDF/suse11sp3/3.x/updates/3.1.0.0/tars/hdf_ambari_mp/hdf-ambari-mpack-3.1.0.0-564.tar.gz | |
HDP and Ambari Repositories | ||
Ambari | http://public-repo-1.hortonworks.com/ambari/suse11/2.x/updates/2.6.1.0/ambari.repo | |
HDP | http://public-repo-1.hortonworks.com/HDP/suse11sp3/2.x/updates/2.6.4.0/hdp.repo | |
OS Agnostic Downloads | ||
NiFi only | ||
NiFi Toolkit | ||
Docker Hub | ||
NiFi Registry | ||
MiNiFi Java Agent | ||
MiNiFi Toolkit | ||
iOS/Android Libraries |
Table 1.8. SLES 12 HDF repository & additional download locations
OS | Format | Download location |
---|---|---|
SUSE Linux Enterprise Server (SLES) v12 SP1 | HDF Build Number | 3.1.0.0-564 |
HDF Base URL | http://public-repo-1.hortonworks.com/HDF/sles12/3.x/updates/3.1.0.0 | |
Repo | http://public-repo-1.hortonworks.com/HDF/sles12/3.x/updates/3.1.0.0/hdf.repo | |
RPM tarball | http://public-repo-1.hortonworks.com/HDF/sles12/3.x/updates/3.1.0.0/HDF-3.1.0.0-sles12-rpm.tar.gz | |
Tars tarball | http://public-repo-1.hortonworks.com/HDF/sles12/3.x/updates/3.1.0.0/HDF-3.1.0.0-sles12-tars-tarball.tar.gz | |
HDF Management Pack | http://public-repo-1.hortonworks.com/HDF/sles12/3.x/updates/3.1.0.0/tars/hdf_ambari_mp/hdf-ambari-mpack-3.1.0.0-564.tar.gz | |
HDP and Ambari Repositories | ||
Ambari | http://public-repo-1.hortonworks.com/ambari/sles12/2.x/updates/2.6.1.0/ambari.repo | |
HDP | http://public-repo-1.hortonworks.com/HDP/sles12/2.x/updates/2.6.4.0/hdp.repo | |
OS Agnostic Downloads | ||
NiFi only | ||
NiFi Toolkit | ||
Docker Hub | ||
NiFi Registry | ||
MiNiFi Java Agent | ||
MiNiFi Toolkit | ||
iOS/Android Libraries |
Table 1.9. Ubuntu 14 HDF repository & additional download locations
OS | Format | Download location |
---|---|---|
Ubuntu Trusty (14.04) (64-bit) | HDF Build Number | 3.1.0.0-564 |
HDF Base URL | http://public-repo-1.hortonworks.com/HDF/ubuntu14/3.x/updates/3.1.0.0 | |
Repo | http://public-repo-1.hortonworks.com/HDF/ubuntu14/3.x/updates/3.1.0.0/hdf.list | |
Deb tarball | http://public-repo-1.hortonworks.com/HDF/ubuntu14/3.x/updates/3.1.0.0/HDF-3.1.0.0-ubuntu14-deb.tar.gz | |
Tars tarball | http://public-repo-1.hortonworks.com/HDF/ubuntu14/3.x/updates/3.1.0.0/HDF-3.1.0.0-ubuntu14-tars-tarball.tar.gz | |
HDF Management Pack | http://public-repo-1.hortonworks.com/HDF/ubuntu14/3.x/updates/3.1.0.0/tars/hdf_ambari_mp/hdf-ambari-mpack-3.1.0.0-564.tar.gz | |
HDP and Ambari Repositories | ||
Ambari | http://public-repo-1.hortonworks.com/ambari/ubuntu14/2.x/updates/2.6.1.0/ambari.list | |
HDP | http://public-repo-1.hortonworks.com/HDP/ubuntu14/2.x/updates/2.6.4.0/hdp.list | |
OS Agnostic Downloads | ||
NiFi only | ||
NiFi Toolkit | ||
Docker Hub | ||
NiFi Registry | ||
MiNiFi Java Agent | ||
MiNiFi Toolkit | ||
iOS/Android Libraries |
Table 1.10. Ubuntu 16 HDF repository & additional download locations
Table 1.11. Debian 7 HDF repository & additional download locations
OS | Format | Download location |
---|---|---|
Debian 7 | HDF Build Number | 3.1.0.0-564 |
HDF Base URL | http://public-repo-1.hortonworks.com/HDF/debian7/3.x/updates/3.1.0.0 | |
Repo | http://public-repo-1.hortonworks.com/HDF/debian7/3.x/updates/3.1.0.0/hdf.list | |
Deb tarball | http://public-repo-1.hortonworks.com/HDF/debian7/3.x/updates/3.1.0.0/HDF-3.1.0.0-debian7-deb.tar.gz | |
Tars tarball | http://public-repo-1.hortonworks.com/HDF/debian7/3.x/updates/3.1.0.0/HDF-3.1.0.0-debian7-tars-tarball.tar.gz | |
HDF Management Pack | http://public-repo-1.hortonworks.com/HDF/debian7/3.x/updates/3.1.0.0/tars/hdf_ambari_mp/hdf-ambari-mpack-3.1.0.0-564.tar.gz | |
HDP and Ambari Repositories | ||
Ambari | http://public-repo-1.hortonworks.com/ambari/debian7/2.x/updates/2.6.1.0/ambari.list | |
HDP | http://public-repo-1.hortonworks.com/HDP/debian7/2.x/updates/2.6.4.0/hdp.list | |
OS Agnostic Downloads | ||
NiFi only | ||
NiFi Toolkit | ||
Docker Hub | ||
NiFi Registry | ||
MiNiFi Java Agent | ||
MiNiFi Toolkit | ||
iOS/Android Libraries |
Table 1.12. NiFi and MiNiFi MSI files
Common Vulnerabilities and Exposures
The following CVEs have been fixed in HDF 3.1.0.
CVE-2017-12632
Summary: Apache NiFi host header poisoning issue
Severity: Medium
Versions Affected: Apache NiFi 0.1.0 - 1.4.0; HDF 1.x, 2.x, 3.0.x
Description: A malicious host header in an incoming HTTP request could cause NiFi to load resources from an external server.
Mitigation: The fix to sanitize host headers and compare them to a controlled whitelist was applied in the Apache NiFi 1.5.0 release. HDF users should upgrade to HDF 3.1.0.
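The general shape of this mitigation is to compare the incoming Host header against a whitelist of expected values before serving a request. The sketch below is a simplified Python illustration of that technique only, not NiFi's actual implementation (the host names are hypothetical):

```python
# Simplified illustration of host-header whitelisting, the technique behind
# the NiFi 1.5.0 fix. Not NiFi's actual implementation; hosts are examples.
ALLOWED_HOSTS = {"nifi.example.com", "nifi.example.com:8443", "localhost:8443"}

def host_header_ok(headers):
    # Normalize and reject anything not on the controlled whitelist; a
    # missing Host header normalizes to "" and is rejected too.
    host = headers.get("Host", "").strip().lower()
    return host in ALLOWED_HOSTS

assert host_header_ok({"Host": "nifi.example.com"})
assert not host_header_ok({"Host": "evil.example.net"})
print("host header checks passed")
```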
CVE-2017-15697
Summary: Apache NiFi XSS issue in context path handling
Severity: Moderate
Versions Affected: Apache NiFi 1.0.0 - 1.4.0; HDF 2.0.0 - 3.0.x
Description: A malicious X-ProxyContextPath or X-Forwarded-Context header containing external resources or embedded code could cause remote code execution.
Mitigation: The fix to properly handle these headers was applied in the Apache NiFi 1.5.0 release. HDF users should upgrade to HDF 3.1.0.
CVE-2017-12623
Summary: Apache NiFi XXE issue in template XML upload
Severity: Important
Versions Affected: Apache NiFi 1.0.0 - 1.3.0; HDF 2.x, 3.0.0 - 3.0.1.1
Impact: Any authenticated user could upload a template containing malicious code that accessed sensitive files via an XML External Entity (XXE) attack.
Mitigation: The fix to properly handle XML External Entities was applied in the Apache NiFi 1.4.0 release. Users running a prior 1.x release should upgrade to the appropriate release. HDF users should upgrade to HDF 3.1.0.
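A common defense against XXE in upload paths is to reject any XML that declares a DOCTYPE (where external entities are defined) before it ever reaches a parser. The sketch below is a simplified Python illustration of that general technique, not the actual NiFi 1.4.0 fix:

```python
import xml.etree.ElementTree as ET

# Simplified XXE defense: refuse XML containing DOCTYPE/ENTITY declarations
# before parsing. Illustrative technique only, not the actual NiFi fix.
def parse_template(xml_text):
    if "<!DOCTYPE" in xml_text or "<!ENTITY" in xml_text:
        raise ValueError("DOCTYPE/ENTITY declarations are not allowed")
    return ET.fromstring(xml_text)

safe = parse_template("<template><name>flow-v1</name></template>")
print(safe.find("name").text)  # flow-v1

malicious = ('<?xml version="1.0"?>'
             '<!DOCTYPE t [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>'
             "<t>&xxe;</t>")
try:
    parse_template(malicious)
except ValueError as e:
    print("rejected:", e)
```

Production-grade fixes typically disable DTD processing in the parser itself rather than pre-scanning the text, but the string check above conveys the idea compactly.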
CVE-2017-15703
Summary: Apache NiFi Java deserialization issue in template XML upload
Severity: Moderate
Versions Affected: Apache NiFi 1.0.0 - 1.3.0; HDF 2.x, 3.0.0 - 3.0.1.1
Description: Any authenticated user (valid client certificate but without ACL permissions) could upload a template containing malicious code that caused a denial of service via a Java deserialization attack.
Mitigation: The fix to properly handle Java deserialization was applied in the Apache NiFi 1.4.0 release. HDF users should upgrade to HDF 3.1.0.
Known Issues
Hortonworks Bug ID | Apache JIRA | Component | Summary |
---|---|---|---|
| NIFI-4884 | NiFi and NiFi Registry | Issue: NiFi and NiFi Registry are unable to import or change version when Processors are configured with the CRON Scheduling Strategy. Associated error message: During an import or upgrade flow version process, a dialog displays a message indicating that a CRON expression is not a valid time duration, or that a time duration is not a valid CRON expression. Problem: When updating the flow during an import or change version request, the properties are applied in the incorrect order. Because the Run Schedule is set before the Scheduling Strategy is configured, the Processor may fail to interpret the Run Schedule when the Scheduling Strategy changes: the Processor expects the Run Schedule to be in a format matching its current Scheduling Strategy, but it is given a value that satisfies the new Scheduling Strategy. This happens during import when the incoming flow is configured with CRON driven scheduling, because the default Scheduling Strategy is Timer driven. It also happens during any flow version change in which the incoming version changes a Processor's Scheduling Strategy from CRON driven to Timer driven, or vice versa. Workaround: Configure the components in the flow only with a Timer driven or Event driven Scheduling Strategy. Timer or Event driven scheduling does not exhibit this known issue because Timer driven is the default Scheduling Strategy and Event driven does not attempt to interpret the Run Schedule. |
BUG-95279 | NIFI-4818 | NiFi | Issue: ReportLineageToAtlas may report the wrong Hive database name and Kafka topic cluster name. Problem: If the "Hive connection URL" of the HiveConnectionPool Controller Service has parameters other than a database name, ReportLineageToAtlas may report an incorrect database name for the Atlas hive_table entities representing lineage from/to PutHiveQL or SelectHiveQL. Even if more than one hostname:port value is defined in the "Hive connection URL" of HiveConnectionPool, or in "Kafka Brokers" of PublishKafka or ConsumeKafka, ReportLineageToAtlas only uses the first hostname to analyze cluster names. Workaround: Remove parameters from the HiveConnectionPool "Hive connection URL" if possible, for example "jdbc:hive2://hivehost:10000/dbname". To map a cluster name (other than the default cluster name) from hostnames in "Hive connection URLs" or "Kafka Brokers", define regular expressions that match the first hostname. |
BUG-95345 | SAM |
Issue: If the total character count of your node names assigned to services exceeds 255 characters, your upgrade fails. Workaround: There is no workaround for this issue. | |
BUG-94807 | SAM |
Issue: When using SAM's Aggregator processor, you cannot add nested fields to the Select Key field. Workaround: To work around this issue, create only top level values for Select Key. You can add as many top level Select Key fields as is needed. | |
BUG-94090 | SAM |
Issue: You may be unable to import some topologies from HDF 3.0.x. Problem: When you have a new HDF 3.1.0 installation, you cannot import topologies created in HDF 3.0.x. Workaround: To work around this issue, upgrade your existing version of HDF to HDF 3.1.0. | |
NiFi |
Issue: When you have upgraded HDF services on your HDP cluster, the NiFi version number displayed in Ambari does not change from NiFi 1.2.0 to NiFi 1.5.0. Workaround: After performing the upgrade scenario for HDF services on an HDP cluster as documented in the Ambari Managed HDF Upgrade documentation, disregard the NiFi version displayed in Ambari. NiFi has been successfully upgraded. | ||
NiFi |
Issue: Unable to load native library. Associated error message: You may encounter an error message similar to one of the following:
Problem: Native libraries can only be loaded into the Java runtime by a single ClassLoader. Once a ClassLoader loads a native library, subsequent attempts to load that same native library from a different ClassLoader will fail. If multiple components in NiFi attempt to load a native library, the later attempts fail, and those components may fail to execute if they attempt to use the native library. Whether those components fail ultimately depends on the order in which they are loaded and executed, which is not guaranteed by NiFi. This occurs most frequently with components that leverage the native Hadoop library. These components include but are not limited to:
Workaround: This issue is most frequently encountered when using an HDFS Processor with a Compression Codec that Hadoop implements with native libraries. In this scenario, the dataflow can be updated to use a CompressContent Processor before/after the HDFS Processor to apply the desired compression. The CompressContent Processor does not leverage native libraries and is not susceptible to this known issue. | |
BUG-94989 | Storm/Ambari |
Issue: When performing an Ambari managed rolling upgrade, you may encounter an inaccurate warning message indicating that Storm topologies need to be stopped. Associated error message:
Workaround: You may safely ignore this error message. Storm topologies do not need to be stopped before performing a rolling upgrade. | |
KNOX-1108 | NiFi/Knox |
Issue: NiFi/Knox integration does not support HA. In NiFiHaDispatch, executeRequest is overridden and lacks the try/catch block found in DefaultHaDispatch's executeRequest method, which is used to catch exceptions and begin the failover process. Workaround: There is no workaround for this issue. | |
BUG-90903 | N/A | NiFi/Knox |
Issue: Knox HA failover for NiFi is not supported. Workaround: There is no workaround for this issue. |
BUG-63132 | N/A | Storm |
Summary: Solr bolt does not run in a Kerberos environment. Associated error message: The following is an example:
Workaround: None at this time. |
Third-Party Licenses
HDF 3.1.0 deploys numerous third-party licenses and dependencies, all of which are compatible with the Apache software license. For complete third-party license information, see the licenses and notice files contained within the distribution.