ConsumeKafka2RecordCDP 2.3.0.4.10.0.0-147
- Bundle
- com.cloudera | nifi-cdf-kafka-2-nar
- Description
- Consumes messages from Apache Kafka specifically built against the Kafka 2.5.0.7.1.7.1000-141 Consumer API. The complementary NiFi processor for sending messages is PublishKafka2RecordCDP. Please note that, at this time, the Processor assumes that all records that are retrieved from a given partition have the same schema. If any of the Kafka messages are pulled but cannot be parsed or written with the configured Record Reader or Record Writer, the contents of the message will be written to a separate FlowFile, and that FlowFile will be transferred to the 'parse.failure' relationship. Otherwise, each FlowFile is sent to the 'success' relationship and may contain many individual messages within the single FlowFile. A 'record.count' attribute is added to indicate how many messages are contained in the FlowFile. No two Kafka messages will be placed into the same FlowFile if they have different schemas, or if they have different values for a message header that is included by the <Headers to Add as Attributes> property.
- Tags
- 2.5.0.7.1.7.1000-141, Consume, Get, Ingest, Ingress, Kafka, PubSub, Record, Topic, avro, csv, json
- Input Requirement
- FORBIDDEN
- Supports Sensitive Dynamic Properties
- false
-
Additional Details for ConsumeKafka2RecordCDP 2.3.0.4.10.0.0-147
ConsumeKafka2RecordCDP
Description
This Processor polls Apache Kafka for data using the KafkaConsumer API available with Kafka 2.5.0.7.1.7.1000-141. When a message is received from Kafka, the message will be deserialized using the configured Record Reader, and then written to a FlowFile by serializing the message with the configured Record Writer.
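As a rough illustration, a minimal sketch of the key required properties (all of them are described later in this document) could look like the following; the broker address, topic, group, and reader/writer services are placeholders:
Kafka Brokers = localhost:9092
Topic Name(s) = my-topic
Group ID = nifi-consumer-group
Value Record Reader = <a configured Record Reader service, for example a JsonTreeReader>
Record Value Writer = <a configured Record Writer service, for example a JsonRecordSetWriter>
All remaining properties keep their defaults, which yields an unsecured PLAINTEXT connection.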
Consumer Partition Assignment
By default, this processor will subscribe to one or more Kafka topics in such a way that the topics to consume from are randomly assigned to the nodes in the NiFi cluster. Consider a scenario where a single Kafka topic has 8 partitions and the consuming NiFi cluster has 3 nodes. In this scenario, Node 1 may be assigned partitions 0, 1, and 2. Node 2 may be assigned partitions 3, 4, and 5. Node 3 will then be assigned partitions 6 and 7.
In this scenario, if Node 3 somehow fails or stops pulling data from Kafka, partitions 6 and 7 may then be reassigned to the other two nodes. For most use cases, this is desirable. It provides fault tolerance and allows the remaining nodes to pick up the slack. However, there are cases where this is undesirable.
One such case is when using NiFi to consume Change Data Capture (CDC) data from Kafka. Consider again the above scenario. Consider that Node 3 has pulled 1,000 messages from Kafka but has not yet delivered them to their final destination. NiFi is then stopped and restarted, and that takes 15 minutes to complete. In the meantime, Partitions 6 and 7 have been reassigned to the other nodes. Those nodes then proceeded to pull data from Kafka and deliver it to the desired destination. After 15 minutes, Node 3 rejoins the cluster and then continues to deliver its 1,000 messages that it has already pulled from Kafka to the destination system. Now, those records have been delivered out of order.
The solution for this, then, is to assign partitions statically instead of dynamically. In this way, we can assign Partitions 6 and 7 to Node 3 specifically. Then, if Node 3 is restarted, the other nodes will not pull data from Partitions 6 and 7. The data will remain queued in Kafka until Node 3 is restarted. By using this approach, we can ensure that the data that already was pulled can be processed (assuming First In First Out Prioritizers are used) before newer messages are handled.
In order to provide a static mapping of node to Kafka partition(s), one or more user-defined properties must be added using the naming scheme partitions.<hostname>, with the value being a comma-separated list of Kafka partitions to use. For example, partitions.nifi-01=0, 3, 6, 9, partitions.nifi-02=1, 4, 7, 10, and partitions.nifi-03=2, 5, 8, 11. The hostname that is used can be the fully qualified hostname, the “simple” hostname, or the IP address. There must be an entry for each node in the cluster, or the Processor will become invalid. If it is desirable for a node to not have any partitions assigned to it, a Property may be added for the hostname with an empty string as the value.
NiFi cannot readily validate that all Partitions have been assigned before the Processor is scheduled to run. However, it can validate that no partitions have been skipped. As such, if partitions 0, 1, and 3 are assigned but not partition 2, the Processor will not be valid. However, if partitions 0, 1, and 2 are assigned, the Processor will become valid, even if there are 4 partitions on the Topic. When the Processor is started, it will immediately start to fail, logging errors, and will avoid pulling any data until the Processor is updated to account for all partitions. Once running, if the number of partitions is changed, the Processor will continue to run but will not pull data from the newly added partitions. Once stopped, it will begin to error until all partitions have been assigned. Additionally, if partitions that are assigned do not exist (e.g., partitions 0, 1, 2, 3, 4, 5, 6, and 7 are assigned, but the Topic has only 4 partitions), then the Processor will begin to log errors on startup and will not pull data.
In order to use a static mapping of Kafka partitions, the “Topic Name Format” must be set to “names” rather than “pattern.” Additionally, all Topics that are to be consumed must have the same number of partitions. If multiple Topics are to be consumed and have a different number of partitions, multiple Processors must be used so that each Processor consumes only from Topics with the same number of partitions.
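For the 8-partition, 3-node scenario described above, a minimal sketch of the user-defined properties could look like the following (the hostnames are placeholders for the actual node hostnames):
partitions.nifi-node-1 = 0, 1, 2
partitions.nifi-node-2 = 3, 4, 5
partitions.nifi-node-3 = 6, 7
With this mapping, partitions 6 and 7 simply queue up in Kafka while nifi-node-3 is down instead of being rebalanced to the other two nodes.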
Security Configuration
The Security Protocol property allows the user to specify the protocol for communicating with the Kafka broker. The following sections describe each of the protocols in further detail.
PLAINTEXT
This option provides an unsecured connection to the broker, with no client authentication and no encryption. In order to use this option the broker must be configured with a listener of the form:
PLAINTEXT://host.name:port
SSL
This option provides an encrypted connection to the broker, with optional client authentication. In order to use this option the broker must be configured with a listener of the form:
SSL://host.name:port
In addition, the processor must have an SSL Context Service selected.
If the broker specifies ssl.client.auth=none, or does not specify ssl.client.auth, then the client will not be required to present a certificate. In this case, the SSL Context Service selected may specify only a truststore containing the public key of the certificate authority used to sign the broker’s key.
If the broker specifies ssl.client.auth=required then the client will be required to present a certificate. In this case, the SSL Context Service must also specify a keystore containing a client key, in addition to a truststore as described above.
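As an illustration, an SSL Context Service such as StandardSSLContextService might be configured roughly as follows; this sketch assumes that service's standard truststore/keystore properties, and the paths and passwords are placeholders:
Truststore Filename = /path/to/truststore.jks
Truststore Type = JKS
Truststore Password = <truststore password>
Keystore Filename = /path/to/keystore.jks (only needed when the broker sets ssl.client.auth=required)
Keystore Type = JKS
Keystore Password = <keystore password>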
SASL_PLAINTEXT
This option uses SASL with a PLAINTEXT transport layer to authenticate to the broker. In order to use this option the broker must be configured with a listener of the form:
SASL_PLAINTEXT://host.name:port
In addition, the Kerberos Service Name must be specified in the processor.
SASL_PLAINTEXT - GSSAPI
If the SASL mechanism is GSSAPI, then the client must provide a JAAS configuration to authenticate.
An example of the JAAS config file would be the following:
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/path/to/nifi.keytab"
    serviceName="kafka"
    principal="nifi@YOURREALM.COM";
};
NOTE: The serviceName in the JAAS file must match the Kerberos Service Name in the processor.
The JAAS configuration can be provided in either of the following ways:
- Specify the java.security.auth.login.config system property in NiFi's bootstrap.conf. This limits you to a single user credential across the cluster.
java.arg.16=-Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf
- Add the dynamic property 'sasl.jaas.config' to the processor configuration. This method allows multiple consumers with different user credentials, or gives the flexibility to consume from multiple Kafka clusters.
sasl.jaas.config : com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab="/path/to/nifi.keytab" serviceName="kafka" principal="nifi@YOURREALM.COM";
Alternatively, the JAAS configuration when using GSSAPI can be provided by specifying the Kerberos Principal and Kerberos Keytab directly in the processor properties. This will dynamically create a JAAS configuration like above, and will take precedence over the java.security.auth.login.config system property.
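Putting this together, a sketch of a GSSAPI setup driven entirely by processor properties (no JAAS file) might look like the following; the realm, keytab path, and service name are placeholders that must match your environment:
Security Protocol = SASL_PLAINTEXT
SASL Mechanism = GSSAPI
Kerberos Service Name = kafka
Kerberos Principal = nifi@YOURREALM.COM
Kerberos Keytab = /path/to/nifi.keytab
Alternatively, the Kerberos User Service property described later in this document can supply the same credentials through a Kerberos user controller service.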
SASL_PLAINTEXT - PLAIN
If the SASL mechanism is PLAIN, then the client must provide a JAAS configuration to authenticate, but the JAAS configuration must use Kafka's PlainLoginModule. An example of the JAAS config file would be the following:
KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="nifi"
    password="nifi-password";
};
The JAAS configuration can be provided in either of the following ways:
- Specify the java.security.auth.login.config system property in NiFi's bootstrap.conf. This limits you to a single user credential across the cluster.
java.arg.16=-Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf
- Add the dynamic property 'sasl.jaas.config' to the processor configuration. This method allows multiple consumers with different user credentials, or gives the flexibility to consume from multiple Kafka clusters.
sasl.jaas.config : org.apache.kafka.common.security.plain.PlainLoginModule required username="nifi" password="nifi-password";
NOTE: The dynamic properties of this processor are not secured and as a result the password entered when utilizing sasl.jaas.config will be stored in the flow.xml.gz file in plain-text, and will be saved to NiFi Registry if using versioned flows.
NOTE: It is not recommended to use a SASL mechanism of PLAIN with SASL_PLAINTEXT, as it would transmit the username and password unencrypted.
NOTE: The Kerberos Service Name is not required for the PLAIN SASL mechanism. However, the processor will warn that this property must be set to a non-empty string, so any placeholder value, such as "null", can be used.
NOTE: Using the PlainLoginModule will cause it to be registered in the JVM's static list of Providers, making it visible to components in other NARs that may access the providers. There is currently a known issue where Kafka processors using the PlainLoginModule will cause HDFS processors with Kerberos to no longer work.
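As an alternative to the sasl.jaas.config dynamic property, the same PLAIN credentials can be supplied through the Username and Password properties described later in this document; a minimal sketch with placeholder credentials:
Security Protocol = SASL_PLAINTEXT
SASL Mechanism = PLAIN
Username = nifi
Password = nifi-password
Because Password is a sensitive property, it is not stored in plain text in the flow, unlike a password embedded in the sasl.jaas.config dynamic property.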
SASL_PLAINTEXT - SCRAM
If the SASL mechanism is SCRAM, then the client must provide a JAAS configuration to authenticate, but the JAAS configuration must use Kafka's ScramLoginModule. Ensure that the SASL Mechanism property ('sasl.mechanism') is set to 'SCRAM-SHA-256' or 'SCRAM-SHA-512', matching the Kafka broker configuration. An example of the JAAS config file would be the following:
KafkaClient {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="nifi"
    password="nifi-password";
};
The JAAS configuration can be provided in either of the following ways:
- Specify the java.security.auth.login.config system property in NiFi's bootstrap.conf. This limits you to a single user credential across the cluster.
java.arg.16=-Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf
- Add the dynamic property 'sasl.jaas.config' to the processor configuration. This method allows multiple consumers with different user credentials, or gives the flexibility to consume from multiple Kafka clusters.
sasl.jaas.config : org.apache.kafka.common.security.scram.ScramLoginModule required username="nifi" password="nifi-password";
NOTE: The dynamic properties of this processor are not secured and as a result the password entered when utilizing sasl.jaas.config will be stored in the flow.xml.gz file in plain-text, and will be saved to NiFi Registry if using versioned flows.
NOTE: The Kerberos Service Name is not required for the SCRAM-SHA-256 or SCRAM-SHA-512 SASL mechanisms. However, the processor will warn that this property must be set to a non-empty string, so any placeholder value, such as "null", can be used.
SASL_SSL
This option uses SASL with an SSL/TLS transport layer to authenticate to the broker. In order to use this option the broker must be configured with a listener of the form:
SASL_SSL://host.name:port
See the SASL_PLAINTEXT section for a description of how to provide the proper JAAS configuration depending on the SASL mechanism (GSSAPI, PLAIN, or SCRAM).
See the SSL section for a description of how to configure the SSL Context Service based on the ssl.client.auth property.
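For example, a sketch of a SASL_SSL configuration using SCRAM-SHA-512 combines the properties described in this document roughly as follows; the broker addresses, topic, credentials, and referenced SSL Context Service are placeholders:
Kafka Brokers = broker-1.example.com:9093, broker-2.example.com:9093
Topic Name(s) = my-topic
Group ID = nifi-consumer-group
Security Protocol = SASL_SSL
SASL Mechanism = SCRAM-SHA-512
Username = nifi
Password = <password>
SSL Context Service = <an SSL Context Service whose truststore contains the broker CA>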
-
Properties
Offset Reset
Allows you to manage the condition when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because that data has been deleted). Corresponds to Kafka's 'auto.offset.reset' property.
- Display Name
- Offset Reset
- Description
- Allows you to manage the condition when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because that data has been deleted). Corresponds to Kafka's 'auto.offset.reset' property.
- API Name
- auto.offset.reset
- Default Value
- latest
- Allowable Values
-
- earliest
- latest
- none
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
AWS Profile Name
The Amazon Web Services Profile to select when multiple profiles are available.
- Display Name
- AWS Profile Name
- Description
- The Amazon Web Services Profile to select when multiple profiles are available.
- API Name
- aws.profile.name
- Expression Language Scope
- Environment variables and FlowFile Attributes
- Sensitive
- false
- Required
- false
- Dependencies
-
- SASL Mechanism is set to any of [AWS_MSK_IAM]
-
Kafka Brokers
Comma-separated list of Kafka Brokers in the format host:port
- Display Name
- Kafka Brokers
- Description
- Comma-separated list of Kafka Brokers in the format host:port
- API Name
- bootstrap.servers
- Default Value
- localhost:9092
- Expression Language Scope
- Environment variables defined at JVM level and system properties
- Sensitive
- false
- Required
- true
-
Commit Offsets
Specifies whether or not this Processor should commit the offsets to Kafka after receiving messages. This value should be false when a PublishKafkaRecord processor is expected to commit the offsets using Exactly Once semantics, and should be reserved for dataflows that are designed to run within Stateless NiFi. See Processor's Usage / Additional Details for more information. Note that setting this value to false can lead to significant data duplication or potentially even data loss if the dataflow is not properly configured.
- Display Name
- Commit Offsets
- Description
- Specifies whether or not this Processor should commit the offsets to Kafka after receiving messages. This value should be false when a PublishKafkaRecord processor is expected to commit the offsets using Exactly Once semantics, and should be reserved for dataflows that are designed to run within Stateless NiFi. See Processor's Usage / Additional Details for more information. Note that setting this value to false can lead to significant data duplication or potentially even data loss if the dataflow is not properly configured.
- API Name
- Commit Offsets
- Default Value
- true
- Allowable Values
-
- true
- false
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- false
-
Communications Timeout
Specifies the timeout that the consumer should use when communicating with the Kafka Broker
- Display Name
- Communications Timeout
- Description
- Specifies the timeout that the consumer should use when communicating with the Kafka Broker
- API Name
- Communications Timeout
- Default Value
- 60 secs
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
Group ID
A Group ID is used to identify consumers that are within the same consumer group. Corresponds to Kafka's 'group.id' property.
- Display Name
- Group ID
- Description
- A Group ID is used to identify consumers that are within the same consumer group. Corresponds to Kafka's 'group.id' property.
- API Name
- group.id
- Expression Language Scope
- Environment variables defined at JVM level and system properties
- Sensitive
- false
- Required
- true
-
Headers to Add as Attributes (Regex)
A Regular Expression that is matched against all message headers. Any message header whose name matches the regex will be added to the FlowFile as an Attribute. If not specified, no Header values will be added as FlowFile attributes. If two messages have a different value for the same header and that header is selected by the provided regex, then those two messages must be added to different FlowFiles. As a result, users should be cautious about using a regex like ".*" if messages are expected to have header values that are unique per message, such as an identifier or timestamp, because it will prevent NiFi from bundling the messages together efficiently.
- Display Name
- Headers to Add as Attributes (Regex)
- Description
- A Regular Expression that is matched against all message headers. Any message header whose name matches the regex will be added to the FlowFile as an Attribute. If not specified, no Header values will be added as FlowFile attributes. If two messages have a different value for the same header and that header is selected by the provided regex, then those two messages must be added to different FlowFiles. As a result, users should be cautious about using a regex like ".*" if messages are expected to have header values that are unique per message, such as an identifier or timestamp, because it will prevent NiFi from bundling the messages together efficiently.
- API Name
- header-name-regex
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- false
- Dependencies
-
- Output Strategy is set to any of [USE_VALUE]
-
Honor Transactions
Specifies whether or not NiFi should honor transactional guarantees when communicating with Kafka. If false, the Processor will use an "isolation level" of read_uncommitted. This means that messages will be received as soon as they are written to Kafka and will be pulled even if the producer later cancels the transaction. If this value is true, NiFi will not receive any messages for which the producer's transaction was canceled, but this can result in some latency since the consumer must wait for the producer to finish its entire transaction instead of pulling as the messages become available.
- Display Name
- Honor Transactions
- Description
- Specifies whether or not NiFi should honor transactional guarantees when communicating with Kafka. If false, the Processor will use an "isolation level" of read_uncommitted. This means that messages will be received as soon as they are written to Kafka and will be pulled even if the producer later cancels the transaction. If this value is true, NiFi will not receive any messages for which the producer's transaction was canceled, but this can result in some latency since the consumer must wait for the producer to finish its entire transaction instead of pulling as the messages become available.
- API Name
- honor-transactions
- Default Value
- true
- Allowable Values
-
- true
- false
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
interceptor.classes
Specifies the value for 'interceptor.classes' Kafka Configuration.
- Display Name
- interceptor.classes
- Description
- Specifies the value for 'interceptor.classes' Kafka Configuration.
- API Name
- interceptor.classes
- Default Value
- com.hortonworks.smm.kafka.monitoring.interceptors.MonitoringConsumerInterceptor
- Expression Language Scope
- Environment variables defined at JVM level and system properties
- Sensitive
- false
- Required
- false
-
Kerberos User Service
Service supporting user authentication with Kerberos
- Display Name
- Kerberos User Service
- Description
- Service supporting user authentication with Kerberos
- API Name
- kerberos-user-service
- Service Interface
- org.apache.nifi.kerberos.SelfContainedKerberosUserService
- Service Implementations
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- false
-
Key Attribute Encoding
If the <Separate By Key> property is set to true, FlowFiles that are emitted have an attribute named 'kafka.key'. This property dictates how the value of the attribute should be encoded.
- Display Name
- Key Attribute Encoding
- Description
- If the <Separate By Key> property is set to true, FlowFiles that are emitted have an attribute named 'kafka.key'. This property dictates how the value of the attribute should be encoded.
- API Name
- key-attribute-encoding
- Default Value
- utf-8
- Allowable Values
-
- UTF-8 Encoded
- Hex Encoded
- Do Not Add Key as Attribute
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
- Dependencies
-
- Output Strategy is set to any of [USE_VALUE]
-
Key Format
Specifies how to represent the Kafka Record's Key in the output
- Display Name
- Key Format
- Description
- Specifies how to represent the Kafka Record's Key in the output
- API Name
- key-format
- Default Value
- byte-array
- Allowable Values
-
- String
- Byte Array
- Record
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
- Dependencies
-
- Output Strategy is set to any of [USE_WRAPPER]
-
Key Record Reader
The Record Reader to use for parsing the Kafka Record's key into a Record
- Display Name
- Key Record Reader
- Description
- The Record Reader to use for parsing the Kafka Record's key into a Record
- API Name
- key-record-reader
- Service Interface
- org.apache.nifi.serialization.RecordReaderFactory
- Service Implementations
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- false
- Dependencies
-
- Key Format is set to any of [record]
-
Max Uncommitted Time
Specifies the maximum amount of time allowed to pass before offsets must be committed. This value impacts how often offsets will be committed. Committing offsets less often increases throughput but also increases the window of potential data duplication in the event of a rebalance or JVM restart between commits. This value is also related to maximum poll records and the use of a message demarcator. When using a message demarcator, we can have far more uncommitted messages than when we are not, as there is much less to keep track of in memory.
- Display Name
- Max Uncommitted Time
- Description
- Specifies the maximum amount of time allowed to pass before offsets must be committed. This value impacts how often offsets will be committed. Committing offsets less often increases throughput but also increases the window of potential data duplication in the event of a rebalance or JVM restart between commits. This value is also related to maximum poll records and the use of a message demarcator. When using a message demarcator, we can have far more uncommitted messages than when we are not, as there is much less to keep track of in memory.
- API Name
- max-uncommit-offset-wait
- Default Value
- 1 secs
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- false
- Dependencies
-
- Commit Offsets is set to any of [true]
-
Max Poll Records
Specifies the maximum number of records Kafka should return in a single poll.
- Display Name
- Max Poll Records
- Description
- Specifies the maximum number of records Kafka should return in a single poll.
- API Name
- max.poll.records
- Default Value
- 10000
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- false
-
Message Header Encoding
Any message header that is found on a Kafka message will be added to the outbound FlowFile as an attribute. This property indicates the Character Encoding to use for deserializing the headers.
- Display Name
- Message Header Encoding
- Description
- Any message header that is found on a Kafka message will be added to the outbound FlowFile as an attribute. This property indicates the Character Encoding to use for deserializing the headers.
- API Name
- message-header-encoding
- Default Value
- UTF-8
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- false
-
Output Strategy
The format used to output the Kafka record into a FlowFile record.
- Display Name
- Output Strategy
- Description
- The format used to output the Kafka record into a FlowFile record.
- API Name
- output-strategy
- Default Value
- USE_VALUE
- Allowable Values
-
- Use Content as Value
- Use Wrapper
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
Value Record Reader
The Record Reader to use for parsing the Kafka Record's value into a Record
- Display Name
- Value Record Reader
- Description
- The Record Reader to use for parsing the Kafka Record's value into a Record
- API Name
- record-reader
- Service Interface
- org.apache.nifi.serialization.RecordReaderFactory
- Service Implementations
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
Record Value Writer
The Record Writer to use in order to serialize the Kafka message data before writing it to a FlowFile
- Display Name
- Record Value Writer
- Description
- The Record Writer to use in order to serialize the Kafka message data before writing it to a FlowFile
- API Name
- record-writer
- Service Interface
- org.apache.nifi.serialization.RecordSetWriterFactory
- Service Implementations
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
Kerberos Service Name
The service name that matches the primary name of the Kafka server configured in the broker JAAS configuration
- Display Name
- Kerberos Service Name
- Description
- The service name that matches the primary name of the Kafka server configured in the broker JAAS configuration
- API Name
- sasl.kerberos.service.name
- Expression Language Scope
- Environment variables defined at JVM level and system properties
- Sensitive
- false
- Required
- false
-
SASL Mechanism
SASL mechanism used for authentication. Corresponds to Kafka Client sasl.mechanism property
- Display Name
- SASL Mechanism
- Description
- SASL mechanism used for authentication. Corresponds to Kafka Client sasl.mechanism property
- API Name
- sasl.mechanism
- Default Value
- GSSAPI
- Allowable Values
-
- GSSAPI
- PLAIN
- SCRAM-SHA-256
- SCRAM-SHA-512
- AWS_MSK_IAM
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
Password
Password provided with configured username when using PLAIN or SCRAM SASL Mechanisms
- Display Name
- Password
- Description
- Password provided with configured username when using PLAIN or SCRAM SASL Mechanisms
- API Name
- sasl.password
- Expression Language Scope
- Environment variables defined at JVM level and system properties
- Sensitive
- true
- Required
- false
- Dependencies
-
- SASL Mechanism is set to any of [PLAIN, SCRAM-SHA-512, SCRAM-SHA-256]
-
Token Authentication
Enables or disables Token authentication when using SCRAM SASL Mechanisms
- Display Name
- Token Authentication
- Description
- Enables or disables Token authentication when using SCRAM SASL Mechanisms
- API Name
- sasl.token.auth
- Default Value
- false
- Allowable Values
-
- true
- false
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- false
- Dependencies
-
- SASL Mechanism is set to any of [SCRAM-SHA-512, SCRAM-SHA-256]
-
Username
Username provided with configured password when using PLAIN or SCRAM SASL Mechanisms
- Display Name
- Username
- Description
- Username provided with configured password when using PLAIN or SCRAM SASL Mechanisms
- API Name
- sasl.username
- Expression Language Scope
- Environment variables defined at JVM level and system properties
- Sensitive
- false
- Required
- false
- Dependencies
-
- SASL Mechanism is set to any of [PLAIN, SCRAM-SHA-512, SCRAM-SHA-256]
-
Security Protocol
Security protocol used to communicate with brokers. Corresponds to Kafka Client security.protocol property
- Display Name
- Security Protocol
- Description
- Security protocol used to communicate with brokers. Corresponds to Kafka Client security.protocol property
- API Name
- security.protocol
- Default Value
- PLAINTEXT
- Allowable Values
-
- PLAINTEXT
- SSL
- SASL_PLAINTEXT
- SASL_SSL
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
Separate By Key
If true, two Records will only be added to the same FlowFile if both of the Kafka Messages have identical keys.
- Display Name
- Separate By Key
- Description
- If true, two Records will only be added to the same FlowFile if both of the Kafka Messages have identical keys.
- API Name
- separate-by-key
- Default Value
- false
- Allowable Values
-
- true
- false
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- false
-
SSL Context Service
Service supporting SSL communication with Kafka brokers
- Display Name
- SSL Context Service
- Description
- Service supporting SSL communication with Kafka brokers
- API Name
- ssl.context.service
- Service Interface
- org.apache.nifi.ssl.SSLContextService
- Service Implementations
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- false
-
Topic Name(s)
The name of the Kafka Topic(s) to pull from. More than one can be supplied if comma separated.
- Display Name
- Topic Name(s)
- Description
- The name of the Kafka Topic(s) to pull from. More than one can be supplied if comma separated.
- API Name
- topic
- Expression Language Scope
- Environment variables defined at JVM level and system properties
- Sensitive
- false
- Required
- true
-
Topic Name Format
Specifies whether the Topic(s) provided are a comma separated list of names or a single regular expression
- Display Name
- Topic Name Format
- Description
- Specifies whether the Topic(s) provided are a comma separated list of names or a single regular expression
- API Name
- topic_type
- Default Value
- names
- Allowable Values
-
- names
- pattern
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
Dynamic Properties
The name of a Kafka configuration property.
These properties will be added to the Kafka configuration after loading any provided configuration properties. In the event a dynamic property represents a property that was already set, its value will be ignored and a WARN message logged. For the list of available Kafka properties please refer to: http://kafka.apache.org/documentation.html#configuration.
- Name
- The name of a Kafka configuration property.
- Description
- These properties will be added to the Kafka configuration after loading any provided configuration properties. In the event a dynamic property represents a property that was already set, its value will be ignored and a WARN message logged. For the list of available Kafka properties please refer to: http://kafka.apache.org/documentation.html#configuration.
- Value
- The value of a given Kafka configuration property.
- Expression Language Scope
- ENVIRONMENT
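For instance, standard Kafka consumer settings that are not exposed as first-class properties can be passed through as dynamic properties; a small sketch with illustrative values (see the Kafka documentation link above for the authoritative list):
session.timeout.ms = 30000
max.poll.interval.ms = 300000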
Relationships
| Name | Description |
|---|---|
| success | FlowFiles received from Kafka. Depending on demarcation strategy it is a flow file per message or a bundle of messages grouped by topic and partition. |
| parse.failure | If a message from Kafka cannot be parsed using the configured Record Reader, the contents of the message will be routed to this Relationship as its own individual FlowFile. |
Writes Attributes
| Name | Description |
|---|---|
| record.count | The number of records received |
| mime.type | The MIME Type that is provided by the configured Record Writer |
| kafka.partition | The partition of the topic the records are from |
| kafka.timestamp | The timestamp of the message in the partition of the topic. |
| kafka.topic | The topic records are from |