-
Processors
-
AttributeRollingWindow 2.4.0.4.3.3.0-40
-
AttributesToCSV 2.4.0.4.3.3.0-40
-
AttributesToJSON 2.4.0.4.3.3.0-40
-
CalculateParquetOffsets 2.4.0.4.3.3.0-40
-
CalculateParquetRowGroupOffsets 2.4.0.4.3.3.0-40
-
CalculateRecordStats 2.4.0.4.3.3.0-40
-
CaptureChangeDebeziumDB2 2.4.0.4.3.3.0-40
-
CaptureChangeDebeziumMongoDB 2.4.0.4.3.3.0-40
-
CaptureChangeDebeziumMySQL 2.4.0.4.3.3.0-40
-
CaptureChangeDebeziumOracle 2.4.0.4.3.3.0-40
-
CaptureChangeDebeziumPostgreSQL 2.4.0.4.3.3.0-40
-
CaptureChangeDebeziumSQLServer 2.4.0.4.3.3.0-40
-
CaptureChangeMySQL 2.4.0.4.3.3.0-40
-
CompressContent 2.4.0.4.3.3.0-40
-
ConnectWebSocket 2.4.0.4.3.3.0-40
-
ConsumeAMQP 2.4.0.4.3.3.0-40
-
ConsumeAzureEventHub 2.4.0.4.3.3.0-40
-
ConsumeBoxEnterpriseEvents 2.4.0.4.3.3.0-40
-
ConsumeBoxEvents 2.4.0.4.3.3.0-40
-
ConsumeElasticsearch 2.4.0.4.3.3.0-40
-
ConsumeGCPubSub 2.4.0.4.3.3.0-40
-
ConsumeIMAP 2.4.0.4.3.3.0-40
-
ConsumeJMS 2.4.0.4.3.3.0-40
-
ConsumeKafka 2.4.0.4.3.3.0-40
-
ConsumeKafka_2_6 2.4.0.4.3.3.0-40
-
ConsumeKafka2CDP 2.4.0.4.3.3.0-40
-
ConsumeKafka2RecordCDP 2.4.0.4.3.3.0-40
-
ConsumeKafkaRecord_2_6 2.4.0.4.3.3.0-40
-
ConsumeKinesisStream 2.4.0.4.3.3.0-40
-
ConsumeMQTT 2.4.0.4.3.3.0-40
-
ConsumePLC 2.4.0.4.3.3.0-40
-
ConsumePOP3 2.4.0.4.3.3.0-40
-
ConsumeSlack 2.4.0.4.3.3.0-40
-
ConsumeTwitter 2.4.0.4.3.3.0-40
-
ConsumeWindowsEventLog 2.4.0.4.3.3.0-40
-
ControlRate 2.4.0.4.3.3.0-40
-
ConvertAvroToParquet 2.4.0.4.3.3.0-40
-
ConvertCharacterSet 2.4.0.4.3.3.0-40
-
ConvertProtobuf 2.4.0.4.3.3.0-40
-
ConvertRecord 2.4.0.4.3.3.0-40
-
CopyAzureBlobStorage_v12 2.4.0.4.3.3.0-40
-
CopyS3Object 2.4.0.4.3.3.0-40
-
CountText 2.4.0.4.3.3.0-40
-
CreateBoxFileMetadataInstance 2.4.0.4.3.3.0-40
-
CreateBoxMetadataTemplate 2.4.0.4.3.3.0-40
-
CreateHadoopSequenceFile 2.4.0.4.3.3.0-40
-
CryptographicHashContent 2.4.0.4.3.3.0-40
-
DebugFlow 2.4.0.4.3.3.0-40
-
DecryptContentAge 2.4.0.4.3.3.0-40
-
DecryptContentPGP 2.4.0.4.3.3.0-40
-
DeduplicateRecord 2.4.0.4.3.3.0-40
-
DeleteAzureBlobStorage_v12 2.4.0.4.3.3.0-40
-
DeleteAzureDataLakeStorage 2.4.0.4.3.3.0-40
-
DeleteBoxFileMetadataInstance 2.4.0.4.3.3.0-40
-
DeleteByQueryElasticsearch 2.4.0.4.3.3.0-40
-
DeleteCDPObjectStore 2.4.0.4.3.3.0-40
-
DeleteDynamoDB 2.4.0.4.3.3.0-40
-
DeleteFile 2.4.0.4.3.3.0-40
-
DeleteGCSObject 2.4.0.4.3.3.0-40
-
DeleteGridFS 2.4.0.4.3.3.0-40
-
DeleteHBaseCells 2.4.0.4.3.3.0-40
-
DeleteHBaseRow 2.4.0.4.3.3.0-40
-
DeleteHDFS 2.4.0.4.3.3.0-40
-
DeleteMongo 2.4.0.4.3.3.0-40
-
DeleteS3Object 2.4.0.4.3.3.0-40
-
DeleteSFTP 2.4.0.4.3.3.0-40
-
DeleteSQS 2.4.0.4.3.3.0-40
-
DetectDuplicate 2.4.0.4.3.3.0-40
-
DistributeLoad 2.4.0.4.3.3.0-40
-
DuplicateFlowFile 2.4.0.4.3.3.0-40
-
EncodeContent 2.4.0.4.3.3.0-40
-
EncryptContentAge 2.4.0.4.3.3.0-40
-
EncryptContentPGP 2.4.0.4.3.3.0-40
-
EnforceOrder 2.4.0.4.3.3.0-40
-
EvaluateJsonPath 2.4.0.4.3.3.0-40
-
EvaluateXPath 2.4.0.4.3.3.0-40
-
EvaluateXQuery 2.4.0.4.3.3.0-40
-
ExecuteGraphQuery 2.4.0.4.3.3.0-40
-
ExecuteGraphQueryRecord 2.4.0.4.3.3.0-40
-
ExecuteGroovyScript 2.4.0.4.3.3.0-40
-
ExecuteProcess 2.4.0.4.3.3.0-40
-
ExecuteScript 2.4.0.4.3.3.0-40
-
ExecuteSparkInteractive 2.4.0.4.3.3.0-40
-
ExecuteSQL 2.4.0.4.3.3.0-40
-
ExecuteSQLRecord 2.4.0.4.3.3.0-40
-
ExecuteStreamCommand 2.4.0.4.3.3.0-40
-
ExtractAvroMetadata 2.4.0.4.3.3.0-40
-
ExtractDocumentText 2.4.0.4.3.3.0-40
-
ExtractEmailAttachments 2.4.0.4.3.3.0-40
-
ExtractEmailHeaders 2.4.0.4.3.3.0-40
-
ExtractGrok 2.4.0.4.3.3.0-40
-
ExtractHL7Attributes 2.4.0.4.3.3.0-40
-
ExtractImageMetadata 2.4.0.4.3.3.0-40
-
ExtractMediaMetadata 2.4.0.4.3.3.0-40
-
ExtractRecordSchema 2.4.0.4.3.3.0-40
-
ExtractStructuredBoxFileMetadata 2.4.0.4.3.3.0-40
-
ExtractText 2.4.0.4.3.3.0-40
-
FetchAzureBlobStorage_v12 2.4.0.4.3.3.0-40
-
FetchAzureDataLakeStorage 2.4.0.4.3.3.0-40
-
FetchBoxFile 2.4.0.4.3.3.0-40
-
FetchBoxFileInfo 2.4.0.4.3.3.0-40
-
FetchBoxFileMetadataInstance 2.4.0.4.3.3.0-40
-
FetchBoxFileRepresentation 2.4.0.4.3.3.0-40
-
FetchCDPObjectStore 2.4.0.4.3.3.0-40
-
FetchDistributedMapCache 2.4.0.4.3.3.0-40
-
FetchDropbox 2.4.0.4.3.3.0-40
-
FetchFile 2.4.0.4.3.3.0-40
-
FetchFTP 2.4.0.4.3.3.0-40
-
FetchGCSObject 2.4.0.4.3.3.0-40
-
FetchGoogleDrive 2.4.0.4.3.3.0-40
-
FetchGridFS 2.4.0.4.3.3.0-40
-
FetchHBaseRow 2.4.0.4.3.3.0-40
-
FetchHDFS 2.4.0.4.3.3.0-40
-
FetchParquet 2.4.0.4.3.3.0-40
-
FetchPLC 2.4.0.4.3.3.0-40
-
FetchS3Object 2.4.0.4.3.3.0-40
-
FetchSFTP 2.4.0.4.3.3.0-40
-
FetchSmb 2.4.0.4.3.3.0-40
-
FilterAttribute 2.4.0.4.3.3.0-40
-
FlattenJson 2.4.0.4.3.3.0-40
-
ForkEnrichment 2.4.0.4.3.3.0-40
-
ForkRecord 2.4.0.4.3.3.0-40
-
GenerateFlowFile 2.4.0.4.3.3.0-40
-
GenerateRecord 2.4.0.4.3.3.0-40
-
GenerateTableFetch 2.4.0.4.3.3.0-40
-
GeoEnrichIP 2.4.0.4.3.3.0-40
-
GeoEnrichIPRecord 2.4.0.4.3.3.0-40
-
GeohashRecord 2.4.0.4.3.3.0-40
-
GetAsanaObject 2.4.0.4.3.3.0-40
-
GetAwsPollyJobStatus 2.4.0.4.3.3.0-40
-
GetAwsTextractJobStatus 2.4.0.4.3.3.0-40
-
GetAwsTranscribeJobStatus 2.4.0.4.3.3.0-40
-
GetAwsTranslateJobStatus 2.4.0.4.3.3.0-40
-
GetAzureEventHub 2.4.0.4.3.3.0-40
-
GetAzureQueueStorage_v12 2.4.0.4.3.3.0-40
-
GetBoxFileCollaborators 2.4.0.4.3.3.0-40
-
GetBoxGroupMembers 2.4.0.4.3.3.0-40
-
GetCouchbaseKey 2.4.0.4.3.3.0-40
-
GetDynamoDB 2.4.0.4.3.3.0-40
-
GetElasticsearch 2.4.0.4.3.3.0-40
-
GetFile 2.4.0.4.3.3.0-40
-
GetFileResource 2.4.0.4.3.3.0-40
-
GetFTP 2.4.0.4.3.3.0-40
-
GetGcpVisionAnnotateFilesOperationStatus 2.4.0.4.3.3.0-40
-
GetGcpVisionAnnotateImagesOperationStatus 2.4.0.4.3.3.0-40
-
GetHBase 2.4.0.4.3.3.0-40
-
GetHDFS 2.4.0.4.3.3.0-40
-
GetHDFSEvents 2.4.0.4.3.3.0-40
-
GetHDFSFileInfo 2.4.0.4.3.3.0-40
-
GetHDFSSequenceFile 2.4.0.4.3.3.0-40
-
GetHubSpot 2.4.0.4.3.3.0-40
-
GetJiraIssue 2.4.0.4.3.3.0-40
-
GetMongo 2.4.0.4.3.3.0-40
-
GetMongoRecord 2.4.0.4.3.3.0-40
-
GetS3ObjectMetadata 2.4.0.4.3.3.0-40
-
GetS3ObjectTags 2.4.0.4.3.3.0-40
-
GetSFTP 2.4.0.4.3.3.0-40
-
GetShopify 2.4.0.4.3.3.0-40
-
GetSlackReaction 2.4.0.4.3.3.0-40
-
GetSmbFile 2.4.0.4.3.3.0-40
-
GetSNMP 2.4.0.4.3.3.0-40
-
GetSnowflakeIngestStatus 2.4.0.4.3.3.0-40
-
GetSolr 2.4.0.4.3.3.0-40
-
GetSplunk 2.4.0.4.3.3.0-40
-
GetSQS 2.4.0.4.3.3.0-40
-
GetTCP 2.4.0.4.3.3.0-40
-
GetWorkdayReport 2.4.0.4.3.3.0-40
-
GetZendesk 2.4.0.4.3.3.0-40
-
HandleHttpRequest 2.4.0.4.3.3.0-40
-
HandleHttpResponse 2.4.0.4.3.3.0-40
-
IdentifyMimeType 2.4.0.4.3.3.0-40
-
InvokeGRPC 2.4.0.4.3.3.0-40
-
InvokeHTTP 2.4.0.4.3.3.0-40
-
InvokeScriptedProcessor 2.4.0.4.3.3.0-40
-
ISPEnrichIP 2.4.0.4.3.3.0-40
-
JoinEnrichment 2.4.0.4.3.3.0-40
-
JoltTransformJSON 2.4.0.4.3.3.0-40
-
JoltTransformRecord 2.4.0.4.3.3.0-40
-
JSLTTransformJSON 2.4.0.4.3.3.0-40
-
JsonQueryElasticsearch 2.4.0.4.3.3.0-40
-
ListAzureBlobStorage_v12 2.4.0.4.3.3.0-40
-
ListAzureDataLakeStorage 2.4.0.4.3.3.0-40
-
ListBoxFile 2.4.0.4.3.3.0-40
-
ListBoxFileInfo 2.4.0.4.3.3.0-40
-
ListBoxFileMetadataInstances 2.4.0.4.3.3.0-40
-
ListBoxFileMetadataTemplates 2.4.0.4.3.3.0-40
-
ListCDPObjectStore 2.4.0.4.3.3.0-40
-
ListDatabaseTables 2.4.0.4.3.3.0-40
-
ListDropbox 2.4.0.4.3.3.0-40
-
ListenBeats 2.4.0.4.3.3.0-40
-
ListenFTP 2.4.0.4.3.3.0-40
-
ListenGRPC 2.4.0.4.3.3.0-40
-
ListenHTTP 2.4.0.4.3.3.0-40
-
ListenNetFlow 2.4.0.4.3.3.0-40
-
ListenOTLP 2.4.0.4.3.3.0-40
-
ListenSlack 2.4.0.4.3.3.0-40
-
ListenSyslog 2.4.0.4.3.3.0-40
-
ListenTCP 2.4.0.4.3.3.0-40
-
ListenTrapSNMP 2.4.0.4.3.3.0-40
-
ListenUDP 2.4.0.4.3.3.0-40
-
ListenUDPRecord 2.4.0.4.3.3.0-40
-
ListenWebSocket 2.4.0.4.3.3.0-40
-
ListFile 2.4.0.4.3.3.0-40
-
ListFTP 2.4.0.4.3.3.0-40
-
ListGCSBucket 2.4.0.4.3.3.0-40
-
ListGoogleDrive 2.4.0.4.3.3.0-40
-
ListHBaseRegions 2.4.0.4.3.3.0-40
-
ListHDFS 2.4.0.4.3.3.0-40
-
ListS3 2.4.0.4.3.3.0-40
-
ListSFTP 2.4.0.4.3.3.0-40
-
ListSmb 2.4.0.4.3.3.0-40
-
LogAttribute 2.4.0.4.3.3.0-40
-
LogMessage 2.4.0.4.3.3.0-40
-
LookupAttribute 2.4.0.4.3.3.0-40
-
LookupRecord 2.4.0.4.3.3.0-40
-
MergeContent 2.4.0.4.3.3.0-40
-
MergeRecord 2.4.0.4.3.3.0-40
-
ModifyBytes 2.4.0.4.3.3.0-40
-
ModifyCompression 2.4.0.4.3.3.0-40
-
MonitorActivity 2.4.0.4.3.3.0-40
-
MoveAzureDataLakeStorage 2.4.0.4.3.3.0-40
-
MoveHDFS 2.4.0.4.3.3.0-40
-
Notify 2.4.0.4.3.3.0-40
-
PackageFlowFile 2.4.0.4.3.3.0-40
-
PaginatedJsonQueryElasticsearch 2.4.0.4.3.3.0-40
-
ParseEvtx 2.4.0.4.3.3.0-40
-
ParseNetflowv5 2.4.0.4.3.3.0-40
-
ParseSyslog 2.4.0.4.3.3.0-40
-
ParseSyslog5424 2.4.0.4.3.3.0-40
-
PartitionRecord 2.4.0.4.3.3.0-40
-
PublishAMQP 2.4.0.4.3.3.0-40
-
PublishGCPubSub 2.4.0.4.3.3.0-40
-
PublishJMS 2.4.0.4.3.3.0-40
-
PublishKafka 2.4.0.4.3.3.0-40
-
PublishKafka_2_6 2.4.0.4.3.3.0-40
-
PublishKafka2CDP 2.4.0.4.3.3.0-40
-
PublishKafka2RecordCDP 2.4.0.4.3.3.0-40
-
PublishKafkaRecord_2_6 2.4.0.4.3.3.0-40
-
PublishMQTT 2.4.0.4.3.3.0-40
-
PublishSlack 2.4.0.4.3.3.0-40
-
PutAccumuloRecord 2.4.0.4.3.3.0-40
-
PutAzureBlobStorage_v12 2.4.0.4.3.3.0-40
-
PutAzureCosmosDBRecord 2.4.0.4.3.3.0-40
-
PutAzureDataExplorer 2.4.0.4.3.3.0-40
-
PutAzureDataLakeStorage 2.4.0.4.3.3.0-40
-
PutAzureEventHub 2.4.0.4.3.3.0-40
-
PutAzureQueueStorage_v12 2.4.0.4.3.3.0-40
-
PutBigQuery 2.4.0.4.3.3.0-40
-
PutBoxFile 2.4.0.4.3.3.0-40
-
PutCassandraQL 2.4.0.4.3.3.0-40
-
PutCassandraRecord 2.4.0.4.3.3.0-40
-
PutCDPObjectStore 2.4.0.4.3.3.0-40
-
PutClouderaHiveQL 2.4.0.4.3.3.0-40
-
PutClouderaHiveStreaming 2.4.0.4.3.3.0-40
-
PutClouderaORC 2.4.0.4.3.3.0-40
-
PutCloudWatchMetric 2.4.0.4.3.3.0-40
-
PutCouchbaseKey 2.4.0.4.3.3.0-40
-
PutDatabaseRecord 2.4.0.4.3.3.0-40
-
PutDistributedMapCache 2.4.0.4.3.3.0-40
-
PutDropbox 2.4.0.4.3.3.0-40
-
PutDynamoDB 2.4.0.4.3.3.0-40
-
PutDynamoDBRecord 2.4.0.4.3.3.0-40
-
PutElasticsearchJson 2.4.0.4.3.3.0-40
-
PutElasticsearchRecord 2.4.0.4.3.3.0-40
-
PutEmail 2.4.0.4.3.3.0-40
-
PutFile 2.4.0.4.3.3.0-40
-
PutFTP 2.4.0.4.3.3.0-40
-
PutGCSObject 2.4.0.4.3.3.0-40
-
PutGoogleDrive 2.4.0.4.3.3.0-40
-
PutGridFS 2.4.0.4.3.3.0-40
-
PutHBaseCell 2.4.0.4.3.3.0-40
-
PutHBaseJSON 2.4.0.4.3.3.0-40
-
PutHBaseRecord 2.4.0.4.3.3.0-40
-
PutHDFS 2.4.0.4.3.3.0-40
-
PutIceberg 2.4.0.4.3.3.0-40
-
PutIcebergCDC 2.4.0.4.3.3.0-40
-
PutIoTDBRecord 2.4.0.4.3.3.0-40
-
PutJiraIssue 2.4.0.4.3.3.0-40
-
PutKinesisFirehose 2.4.0.4.3.3.0-40
-
PutKinesisStream 2.4.0.4.3.3.0-40
-
PutKudu 2.4.0.4.3.3.0-40
-
PutLambda 2.4.0.4.3.3.0-40
-
PutMongo 2.4.0.4.3.3.0-40
-
PutMongoBulkOperations 2.4.0.4.3.3.0-40
-
PutMongoRecord 2.4.0.4.3.3.0-40
-
PutParquet 2.4.0.4.3.3.0-40
-
PutPLC 2.4.0.4.3.3.0-40
-
PutRecord 2.4.0.4.3.3.0-40
-
PutRedisHashRecord 2.4.0.4.3.3.0-40
-
PutS3Object 2.4.0.4.3.3.0-40
-
PutSalesforceObject 2.4.0.4.3.3.0-40
-
PutSFTP 2.4.0.4.3.3.0-40
-
PutSmbFile 2.4.0.4.3.3.0-40
-
PutSnowflakeInternalStage 2.4.0.4.3.3.0-40
-
PutSNS 2.4.0.4.3.3.0-40
-
PutSolrContentStream 2.4.0.4.3.3.0-40
-
PutSolrRecord 2.4.0.4.3.3.0-40
-
PutSplunk 2.4.0.4.3.3.0-40
-
PutSplunkHTTP 2.4.0.4.3.3.0-40
-
PutSQL 2.4.0.4.3.3.0-40
-
PutSQS 2.4.0.4.3.3.0-40
-
PutSyslog 2.4.0.4.3.3.0-40
-
PutTCP 2.4.0.4.3.3.0-40
-
PutUDP 2.4.0.4.3.3.0-40
-
PutWebSocket 2.4.0.4.3.3.0-40
-
PutZendeskTicket 2.4.0.4.3.3.0-40
-
QueryAirtableTable 2.4.0.4.3.3.0-40
-
QueryAzureDataExplorer 2.4.0.4.3.3.0-40
-
QueryCassandra 2.4.0.4.3.3.0-40
-
QueryDatabaseTable 2.4.0.4.3.3.0-40
-
QueryDatabaseTableRecord 2.4.0.4.3.3.0-40
-
QueryIoTDBRecord 2.4.0.4.3.3.0-40
-
QueryRecord 2.4.0.4.3.3.0-40
-
QuerySalesforceObject 2.4.0.4.3.3.0-40
-
QuerySolr 2.4.0.4.3.3.0-40
-
QuerySplunkIndexingStatus 2.4.0.4.3.3.0-40
-
RemoveRecordField 2.4.0.4.3.3.0-40
-
RenameRecordField 2.4.0.4.3.3.0-40
-
ReplaceText 2.4.0.4.3.3.0-40
-
ReplaceTextWithMapping 2.4.0.4.3.3.0-40
-
ResizeImage 2.4.0.4.3.3.0-40
-
RetryFlowFile 2.4.0.4.3.3.0-40
-
RouteHL7 2.4.0.4.3.3.0-40
-
RouteOnAttribute 2.4.0.4.3.3.0-40
-
RouteOnContent 2.4.0.4.3.3.0-40
-
RouteText 2.4.0.4.3.3.0-40
-
RunMongoAggregation 2.4.0.4.3.3.0-40
-
SampleRecord 2.4.0.4.3.3.0-40
-
SawmillTransformJSON 2.4.0.4.3.3.0-40
-
SawmillTransformRecord 2.4.0.4.3.3.0-40
-
ScanAccumulo 2.4.0.4.3.3.0-40
-
ScanAttribute 2.4.0.4.3.3.0-40
-
ScanContent 2.4.0.4.3.3.0-40
-
ScanHBase 2.4.0.4.3.3.0-40
-
ScriptedFilterRecord 2.4.0.4.3.3.0-40
-
ScriptedPartitionRecord 2.4.0.4.3.3.0-40
-
ScriptedTransformRecord 2.4.0.4.3.3.0-40
-
ScriptedValidateRecord 2.4.0.4.3.3.0-40
-
SearchElasticsearch 2.4.0.4.3.3.0-40
-
SegmentContent 2.4.0.4.3.3.0-40
-
SelectClouderaHiveQL 2.4.0.4.3.3.0-40
-
SendTrapSNMP 2.4.0.4.3.3.0-40
-
SetSNMP 2.4.0.4.3.3.0-40
-
SignContentPGP 2.4.0.4.3.3.0-40
-
SplitAvro 2.4.0.4.3.3.0-40
-
SplitContent 2.4.0.4.3.3.0-40
-
SplitExcel 2.4.0.4.3.3.0-40
-
SplitJson 2.4.0.4.3.3.0-40
-
SplitPCAP 2.4.0.4.3.3.0-40
-
SplitRecord 2.4.0.4.3.3.0-40
-
SplitText 2.4.0.4.3.3.0-40
-
SplitXml 2.4.0.4.3.3.0-40
-
StartAwsPollyJob 2.4.0.4.3.3.0-40
-
StartAwsTextractJob 2.4.0.4.3.3.0-40
-
StartAwsTranscribeJob 2.4.0.4.3.3.0-40
-
StartAwsTranslateJob 2.4.0.4.3.3.0-40
-
StartGcpVisionAnnotateFilesOperation 2.4.0.4.3.3.0-40
-
StartGcpVisionAnnotateImagesOperation 2.4.0.4.3.3.0-40
-
StartSnowflakeIngest 2.4.0.4.3.3.0-40
-
TagS3Object 2.4.0.4.3.3.0-40
-
TailFile 2.4.0.4.3.3.0-40
-
TransformXml 2.4.0.4.3.3.0-40
-
TriggerClouderaHiveMetaStoreEvent 2.4.0.4.3.3.0-40
-
UnpackContent 2.4.0.4.3.3.0-40
-
UpdateAttribute 2.4.0.4.3.3.0-40
-
UpdateBoxFileMetadataInstance 2.4.0.4.3.3.0-40
-
UpdateByQueryElasticsearch 2.4.0.4.3.3.0-40
-
UpdateClouderaHiveTable 2.4.0.4.3.3.0-40
-
UpdateCounter 2.4.0.4.3.3.0-40
-
UpdateDatabaseTable 2.4.0.4.3.3.0-40
-
UpdateDeltaLakeTable 2.4.0.4.3.3.0-40
-
UpdateJiraIssue 2.4.0.4.3.3.0-40
-
UpdateRecord 2.4.0.4.3.3.0-40
-
ValidateCsv 2.4.0.4.3.3.0-40
-
ValidateJson 2.4.0.4.3.3.0-40
-
ValidateRecord 2.4.0.4.3.3.0-40
-
ValidateXml 2.4.0.4.3.3.0-40
-
VerifyContentMAC 2.4.0.4.3.3.0-40
-
VerifyContentPGP 2.4.0.4.3.3.0-40
-
Wait 2.4.0.4.3.3.0-40
-
-
Controller Services
-
AccumuloService 2.4.0.4.3.3.0-40
-
ActiveMQJMSConnectionFactoryProvider 2.4.0.4.3.3.0-40
-
ADLSCredentialsControllerService 2.4.0.4.3.3.0-40
-
ADLSCredentialsControllerServiceLookup 2.4.0.4.3.3.0-40
-
ADLSIDBrokerCloudCredentialsProviderControllerService 2.4.0.4.3.3.0-40
-
AmazonGlueSchemaRegistry 2.4.0.4.3.3.0-40
-
AmazonMSKConnectionService 2.4.0.4.3.3.0-40
-
ApicurioSchemaRegistry 2.4.0.4.3.3.0-40
-
AvroReader 2.4.0.4.3.3.0-40
-
AvroRecordSetWriter 2.4.0.4.3.3.0-40
-
AvroSchemaRegistry 2.4.0.4.3.3.0-40
-
AWSCredentialsProviderControllerService 2.4.0.4.3.3.0-40
-
AWSIDBrokerCloudCredentialsProviderControllerService 2.4.0.4.3.3.0-40
-
AzureBlobIDBrokerCloudCredentialsProviderControllerService 2.4.0.4.3.3.0-40
-
AzureBlobStorageFileResourceService 2.4.0.4.3.3.0-40
-
AzureCosmosDBClientService 2.4.0.4.3.3.0-40
-
AzureDataLakeStorageFileResourceService 2.4.0.4.3.3.0-40
-
AzureEventHubRecordSink 2.4.0.4.3.3.0-40
-
AzureServiceBusJMSConnectionFactoryProvider 2.4.0.4.3.3.0-40
-
AzureStorageCredentialsControllerService_v12 2.4.0.4.3.3.0-40
-
AzureStorageCredentialsControllerServiceLookup_v12 2.4.0.4.3.3.0-40
-
CassandraDistributedMapCache 2.4.0.4.3.3.0-40
-
CassandraSessionProvider 2.4.0.4.3.3.0-40
-
CdpCredentialsProviderControllerService 2.4.0.4.3.3.0-40
-
CdpOauth2AccessTokenProviderControllerService 2.4.0.4.3.3.0-40
-
CEFReader 2.4.0.4.3.3.0-40
-
CiscoEmblemSyslogMessageReader 2.4.0.4.3.3.0-40
-
ClouderaAttributeSchemaReferenceReader 2.4.0.4.3.3.0-40
-
ClouderaAttributeSchemaReferenceWriter 2.4.0.4.3.3.0-40
-
ClouderaEncodedSchemaReferenceReader 2.4.0.4.3.3.0-40
-
ClouderaEncodedSchemaReferenceWriter 2.4.0.4.3.3.0-40
-
ClouderaHiveConnectionPool 2.4.0.4.3.3.0-40
-
ClouderaHiveConnectionPoolLookup 2.4.0.4.3.3.0-40
-
ClouderaSchemaRegistry 2.4.0.4.3.3.0-40
-
CMLLookupService 2.4.0.4.3.3.0-40
-
ConfluentEncodedSchemaReferenceReader 2.4.0.4.3.3.0-40
-
ConfluentEncodedSchemaReferenceWriter 2.4.0.4.3.3.0-40
-
ConfluentSchemaRegistry 2.4.0.4.3.3.0-40
-
CouchbaseClusterService 2.4.0.4.3.3.0-40
-
CouchbaseKeyValueLookupService 2.4.0.4.3.3.0-40
-
CouchbaseMapCacheClient 2.4.0.4.3.3.0-40
-
CouchbaseRecordLookupService 2.4.0.4.3.3.0-40
-
CSVReader 2.4.0.4.3.3.0-40
-
CSVRecordLookupService 2.4.0.4.3.3.0-40
-
CSVRecordSetWriter 2.4.0.4.3.3.0-40
-
DatabaseRecordLookupService 2.4.0.4.3.3.0-40
-
DatabaseRecordSink 2.4.0.4.3.3.0-40
-
DatabaseTableSchemaRegistry 2.4.0.4.3.3.0-40
-
DBCPConnectionPool 2.4.0.4.3.3.0-40
-
DBCPConnectionPoolLookup 2.4.0.4.3.3.0-40
-
DeveloperBoxClientService 2.4.0.4.3.3.0-40
-
DistributedMapCacheLookupService 2.4.0.4.3.3.0-40
-
EBCDICRecordReader 2.4.0.4.3.3.0-40
-
ElasticSearchClientServiceImpl 2.4.0.4.3.3.0-40
-
ElasticSearchLookupService 2.4.0.4.3.3.0-40
-
ElasticSearchStringLookupService 2.4.0.4.3.3.0-40
-
EmailRecordSink 2.4.0.4.3.3.0-40
-
EmbeddedHazelcastCacheManager 2.4.0.4.3.3.0-40
-
ExcelReader 2.4.0.4.3.3.0-40
-
ExternalHazelcastCacheManager 2.4.0.4.3.3.0-40
-
FreeFormTextRecordSetWriter 2.4.0.4.3.3.0-40
-
GCPCredentialsControllerService 2.4.0.4.3.3.0-40
-
GCSFileResourceService 2.4.0.4.3.3.0-40
-
GenericPLC4XConnectionPool 2.4.0.4.3.3.0-40
-
GrokReader 2.4.0.4.3.3.0-40
-
HadoopCatalogService 2.4.0.4.3.3.0-40
-
HadoopDBCPConnectionPool 2.4.0.4.3.3.0-40
-
HazelcastMapCacheClient 2.4.0.4.3.3.0-40
-
HBase_2_ClientMapCacheService 2.4.0.4.3.3.0-40
-
HBase_2_ClientService 2.4.0.4.3.3.0-40
-
HBase_2_RecordLookupService 2.4.0.4.3.3.0-40
-
HikariCPConnectionPool 2.4.0.4.3.3.0-40
-
HiveCatalogService 2.4.0.4.3.3.0-40
-
HttpRecordSink 2.4.0.4.3.3.0-40
-
ImpalaConnectionPool 2.4.0.4.3.3.0-40
-
IPFIXReader 2.4.0.4.3.3.0-40
-
IPLookupService 2.4.0.4.3.3.0-40
-
JASN1Reader 2.4.0.4.3.3.0-40
-
JdbcCatalogService 2.4.0.4.3.3.0-40
-
JettyWebSocketClient 2.4.0.4.3.3.0-40
-
JettyWebSocketServer 2.4.0.4.3.3.0-40
-
JiraRecordSink 2.4.0.4.3.3.0-40
-
JMSConnectionFactoryProvider 2.4.0.4.3.3.0-40
-
JndiJmsConnectionFactoryProvider 2.4.0.4.3.3.0-40
-
JsonConfigBasedBoxClientService 2.4.0.4.3.3.0-40
-
JsonPathReader 2.4.0.4.3.3.0-40
-
JsonRecordSetWriter 2.4.0.4.3.3.0-40
-
JsonTreeReader 2.4.0.4.3.3.0-40
-
JWTBearerOAuth2AccessTokenProvider 2.4.0.4.3.3.0-40
-
Kafka3ConnectionService 2.4.0.4.3.3.0-40
-
KafkaRecordSink_2_6 2.4.0.4.3.3.0-40
-
KerberosKeytabUserService 2.4.0.4.3.3.0-40
-
KerberosPasswordUserService 2.4.0.4.3.3.0-40
-
KerberosTicketCacheUserService 2.4.0.4.3.3.0-40
-
KuduLookupService 2.4.0.4.3.3.0-40
-
LivySessionController 2.4.0.4.3.3.0-40
-
LoggingRecordSink 2.4.0.4.3.3.0-40
-
MapCacheClientService 2.4.0.4.3.3.0-40
-
MapCacheServer 2.4.0.4.3.3.0-40
-
MongoDBControllerService 2.4.0.4.3.3.0-40
-
MongoDBLookupService 2.4.0.4.3.3.0-40
-
Neo4JCypherClientService 2.4.0.4.3.3.0-40
-
ParquetReader 2.4.0.4.3.3.0-40
-
ParquetRecordSetWriter 2.4.0.4.3.3.0-40
-
PEMEncodedSSLContextProvider 2.4.0.4.3.3.0-40
-
PhoenixThickConnectionPool 2.4.0.4.3.3.0-40
-
PhoenixThinConnectionPool 2.4.0.4.3.3.0-40
-
PostgreSQLConnectionPool 2.4.0.4.3.3.0-40
-
PropertiesFileLookupService 2.4.0.4.3.3.0-40
-
ProtobufReader 2.4.0.4.3.3.0-40
-
ProxyPLC4XConnectionPool 2.4.0.4.3.3.0-40
-
RabbitMQJMSConnectionFactoryProvider 2.4.0.4.3.3.0-40
-
ReaderLookup 2.4.0.4.3.3.0-40
-
RecordSetWriterLookup 2.4.0.4.3.3.0-40
-
RecordSinkServiceLookup 2.4.0.4.3.3.0-40
-
RedisConnectionPoolService 2.4.0.4.3.3.0-40
-
RedisDistributedMapCacheClientService 2.4.0.4.3.3.0-40
-
RedshiftConnectionPool 2.4.0.4.3.3.0-40
-
RESTCatalogService 2.4.0.4.3.3.0-40
-
RestLookupService 2.4.0.4.3.3.0-40
-
S3FileResourceService 2.4.0.4.3.3.0-40
-
ScriptedLookupService 2.4.0.4.3.3.0-40
-
ScriptedReader 2.4.0.4.3.3.0-40
-
ScriptedRecordSetWriter 2.4.0.4.3.3.0-40
-
ScriptedRecordSink 2.4.0.4.3.3.0-40
-
SetCacheClientService 2.4.0.4.3.3.0-40
-
SetCacheServer 2.4.0.4.3.3.0-40
-
SimpleCsvFileLookupService 2.4.0.4.3.3.0-40
-
SimpleDatabaseLookupService 2.4.0.4.3.3.0-40
-
SimpleKeyValueLookupService 2.4.0.4.3.3.0-40
-
SimpleRedisDistributedMapCacheClientService 2.4.0.4.3.3.0-40
-
SimpleScriptedLookupService 2.4.0.4.3.3.0-40
-
SiteToSiteReportingRecordSink 2.4.0.4.3.3.0-40
-
SlackRecordSink 2.4.0.4.3.3.0-40
-
SmbjClientProviderService 2.4.0.4.3.3.0-40
-
SnowflakeComputingConnectionPool 2.4.0.4.3.3.0-40
-
StandardAsanaClientProviderService 2.4.0.4.3.3.0-40
-
StandardAzureCredentialsControllerService 2.4.0.4.3.3.0-40
-
StandardDatabaseDialectService 2.4.0.4.3.3.0-40
-
StandardDropboxCredentialService 2.4.0.4.3.3.0-40
-
StandardFileResourceService 2.4.0.4.3.3.0-40
-
StandardHashiCorpVaultClientService 2.4.0.4.3.3.0-40
-
StandardHttpContextMap 2.4.0.4.3.3.0-40
-
StandardJiraCredentialService 2.4.0.4.3.3.0-40
-
StandardJsonSchemaRegistry 2.4.0.4.3.3.0-40
-
StandardKustoIngestService 2.4.0.4.3.3.0-40
-
StandardKustoQueryService 2.4.0.4.3.3.0-40
-
StandardOauth2AccessTokenProvider 2.4.0.4.3.3.0-40
-
StandardPGPPrivateKeyService 2.4.0.4.3.3.0-40
-
StandardPGPPublicKeyService 2.4.0.4.3.3.0-40
-
StandardPLC4XConnectionPool 2.4.0.4.3.3.0-40
-
StandardPrivateKeyService 2.4.0.4.3.3.0-40
-
StandardProxyConfigurationService 2.4.0.4.3.3.0-40
-
StandardRestrictedSSLContextService 2.4.0.4.3.3.0-40
-
StandardS3EncryptionService 2.4.0.4.3.3.0-40
-
StandardSnowflakeIngestManagerProviderService 2.4.0.4.3.3.0-40
-
StandardSSLContextService 2.4.0.4.3.3.0-40
-
StandardWebClientServiceProvider 2.4.0.4.3.3.0-40
-
Syslog5424Reader 2.4.0.4.3.3.0-40
-
SyslogReader 2.4.0.4.3.3.0-40
-
TinkerpopClientService 2.4.0.4.3.3.0-40
-
UDPEventRecordSink 2.4.0.4.3.3.0-40
-
VolatileSchemaCache 2.4.0.4.3.3.0-40
-
WindowsEventLogReader 2.4.0.4.3.3.0-40
-
XMLFileLookupService 2.4.0.4.3.3.0-40
-
XMLReader 2.4.0.4.3.3.0-40
-
XMLRecordSetWriter 2.4.0.4.3.3.0-40
-
YamlTreeReader 2.4.0.4.3.3.0-40
-
ZendeskRecordSink 2.4.0.4.3.3.0-40
-
-
Reporting Tasks
-
AzureLogAnalyticsProvenanceReportingTask 2.4.0.4.3.3.0-40
-
AzureLogAnalyticsReportingTask 2.4.0.4.3.3.0-40
-
ControllerStatusReportingTask 2.4.0.4.3.3.0-40
-
MonitorDiskUsage 2.4.0.4.3.3.0-40
-
MonitorMemory 2.4.0.4.3.3.0-40
-
QueryNiFiReportingTask 2.4.0.4.3.3.0-40
-
ReportLineageToAtlas 2.4.0.4.3.3.0-40
-
ScriptedReportingTask 2.4.0.4.3.3.0-40
-
SiteToSiteBulletinReportingTask 2.4.0.4.3.3.0-40
-
SiteToSiteMetricsReportingTask 2.4.0.4.3.3.0-40
-
SiteToSiteProvenanceReportingTask 2.4.0.4.3.3.0-40
-
SiteToSiteStatusReportingTask 2.4.0.4.3.3.0-40
-
-
Parameter Providers
-
AwsSecretsManagerParameterProvider 2.4.0.4.3.3.0-40
-
AzureKeyVaultSecretsParameterProvider 2.4.0.4.3.3.0-40
-
CyberArkConjurParameterProvider 2.4.0.4.3.3.0-40
-
DatabaseParameterProvider 2.4.0.4.3.3.0-40
-
EnvironmentVariableParameterProvider 2.4.0.4.3.3.0-40
-
GcpSecretManagerParameterProvider 2.4.0.4.3.3.0-40
-
HashiCorpVaultParameterProvider 2.4.0.4.3.3.0-40
-
KubernetesSecretParameterProvider 2.4.0.4.3.3.0-40
-
OnePasswordParameterProvider 2.4.0.4.3.3.0-40
-
PropertiesFileParameterProvider 2.4.0.4.3.3.0-40
-
-
Flow Analysis Rules
-
DisallowComponentType 2.4.0.4.3.3.0-40
-
DisallowConsecutiveConnectionsWithRoundRobinLB 2.4.0.4.3.3.0-40
-
DisallowDeadEnd 2.4.0.4.3.3.0-40
-
DisallowDeprecatedProcessor 2.4.0.4.3.3.0-40
-
DisallowExtractTextForFullContent 2.4.0.4.3.3.0-40
-
RecommendRecordProcessor 2.4.0.4.3.3.0-40
-
RequireHandleHttpResponseAfterHandleHttpRequest 2.4.0.4.3.3.0-40
-
RequireMergeBeforePutIceberg 2.4.0.4.3.3.0-40
-
RequireServerSSLContextService 2.4.0.4.3.3.0-40
-
RestrictBackpressureSettings 2.4.0.4.3.3.0-40
-
RestrictComponentNaming 2.4.0.4.3.3.0-40
-
RestrictConcurrentTasksVsThreadPoolSizeInProcessors 2.4.0.4.3.3.0-40
-
RestrictFlowFileExpiration 2.4.0.4.3.3.0-40
-
RestrictProcessorConcurrency 2.4.0.4.3.3.0-40
-
RestrictSchedulingForListProcessors 2.4.0.4.3.3.0-40
-
RestrictThreadPoolSize 2.4.0.4.3.3.0-40
-
RestrictYieldDurationForConsumeKafkaProcessors 2.4.0.4.3.3.0-40
-
PublishKafkaRecord_2_6 2.4.0.4.3.3.0-40
- Bundle
- org.apache.nifi | nifi-kafka-2-6-nar
- Description
- Sends the contents of a FlowFile as individual records to Apache Kafka using the Kafka 2.6 Producer API. The contents of the FlowFile are expected to be record-oriented data that can be read by the configured Record Reader. The complementary NiFi processor for fetching messages is ConsumeKafkaRecord_2_6.
- Tags
- 2.6, Apache, Kafka, Message, PubSub, Put, Record, Send, avro, csv, json, logs
- Input Requirement
- REQUIRED
- Supports Sensitive Dynamic Properties
- false
-
Additional Details for PublishKafkaRecord_2_6 2.4.0.4.3.3.0-40
PublishKafkaRecord
Description
This Processor puts the contents of a FlowFile to a Topic in Apache Kafka using the KafkaProducer API available with Kafka 2.6. The contents of the incoming FlowFile will be read using the configured Record Reader. Each record will then be serialized using the configured Record Writer, and this serialized form will be the content of a Kafka message. This message is optionally assigned a key by using the <Message Key Field> property.
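For illustration only (a minimal sketch, assuming a JsonTreeReader as the Record Reader and a JsonRecordSetWriter as the Record Writer, as in the examples below; field names and values are placeholders), a single FlowFile whose content is the two-record JSON array
[ {"id": 1, "customer": "Acme"}, {"id": 2, "customer": "Globex"} ]
would be published as two separate Kafka messages: one with the value {"id":1,"customer":"Acme"} and one with the value {"id":2,"customer":"Globex"}.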
Security Configuration
The Security Protocol property allows the user to specify the protocol for communicating with the Kafka broker. The following sections describe each of the protocols in further detail.
PLAINTEXT
This option provides an unsecured connection to the broker, with no client authentication and no encryption. In order to use this option the broker must be configured with a listener of the form:
PLAINTEXT://host.name:port
SSL
This option provides an encrypted connection to the broker, with optional client authentication. In order to use this option the broker must be configured with a listener of the form:
SSL://host.name:port
In addition, the processor must have an SSL Context Service selected.
If the broker specifies ssl.client.auth=none, or does not specify ssl.client.auth, then the client will not be required to present a certificate. In this case, the SSL Context Service selected may specify only a truststore containing the public key of the certificate authority used to sign the broker’s key.
If the broker specifies ssl.client.auth=required then the client will be required to present a certificate. In this case, the SSL Context Service must also specify a keystore containing a client key, in addition to a truststore as described above.
SASL_PLAINTEXT
This option uses SASL with a PLAINTEXT transport layer to authenticate to the broker. In order to use this option the broker must be configured with a listener of the form:
SASL_PLAINTEXT://host.name:port
In addition, the Kerberos Service Name must be specified in the processor.
SASL_PLAINTEXT - GSSAPI
If the SASL mechanism is GSSAPI, then the client must provide a JAAS configuration to authenticate. The JAAS configuration can be provided by specifying the java.security.auth.login.config system property in NiFi’s bootstrap.conf, such as:
java.arg.16=-Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf
An example of the JAAS config file would be the following:
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/path/to/nifi.keytab"
    serviceName="kafka"
    principal="nifi@YOURREALM.COM";
};
NOTE: The serviceName in the JAAS file must match the Kerberos Service Name in the processor.
Alternatively, the JAAS configuration when using GSSAPI can be provided by specifying the Kerberos Principal and Kerberos Keytab directly in the processor properties. This will dynamically create a JAAS configuration like above, and will take precedence over the java.security.auth.login.config system property.
SASL_PLAINTEXT - PLAIN
If the SASL mechanism is PLAIN, then the client must provide a JAAS configuration to authenticate, but the JAAS configuration must use Kafka's PlainLoginModule. An example of the JAAS config file would be the following:
KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="nifi"
    password="nifi-password";
};
The JAAS configuration can be provided in either of the following ways:
- Specify the java.security.auth.login.config system property in NiFi's bootstrap.conf. This limits you to a single user credential across the cluster.
java.arg.16=-Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf
- Add the user-defined property 'sasl.jaas.config' in the processor configuration. This method allows multiple producers with different user credentials, or the flexibility to publish to multiple Kafka clusters.
sasl.jaas.config : org.apache.kafka.common.security.plain.PlainLoginModule required username="nifi" password="nifi-password";
NOTE: The dynamic properties of this processor are not secured and as a result the password entered when utilizing sasl.jaas.config will be stored in the flow.json.gz file in plain text, and will be saved to NiFi Registry if using versioned flows.
NOTE: It is not recommended to use a SASL mechanism of PLAIN with SASL_PLAINTEXT, as it would transmit the username and password unencrypted.
NOTE: The Kerberos Service Name is not required for the SASL mechanism of PLAIN. However, the processor will warn that this property must be a non-empty string; you can enter any placeholder value, such as "null".
NOTE: Using the PlainLoginModule will cause it to be registered in the JVM's static list of Providers, making it visible to components in other NARs that may access the providers. There is currently a known issue where Kafka processors using the PlainLoginModule will cause HDFS processors with Kerberos to no longer work.
SASL_PLAINTEXT - SCRAM
If the SASL mechanism is SCRAM, then the client must provide a JAAS configuration to authenticate, but the JAAS configuration must use Kafka's ScramLoginModule. Ensure that the 'SASL Mechanism' property is set to 'SCRAM-SHA-256' or 'SCRAM-SHA-512', based on the Kafka broker configuration. An example of the JAAS config file would be the following:
KafkaClient {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="nifi"
    password="nifi-password";
};
The JAAS configuration can be provided in either of the following ways:
- Specify the java.security.auth.login.config system property in NiFi's bootstrap.conf. This limits you to a single user credential across the cluster.
java.arg.16=-Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf
- Add the user-defined property 'sasl.jaas.config' in the processor configuration. This method allows multiple producers with different user credentials, or the flexibility to publish to multiple Kafka clusters.
sasl.jaas.config : org.apache.kafka.common.security.scram.ScramLoginModule required username="nifi" password="nifi-password";
NOTE: The dynamic properties of this processor are not secured and as a result the password entered when utilizing sasl.jaas.config will be stored in the flow.json.gz file in plain text, and will be saved to NiFi Registry if using versioned flows.
NOTE: The Kerberos Service Name is not required for the SASL mechanisms SCRAM-SHA-256 or SCRAM-SHA-512. However, the processor will warn that this property must be a non-empty string; you can enter any placeholder value, such as "null".
SASL_SSL
This option uses SASL with an SSL/TLS transport layer to authenticate to the broker. In order to use this option the broker must be configured with a listener of the form:
SASL_SSL://host.name:port
See the SASL_PLAINTEXT section for a description of how to provide the proper JAAS configuration depending on the SASL mechanism (GSSAPI or PLAIN).
See the SSL section for a description of how to configure the SSL Context Service based on the ssl.client.auth property.
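As an illustrative sketch only (host names, port, and credentials below are placeholders, not defaults), a SASL_SSL setup using SCRAM-SHA-512 would typically combine the following processor property values:

| Property | Example Value |
|---|---|
| Kafka Brokers | kafka-1.example.com:9093,kafka-2.example.com:9093 |
| Security Protocol | SASL_SSL |
| SASL Mechanism | SCRAM-SHA-512 |
| Username | nifi |
| Password | nifi-password |
| SSL Context Service | an SSL Context Service (for example, a StandardRestrictedSSLContextService) whose truststore contains the certificate authority that signed the broker certificates |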
Publish Strategy
This processor includes optional properties that control how a Kafka Record’s key and headers are determined:
- ‘Publish Strategy’
- ‘Record Key Writer’
‘Publish Strategy’ controls the mode used to convert the FlowFile record into a Kafka record.
- ‘Use Content as Record Value’ (the default) - the content of the FlowFile Record becomes the content of the Kafka record. The Kafka record’s key is determined by the ‘Message Key Field’ property, and the Kafka record’s headers are determined by the ‘Attributes to Send as Headers (Regex)’ property.
- ‘Use Wrapper’ - the content of the FlowFile record is a wrapper consisting of the Kafka record’s key, value, headers, and metadata (topic and partition).
If Publish Strategy is set to ‘Use Wrapper’, two additional processor configuration properties are made available: ‘Record Key Writer’ and ‘Record Metadata Strategy’.
The ‘Record Key Writer’ property determines the Record Writer that should be used to serialize the Kafka record’s key. This may be used to emit the key as JSON, Avro, XML, or some other data format. If this property is not set, and the NiFi Record indicates that the key itself is a Record, the FlowFile will be routed to the ‘failure’ relationship. If this property is not set and the NiFi Record has a Byte Array or a String (encoded in UTF-8 format), the Kafka record’s key will still be set accordingly.
The ‘Record Metadata Strategy’ specifies whether the Kafka topic and partition should come from the configured ‘Topic Name’ property and ‘Partition’ / ‘Partitioner class’ properties, or from the Record's optional metadata field. If the value is set to ‘Metadata From Record’, the incoming FlowFile record is expected to have a field named ‘metadata’. That field is expected to be a Record with a ‘topic’ and a ‘partition’ field. If these fields are missing or invalid, the processor's ‘Topic Name’ and ‘Partition’ / ‘Partitioner class’ properties will still be used.
Using the metadata field to convey the topic and partition has two advantages. Firstly, it pairs well with the ConsumeKafkaRecord_* processor, which produces this same schema. This means that if data is consumed from one topic and pushed to another topic (or Kafka cluster), the data can be easily pinned to the same partition and topic name. If the data should be pushed to a different topic, it can be easily updated using an UpdateRecord processor, for instance. Additionally, because a single FlowFile can be sent as a single Kafka transaction, this allows sending records to multiple Kafka topics in a single transaction.
Examples
The below examples illustrate what will be sent to Kafka, given different configurations and FlowFile contents. These examples all assume that JsonRecordSetWriter and JsonTreeReader will be used for the Record Readers and Writers.
Publish Strategy = ‘Use Content as Record Value’
Given the processor configuration:
| Processor Property | Configured Value |
|---|---|
| Message Key Field | account |
| Attributes to Send as Headers (Regex) | attribute.* |

And a FlowFile with the content:

{"address":"1234 First Street","zip":"12345","account":{"name":"Acme","number":"AC1234"}}

And attributes:

| Attribute Name | Attribute Value |
|---|---|
| attributeA | valueA |
| attributeB | valueB |
| otherAttribute | otherValue |

The record that is produced to Kafka will have the following characteristics:

| Record Key | Record Value | Record Headers |
|---|---|---|
| {"name":"Acme","number":"AC1234"} | {"address":"1234 First Street","zip":"12345","account":{"name":"Acme","number":"AC1234"}} | attributeA: valueA, attributeB: valueB |

Publish Strategy = ‘Use Wrapper’
When the Publish Strategy is configured to ‘Use Wrapper’, each FlowFile Record is expected to adhere to a specific schema. The Record must have three fields: key, value, and headers. There is a fourth, optional field named metadata. The key may be a String, a byte array, or a Record. The value can be any Record. The headers field is a Map where the values are Strings. The metadata field is a Record that has two fields of interest: topic and partition. If these fields are specified, they will take precedence over the configured ‘Topic Name’, ‘Partition’, and ‘Partitioner class’ processor properties.

Example 1 - Key as String
Given a FlowFile with the content:
{ "key": "Acme Holdings", "value": { "address": "1234 First Street", "zip": "12345", "account": { "name": "Acme", "number":"AC1234" } }, "headers": { "accountType": "enterprise", "test": "true" } }The record that is produced to Kafka will have the following characteristics:
Record Key Acme HoldingsRecord Value {"address":"1234 First Street","zip":"12345","account":{"name":"Acme","number":"AC1234"}}Record Headers Note that in this case, the headers and key come directly from the Record, not from FlowFile attributes. If there is a desire to include some FlowFile attributes in the headers, this should be accomplished by using a Processor upstream in order to inject those values into the
headersfield. For example, an UpdateRecord processor could be used to easily add new fields to theheadersMap.Example 2 - Key as Record
Additionally, we may choose to use a more complex value for the record key. The key itself may be a record. This is sometimes used to write the record key either as JSON or as Avro. In this example, we assume that the ‘Record Key Writer’ property is set to a JsonRecordSetWriter.
Given a FlowFile with the content:
{ "key": { "accountName": "Acme Holdings", "accountHolder": "John Doe", "accountId": "280182830-A009" }, "value": { "address": "1234 First Street", "zip": "12345", "account": { "name": "Acme", "number":"AC1234" } } }The record that is produced to Kafka will have the following characteristics:
Record Key {"accountName":"Acme Holdings","accountHolder":"John Doe","accountId":"280182830-A009"}Record Value {"address":"1234 First Street","zip":"12345","account":{"name":"Acme","number":"AC1234"}}Record Headers Note here that the Record Key is JSON, as the ‘Record Key Writer’ property is configured to write JSON. it could just as easily be Avro.
Also note that if the ‘Record Key Writer’ had not been set, the FlowFile would have been routed to the ‘failure’ relationship because the key is a Record.
Finally, note here that the headers field is missing. This is acceptable and no headers will be added to the Kafka record.

Example 3 - Key as Byte Array
We can also have a Record whose key field is an array of bytes. In this case, the ‘Record Key Writer’ property is not used.

Given a FlowFile with the content:

{ "key": [65, 27, 10, 20, 11, 57, 88, 19, 65], "value": { "address": "1234 First Street", "zip": "12345", "account": { "name": "Acme", "number":"AC1234" } }, "otherField": { "a": "b" } }

The record that is produced to Kafka will have the following characteristics:

| Record Key | Record Value | Record Headers |
|---|---|---|
| 0x411b0a140b39581341 | {"address":"1234 First Street","zip":"12345","account":{"name":"Acme","number":"AC1234"}} | |

In this case, the byte array that is specified for the key is provided to the Kafka Record as a byte array without changes (in the table, it is simply represented as Hex).

Finally, note here that the headers field is missing and an extraneous field, otherField, is present. This is acceptable and no headers will be added to the Kafka record. The otherField is simply ignored.

Example 4 - No Key
We can also have a Record whose key field is null or missing. In this case, the ‘Record Key Writer’ property is not used.

Given a FlowFile with the content:

{ "value": { "address": "1234 First Street", "zip": "12345", "account": { "name": "Acme", "number":"AC1234" } }, "headers": { "a": "b", "c": { "d": "e" } } }

The record that is produced to Kafka will have the following characteristics:

| Record Key | Record Value | Record Headers |
|---|---|---|
| | {"address":"1234 First Street","zip":"12345","account":{"name":"Acme","number":"AC1234"}} | a: b, c: MapRecord[{d=e}] |

In this case, the key is not present, so the Kafka record that is produced has no key associated with it.

Note also that the headers field has the expected value for the a header, but the c header has an expected value of MapRecord[{d=e}]. This is because the headers field is expected always to be a Map with String values. By providing a Record for the c element, we have violated the contract. NiFi attempts to compensate for this by creating a String representation of the Record, even if it is unlikely to be the representation that the user expects.

Example 5 - Topic provided in Record
If the Metadata field is provided in the FlowFile’s Record, it will be used to determine the Topic and the Partition that the Records are written to.
Given a FlowFile with the content:
{ "value": { "address": "1234 First Street", "zip": "12345", "account": { "name": "Acme", "number":"AC1234" } }, "headers": { "a": "b" }, "metadata": { "topic": "topic1" } }And considering that the processor properties are configured as:
Property Name Property Value Topic Name My Topic Partition 2 Record Metadata Strategy Metadata From Record The record that is produced to Kafka will have the following characteristics:
Kafka Topic topic1 Topic Partition 2 Record Key Record Value {"address":"1234 First Street","zip":"12345","account":{"name":"Acme","number":"AC1234"}}Record Headers Note that the topic name comes directly from the FlowFile record, and the configured topic name (“My Topic”) is ignored. However, if either the “metadata” field or its “topic” sub-field were missing, the configured topic name (“My Topic”) would be used.
Example 6 - Partition provided in Record
Given a FlowFile with the content:
{ "value": { "address": "1234 First Street", "zip": "12345", "account": { "name": "Acme", "number":"AC1234" } }, "headers": { "a": "b" }, "metadata": { "partition": 6 } }And considering that the processor properties are configured as:
Property Name Property Value Topic Name My Topic Partition 2 Record Metadata Strategy Metadata From Record The record that is produced to Kafka will have the following characteristics:
Kafka Topic My Topic Topic Partition 6 Record Key Record Value {"address":"1234 First Street","zip":"12345","account":{"name":"Acme","number":"AC1234"}}Record Headers Example 7 - Topic and Partition provided in Record
If the Metadata field is provided in the FlowFile’s Record, it will be used to determine the Topic and the Partition that the Records are written to.
Given a FlowFile with the content:
{ "value": { "address": "1234 First Street", "zip": "12345", "account": { "name": "Acme", "number":"AC1234" } }, "headers": { "a": "b" }, "metadata": { "topic": "topic1", "partition": 0 } }And considering that the processor properties are configured as:
Property Name Property Value Topic Name My Topic Partition 2 Record Metadata Strategy Metadata From Record The record that is produced to Kafka will have the following characteristics:
Kafka Topic topic1 Topic Partition 0 Record Key Record Value {"address":"1234 First Street","zip":"12345","account":{"name":"Acme","number":"AC1234"}}Record Headers In this case, both the topic name and the partition are explicitly defined within the incoming Record, and those will be used.
Example 8 - Invalid metadata provided in Record
Given a FlowFile with the content:
{ "value": { "address": "1234 First Street", "zip": "12345", "account": { "name": "Acme", "number":"AC1234" } }, "headers": { "a": "b" }, "metadata": "hello" }And considering that the processor properties are configured as:
Property Name Property Value Topic Name My Topic Partition 2 Record Metadata Strategy Metadata From Record The record that is produced to Kafka will have the following characteristics:
Kafka Topic My Topic Topic Partition 2 Record Key Record Value {"address":"1234 First Street","zip":"12345","account":{"name":"Acme","number":"AC1234"}}Record Headers In this case, the “metadata” field in the Record is ignored because it is not itself a Record.
Example 9 - Use Configured Values for Metadata
Given a FlowFile with the content:
{ "value": { "address": "1234 First Street", "zip": "12345", "account": { "name": "Acme", "number":"AC1234" } }, "headers": { "a": "b" }, "metadata": { "topic": "topic1", "partition": 6 } }And considering that the processor properties are configured as:
Property Name Property Value Topic Name My Topic Partition 2 Record Metadata Strategy Use Configured Values The record that is produced to Kafka will have the following characteristics:
Kafka Topic My Topic Topic Partition 2 Record Key Record Value {"address":"1234 First Street","zip":"12345","account":{"name":"Acme","number":"AC1234"}}Record Headers In this case, the “metadata” field specifies both the topic and the partition. However, it is ignored in favor of the processor properties ‘Topic’ and ‘Partition’ because the property ‘Record Metadata Strategy’ is set to ‘Use Configured Values’.
-
Acknowledgment Wait Time
After sending a message to Kafka, this indicates the amount of time that we are willing to wait for a response from Kafka. If Kafka does not acknowledge the message within this time period, the FlowFile will be routed to 'failure'.
- Display Name
- Acknowledgment Wait Time
- Description
- After sending a message to Kafka, this indicates the amount of time that we are willing to wait for a response from Kafka. If Kafka does not acknowledge the message within this time period, the FlowFile will be routed to 'failure'.
- API Name
- ack.wait.time
- Default Value
- 5 secs
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
Delivery Guarantee
Specifies the requirement for guaranteeing that a message is sent to Kafka. Corresponds to Kafka's 'acks' property.
- Display Name
- Delivery Guarantee
- Description
- Specifies the requirement for guaranteeing that a message is sent to Kafka. Corresponds to Kafka's 'acks' property.
- API Name
- acks
- Default Value
- all
- Allowable Values
-
- Best Effort
- Guarantee Single Node Delivery
- Guarantee Replicated Delivery
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
Attributes to Send as Headers (Regex)
A Regular Expression that is matched against all FlowFile attribute names. Any attribute whose name matches the regex will be added to the Kafka messages as a Header. If not specified, no FlowFile attributes will be added as headers.
- Display Name
- Attributes to Send as Headers (Regex)
- Description
- A Regular Expression that is matched against all FlowFile attribute names. Any attribute whose name matches the regex will be added to the Kafka messages as a Header. If not specified, no FlowFile attributes will be added as headers.
- API Name
- attribute-name-regex
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- false
- Dependencies
-
- Publish Strategy is set to any of [USE_VALUE]
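For example (the attribute names shown are illustrative, not prescribed), setting this property to header\..* would add attributes such as header.source and header.priority to each Kafka message as headers, while attributes like filename would not be sent.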
-
AWS Profile Name
The Amazon Web Services Profile to select when multiple profiles are available.
- Display Name
- AWS Profile Name
- Description
- The Amazon Web Services Profile to select when multiple profiles are available.
- API Name
- aws.profile.name
- Expression Language Scope
- Environment variables and FlowFile Attributes
- Sensitive
- false
- Required
- false
- Dependencies
-
- SASL Mechanism is set to any of [AWS_MSK_IAM]
-
Kafka Brokers
Comma-separated list of Kafka Brokers in the format host:port
- Display Name
- Kafka Brokers
- Description
- Comma-separated list of Kafka Brokers in the format host:port
- API Name
- bootstrap.servers
- Default Value
- localhost:9092
- Expression Language Scope
- Environment variables defined at JVM level and system properties
- Sensitive
- false
- Required
- true
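For example (placeholder host names), a three-node cluster could be specified as kafka-1.example.com:9092,kafka-2.example.com:9092,kafka-3.example.com:9092.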
-
Compression Type
This parameter allows you to specify the compression codec for all data generated by this producer.
- Display Name
- Compression Type
- Description
- This parameter allows you to specify the compression codec for all data generated by this producer.
- API Name
- compression.type
- Default Value
- none
- Allowable Values
-
- none
- gzip
- snappy
- lz4
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
Failure Strategy
Specifies how the processor handles a FlowFile if it is unable to publish the data to Kafka
- Display Name
- Failure Strategy
- Description
- Specifies how the processor handles a FlowFile if it is unable to publish the data to Kafka
- API Name
- Failure Strategy
- Default Value
- Route to Failure
- Allowable Values
-
- Route to Failure
- Rollback
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
Kerberos User Service
Service supporting user authentication with Kerberos
- Display Name
- Kerberos User Service
- Description
- Service supporting user authentication with Kerberos
- API Name
- kerberos-user-service
- Service Interface
- org.apache.nifi.kerberos.SelfContainedKerberosUserService
- Service Implementations
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- false
-
Max Metadata Wait Time
The amount of time the publisher will wait to obtain metadata or for the buffer to flush during the 'send' call before failing the entire 'send' call. Corresponds to Kafka's 'max.block.ms' property.
- Display Name
- Max Metadata Wait Time
- Description
- The amount of time the publisher will wait to obtain metadata or for the buffer to flush during the 'send' call before failing the entire 'send' call. Corresponds to Kafka's 'max.block.ms' property.
- API Name
- max.block.ms
- Default Value
- 5 sec
- Expression Language Scope
- Environment variables defined at JVM level and system properties
- Sensitive
- false
- Required
- true
-
Max Request Size
The maximum size of a request in bytes. Corresponds to Kafka's 'max.request.size' property and defaults to 1 MB (1048576).
- Display Name
- Max Request Size
- Description
- The maximum size of a request in bytes. Corresponds to Kafka's 'max.request.size' property and defaults to 1 MB (1048576).
- API Name
- max.request.size
- Default Value
- 1 MB
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
Message Header Encoding
For any attribute that is added as a message header, as configured via the <Attributes to Send as Headers> property, this property indicates the Character Encoding to use for serializing the headers.
- Display Name
- Message Header Encoding
- Description
- For any attribute that is added as a message header, as configured via the <Attributes to Send as Headers> property, this property indicates the Character Encoding to use for serializing the headers.
- API Name
- message-header-encoding
- Default Value
- UTF-8
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- false
-
Message Key Field
The name of a field in the Input Records that should be used as the Key for the Kafka message.
- Display Name
- Message Key Field
- Description
- The name of a field in the Input Records that should be used as the Key for the Kafka message.
- API Name
- message-key-field
- Expression Language Scope
- Environment variables and FlowFile Attributes
- Sensitive
- false
- Required
- false
- Dependencies
-
- Publish Strategy is set to any of [USE_VALUE]
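As a brief illustration based on the ‘Use Content as Record Value’ example above: with Message Key Field set to account, the key of the Kafka message produced for the record {"address":"1234 First Street","zip":"12345","account":{"name":"Acme","number":"AC1234"}} is derived from the account field, i.e. {"name":"Acme","number":"AC1234"}.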
-
Partition
Specifies which Partition Records will go to. How this value is interpreted is dictated by the <Partitioner class> property.
- Display Name
- Partition
- Description
- Specifies which Partition Records will go to. How this value is interpreted is dictated by the <Partitioner class> property.
- API Name
- partition
- Expression Language Scope
- Environment variables and FlowFile Attributes
- Sensitive
- false
- Required
- false
-
Partitioner class
Specifies which class to use to compute a partition id for a message. Corresponds to Kafka's 'partitioner.class' property.
- Display Name
- Partitioner class
- Description
- Specifies which class to use to compute a partition id for a message. Corresponds to Kafka's 'partitioner.class' property.
- API Name
- partitioner.class
- Default Value
- org.apache.kafka.clients.producer.internals.DefaultPartitioner
- Allowable Values
-
- RoundRobinPartitioner
- DefaultPartitioner
- RecordPath Partitioner
- Expression Language Partitioner
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- false
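As a hedged sketch of how these options are typically combined (the RecordPath and attribute names are illustrative): selecting the RecordPath Partitioner and setting the Partition property to a RecordPath such as /account/number would partition records by the value of that field, so records with the same account number land in the same partition; selecting the Expression Language Partitioner and setting Partition to an expression such as ${kafka.partition} would derive the partition from a FlowFile attribute instead. With the default DefaultPartitioner, the Partition property can normally be left unset.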
-
Publish Strategy
The format used to publish the incoming FlowFile record to Kafka.
- Display Name
- Publish Strategy
- Description
- The format used to publish the incoming FlowFile record to Kafka.
- API Name
- publish-strategy
- Default Value
- USE_VALUE
- Allowable Values
-
- Use Content as Record Value
- Use Wrapper
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
Record Metadata Strategy
Specifies whether the Record's metadata (topic and partition) should come from the Record's metadata field or if it should come from the configured Topic Name and Partition / Partitioner class properties
- Display Name
- Record Metadata Strategy
- Description
- Specifies whether the Record's metadata (topic and partition) should come from the Record's metadata field or if it should come from the configured Topic Name and Partition / Partitioner class properties
- API Name
- Record Metadata Strategy
- Default Value
- Use Configured Values
- Allowable Values
-
- Use Configured Values
- Metadata From Record
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
- Dependencies
-
- Publish Strategy is set to any of [USE_WRAPPER]
-
Record Key Writer
The Record Key Writer to use for outgoing FlowFiles
- Display Name
- Record Key Writer
- Description
- The Record Key Writer to use for outgoing FlowFiles
- API Name
- record-key-writer
- Service Interface
- org.apache.nifi.serialization.RecordSetWriterFactory
- Service Implementations
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- false
- Dependencies
-
- Publish Strategy is set to any of [USE_WRAPPER]
-
Record Reader
The Record Reader to use for incoming FlowFiles
- Display Name
- Record Reader
- Description
- The Record Reader to use for incoming FlowFiles
- API Name
- record-reader
- Service Interface
- org.apache.nifi.serialization.RecordReaderFactory
- Service Implementations
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
Record Writer
The Record Writer to use in order to serialize the data before sending to Kafka
- Display Name
- Record Writer
- Description
- The Record Writer to use in order to serialize the data before sending to Kafka
- API Name
- record-writer
- Service Interface
- org.apache.nifi.serialization.RecordSetWriterFactory
- Service Implementations
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
Kerberos Service Name
The service name that matches the primary name of the Kafka server configured in the broker JAAS configuration
- Display Name
- Kerberos Service Name
- Description
- The service name that matches the primary name of the Kafka server configured in the broker JAAS configuration
- API Name
- sasl.kerberos.service.name
- Expression Language Scope
- Environment variables defined at JVM level and system properties
- Sensitive
- false
- Required
- false
-
SASL Mechanism
SASL mechanism used for authentication. Corresponds to Kafka Client sasl.mechanism property
- Display Name
- SASL Mechanism
- Description
- SASL mechanism used for authentication. Corresponds to Kafka Client sasl.mechanism property
- API Name
- sasl.mechanism
- Default Value
- GSSAPI
- Allowable Values
-
- GSSAPI
- PLAIN
- SCRAM-SHA-256
- SCRAM-SHA-512
- AWS_MSK_IAM
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
Password
Password provided with configured username when using PLAIN or SCRAM SASL Mechanisms
- Display Name
- Password
- Description
- Password provided with configured username when using PLAIN or SCRAM SASL Mechanisms
- API Name
- sasl.password
- Expression Language Scope
- Environment variables defined at JVM level and system properties
- Sensitive
- true
- Required
- false
- Dependencies
-
- SASL Mechanism is set to any of [PLAIN, SCRAM-SHA-512, SCRAM-SHA-256]
-
Token Authentication
Enables or disables Token authentication when using SCRAM SASL Mechanisms
- Display Name
- Token Authentication
- Description
- Enables or disables Token authentication when using SCRAM SASL Mechanisms
- API Name
- sasl.token.auth
- Default Value
- false
- Allowable Values
-
- true
- false
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- false
- Dependencies
-
- SASL Mechanism is set to any of [SCRAM-SHA-512, SCRAM-SHA-256]
-
Username
Username provided with configured password when using PLAIN or SCRAM SASL Mechanisms
- Display Name
- Username
- Description
- Username provided with configured password when using PLAIN or SCRAM SASL Mechanisms
- API Name
- sasl.username
- Expression Language Scope
- Environment variables defined at JVM level and system properties
- Sensitive
- false
- Required
- false
- Dependencies
-
- SASL Mechanism is set to any of [PLAIN, SCRAM-SHA-512, SCRAM-SHA-256]
-
Security Protocol
Security protocol used to communicate with brokers. Corresponds to Kafka Client security.protocol property
- Display Name
- Security Protocol
- Description
- Security protocol used to communicate with brokers. Corresponds to Kafka Client security.protocol property
- API Name
- security.protocol
- Default Value
- PLAINTEXT
- Allowable Values
-
- PLAINTEXT
- SSL
- SASL_PLAINTEXT
- SASL_SSL
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
SSL Context Service
Service supporting SSL communication with Kafka brokers
- Display Name
- SSL Context Service
- Description
- Service supporting SSL communication with Kafka brokers
- API Name
- ssl.context.service
- Service Interface
- org.apache.nifi.ssl.SSLContextService
- Service Implementations
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- false
-
Topic Name
The name of the Kafka Topic to publish to.
- Display Name
- Topic Name
- Description
- The name of the Kafka Topic to publish to.
- API Name
- topic
- Expression Language Scope
- Environment variables and FlowFile Attributes
- Sensitive
- false
- Required
- true
-
Transactional Id Prefix
When Use Transactions is set to true, KafkaProducer config 'transactional.id' will be a generated UUID and will be prefixed with this string.
- Display Name
- Transactional Id Prefix
- Description
- When Use Transactions is set to true, KafkaProducer config 'transactional.id' will be a generated UUID and will be prefixed with this string.
- API Name
- transactional-id-prefix
- Expression Language Scope
- Environment variables defined at JVM level and system properties
- Sensitive
- false
- Required
- false
- Dependencies
-
- Use Transactions is set to any of [true]
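For example (the prefix value is illustrative), setting Transactional Id Prefix to nifi-publish- would result in transactional ids of the form nifi-publish-<generated UUID>.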
-
Use Transactions
Specifies whether or not NiFi should provide Transactional guarantees when communicating with Kafka. If there is a problem sending data to Kafka, and this property is set to false, then the messages that have already been sent to Kafka will continue on and be delivered to consumers. If this is set to true, then the Kafka transaction will be rolled back so that those messages are not available to consumers. Setting this to true requires that the <Delivery Guarantee> property be set to "Guarantee Replicated Delivery."
- Display Name
- Use Transactions
- Description
- Specifies whether or not NiFi should provide Transactional guarantees when communicating with Kafka. If there is a problem sending data to Kafka, and this property is set to false, then the messages that have already been sent to Kafka will continue on and be delivered to consumers. If this is set to true, then the Kafka transaction will be rolled back so that those messages are not available to consumers. Setting this to true requires that the <Delivery Guarantee> property be set to "Guarantee Replicated Delivery."
- API Name
- use-transactions
- Default Value
- true
- Allowable Values
-
- true
- false
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
The name of a Kafka configuration property.
These properties will be added to the Kafka configuration after loading any provided configuration properties. In the event a dynamic property represents a property that was already set, its value will be ignored and a WARN message logged. For the list of available Kafka properties, please refer to: http://kafka.apache.org/documentation.html#configuration.
- Name
- The name of a Kafka configuration property.
- Description
- These properties will be added to the Kafka configuration after loading any provided configuration properties. In the event a dynamic property represents a property that was already set, its value will be ignored and a WARN message logged. For the list of available Kafka properties, please refer to: http://kafka.apache.org/documentation.html#configuration.
- Value
- The value of a given Kafka configuration property.
- Expression Language Scope
- Environment variables defined at JVM level and system properties
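For example (the values shown are illustrative), adding a dynamic property named linger.ms with a value of 100, or client.id with a value of nifi-publisher, passes those settings directly to the underlying Kafka producer configuration; the full list of supported property names is documented in the Kafka producer configuration reference linked above.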
Relationships

| Name | Description |
|---|---|
| failure | Any FlowFile that cannot be sent to Kafka will be routed to this Relationship |
| success | FlowFiles for which all content was sent to Kafka. |
Writes Attributes

| Name | Description |
|---|---|
| msg.count | The number of messages that were sent to Kafka for this FlowFile. This attribute is added only to FlowFiles that are routed to success. |