JDBC Connection Pool | JDBC Connection Pool | | | Controller Service API: DBCPService. Implementations: DBCPConnectionPool, PostgreSQLConnectionPool, DBCPConnectionPoolLookup, HikariCPConnectionPool, Hive3ConnectionPool, RedshiftConnectionPool, SnowflakeComputingConnectionPool, HiveConnectionPool, HadoopDBCPConnectionPool. Specifies the JDBC Connection Pool to use in order to convert the JSON message to a SQL statement. The Connection Pool is necessary in order to determine the appropriate database column types. |
SQL Statement | putsql-sql-statement | | | The SQL statement to execute. The statement can be empty, a constant value, or built from attributes using Expression Language. If this property is specified, it will be used regardless of the content of incoming FlowFiles. If this property is empty, the content of the incoming FlowFile is expected to contain a valid SQL statement to be issued by the processor to the database. Supports Expression Language: true (will be evaluated using flow file attributes and variable registry) |
Support Fragmented Transactions | Support Fragmented Transactions | true | | If true, when a FlowFile is consumed by this Processor, the Processor will first check the fragment.identifier and fragment.count attributes of that FlowFile. If the fragment.count value is greater than 1, the Processor will not process any FlowFile with that fragment.identifier until all are available; at that point, it will process all FlowFiles with that fragment.identifier as a single transaction, in the order specified by the FlowFiles' fragment.index attributes (a rough JDBC sketch of this follows the table). This provides atomicity for those SQL statements: if any statement in the transaction throws an exception during execution, the entire transaction is rolled back and none of the FlowFiles are routed to 'success'. If <Rollback On Failure> is set to true, the FlowFiles will stay in the input relationship; if it is set to false, all of the FlowFiles will be routed to 'retry' when any of them would be routed to 'retry', and otherwise to 'failure'. If this value is false, these attributes will be ignored and the updates will occur independent of one another. |
Database Session AutoCommit | database-session-autocommit | false | | The autocommit mode to set on the database connection being used. If set to false, the operation(s) will be explicitly committed or rolled back (based on success or failure, respectively); if set to true, the driver/database handles the commit/rollback. |
Transaction Timeout | Transaction Timeout | | | If the <Support Fragmented Transactions> property is set to true, specifies how long to wait for all FlowFiles for a particular fragment.identifier attribute to arrive before transferring all of the FlowFiles with that identifier to the 'failure' relationship. |
Batch Size | Batch Size | 100 | | The preferred number of FlowFiles to put to the database in a single transaction |
Obtain Generated Keys | Obtain Generated Keys | false | | If true, any key that is automatically generated by the database will be added to the FlowFile that generated it using the sql.generated.key attribute. This may result in slightly slower performance and is not supported by all databases. |
Rollback On Failure | rollback-on-failure | false | | Specifies how to handle errors. By default (false), if an error occurs while processing a FlowFile, the FlowFile will be routed to the 'failure' or 'retry' relationship based on the error type, and the processor can continue with the next FlowFile. Instead, you may want to roll back the currently processed FlowFiles and stop further processing immediately. In that case, you can do so by enabling this 'Rollback On Failure' property. If enabled, failed FlowFiles will stay in the input relationship without being penalized and will be processed repeatedly until they are processed successfully or removed by other means. It is important to set an adequate 'Yield Duration' to avoid retrying too frequently. |
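As a rough illustration of the fragmented-transaction and autocommit behavior described above, the sketch below shows the conceptual JDBC flow only, not the processor's actual implementation; the class name, method name, and the assumption that fragments are already ordered by fragment.index are illustrative.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;
import javax.sql.DataSource;

// Illustrative sketch only: executes a group of SQL statements (one per
// FlowFile fragment, already ordered by fragment.index) as a single
// transaction, mirroring the behavior described for
// <Support Fragmented Transactions> with autocommit disabled.
public class FragmentedTransactionSketch {

    public static void executeAsOneTransaction(DataSource dataSource, List<String> orderedStatements)
            throws SQLException {
        try (Connection conn = dataSource.getConnection()) {
            conn.setAutoCommit(false);           // corresponds to Database Session AutoCommit = false
            try {
                for (String sql : orderedStatements) {
                    try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                        stmt.executeUpdate();
                    }
                }
                conn.commit();                   // all fragments succeeded
            } catch (SQLException e) {
                conn.rollback();                 // any failure rolls back every statement
                throw e;                         // caller decides: retry, failure, or keep in input
            }
        }
    }
}
```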
Relationships:
Name | Description |
---|---
retry | A FlowFile is routed to this relationship if the database cannot be updated but attempting the operation again may succeed |
success | A FlowFile is routed to this relationship after the database is successfully updated |
failure | A FlowFile is routed to this relationship if the database cannot be updated and retrying the operation will also fail, such as an invalid query or an integrity constraint violation |
Reads Attributes:
Name | Description |
---|---
fragment.identifier | If the <Support Fragmented Transactions> property is true, this attribute is used to determine whether or not two FlowFiles belong to the same transaction. |
fragment.count | If the <Support Fragmented Transactions> property is true, this attribute is used to determine how many FlowFiles are needed to complete the transaction. |
fragment.index | If the <Support Fragmented Transactions> property is true, this attribute is used to determine the order in which the FlowFiles in a transaction should be evaluated. |
sql.args.N.type | Incoming FlowFiles are expected to be parametrized SQL statements. The type of each Parameter is specified as an integer that represents the JDBC Type of the parameter. |
sql.args.N.value | Incoming FlowFiles are expected to be parametrized SQL statements. The values of the Parameters are specified as sql.args.1.value, sql.args.2.value, sql.args.3.value, and so on. The type of the sql.args.1.value Parameter is specified by the sql.args.1.type attribute. |
sql.args.N.format | This attribute is always optional, but the default options may not always work for your data. Incoming FlowFiles are expected to be parametrized SQL statements. In some cases a format option needs to be specified; currently this is only applicable to binary data types, dates, times, and timestamps. Binary Data Types (defaults to 'ascii') - ascii: each string character in your attribute value represents a single byte; this is the format provided by Avro Processors. base64: the string is a Base64-encoded string that can be decoded to bytes. hex: the string is hex-encoded with all letters in upper case and no '0x' at the beginning. Dates/Times/Timestamps - Date, Time, and Timestamp formats all support either a custom format or a named format ('yyyy-MM-dd', 'ISO_OFFSET_DATE_TIME') as specified by java.time.format.DateTimeFormatter. If no format is specified, a long value is expected to be a Unix epoch timestamp (milliseconds since 1970-01-01), while a string value is expected in 'yyyy-MM-dd' format for Date, 'HH:mm:ss.SSS' for Time (some database engines, e.g. Derby or MySQL, do not support milliseconds and will truncate them), or 'yyyy-MM-dd HH:mm:ss.SSS' for Timestamp. |
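To make the sql.args.N.* convention concrete, the following sketch shows how a parameterized statement and its attributes could be bound with plain JDBC; the attribute names follow the table above, while the example statement, table and column names, and helper class are hypothetical and not the processor's actual code.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.Types;
import java.util.Map;

// Illustrative sketch only: binds sql.args.N.type / sql.args.N.value
// attributes onto a JDBC PreparedStatement. The FlowFile content would be
// the parameterized statement itself, e.g.
//   INSERT INTO users (id, name) VALUES (?, ?)
// with attributes such as:
//   sql.args.1.type  = 4    (java.sql.Types.INTEGER)
//   sql.args.1.value = 42
//   sql.args.2.type  = 12   (java.sql.Types.VARCHAR)
//   sql.args.2.value = alice
public class SqlArgsBindingSketch {

    public static void bindAndExecute(Connection conn, String sql, Map<String, String> attributes)
            throws Exception {
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            for (int i = 1; attributes.containsKey("sql.args." + i + ".value"); i++) {
                int jdbcType = Integer.parseInt(attributes.get("sql.args." + i + ".type"));
                String value = attributes.get("sql.args." + i + ".value");
                if (jdbcType == Types.INTEGER) {
                    stmt.setInt(i, Integer.parseInt(value));
                } else {
                    // Real handling depends on the JDBC type and the optional
                    // sql.args.N.format attribute; strings are the simplest case.
                    stmt.setObject(i, value, jdbcType);
                }
            }
            stmt.executeUpdate();
        }
    }
}
```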
Writes Attributes:
Name | Description |
---|---
sql.generated.key | If the database generated a key for an INSERT statement and the Obtain Generated Keys property is set to true, this attribute will be added to indicate the generated key, if possible. This feature is not supported by all database vendors. |
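The sql.generated.key attribute corresponds to standard JDBC generated-key retrieval. A minimal sketch of that mechanism, assuming a single generated key column and not reflecting the processor's actual code, looks roughly like this:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Illustrative sketch only: shows how a database-generated key can be
// retrieved after an INSERT, which is the mechanism behind the
// sql.generated.key attribute when Obtain Generated Keys is true.
public class GeneratedKeySketch {

    public static String insertAndGetKey(Connection conn, String insertSql) throws SQLException {
        try (PreparedStatement stmt = conn.prepareStatement(insertSql, Statement.RETURN_GENERATED_KEYS)) {
            stmt.executeUpdate();
            try (ResultSet keys = stmt.getGeneratedKeys()) {
                // Not every driver or database returns generated keys.
                return keys.next() ? keys.getString(1) : null;
            }
        }
    }
}
```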
State management:
This component does not store state.
Restricted:
This component is not restricted.
Input requirement:
This component requires an incoming relationship.
System Resource Considerations:
None specified.
See Also:
ConvertJSONToSQL