Deploying the Confluent Cloud to S3/ADLS ReadyFlow
Learn how to use the Deployment wizard to deploy the Confluent Cloud to S3/ADLS ReadyFlow using the information you collected in the prerequisites checklist.
The CDF Catalog is where you manage the flow definition lifecycle, from initial import, to versioning, to deploying a flow definition.
In DataFlow, from the left navigation pane, click Catalog.
Flow definitions available for you to deploy are displayed, one definition per row.
Launch the Deployment wizard.
- Click the row to display the flow definition details and versions.
- Click a row representing a flow definition version to display flow definition version details and the Deploy button.
- Click Deploy to launch the Deployment wizard.
Select the environment to which you want to deploy this version of your flow definition, and click Continue.
In the Overview, give your flow deployment a unique name. You can use this name to distinguish between different versions of a flow definition, flow definitions deployed to different environments, and so on.
In NiFi Configuration:
- Select a NiFi Runtime Version for your flow deployment. Cloudera recommends that you always use the latest available version, if possible.
- Autostart Behavior is on by default, so your flow starts automatically after a successful deployment. Clear the selection if you do not want the flow to start automatically.
In Parameters, specify parameter values such as connection strings and usernames, and upload files such as truststores.
For parameters specific to this ReadyFlow, see the configuration parameters table below.
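As a local sanity check before deploying, the Kafka connection parameters from this ReadyFlow can be assembled into the librdkafka-style client configuration that Confluent Kafka clients use. This is only a sketch; the endpoint, key, secret, and group id below are hypothetical placeholders, not values from this document.

```python
# Sketch: map the ReadyFlow's Kafka parameters onto a librdkafka-style
# client configuration (the dict format used by Confluent Kafka clients).
# All values passed in below are hypothetical placeholders.

def kafka_client_config(bootstrap, api_key, api_secret, group_id):
    """Return a consumer config dict for a Confluent Cloud cluster."""
    return {
        "bootstrap.servers": bootstrap,   # Kafka Broker Endpoint
        "security.protocol": "SASL_SSL",  # Confluent Cloud requires TLS
        "sasl.mechanisms": "PLAIN",       # API key/secret map to SASL PLAIN
        "sasl.username": api_key,         # Kafka Client API Key
        "sasl.password": api_secret,      # Kafka Client API Secret
        "group.id": group_id,             # Kafka Consumer Group Id
        "auto.offset.reset": "earliest",  # start from the beginning of the topic
    }

config = kafka_client_config(
    "pkc-xxxxx.region.aws.confluent.cloud:9092",  # placeholder endpoint
    "MY_API_KEY",
    "MY_API_SECRET",
    "cdf-readyflow-consumer",
)
print(config["bootstrap.servers"])
```

Passing this dict to a Confluent consumer client is a quick way to confirm that the API key and secret you collected actually authenticate against the cluster before you enter them in the wizard.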
Specify your Sizing & Scaling configurations.
- NiFi node sizing
  - You can adjust the size of your cluster from Extra Small to Large.
- Number of NiFi nodes
  - You can set whether you want to automatically scale your cluster depending on resource demands. When you enable autoscaling, the minimum number of NiFi nodes is used for the initial size and the workload scales up or down depending on resource demands.
  - You can set the number of nodes from 1 to 32.
In Key Performance Indicators, you can set up your metrics system with specific KPIs to track the performance of a deployed flow. You can also define when and how to receive alerts about the KPIs you are tracking.
See Working with KPIs for more information about the available KPIs and how you can monitor them.
- Review the summary of the information you provided in the Deployment wizard and make any necessary edits by clicking Previous. When you are finished, complete your flow deployment by clicking Deploy.
Once you click Deploy, you are redirected to the Alerts tab in the detail view of the deployment, where you can track its progress.
The following table lists the configuration parameters for the Confluent Cloud to S3/ADLS data flow. You have collected this information in the Meeting the prerequisites step.
|Parameter Name||Description|
|CDP Workload User||Specify the CDP machine user or workload username that you want to use to authenticate to the object stores. Ensure this user has the appropriate access rights to the object store locations in Ranger or IDBroker.|
|CDP Workload User Password||Specify the password of the CDP machine user or workload user you are using to authenticate against the object stores (via IDBroker).|
|CSV Delimiter||If your source data is CSV, specify the delimiter here.|
|Data Input Format||Specify the format of your input data. You can use "CSV", "JSON" or "AVRO" with this ReadyFlow.|
|Data Output Format||Specify the desired format for your output data. You can use "CSV", "JSON" or "AVRO" with this ReadyFlow.|
|Destination S3 or ADLS Path||Specify the name of the destination S3 or ADLS path you want to write to. Make sure that the path starts with "/".|
|Destination S3 or ADLS Storage Location||Specify the name of the destination S3 bucket or ADLS container you want to write to. For S3, enter a value in the form: s3a://[Destination S3 Bucket]. For ADLS, enter a value in the form: abfs://[Destination ADLS File System]@[Destination ADLS Storage Account].dfs.core.windows.net|
|Filter Rule||Specify the filter rule expressed in SQL to filter streaming events for the destination object store. Records matching the filter will be written to the destination. The default value forwards all records.|
|Kafka Broker Endpoint||Specify the Kafka bootstrap server.|
|Kafka Client API Key||Specify the API Key to connect to the Kafka cluster.|
|Kafka Client API Secret||Specify the API Secret to connect to the Kafka cluster.|
|Kafka Consumer Group Id||Specify the name of the consumer group used for the source topic you are consuming from.|
|Kafka Schema Name||Specify the schema name to be looked up in the Confluent Schema Registry for the source Kafka topic.|
|Kafka Source Topic||Specify the topic name that you want to read from.|
|Schema Registry Client Key||Specify the API Key to connect to the Confluent Schema Registry.|
|Schema Registry Client Secret||Specify the API Secret to connect to the Confluent Schema Registry.|
|Schema Registry Endpoint||Specify the Schema Registry API endpoint.|
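The two destination storage location forms in the table above are easy to get wrong by hand. The following sketch builds and validates them; the bucket, file system, and account names are hypothetical placeholders.

```python
import re

# Sketch: build and validate the "Destination S3 or ADLS Storage Location"
# value in the two forms described above. All names used here are
# hypothetical placeholders.

def s3_storage_location(bucket: str) -> str:
    """Return the storage location for an S3 bucket: s3a://[bucket]."""
    return f"s3a://{bucket}"

def adls_storage_location(file_system: str, account: str) -> str:
    """Return the storage location for an ADLS file system:
    abfs://[file system]@[storage account].dfs.core.windows.net"""
    return f"abfs://{file_system}@{account}.dfs.core.windows.net"

def is_valid_storage_location(value: str) -> bool:
    """Check that a value matches one of the two documented forms.
    Note the storage location names only the bucket or container; the
    path within it goes in the separate Destination S3 or ADLS Path
    parameter, which must start with "/"."""
    s3 = re.fullmatch(r"s3a://[^/]+", value)
    adls = re.fullmatch(r"abfs://[^@/]+@[^./]+\.dfs\.core\.windows\.net", value)
    return bool(s3 or adls)

print(s3_storage_location("my-destination-bucket"))
print(adls_storage_location("myfilesystem", "mystorageaccount"))
```

A quick check like this catches the common mistake of appending the destination path to the storage location instead of supplying it separately.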