Managing Cloudera DataFlow in an Environment

Configuring access for NiFi metrics scraping

You can configure an external Prometheus service to scrape NiFi metrics for Cloudera DataFlow deployments. To do that, you need to generate a password and add a job for each deployment to your Prometheus configuration.

  • You have the DFAdmin user role for the environment where you want to configure access for NiFi metrics scraping.

  1. In Cloudera DataFlow, from the Environments page, select the Environment where you want to configure NiFi metrics scraping.
  2. From the Actions menu select Access NiFi Metrics.
  3. Depending on whether you are setting up access for the first time or updating an existing configuration, select Initial configuration or Manage existing.
    1. Click Generate Credentials and Enable Access.

      Copy the generated password. The username is nifi-metrics for all jobs; do not change it.

    2. Create a new job for each deployment where you want to perform metrics scraping and add it to your Prometheus configuration. Depending on your use case, either append it to an existing configuration or create a new one; a minimal sketch of a standalone configuration is provided after this procedure. You can use the provided Sample Prometheus scrape configuration.
      Figure 1. Sample metrics configuration code snippet
      scrape_configs:
        - job_name: 'nifi-metrics-[***DEPLOYMENT-NAME***]'
          scrape_interval: 15s
      
          scheme: https
          honor_labels: true
          metrics_path: /dfx-[***DEPLOYMENT-NAME***]-ns/federate
      
          basic_auth:
            # Use 'nifi-metrics' as the username for all jobs.
            username: nifi-metrics
            password: [***GENERATED PASSWORD***]
      
          params:
            'match[]':
              # This parameter is mandatory, because Cloudera's Prometheus instance also scrapes cadvisor and Prometheus itself.
              - '{job="dfx-nifi-web"}'
      
          static_configs:
            - targets: ['dfx.qbllchii.xcu2-8y8x.dev.cldr.work']
      You need to replace:
      • [***DEPLOYMENT-NAME***] with the encoded deployment name. For example, 'Some DataFlow Deployment' is encoded as 'some-dataflow-deployment'.
      • [***GENERATED PASSWORD***] with the generated password.
      • the target in static_configs with the hostname of your own Cloudera DataFlow endpoint; the host shown above is only an example.
      Depending on your Prometheus setup, you may need to make further additions to the job definition; an example of such additions is sketched after this procedure.
    3. Add the newly created or updated configuration file to your Prometheus service and reload or restart Prometheus so that the new jobs take effect.
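
The following is a minimal sketch of what a standalone Prometheus configuration file with one job per deployment could look like. The deployment names (first-deployment, second-deployment), the [***DATAFLOW-ENDPOINT***] placeholder, and the global scrape interval are illustrative only; substitute the values from your own environment.

  global:
    scrape_interval: 1m        # default for any job that does not override it

  scrape_configs:
    # One job per Cloudera DataFlow deployment you want to scrape.
    - job_name: 'nifi-metrics-first-deployment'
      scrape_interval: 15s
      scheme: https
      honor_labels: true
      metrics_path: /dfx-first-deployment-ns/federate
      basic_auth:
        username: nifi-metrics
        password: [***GENERATED PASSWORD***]
      params:
        'match[]':
          - '{job="dfx-nifi-web"}'
      static_configs:
        - targets: ['[***DATAFLOW-ENDPOINT***]']

    - job_name: 'nifi-metrics-second-deployment'
      scrape_interval: 15s
      scheme: https
      honor_labels: true
      metrics_path: /dfx-second-deployment-ns/federate
      basic_auth:
        username: nifi-metrics
        password: [***GENERATED PASSWORD***]
      params:
        'match[]':
          - '{job="dfx-nifi-web"}'
      static_configs:
        - targets: ['[***DATAFLOW-ENDPOINT***]']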
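
As an illustration of such further additions (the scrape timeout value and the certificate path below are hypothetical and depend entirely on your environment), a job definition could carry an explicit scrape timeout and TLS settings pointing at the CA bundle that signs your Cloudera DataFlow endpoint:

      # Hypothetical additions inside a nifi-metrics job definition:
      scrape_timeout: 10s                      # abort scrapes that exceed this duration
      tls_config:
        ca_file: /etc/prometheus/cdp-ca.pem    # CA bundle used to verify the endpoint certificate

Both scrape_timeout and tls_config are standard Prometheus scrape_config options; consult the Prometheus documentation for the complete list.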
