Additional Configuration for Ranger Audit Profiler

In addition to the generic configuration, the Ranger Audit Profiler has its own parameters that you can optionally edit.

  1. Click Profilers in the main navigation menu on the left.
  2. Click Configs to view all of the configured profilers.
  3. Select the Ranger Audit Profiler whose configuration you want to edit.
    You can use the toggle button to enable or disable the Ranger Audit Profiler.
    The Ranger Audit Profiler detail page is displayed, which contains the following entities:
    • Profiler Configurations
    • Pod Configurations
    • Executor Configurations

    Profiler Configurations

    • Cron Expression - Defines when the schedule executes; the page also visualizes the next execution dates of your cron expression (see the sketch after this list).
    • Input Block Size - When the Ranger Audit Profiler runs, it converts the audit logs into data frames for processing. You can set the block size to control the size of the partitions in these data frames, which can affect the performance of operations on the pod.
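
    The following is a minimal sketch of how a cron expression resolves to its next execution dates. It assumes the third-party Python library croniter purely for illustration; it is not necessarily what the profiler uses internally.

    ```python
    # Minimal sketch, assuming the third-party croniter library for illustration;
    # the profiler's own scheduler is not exposed here.
    from datetime import datetime
    from croniter import croniter

    # Hypothetical schedule: run the profiler every day at 02:00.
    expression = "0 2 * * *"

    it = croniter(expression, datetime(2024, 1, 1))
    for _ in range(3):
        # The next three execution dates implied by the expression.
        print(it.get_next(datetime))
    ```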

    Pod Configurations

    Because all profilers are submitted as Kubernetes jobs, you must decide whether to add or reduce resources to handle workloads of various sizes.

    Pod configurations specify the resources that are allocated to a pod when the profiler job starts to run.

    • Pod CPU limit: Indicates the maximum number of CPU cores that can be allocated to a Pod. Examples of accepted values are 0.5, 1, 2, 500m, and 250m (see the sketch after this list).
    • Pod CPU Requirements: The minimum number of CPU cores that is allocated to a Pod when it is provisioned. If the node where a Pod is running has enough resources available, a container is allowed to use more of a resource than its request for that resource specifies; however, a container is never allowed to exceed its resource limit. Examples of accepted values are 0.5, 1, 2, 500m, and 250m.
    • Pod Memory limit: The maximum amount of memory that can be allocated to a Pod. Examples of accepted values are 128974848, 129e6, 129M, 128974848000m, and 123Mi.
    • Pod Memory Requirements: The minimum amount of RAM that is allocated to a Pod when it is provisioned. If the node where a Pod is running has enough resources available, a container is allowed to use more of a resource than its request for that resource specifies; however, a container is never allowed to exceed its resource limit.
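
    The accepted values above use the Kubernetes resource-quantity notation. The sketch below is a hypothetical helper, not part of the product, that converts those strings into base units so you can verify what a given setting requests.

    ```python
    # Hypothetical helper (not part of the product): converts Kubernetes-style
    # resource quantity strings into base units.

    def parse_cpu(value: str) -> float:
        """Return CPU cores: '500m' -> 0.5, '2' -> 2.0."""
        if value.endswith("m"):
            return int(value[:-1]) / 1000
        return float(value)

    def parse_memory(value: str) -> float:
        """Return bytes: '123Mi' -> 128974848, '129M' -> 129000000."""
        binary = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}
        decimal = {"k": 10**3, "M": 10**6, "G": 10**9, "T": 10**12}
        for suffix, factor in binary.items():
            if value.endswith(suffix):
                return float(value[:-2]) * factor
        for suffix, factor in decimal.items():
            if value.endswith(suffix):
                return float(value[:-1]) * factor
        if value.endswith("m"):            # milli units, e.g. '128974848000m'
            return float(value[:-1]) / 1000
        return float(value)                # plain or exponent form, e.g. '129e6'

    print(parse_cpu("500m"))        # 0.5
    print(parse_memory("123Mi"))    # 128974848.0
    print(parse_memory("129e6"))    # 129000000.0
    ```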

    Executor Configurations

    • Number of workers: Indicates the number of processes that are used by the distributed computing framework.
    • Number of threads per worker: Indicates the number of threads used by each worker to complete the job.
    • Worker Memory limit in GB: To avoid overutilization of memory, this parameter sets an upper limit on the memory usage of a given worker. For example, if you have an 8 GB Pod and 4 threads, the value of this parameter must be 2 GB (see the sketch after this list).
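
    As an illustration of how these settings fit together, the sketch below maps them onto a Dask-style local cluster. This is an assumption made purely for demonstration; the distributed computing framework the profiler actually uses is not specified here.

    ```python
    # Minimal sketch, assuming a Dask-style cluster purely for illustration;
    # the profiler's actual distributed computing framework is not specified here.
    from dask.distributed import Client, LocalCluster

    # Four workers, one thread per worker, and a 2 GB memory limit per worker
    # keep the combined worker memory within an 8 GB pod, as in the example above.
    cluster = LocalCluster(n_workers=4, threads_per_worker=1, memory_limit="2GB")
    client = Client(cluster)

    print(client)   # summary of workers, threads, and memory
    client.close()
    cluster.close()
    ```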