Ranger Audit Profiler configuration
In addition to the generic configuration, the Ranger Audit Profiler has optional parameters that you can edit.
- Go to Profilers > Configs.
- Select Ranger Audit Profiler. The Detail page is displayed.
- Use the toggle button to enable or disable the profiler.
- Select a schedule to run the profiler. The schedule is specified as a Quartz cron expression.
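For reference, a Quartz cron expression has six required fields (seconds, minutes, hours, day-of-month, month, day-of-week) plus an optional year field. The following illustrative expression, not taken from the product defaults, would run the profiler daily at 2:00 AM:

```
0 0 2 * * ?
```

The `?` in the day-of-week field means "no specific value", which Quartz requires when the day-of-month field is already specified.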
- Set the Input block size.
- Continue with the Pod Configurations and set the Kubernetes job resources:
Pod configurations specify the resources that are allocated to a pod when the profiler job starts to run. Because all profilers are submitted as Kubernetes jobs, you must decide whether to add or reduce resources to handle workloads of various sizes.
- Pod CPU Limit: Indicates the maximum number of cores that can be allocated to a Pod. The accepted values range from one through eight.
- Pod CPU Requirement: This is the minimum number of CPUs allocated to a pod when it is provisioned. If the node where a pod is running has enough resources available, a container may use more of a resource than its request specifies; however, it is never allowed to exceed its resource limit. The accepted values range from one through eight.
- Pod Memory Limit: The maximum amount of memory that can be allocated to a Pod. The accepted values range from 1 through 256.
- Pod Memory Requirement: This is the minimum amount of RAM allocated to a pod when it is provisioned. If the node where a pod is running has enough resources available, a container may use more of a resource than its request specifies; however, it is never allowed to exceed its resource limit. The accepted values range from 1 through 256.
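The four pod settings above map to standard Kubernetes resource requests and limits. The following pod spec fragment sketches the equivalent settings; the field names are standard Kubernetes, but the values are purely illustrative:

```
resources:
  requests:
    cpu: "1"       # Pod CPU Requirement
    memory: "4Gi"  # Pod Memory Requirement
  limits:
    cpu: "2"       # Pod CPU Limit
    memory: "8Gi"  # Pod Memory Limit
```

As described above, the scheduler guarantees the `requests` values at provisioning time, while `limits` caps what the container may ever consume.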
- In Executor Configurations, update the following:
Executor configurations are the runtime configurations. Change these configurations if you change the pod configurations or if the job requires additional compute power.
- Number of workers: Indicates the number of processes that are used by the distributed computing framework. The accepted values range from one through eight.
- Number of threads per worker: Indicates the number of threads used by each worker to complete the job. The accepted values range from one through eight.
- Worker Memory limit in GB: To avoid overutilization of memory, this parameter enforces an upper memory-usage threshold for a given worker. For example, if you have an 8 GB Pod and 4 threads, set this parameter to 2 GB. The accepted values range from one through four.
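The worker memory example above amounts to dividing the pod's memory evenly across threads. The following sketch makes that arithmetic explicit; `worker_memory_limit_gb` is a hypothetical helper for illustration, not part of the product:

```python
def worker_memory_limit_gb(pod_memory_gb: int, threads: int) -> int:
    """Divide the pod's memory evenly across threads.

    Mirrors the example above: an 8 GB pod with 4 threads
    yields a 2 GB per-worker memory limit.
    """
    return pod_memory_gb // threads

print(worker_memory_limit_gb(8, 4))  # 2
```

Sizing the limit this way keeps the combined worker usage within the Pod Memory Limit, so the Kubernetes pod is not terminated for exceeding it.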
- Click Save to apply the configuration changes to the selected profiler.