Configuring Ranger audit properties for HDFS
How to change the settings that control how Ranger writes audit records to HDFS.
The HDFS audit destination is intended to store long-term audit records.
You can configure whether Ranger stores audit records in HDFS and at which location.
You must purge long-term audit records stored in HDFS manually.

Parameter Name | Description | Default Setting | Units |
---|---|---|---|
ranger_plugin_hdfs_audit_enabled | Controls whether Ranger writes audit records to HDFS. | true | T/F |
ranger_plugin_hdfs_audit_url | Location at which you can access audit records written to HDFS. | <hdfs.host_name>/ranger/audit | string |
- From Cloudera Manager choose .
- In Search, type ranger_plugin, then press Return.
- In ranger_plugin_hdfs_audit_enabled, check or uncheck RANGER-1 (Service-Wide).
- In ranger_plugin_hdfs_audit_url, type a valid directory path on the HDFS host.
- Refresh the configuration, using one of the following two options:
  - Click Refresh Configuration, as prompted.
  - If Refresh Configuration does not appear, in Actions, click Update Solr config-set for Ranger, then confirm.
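If you prefer to script the change rather than use the Cloudera Manager UI, the Cloudera Manager REST API can update a service-wide parameter. This is a sketch only: the host, port, API version, cluster name, service name, and credentials below are placeholder assumptions you must replace, and the request is printed rather than sent.

```shell
# All values below are hypothetical placeholders -- substitute your own deployment values.
CM_HOST="cm.example.com"   # Cloudera Manager host (assumption)
CLUSTER="Cluster1"         # cluster name as shown in Cloudera Manager (assumption)
SERVICE="ranger"           # Ranger service name (assumption)
# The payload uses the parameter name from the table above.
PAYLOAD='{"items":[{"name":"ranger_plugin_hdfs_audit_enabled","value":"true"}]}'
URL="https://${CM_HOST}:7183/api/v41/clusters/${CLUSTER}/services/${SERVICE}/config"
# Echo the request instead of sending it; remove 'echo' to run it for real.
echo curl -u admin:admin -X PUT -H "Content-Type: application/json" -d "${PAYLOAD}" "${URL}"
```

After an API change you still need to refresh the configuration as described above.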
You may want to delete older audit logs from HDFS. Cloudera provides no feature to do this; you can accomplish the task manually, using a script such as the example below.
You must specify the audit log directory by replacing /<path_to>/<audit_logs> in the hdfs dfs -ls command on the 2nd line of the example script. You may also add the -skipTrash option, if you choose, to the hdfs dfs -rm command on the 7th line of the script.
```shell
today=`date +'%s'`
hdfs dfs -ls /<path_to>/<audit_logs> | grep "^d" | while read line ; do
  dir_date=$(echo ${line} | awk '{print $6}')
  difference=$(( ( ${today} - $(date -d ${dir_date} +%s) ) / ( 24*60*60 ) ))
  filePath=$(echo ${line} | awk '{print $8}')
  if [ ${difference} -gt 30 ]; then   # delete directories older than 30 days
    hdfs dfs -rm -r $filePath
  fi
done
```
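To check what the script's date arithmetic does without touching a real cluster, you can run its parsing and age-calculation logic against a simulated listing. The directory names and dates below are made up for illustration, and a fixed "today" replaces the live clock so the result is reproducible.

```shell
# Two fake directory entries in the format produced by 'hdfs dfs -ls' (hypothetical paths/dates).
listing='drwxr-xr-x   - ranger hadoop          0 2024-01-01 10:00 /ranger/audit/hdfs/20240101
drwxr-xr-x   - ranger hadoop          0 2024-03-01 10:00 /ranger/audit/hdfs/20240301'
today=$(TZ=UTC date -d 2024-03-05 +%s)   # fixed "today" so the demo is reproducible
report=$(echo "$listing" | grep "^d" | while read line ; do
  dir_date=$(echo ${line} | awk '{print $6}')               # 6th field: modification date
  difference=$(( ( ${today} - $(TZ=UTC date -d ${dir_date} +%s) ) / ( 24*60*60 ) ))
  filePath=$(echo ${line} | awk '{print $8}')               # 8th field: directory path
  if [ ${difference} -gt 30 ]; then
    echo "would delete ${filePath} (${difference} days old)"
  else
    echo "would keep ${filePath} (${difference} days old)"
  fi
done)
echo "$report"
```

The January directory is 64 days older than the simulated "today" and would be deleted; the March directory, at 4 days old, would be kept.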