Enable Audit Logging in Non-Ambari Clusters
How to enable audit logging for HDFS.
It is recommended that Ranger audits be written to both Solr and HDFS. Audits to Solr are primarily used to enable queries from the Ranger Admin UI. HDFS is a long-term destination for audits; audits stored in HDFS can be exported to any SIEM system, or to another audit store.
- Enable audit logging:

  - Set the XAAUDIT.HDFS.ENABLE value to "true" for the component plug-in in the install.properties file, which can be found here: /usr/hdp/<version>/ranger-<component>-plugin.
  - Configure the NameNode host in the XAAUDIT.HDFS.HDFS_DIR field.
  - Create a policy in the HDFS service from the Ranger Admin for each individual component user (hive/hbase/knox/storm/yarn/kafka/kms) to provide READ and WRITE permissions on the audit folder. For example, to enable the Hive component to log audits to HDFS, create a policy for the hive user with READ and WRITE permissions on the audit directory.
  - Audit to HDFS first caches logs in a local directory, which can be specified in XAAUDIT.HDFS.LOCAL_BUFFER_DIRECTORY (for example, /var/log/<component>/**); this is the path where audits are stored for a short time before being pushed to HDFS. Update the local archive log directory in the same way.
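The steps above amount to a handful of edits in the plug-in's install.properties file. The following is a minimal sketch for a Hive plug-in; the NameNode host, port, and local paths are illustrative placeholders, not values from this document:

```properties
# Enable audit to HDFS for this component plug-in
XAAUDIT.HDFS.ENABLE=true

# NameNode host and destination directory for long-term audit storage
# (namenode-host, the port, and the paths are example values)
XAAUDIT.HDFS.HDFS_DIR=hdfs://namenode-host:8020/ranger/audit/%app-type%/%time:yyyyMMdd%

# Local directory where audits are buffered before being pushed to HDFS,
# and the corresponding local archive directory
XAAUDIT.HDFS.LOCAL_BUFFER_DIRECTORY=/var/log/hive/audit
XAAUDIT.HDFS.LOCAL_ARCHIVE_DIRECTORY=/var/log/hive/audit/archive
```

Remember that the hive user must also have READ and WRITE permissions on the HDFS audit directory via a Ranger HDFS policy, as described above.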
- Enable audit reporting from the Solr database:
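The Solr side is configured in the same install.properties file. A minimal sketch, assuming a standard Ranger Solr audit setup (the Solr host, port, and credentials below are placeholders):

```properties
# Enable audit to Solr for this component plug-in
XAAUDIT.SOLR.ENABLE=true

# Solr collection queried by the Ranger Admin UI for audit reports
# (solr-host, the port, and the collection name are example values)
XAAUDIT.SOLR.URL=http://solr-host:6083/solr/ranger_audits
XAAUDIT.SOLR.USER=NONE
XAAUDIT.SOLR.PASSWORD=NONE
```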
- Enable auditing to the Solr database for a plug-in (e.g., HBase):
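As an illustrative edit for the HBase plug-in, the snippet below flips XAAUDIT.SOLR.ENABLE to "true" with sed. It operates on a scratch copy of install.properties so it can be run anywhere; on a real node the file lives under /usr/hdp/<version>/ranger-hbase-plugin/:

```shell
# Create a scratch install.properties with audit-to-Solr disabled
printf 'XAAUDIT.SOLR.ENABLE=false\nXAAUDIT.HDFS.ENABLE=true\n' > install.properties

# Enable audit to Solr for the HBase plug-in
sed -i 's/^XAAUDIT.SOLR.ENABLE=.*/XAAUDIT.SOLR.ENABLE=true/' install.properties

# Confirm the change
grep '^XAAUDIT.SOLR.ENABLE' install.properties
```

After editing the real file, re-run the plug-in's enable script (e.g., enable-hbase-plugin.sh in the plug-in directory) and restart the component so the new audit settings take effect.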