Update Service-Specific Properties for HDP 3.x versions
Make sure you update the properties required for services such as Hive and Spark.
- Add the following proxy user details in the custom core-site.xml file:
hadoop.proxyuser.livy.groups=*
hadoop.proxyuser.livy.hosts=*
hadoop.proxyuser.knox.groups=*
hadoop.proxyuser.knox.hosts=*
hadoop.proxyuser.hive.hosts=*
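For reference, Ambari writes these entries into core-site.xml as standard Hadoop property elements. A minimal sketch of the generated XML (only the livy entries are shown; the knox and hive entries follow the same pattern):
<!-- Allow the livy service user to impersonate members of any group from any
     host. The "*" wildcards match the values set above; scope them down where
     your security policy allows. -->
<property>
  <name>hadoop.proxyuser.livy.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.livy.hosts</name>
  <value>*</value>
</property>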
- As part of the Spark installation, update the following properties in spark2-defaults from the Ambari UI (a sketch of the finished entries follows this list):
spark.sql.hive.hiveserver2.jdbc.url
- From Ambari, select Hive > Summary. Get and use the value of the HIVESERVER2 INTERACTIVE JDBC URL property.
spark.datasource.hive.warehouse.metastoreUri
- From Hive > General, get and use the value of the hive.metastore.uris property.
spark.datasource.hive.warehouse.load.staging.dir
- Set this value to /tmp.
spark.hadoop.hive.llap.daemon.service.hosts
- From Advanced hive-interactive-site, get and use the value of the hive.llap.daemon.service.hosts property.
spark.hadoop.hive.zookeeper.quorum
- From Advanced hive-site, get and use the value of the hive.zookeeper.quorum property.
spark.sql.hive.hiveserver2.jdbc.url.principal
- From Advanced hive-site, get and use the value of the hive.server2.authentication.kerberos.principal property.
spark.security.credentials.hiveserver2.enabled
- Set this value to true.
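As a rough illustration, the finished spark2-defaults entries might look like the following. Every host name, port, and realm here is a hypothetical placeholder; your actual values come from the Ambari properties listed above.
spark.sql.hive.hiveserver2.jdbc.url=jdbc:hive2://zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-interactive
spark.datasource.hive.warehouse.metastoreUri=thrift://metastore.example.com:9083
spark.datasource.hive.warehouse.load.staging.dir=/tmp
spark.hadoop.hive.llap.daemon.service.hosts=@llap0
spark.hadoop.hive.zookeeper.quorum=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
spark.sql.hive.hiveserver2.jdbc.url.principal=hive/_HOST@EXAMPLE.COM
spark.security.credentials.hiveserver2.enabled=true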
- Add the following property to Custom livy2-conf:
livy.file.local-dir-whitelist : /usr/hdp/current/hive_warehouse_connector/
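This whitelist lets Livy sessions reference local files under that directory, which is where the Hive Warehouse Connector assembly jar lives. A minimal sketch of creating a Livy session that pulls in the jar, assuming Livy listens on its usual port 8999; the jar name is a placeholder, since the actual assembly jar carries a version suffix on a real cluster:
# Hypothetical example: create a Livy session with the HWC jar on its classpath.
curl -X POST -H 'Content-Type: application/json' \
  -d '{"kind": "spark", "jars": ["/usr/hdp/current/hive_warehouse_connector/hive-warehouse-connector-assembly.jar"]}' \
  http://<livy-host>:8999/sessions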
- To allow the dpprofiler user to submit applications to the default queue, update the following property in the YARN Capacity Scheduler section of the Ambari UI:
yarn.scheduler.capacity.root.acl_submit_applications=dpprofiler,yarn,yarn-ats,hdfs
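Note that a Capacity Scheduler ACL value is a comma-separated list of users, optionally followed by a space and a comma-separated list of groups. As a sketch, the entry above corresponds to the following in capacity-scheduler.xml:
<!-- Permit the dpprofiler, yarn, yarn-ats, and hdfs users to submit
     applications to the root queue. Append " <groups>" after a space to grant
     groups as well. -->
<property>
  <name>yarn.scheduler.capacity.root.acl_submit_applications</name>
  <value>dpprofiler,yarn,yarn-ats,hdfs</value>
</property>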
- For Kerberos-enabled clusters, set the Hadoop HTTP authentication mode to kerberos by setting the following property in HDFS > Configs > core-site:
hadoop.http.authentication.type=kerberos
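On a Kerberized cluster this property typically appears alongside the SPNEGO principal and keytab settings. A minimal core-site sketch; the realm and keytab path are placeholders and must match your environment:
<!-- Enable SPNEGO/Kerberos authentication for the Hadoop web UIs. -->
<property>
  <name>hadoop.http.authentication.type</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.http.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>hadoop.http.authentication.kerberos.keytab</name>
  <value>/etc/security/keytabs/spnego.service.keytab</value>
</property>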
- Restart all the services as suggested by Ambari. Make sure that all the services are up and running after the restart.
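After the restart, you can spot-check service states from the command line. A sketch using the Ambari REST API; the host, cluster name, and credentials are placeholders:
# List each service and its current state (each should report STARTED).
curl -u admin:<password> \
  'http://<ambari-host>:8080/api/v1/clusters/<cluster-name>/services?fields=ServiceInfo/state'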