Data Steward Studio Installation and Upgrade
Update Service Specific Properties for HDP 3.X versions

Make sure you update the properties required for services such as Hive and Spark.

  1. Add the following proxy user entries to the custom core-site.xml file:
    hadoop.proxyuser.livy.groups=*
    hadoop.proxyuser.livy.hosts=*
    hadoop.proxyuser.knox.groups=*
    hadoop.proxyuser.knox.hosts=*
    hadoop.proxyuser.hive.hosts=*
  2. As part of the Spark installation, update the following properties in spark2-defaults from the Ambari UI:
    1. spark.sql.hive.hiveserver2.jdbc.url - From Ambari, select Hive > Summary. Get and use the value of this property: HIVESERVER2 INTERACTIVE JDBC URL.
    2. spark.datasource.hive.warehouse.metastoreUri - From Hive > General, get and use the value of this property: hive.metastore.uris
    3. spark.datasource.hive.warehouse.load.staging.dir - Set this value as /tmp
    4. spark.hadoop.hive.llap.daemon.service.hosts - From Advanced hive-interactive-site, get and use the value of this property: hive.llap.daemon.service.hosts
    5. spark.hadoop.hive.zookeeper.quorum - From Advanced hive-site, get and use the value of this property: hive.zookeeper.quorum
    6. spark.sql.hive.hiveserver2.jdbc.url.principal - From Advanced hive-site, get and use the value of this property: hive.server2.authentication.kerberos.principal
    7. - Set this value to true.
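Collected together, the spark2-defaults entries described above might look like the following. The host names, ports, ZooKeeper namespace, LLAP application name, and Kerberos realm are placeholders for illustration, not values from this document; substitute the values you copied from your own Ambari configuration in the steps above.

```properties
spark.sql.hive.hiveserver2.jdbc.url=jdbc:hive2://zk-host-1:2181,zk-host-2:2181;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-interactive
spark.datasource.hive.warehouse.metastoreUri=thrift://metastore-host:9083
spark.datasource.hive.warehouse.load.staging.dir=/tmp
spark.hadoop.hive.llap.daemon.service.hosts=@llap0
spark.hadoop.hive.zookeeper.quorum=zk-host-1:2181,zk-host-2:2181
spark.sql.hive.hiveserver2.jdbc.url.principal=hive/_HOST@EXAMPLE.COM
```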
  3. Add the following property to Custom livy2-conf:
    livy.file.local-dir-whitelist : /usr/hdp/current/hive_warehouse_connector/
  4. To allow the dpprofiler user to submit applications to the default queue, update the property in the YARN Capacity Scheduler section of the Ambari UI as follows:
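The property value is not included in this section. For illustration only, the Capacity Scheduler setting that controls submit access to the default queue is typically the queue's submit ACL; the following sketch assumes the standard Capacity Scheduler property name and an example user list, which you should adapt to your cluster:

```properties
# Example only: grant the dpprofiler user submit access to the default queue.
# The exact user list must match your environment.
yarn.scheduler.capacity.root.default.acl_submit_applications=yarn,ambari-qa,dpprofiler
```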
  5. For Kerberos-enabled clusters, set the Hadoop HTTP authentication mode to kerberos. Set the following property in HDFS > Configs > core-site:
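The property itself is not shown above. Assuming the standard Hadoop property name for HTTP authentication, the core-site setting would look like the following; verify it against your cluster's configuration before applying:

```properties
# Example only: switch Hadoop HTTP endpoints from simple to Kerberos (SPNEGO) authentication.
hadoop.http.authentication.type=kerberos
```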
  6. Restart all services as suggested by Ambari, and verify that all services are up and running after the restart.