Configure and Start Apache WebHCat
Note: You must replace your configuration after upgrading. Copy /etc/hive-webhcat/conf from the template to the conf directory on the WebHCat hosts.

Modify the WebHCat configuration files.
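The copy step above can be sketched as follows. The template location (`conf.dist`) and the `TEMPLATE_DIR`/`CONF_DIR` variable names are assumptions for illustration; check where your installation actually placed the WebHCat template configuration.

```shell
#!/bin/sh
# Sketch of the copy step above. TEMPLATE_DIR is an assumed location --
# verify where your installation placed the template configs.
TEMPLATE_DIR=${TEMPLATE_DIR:-/etc/hive-webhcat/conf.dist}
CONF_DIR=${CONF_DIR:-/etc/hive-webhcat/conf}
if [ -d "$TEMPLATE_DIR" ]; then
  # Copy the directory contents (including dotfiles) into the live conf dir.
  cp -R "$TEMPLATE_DIR/." "$CONF_DIR/"
else
  echo "template dir not found: $TEMPLATE_DIR" >&2
fi
```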
Upload Pig, Hive and Sqoop tarballs to HDFS as the $HDFS_User (in this example, hdfs):
su - hdfs -c "hdfs dfs -mkdir -p /hdp/apps/2.3.4.7-$BUILD/pig/"
su - hdfs -c "hdfs dfs -mkdir -p /hdp/apps/2.3.4.7-$BUILD/hive/"
su - hdfs -c "hdfs dfs -mkdir -p /hdp/apps/2.3.4.7-$BUILD/sqoop/"
su - hdfs -c "hdfs dfs -put /usr/hdp/2.3.4.7-$BUILD/pig/pig.tar.gz /hdp/apps/2.3.4.7-$BUILD/pig/"
su - hdfs -c "hdfs dfs -put /usr/hdp/2.3.4.7-$BUILD/hive/hive.tar.gz /hdp/apps/2.3.4.7-$BUILD/hive/"
su - hdfs -c "hdfs dfs -put /usr/hdp/2.3.4.7-$BUILD/sqoop/sqoop.tar.gz /hdp/apps/2.3.4.7-$BUILD/sqoop/"
su - hdfs -c "hdfs dfs -chmod -R 555 /hdp/apps/2.3.4.7-$BUILD/pig"
su - hdfs -c "hdfs dfs -chmod -R 444 /hdp/apps/2.3.4.7-$BUILD/pig/pig.tar.gz"
su - hdfs -c "hdfs dfs -chmod -R 555 /hdp/apps/2.3.4.7-$BUILD/hive"
su - hdfs -c "hdfs dfs -chmod -R 444 /hdp/apps/2.3.4.7-$BUILD/hive/hive.tar.gz"
su - hdfs -c "hdfs dfs -chmod -R 555 /hdp/apps/2.3.4.7-$BUILD/sqoop"
su - hdfs -c "hdfs dfs -chmod -R 444 /hdp/apps/2.3.4.7-$BUILD/sqoop/sqoop.tar.gz"
su - hdfs -c "hdfs dfs -chown -R hdfs:hadoop /hdp"
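The commands above repeat one pattern per component (create the directory, upload the tarball, set 555 on directories and 444 on tarballs), so they can be consolidated into a loop. This is a sketch, not part of HDP: `HDP_VER`, `DRY_RUN`, and `run` are illustrative names, and the loop defaults to printing the commands rather than executing them.

```shell
#!/bin/sh
# Illustrative consolidation of the upload steps above. Substitute your
# real 2.3.4.7-$BUILD string for HDP_VER.
HDP_VER="2.3.4.7-1234"   # example build; use your own
DRY_RUN=${DRY_RUN:-1}    # default: print commands; set DRY_RUN= to execute
run() {
  if [ -n "$DRY_RUN" ]; then
    echo su - hdfs -c "\"$*\""   # preview the command without running it
  else
    su - hdfs -c "$*"
  fi
}
for comp in pig hive sqoop; do
  run "hdfs dfs -mkdir -p /hdp/apps/$HDP_VER/$comp/"
  run "hdfs dfs -put /usr/hdp/$HDP_VER/$comp/$comp.tar.gz /hdp/apps/$HDP_VER/$comp/"
  run "hdfs dfs -chmod -R 555 /hdp/apps/$HDP_VER/$comp"              # dirs: read+execute
  run "hdfs dfs -chmod -R 444 /hdp/apps/$HDP_VER/$comp/$comp.tar.gz" # tarballs: read-only
done
run "hdfs dfs -chown -R hdfs:hadoop /hdp"
```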
Update the following properties in the webhcat-site.xml configuration file, as their values have changed:
<property>
  <name>templeton.pig.archive</name>
  <value>hdfs:///hdp/apps/${hdp.version}/pig/pig.tar.gz</value>
</property>
<property>
  <name>templeton.hive.archive</name>
  <value>hdfs:///hdp/apps/${hdp.version}/hive/hive.tar.gz</value>
</property>
<property>
  <name>templeton.streaming.jar</name>
  <value>hdfs:///hdp/apps/${hdp.version}/mapreduce/hadoop-streaming.jar</value>
  <description>The hdfs path to the Hadoop streaming jar file.</description>
</property>
<property>
  <name>templeton.sqoop.archive</name>
  <value>hdfs:///hdp/apps/${hdp.version}/sqoop/sqoop.tar.gz</value>
  <description>The path to the Sqoop archive.</description>
</property>
<property>
  <name>templeton.sqoop.path</name>
  <value>sqoop.tar.gz/sqoop/bin/sqoop</value>
  <description>The path to the Sqoop executable.</description>
</property>
<property>
  <name>templeton.sqoop.home</name>
  <value>sqoop.tar.gz/sqoop</value>
  <description>The path to the Sqoop home directory in the exploded archive.</description>
</property>
Note: You do not need to modify ${hdp.version}.
Remove the following obsolete properties from webhcat-site.xml:
<property>
  <name>templeton.controller.map.mem</name>
  <value>1600</value>
  <description>Total virtual memory available to map tasks.</description>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/path/to/warehouse/dir</value>
</property>
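Before restarting anything, it can be worth verifying that the edits took effect: the new Sqoop properties should be present and the obsolete ones gone. The sketch below just greps webhcat-site.xml; the `CONF` path is an assumption, so point it at your actual configuration directory.

```shell
#!/bin/sh
# Sanity-check sketch: CONF is an assumed location -- adjust it to your
# actual webhcat-site.xml path.
CONF=${CONF:-/etc/hive-webhcat/conf/webhcat-site.xml}
# New properties should be present.
for prop in templeton.sqoop.archive templeton.sqoop.path templeton.sqoop.home; do
  grep -q "<name>$prop</name>" "$CONF" 2>/dev/null \
    && echo "$prop: present" || echo "$prop: MISSING"
done
# Obsolete properties should be gone.
for prop in templeton.controller.map.mem hive.metastore.warehouse.dir; do
  grep -q "<name>$prop</name>" "$CONF" 2>/dev/null \
    && echo "$prop: still present (remove it)" || echo "$prop: removed"
done
```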
Add new proxy users, if needed. In core-site.xml, make sure the following properties are also set to allow WebHCat to impersonate your additional HDP-2.3.4.7 groups and hosts:
<property>
  <name>hadoop.proxyuser.hcat.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hcat.hosts</name>
  <value>*</value>
</property>
Where:
hadoop.proxyuser.hcat.groups
A comma-separated list of the Unix groups whose users may be impersonated by 'hcat'.
hadoop.proxyuser.hcat.hosts
A comma-separated list of the hosts from which 'hcat' is allowed to submit impersonation requests.
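If you changed the proxy-user properties on a running cluster, the NameNode reads core-site.xml at startup, so the new settings will not apply until it is restarted or refreshed. Hadoop's standard refresh subcommand avoids a full restart (run it as the HDFS superuser); if your site procedures call for a restart instead, follow those.

```shell
# Reload proxy-user (impersonation) settings on the NameNode without a
# restart, using the standard hdfs dfsadmin refresh subcommand.
su - hdfs -c "hdfs dfsadmin -refreshSuperUserGroupsConfiguration"
```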
Start WebHCat:
su - hcat -c "/usr/hdp/current/hive-webhcat/sbin/webhcat_server.sh start"
Smoke test WebHCat.
On the WebHCat host machine, run the following command:
curl http://$WEBHCAT_HOST_MACHINE:50111/templeton/v1/status
If you are using a secure cluster, run the following command:
curl --negotiate -u: http://cluster.$PRINCIPAL.$REALM:50111/templeton/v1/status

The expected response is:

{"status":"ok","version":"v1"}
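When the smoke test is part of an upgrade script, it helps to turn the response into a pass/fail signal rather than eyeballing the JSON. `check_status` below is a hypothetical helper, not part of WebHCat; it reads the response body on stdin and fails unless the status is "ok".

```shell
#!/bin/sh
# Hypothetical helper: fails unless the WebHCat status response on stdin
# reports "ok".
check_status() {
  grep -q '"status":"ok"' || { echo "WebHCat is not healthy" >&2; return 1; }
}
# Usage against your cluster (assumes $WEBHCAT_HOST_MACHINE is set):
#   curl -s "http://$WEBHCAT_HOST_MACHINE:50111/templeton/v1/status" | check_status
```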