Command Line Upgrade

Configure and Start Apache WebHCat

Note

The su commands in this section use "hdfs" to represent the HDFS service user and "webhcat" to represent the WebHCat service user. If you use different names for these service users, substitute your service user names for "hdfs" and "webhcat" in each su command.

  1. You must replace your configuration after upgrading. Copy the template version of /etc/hive-webhcat/conf to the conf directory on each WebHCat host.
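
     A minimal sketch of this copy, assuming the upgraded package staged its template configuration in /etc/hive-webhcat/conf.dist (that path is an assumption; substitute wherever your upgrade placed the template) and that you run it on each WebHCat host:

       # Hypothetical template location -- verify where your upgrade staged it.
       cp -r /etc/hive-webhcat/conf.dist/* /etc/hive-webhcat/conf/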

  2. Modify the WebHCat configuration files.

    1. Upload the Pig, Hive, and Sqoop tarballs to HDFS as the $HDFS_User (in this example, hdfs):

      su - hdfs -c "hdfs dfs -mkdir -p /hdp/apps/2.5.3.0-<$version>/pig/"
                          
      su - hdfs -c "hdfs dfs -mkdir -p /hdp/apps/2.5.3.0-<$version>/hive/"
      
      su - hdfs -c "hdfs dfs -mkdir -p /hdp/apps/2.5.3.0-<$version>/sqoop/"
      
      su - hdfs -c "hdfs dfs -put /usr/hdp/2.5.3.0-<$version>/pig/pig.tar.gz /hdp/apps/2.5.3.0-<$version>/pig/"
      
      su - hdfs -c "hdfs dfs -put /usr/hdp/2.5.3.0-<$version>/hive/hive.tar.gz /hdp/apps/2.5.3.0-<$version>/hive/"
      
      su - hdfs -c "hdfs dfs -put /usr/hdp/2.5.3.0-<$version>/sqoop/sqoop.tar.gz /hdp/apps/2.5.3.0-<$version>/sqoop/"
      
      su - hdfs -c "hdfs dfs -chmod -R 555 /hdp/apps/2.5.3.0-<$version>/pig"
      
      su - hdfs - "hdfs dfs -chmod -R 444 /hdp/apps/2.5.3.0-<$version>/pig/pig.tar.gz"
      
      su - hdfs -c "hdfs dfs -chmod -R 555 /hdp/apps/2.5.3.0-<$version>/hive"
      
      su - hdfs -c "hdfs dfs -chmod -R 444 /hdp/apps/2.5.3.0-<$version>/hive/hive.tar.gz"
      
      su - hdfs -c "hdfs dfs -chmod -R 555 /hdp/apps/2.5.3.0-<$version>/sqoop"
      
      su - hdfs -c "hdfs dfs -chmod -R 444 /hdp/apps/2.5.3.0-<$version>/sqoop/sqoop.tar.gz"
      
      su - hdfs -c "hdfs dfs -chown -R hdfs:hadoop /hdp"
    2. Update the following properties in the webhcat-site.xml configuration file, as their values have changed:

      <property>
       <name>templeton.pig.archive</name>
       <value>hdfs:///hdp/apps/${hdp.version}/pig/pig.tar.gz</value>
      </property>
       
      <property>
       <name>templeton.hive.archive</name>
       <value>hdfs:///hdp/apps/${hdp.version}/hive/hive.tar.gz</value>
      </property>
       
      <property>
       <name>templeton.streaming.jar</name>
       <value>hdfs:///hdp/apps/${hdp.version}/mapreduce/hadoop-streaming.jar</value>
       <description>The hdfs path to the Hadoop streaming jar file.</description>
      </property>
       
      <property>
       <name>templeton.sqoop.archive</name>
       <value>hdfs:///hdp/apps/${hdp.version}/sqoop/sqoop.tar.gz</value>
       <description>The path to the Sqoop archive.</description>
      </property>
       
      <property>
       <name>templeton.sqoop.path</name>
       <value>sqoop.tar.gz/sqoop/bin/sqoop</value>
       <description>The path to the Sqoop executable.</description>
      </property>
       
      <property>
       <name>templeton.sqoop.home</name>
       <value>sqoop.tar.gz/sqoop</value>
       <description>The path to the Sqoop home in the exploded archive.</description>
      </property> 
      Note

      You do not need to modify ${hdp.version}.
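
      Optionally, verify that the archives referenced by the templeton.pig.archive, templeton.hive.archive, and templeton.sqoop.archive properties exist at the expected HDFS locations before restarting WebHCat (same placeholder version string as in the upload step):

      su - hdfs -c "hdfs dfs -ls /hdp/apps/2.5.3.0-<$version>/pig/pig.tar.gz /hdp/apps/2.5.3.0-<$version>/hive/hive.tar.gz /hdp/apps/2.5.3.0-<$version>/sqoop/sqoop.tar.gz"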

    3. Remove the following obsolete properties from webhcat-site.xml:

      <property>
       <name>templeton.controller.map.mem</name>
       <value>1600</value>
       <description>Total virtual memory available to map tasks.</description>
      </property>
      
      <property>
       <name>hive.metastore.warehouse.dir</name>
       <value>/path/to/warehouse/dir</value>
      </property> 
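
      To confirm that the obsolete properties are gone and that the edited file is still well-formed XML, a quick check such as the following can help (this assumes webhcat-site.xml lives under /etc/hive-webhcat/conf and that xmllint is installed; adjust as needed):

      # Should print nothing if the obsolete properties were removed.
      grep -E "templeton.controller.map.mem|hive.metastore.warehouse.dir" /etc/hive-webhcat/conf/webhcat-site.xml

      # Exits silently with status 0 if the XML is still well-formed.
      xmllint --noout /etc/hive-webhcat/conf/webhcat-site.xml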
    4. Add new proxy users, if needed. In core-site.xml, make sure the following properties are also set to allow WebHCat to impersonate your additional HDP-2.5.3 groups and hosts:

      <property>
       <name>hadoop.proxyuser.hcat.groups</name>
       <value>*</value>
      </property> 
       
      <property>
       <name>hadoop.proxyuser.hcat.hosts</name>
       <value>*</value>
      </property> 

      Where:

      hadoop.proxyuser.hcat.groups

      A comma-separated list of the Unix groups whose users may be impersonated by 'hcat'.

      hadoop.proxyuser.hcat.hosts

      A comma-separated list of the hosts that are allowed to submit requests by 'hcat'.
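
      If you changed these proxy user settings, HDFS needs to reload them. You can either restart HDFS or, on a running cluster, refresh the superuser group configuration as the HDFS service user:

      su - hdfs -c "hdfs dfsadmin -refreshSuperUserGroupsConfiguration"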

  3. Start WebHCat:

    su - hcat -c "/usr/hdp/current/hive-webhcat/sbin/webhcat_server.sh start"
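
    To confirm that the server process started before running the smoke test, you can check the process list on the WebHCat host (the bracketed pattern keeps grep from matching itself):

    ps -ef | grep -i "[w]ebhcat"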

  4. Smoke test WebHCat.

    1. At the WebHCat host machine, run the following command:

      curl http://$WEBHCAT_HOST_MACHINE:50111/templeton/v1/status

    2. If you are using a secure cluster, run the following command:

      curl --negotiate -u: http://cluster.$PRINCIPAL.$REALM:50111/templeton/v1/status

      You should see a response similar to the following:

      {"status":"ok","version":"v1"}
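
    3. Optionally, exercise the Hive side of WebHCat by listing databases through the DDL resource. This is a non-secure example; substitute your WebHCat host and pass a valid user name in the user.name parameter:

      curl "http://$WEBHCAT_HOST_MACHINE:50111/templeton/v1/ddl/database?user.name=hcat"

      A healthy server returns a JSON list of databases, for example {"databases":["default"]}.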