Upgrading HDP Manually

Configure and Start Apache WebHCat

Before you can upgrade Apache WebHCat, you must have first upgraded your HDP components to the latest version (in this case, 2.3.6). This section assumes that you have already upgraded your components for HDP 2.3.6. If you have not already completed these steps, return to Getting Ready to Upgrade and Upgrade 2.2 Components for instructions on how to upgrade your HDP components to 2.3.6.

Note

The su commands in this section use "hdfs" to represent the HDFS service user and "webhcat" to represent the WebHCat service user. If you use different names for these service users, substitute your names for "hdfs" or "webhcat" in each su command.

  1. You must replace your configuration after upgrading. Copy /etc/webhcat/conf from the template to the conf directory on each WebHCat host.

  2. Modify the WebHCat configuration files.

    1. Upload the Pig, Hive, and Sqoop tarballs to HDFS as the $HDFS_User (in this example, hdfs):

      su - hdfs -c "hdfs dfs -mkdir -p /hdp/apps/2.3.6.0-$BUILD/pig/"
                          
      su - hdfs -c "hdfs dfs -mkdir -p /hdp/apps/2.3.6.0-$BUILD/hive/"
      
      su - hdfs -c "hdfs dfs -mkdir -p /hdp/apps/2.3.6.0-$BUILD/sqoop/"
      
      su - hdfs -c "hdfs dfs -put /usr/hdp/2.3.6.0-$BUILD/pig/pig.tar.gz /hdp/apps/2.3.6.0-$BUILD/pig/"
      
      su - hdfs -c "hdfs dfs -put /usr/hdp/2.3.6.0-$BUILD/hive/hive.tar.gz /hdp/apps/2.3.6.0-$BUILD/hive/"
      
      su - hdfs -c "hdfs dfs -put /usr/hdp/2.3.6.0-$BUILD/sqoop/sqoop.tar.gz /hdp/apps/2.3.6.0-$BUILD/sqoop/"
      
      su - hdfs -c "hdfs dfs -chmod -R 555 /hdp/apps/2.3.6.0-$BUILD/pig"
      
      su - hdfs -c "hdfs dfs -chmod -R 444 /hdp/apps/2.3.6.0-$BUILD/pig/pig.tar.gz"
      
      su - hdfs -c "hdfs dfs -chmod -R 555 /hdp/apps/2.3.6.0-$BUILD/hive"
      
      su - hdfs -c "hdfs dfs -chmod -R 444 /hdp/apps/2.3.6.0-$BUILD/hive/hive.tar.gz"
      
      su - hdfs -c "hdfs dfs -chmod -R 555 /hdp/apps/2.3.6.0-$BUILD/sqoop"
      
      su - hdfs -c "hdfs dfs -chmod -R 444 /hdp/apps/2.3.6.0-$BUILD/sqoop/sqoop.tar.gz"
      
      su - hdfs -c "hdfs dfs -chown -R hdfs:hadoop /hdp"
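The per-component commands above all follow the same pattern, so they can be generated in a loop. The sketch below (an illustration, not part of the official procedure) prints the commands for review rather than running them; once you have verified the output, you can pipe it to a root shell. `print_upload_cmds` is a hypothetical helper name, and the version string argument stands in for 2.3.6.0-$BUILD.

```shell
# Sketch: emit the upload/permission commands for pig, hive, and sqoop.
# Prints commands instead of executing them, so they can be reviewed first.
print_upload_cmds() {
  ver="$1"   # e.g. "2.3.6.0-$BUILD"
  for comp in pig hive sqoop; do
    echo "su - hdfs -c \"hdfs dfs -mkdir -p /hdp/apps/$ver/$comp/\""
    echo "su - hdfs -c \"hdfs dfs -put /usr/hdp/$ver/$comp/$comp.tar.gz /hdp/apps/$ver/$comp/\""
    echo "su - hdfs -c \"hdfs dfs -chmod -R 555 /hdp/apps/$ver/$comp\""
    echo "su - hdfs -c \"hdfs dfs -chmod -R 444 /hdp/apps/$ver/$comp/$comp.tar.gz\""
  done
  # Final ownership fix covers the whole /hdp tree, as in the last step above.
  echo "su - hdfs -c \"hdfs dfs -chown -R hdfs:hadoop /hdp\""
}

# Example (review first, then run as root):
#   print_upload_cmds "2.3.6.0-$BUILD"
#   print_upload_cmds "2.3.6.0-$BUILD" | sh
```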
    2. Update the following properties in the webhcat-site.xml configuration file, as their values have changed:

      <property>
       <name>templeton.pig.archive</name>
       <value>hdfs:///hdp/apps/${hdp.version}/pig/pig.tar.gz</value>
      </property>
       
      <property>
       <name>templeton.hive.archive</name>
       <value>hdfs:///hdp/apps/${hdp.version}/hive/hive.tar.gz</value>
      </property>
       
      <property>
       <name>templeton.streaming.jar</name>
       <value>hdfs:///hdp/apps/${hdp.version}/mapreduce/
         hadoop-streaming.jar</value>
       <description>The hdfs path to the Hadoop streaming jar file.</description>
      </property>
       
      <property>
       <name>templeton.sqoop.archive</name>
       <value>hdfs:///hdp/apps/${hdp.version}/sqoop/sqoop.tar.gz</value>
       <description>The path to the Sqoop archive.</description>
      </property>
       
      <property>
       <name>templeton.sqoop.path</name>
       <value>sqoop.tar.gz/sqoop/bin/sqoop</value>
       <description>The path to the Sqoop executable.</description>
      </property>
       
      <property>
       <name>templeton.sqoop.home</name>
       <value>sqoop.tar.gz/sqoop</value>
       <description>The path to the Sqoop home in the exploded archive.
         </description>
      </property> 
      Note

      You do not need to modify ${hdp.version}.

    3. Add the following property if it is not present in webhcat-site.xml:

      <property>
       <name>templeton.libjars</name>
       <value>/usr/hdp/current/zookeeper-client/zookeeper.jar,/usr/hdp/current/hive-client/lib/hive-common.jar</value>
       <description>Jars to add to the classpath.</description>
      </property>
    4. Remove the following obsolete properties from webhcat-site.xml:

      <property>
       <name>templeton.controller.map.mem</name>
       <value>1600</value>
       <description>Total virtual memory available to map tasks.</description>
      </property>
      
      <property>
       <name>hive.metastore.warehouse.dir</name>
       <value>/path/to/warehouse/dir</value>
      </property> 
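After editing, a quick grep-based sanity check can confirm that the obsolete properties from this step are gone and the properties added or updated in the earlier steps are present. This is an informal sketch, not part of the official procedure; `check_webhcat_conf` is a hypothetical helper, and the path you pass is whatever copy of webhcat-site.xml you are editing.

```shell
# Sketch: verify webhcat-site.xml after the edits in steps 2.2-2.4.
# Prints a warning line for each problem found; prints nothing when clean.
check_webhcat_conf() {
  conf="$1"
  # These should have been removed in the step above.
  for removed in templeton.controller.map.mem hive.metastore.warehouse.dir; do
    grep -q "$removed" "$conf" && echo "obsolete property still present: $removed"
  done
  # These should be present after steps 2.2 and 2.3.
  for required in templeton.pig.archive templeton.hive.archive \
                  templeton.sqoop.archive templeton.libjars; do
    grep -q "$required" "$conf" || echo "missing property: $required"
  done
}

# Example: check_webhcat_conf /etc/webhcat/conf/webhcat-site.xml
```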
    5. Add new proxy users, if needed. In core-site.xml, make sure the following properties are also set to allow WebHCat to impersonate your additional HDP 2.3.6 groups and hosts:

      <property>
       <name>hadoop.proxyuser.hcat.groups</name>
       <value>*</value>
      </property> 
       
      <property>
       <name>hadoop.proxyuser.hcat.hosts</name>
       <value>*</value>
      </property> 

      Where:

      hadoop.proxyuser.hcat.groups

      A comma-separated list of the Unix groups whose users may be impersonated by 'hcat'.

      hadoop.proxyuser.hcat.hosts

      A comma-separated list of the hosts that are allowed to submit requests by 'hcat'.
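The wildcard values shown above allow impersonation from any group and any host. If your site policy is stricter, you can list specific groups and gateway hosts instead; the group and host names below are purely illustrative:

```
<property>
 <name>hadoop.proxyuser.hcat.groups</name>
 <value>hadoop,users</value>
</property>

<property>
 <name>hadoop.proxyuser.hcat.hosts</name>
 <value>gateway1.example.com,gateway2.example.com</value>
</property>
```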

  3. Start WebHCat:

    sudo su -c "/usr/hdp/current/hive-webhcat/sbin/webhcat_server.sh start" hcat

  4. Smoke test WebHCat.

    1. If you have a non-secure cluster, on the WebHCat host machine, run the following command to check the status of the WebHCat server:

      curl http://$WEBHCAT_HOST_MACHINE:50111/templeton/v1/status

      You should see the following return status:

      {"status":"ok","version":"v1"}

    2. If you are using a Kerberos-secured cluster, run the following command:

      curl --negotiate -u: http://$WEBHCAT_HOST_MACHINE:50111/templeton/v1/status

      You should see the following return status:

      {"status":"ok","version":"v1"}
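The smoke-test commands above can be wrapped in a small check that exits non-zero unless WebHCat reports an "ok" status, which is convenient for scripting the verification. This is a sketch, not part of the official procedure; `webhcat_ok` is a hypothetical helper. Pass any extra curl flags (for example, `--negotiate -u:` on a Kerberos cluster) after the host name.

```shell
# Sketch: query the WebHCat status endpoint and report success/failure.
# $1 is the WebHCat host; any remaining arguments are passed to curl.
webhcat_ok() {
  host="$1"; shift
  resp=$(curl -s "$@" "http://$host:50111/templeton/v1/status")
  case "$resp" in
    *'"status":"ok"'*) echo "WebHCat on $host is up"; return 0 ;;
    *) echo "WebHCat on $host returned: $resp" >&2; return 1 ;;
  esac
}

# Example (non-secure): webhcat_ok "$WEBHCAT_HOST_MACHINE"
# Example (Kerberos):   webhcat_ok "$WEBHCAT_HOST_MACHINE" --negotiate -u:
```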