2.4.8.  Upgrade Hive and WebHCat

  1. Upgrade the Hive Metastore database schema. Restart the Hive Metastore database server (MySQL, Oracle, or Postgres), not the Hive Metastore process itself.

    1. Copy the older configs to the new HDP stack version's config folder. These will be overwritten when HiveServer2 is restarted.

      cp -ar /etc/hive/conf/* /etc/hive/<new_version>/0/
      # where <new_version> is something like 2.3.0.0-2557
    2. On each HiveServer2 host, run:

      su -l <HIVE_USER>

      export HIVE_CONF_DIR=/etc/hive/conf

      /usr/hdp/current/hive-metastore/bin/schematool -upgradeSchema -dbType <DATABASE_TYPE>

      where <DATABASE_TYPE> is mysql, oracle, or postgres, and <HIVE_USER> is the Hive Service user (for example, hive).

      Note

      If you run into errors while running the "schematool -upgradeSchema" command, re-run it with the -verbose and -dryRun options to get more information about what went wrong:

      /usr/hdp/current/hive-metastore/bin/schematool -upgradeSchema -dbType <DATABASE_TYPE> -verbose -dryRun

      • From the logs, make sure that schematool is connecting using the correct database type, such as mysql, postgres, or derby.

      • Make sure that you can connect to the Hive database and run the following query (note that the case of the table name matters):

        select * from VERSION;

      • You may need to run the command with the "-passWord <value>" flag.
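
      As a concrete illustration, a connectivity check against a MySQL metastore might look like the following sketch. The host metastore-db-host, the database name hive, and the user hive are hypothetical; substitute your own values.

      # Hypothetical MySQL example: confirm the metastore database is reachable
      # and that the VERSION table exists (you are prompted for the password).
      mysql -h metastore-db-host -u hive -p hive -e "select * from VERSION;"

      # If schematool cannot read the metastore password from the configuration,
      # pass it explicitly along with -verbose and -dryRun:
      /usr/hdp/current/hive-metastore/bin/schematool -upgradeSchema -dbType mysql -passWord <value> -verbose -dryRun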

      Note

      If you are using Postgres 8 or Postgres 9, you should reset the Hive Metastore database owner to <HIVE_USER>:

      psql -U <POSTGRES_USER> -c "ALTER DATABASE <HIVE-METASTORE-DB-NAME> OWNER TO <HIVE_USER>"
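
      For example, assuming the metastore database is named hive, the Postgres superuser is postgres, and the Hive Service user is hive (all hypothetical values), the command might look like:

      # Hypothetical values; substitute your own database name and users.
      psql -U postgres -c "ALTER DATABASE hive OWNER TO hive"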

  2. Save the old Hive configuration and add a symlink at /etc/hive/conf that points to the new stack's configuration directory, on all Hive hosts.

    mv /etc/hive/conf /etc/hive/conf.saved

    mv /etc/hive/conf.server /etc/hive/conf.server.saved

    Note: the conf.server command above is valid only on the HiveServer host, not on Hive clients.

    ln -s /usr/hdp/current/hive-client/conf /etc/hive/conf

    ls -la /etc/hive
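
    To confirm the result on each host, you can check where the symlink resolves; this is only an optional sanity check, and the exact target path varies by stack version.

    # /etc/hive/conf should now resolve into the new HDP version's configuration directory.
    readlink -f /etc/hive/conf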

  3. If you use Tez as the Hive execution engine, and if the variable hive.server2.enable.doAs is set to true, you must create a scratch directory in HDFS for the username that will run the HiveServer2 service.

    su -l <HDFS_USER> -c "hdfs dfs -mkdir /tmp/hive-<HIVE_USER>"

    su -l <HDFS_USER> -c "hdfs dfs -chmod 777 /tmp/hive-<HIVE_USER>"

    where <HIVE_USER> is the Hive Service user (for example, hive) and <HDFS_USER> is the HDFS Service user (for example, hdfs).
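
    As an optional check, you can confirm that the scratch directory exists with the expected wide-open permissions:

    # The entry should show the scratch directory with mode drwxrwxrwx.
    su -l <HDFS_USER> -c "hdfs dfs -ls -d /tmp/hive-<HIVE_USER>"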

  4. From Ambari Web, browse to Services > Hive > Configs. Under the Advanced tab, add the following properties to Advanced hive-site, only if these properties do not already exist on the cluster:

    Name:  hive.cluster.delegation.token.store.zookeeper.connectString
    Value: The ZooKeeper token store connect string. For example: ZooKeeperHost:2181

    Name:  hive.zookeeper.quorum
    Value: The comma-separated list of ZooKeeper hosts to talk to. For example: ZooKeeperHost1:2181,ZooKeeperHost2:2181
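
    After you save the configuration in Ambari and it has been pushed to the Hive hosts, an optional way to confirm that the properties landed in the deployed client configuration is to grep for them:

    # Run on any Hive host after the configuration has been redeployed.
    grep -A1 "hive.zookeeper.quorum" /etc/hive/conf/hive-site.xml
    grep -A1 "hive.cluster.delegation.token.store.zookeeper.connectString" /etc/hive/conf/hive-site.xml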

  5. For WebHCat, upload new Pig, Hive, and Sqoop tarballs to HDFS. Run the following commands from a node that has the HDP clients installed:

    su -l <HDFS_USER> -c "hdfs dfs -mkdir -p /hdp/apps/2.3.x.y-z/pig/"
    su -l <HDFS_USER> -c "hdfs dfs -mkdir -p /hdp/apps/2.3.x.y-z/hive/"
    su -l <HDFS_USER> -c "hdfs dfs -mkdir -p /hdp/apps/2.3.x.y-z/sqoop/"
    su -l <HDFS_USER> -c "hdfs dfs -put /usr/hdp/current/pig-client/pig.tar.gz /hdp/apps/2.3.x.y-z/pig/"
    su -l <HDFS_USER> -c "hdfs dfs -put /usr/hdp/current/hive-client/hive.tar.gz /hdp/apps/2.3.x.y-z/hive/"
    su -l <HDFS_USER> -c "hdfs dfs -put /usr/hdp/current/sqoop-client/sqoop.tar.gz /hdp/apps/2.3.x.y-z/sqoop/"
    su -l <HDFS_USER> -c "hdfs dfs -chmod -R 555 /hdp/apps/2.3.x.y-z/pig"
    su -l <HDFS_USER> -c "hdfs dfs -chmod -R 444 /hdp/apps/2.3.x.y-z/pig/pig.tar.gz"
    su -l <HDFS_USER> -c "hdfs dfs -chmod -R 555 /hdp/apps/2.3.x.y-z/hive"
    su -l <HDFS_USER> -c "hdfs dfs -chmod -R 444 /hdp/apps/2.3.x.y-z/hive/hive.tar.gz"
    su -l <HDFS_USER> -c "hdfs dfs -chmod -R 555 /hdp/apps/2.3.x.y-z/sqoop"
    su -l <HDFS_USER> -c "hdfs dfs -chmod -R 444 /hdp/apps/2.3.x.y-z/sqoop/sqoop.tar.gz"
    su -l <HDFS_USER> -c "hdfs dfs -chown -R <HDFS_USER>:<HADOOP_GROUP> /hdp"     

    where <HDFS_USER> is the HDFS Service user. For example, hdfs.
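
    As an optional check, you can list the uploaded directories and confirm the permissions set above (555 on the directories, 444 on the tarballs):

    su -l <HDFS_USER> -c "hdfs dfs -ls /hdp/apps/2.3.x.y-z/pig /hdp/apps/2.3.x.y-z/hive /hdp/apps/2.3.x.y-z/sqoop"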

  6. In Ambari Web, browse to Services > Hive and start Hive.

  7. After Hive has started, select Run Service Check from the Service Actions menu. Confirm the check passes.
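
    In addition to the Ambari service check, you can run a quick manual smoke test from any Hive client host. The connection string below is a hypothetical example; substitute your actual HiveServer2 host, port, and authentication settings.

    # Hypothetical HiveServer2 endpoint; adjust for your cluster (and for Kerberos or ZooKeeper discovery if used).
    beeline -u "jdbc:hive2://<HIVESERVER2_HOST>:10000/default" -e "show databases;"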

