1. Starting HDP Services

Start all the Hadoop services in the following order:

  • HDFS

  • MapReduce

  • ZooKeeper

  • HBase

  • Hive Metastore

  • HiveServer2

  • WebHCat

  • Oozie

  • Ganglia

  • Nagios

Instructions

  1. Start HDFS

    1. Execute this command on the NameNode host machine:

      su -l $HDFS_USER -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode" 
    2. Execute this command on the Secondary NameNode host machine:

      su -l $HDFS_USER -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start secondarynamenode" 
    3. Execute this command on all DataNodes:

      su -l $HDFS_USER -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode"

    where $HDFS_USER is the HDFS Service user. For example, hdfs.
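
    As an optional check (this assumes the hadoop client script is on the PATH, as in a default HDP layout), confirm that the NameNode is up and the DataNodes have registered:

      su -l $HDFS_USER -c "hadoop dfsadmin -report"

    The report should list each started DataNode as a live node.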

  2. Start MapReduce

    1. Execute this command on the JobTracker host machine:

      su -l $MAPRED_USER -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start jobtracker; sleep 25"
    2. Execute this command, also on the JobTracker host machine, to start the History Server:

      su -l $MAPRED_USER -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start historyserver" 
    3. Execute this command on all TaskTrackers:

      su -l $MAPRED_USER -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start tasktracker"

    where $MAPRED_USER is the MapReduce Service user. For example, mapred.
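
    To confirm that the JobTracker is responding (an optional check; again assumes the hadoop client script is on the PATH), list the running jobs:

      su -l $MAPRED_USER -c "hadoop job -list"

    An empty job list, rather than a connection error, indicates the JobTracker is up.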

  3. Start ZooKeeper. On each ZooKeeper host machine, execute the following command:

    su - $ZOOKEEPER_USER -c "export ZOOCFGDIR=/etc/zookeeper/conf ; export ZOOCFG=zoo.cfg ; source /etc/zookeeper/conf/zookeeper-env.sh ; /usr/lib/zookeeper/bin/zkServer.sh start"

    where $ZOOKEEPER_USER is the ZooKeeper Service user. For example, zookeeper.
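
    As a quick check (this assumes the default ZooKeeper client port 2181 and that nc is installed), send the server the ruok command:

      echo ruok | nc localhost 2181

    A healthy server replies with imok.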

  4. Start HBase

    1. Execute this command on the HBase Master host machine:

      su -l $HBASE_USER -c "/usr/lib/hbase/bin/hbase-daemon.sh --config /etc/hbase/conf start master"
    2. Execute this command on all RegionServers:

      su -l $HBASE_USER -c "/usr/lib/hbase/bin/hbase-daemon.sh --config /etc/hbase/conf start regionserver" 

    where $HBASE_USER is the HBase Service user. For example, hbase.
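
    Optionally, verify the deployment from the HBase shell (this assumes the hbase client script is on the PATH):

      echo "status" | su -l $HBASE_USER -c "hbase shell"

    The status output should report the expected number of live RegionServers.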

  5. Start Hive Metastore. On the Hive Metastore host machine, execute the following command:

    su -l $HIVE_USER -c "nohup hive --service metastore > $HIVE_LOG_DIR/hive.out 2> $HIVE_LOG_DIR/hive.log &"

    where:

    • $HIVE_USER is the Hive Service user. For example, hive.

    • $HIVE_LOG_DIR is the directory where Hive server logs are stored (for example: /var/log/hive).
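
    To confirm that the Metastore is listening (an optional check; assumes the default Metastore port 9083), run:

      netstat -tln | grep 9083

    Any startup errors are written to $HIVE_LOG_DIR/hive.log.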

  6. Start HiveServer2. On the HiveServer2 host machine, execute the following command:

    sudo su $HIVE_USER -c "nohup /usr/lib/hive/bin/hiveserver2 -hiveconf hive.metastore.uris=\" \" > $HIVE_LOG_DIR/hiveServer2.out 2> $HIVE_LOG_DIR/hiveServer2.log &"

    Because hive.metastore.uris is set to a blank value, this command also starts the Hive Metastore and HCatalog services.

    where:

    • $HIVE_USER is the Hive Service user. For example, hive.

    • $HIVE_LOG_DIR is the directory where Hive server logs are stored (for example: /var/log/hive).
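
    As an optional check (assumes the default HiveServer2 port 10000), confirm that the server is listening:

      netstat -tln | grep 10000

    Any startup errors are written to $HIVE_LOG_DIR/hiveServer2.log.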

  7. Start WebHCat. On the WebHCat host machine, execute the following command:

    su -l $WEBHCAT_USER -c "/usr/lib/hcatalog/sbin/webhcat_server.sh start"

    where $WEBHCAT_USER is the WebHCat Service user. For example, hcat.
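
    To verify WebHCat (this assumes the default WebHCat port 50111), query its status endpoint:

      curl -s http://localhost:50111/templeton/v1/status

    A healthy server returns {"status":"ok","version":"v1"}.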

  8. Start Oozie. On the Oozie server host machine, execute the following command:

    sudo su -l $OOZIE_USER -c "cd $OOZIE_LOG_DIR/log; /usr/lib/oozie/bin/oozie-start.sh"

    where:

    • $OOZIE_USER is the Oozie Service user. For example, oozie.

    • $OOZIE_LOG_DIR is the directory where Oozie log files are stored (for example: /var/log/oozie).
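
    As an optional check (assumes the Oozie client at /usr/lib/oozie/bin/oozie and the default Oozie port 11000), query the server status:

      su -l $OOZIE_USER -c "/usr/lib/oozie/bin/oozie admin -oozie http://localhost:11000/oozie -status"

    A healthy server reports System mode: NORMAL.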

  9. Start Ganglia.

    1. Execute this command on the Ganglia server host machine:

      /etc/init.d/hdp-gmetad start
    2. Execute this command on all the nodes in your Hadoop cluster:

      /etc/init.d/hdp-gmond start
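
    Optionally, confirm that the Ganglia daemons are running (gmetad on the Ganglia server, gmond on every cluster node):

      ps -ef | grep -e gmetad -e gmond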
  10. Start Nagios. On the Nagios server host machine, execute the following command:

    service nagios start
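
    To confirm that Nagios started, query the service status:

      service nagios status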

