1. Starting HDP services

Start all the Hadoop services in the following order:

  • HDFS

  • MapReduce

  • ZooKeeper

  • HBase

  • Hive Metastore

  • HiveServer2

  • WebHCat

  • Oozie

  • Ganglia

  • Nagios

Instructions

  1. Start HDFS

    1. Execute this command on the NameNode host machine:

      su -l hdfs -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode"
    2. Execute this command on the Secondary NameNode host machine:

      su -l hdfs -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start secondarynamenode"
    3. Execute this command on all DataNodes:

      su -l hdfs -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode"
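
    To confirm that HDFS is up and the DataNodes have registered, you can run a report as the hdfs user. This check is a sketch, not part of the startup sequence itself; it assumes the hadoop client script is on the PATH:

      # Lists live/dead DataNodes and overall cluster capacity
      su -l hdfs -c "hadoop dfsadmin -report"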

  2. Start MapReduce

    1. Execute this command on the JobTracker host machine:

      su -l mapred -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start jobtracker; sleep 25"
    2. Execute this command on the JobTracker host machine to start the History Server:

      su -l mapred -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start historyserver"
    3. Execute this command on all TaskTrackers:

      su -l mapred -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start tasktracker"
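
    To verify that the JobTracker is answering requests, you can list the running jobs as the mapred user (a sketch; an empty list is expected on a freshly started cluster):

      # Returns the (initially empty) list of running MapReduce jobs
      su -l mapred -c "hadoop job -list"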
  3. Start ZooKeeper. On the ZooKeeper host machine, execute the following command:

    su - zookeeper -c "export ZOOCFGDIR=/etc/zookeeper/conf ; export ZOOCFG=zoo.cfg ; source /etc/zookeeper/conf/zookeeper-env.sh ; /usr/lib/zookeeper/bin/zkServer.sh start"
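
    To check that ZooKeeper came up, you can ask the server for its status using the same environment (a sketch; a standalone node reports "standalone", ensemble members report "leader" or "follower"):

      # Queries the local ZooKeeper server's current mode
      su - zookeeper -c "export ZOOCFGDIR=/etc/zookeeper/conf ; export ZOOCFG=zoo.cfg ; /usr/lib/zookeeper/bin/zkServer.sh status"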
  4. Start HBase

    1. Execute this command on the HBase Master host machine:

      su -l hbase -c "/usr/lib/hbase/bin/hbase-daemon.sh --config /etc/hbase/conf start master"
    2. Execute this command on all RegionServers:

      su -l hbase -c "/usr/lib/hbase/bin/hbase-daemon.sh --config /etc/hbase/conf start regionserver"
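
    To verify HBase, you can pipe the shell's status command through the hbase client as the hbase user (a sketch; it assumes the hbase script is on the PATH and prints the number of live RegionServers):

      # Reports live/dead RegionServers and average load
      su -l hbase -c "echo status | hbase shell"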
  5. Start Hive Metastore. On the Hive Metastore host machine, execute the following command:

    su -l hive -c "nohup hive --service metastore > $HIVE_LOG_DIR/hive.out 2> $HIVE_LOG_DIR/hive.log &"

    where $HIVE_LOG_DIR is the directory where Hive server logs are stored (example: /var/log/hive).
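
    To confirm the Metastore is reachable, you can issue a simple query through the Hive CLI (a sketch; it assumes the hive client on this host is configured to point at the Metastore you just started):

      # Succeeds only if the CLI can reach the Metastore
      su -l hive -c "hive -e 'show databases;'"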

  6. Start HiveServer2. On the HiveServer2 host machine, execute the following command:

    sudo su hive -c "nohup /usr/lib/hive/bin/hiveserver2 -hiveconf hive.metastore.uris=\" \" > $HIVE_LOG_DIR/hiveServer2.out 2> $HIVE_LOG_DIR/hiveServer2.log &"

    where $HIVE_LOG_DIR is the directory where Hive server logs are stored (example: /var/log/hive).
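
    To confirm HiveServer2 accepts connections, you can connect with Beeline (a sketch; it assumes the default HiveServer2 port 10000 and an unsecured cluster):

      # Opens a JDBC connection and runs a trivial query
      /usr/lib/hive/bin/beeline -u "jdbc:hive2://localhost:10000" -e "show databases;"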

  7. Start WebHCat. On the WebHCat host machine, execute the following command:

    su -l hcat -c "/usr/lib/hcatalog/sbin/webhcat_server.sh start"
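
    To check WebHCat, you can query its status endpoint (a sketch; 50111 is the default WebHCat port, and a healthy server returns {"status":"ok","version":"v1"}):

      # REST status check against the WebHCat (Templeton) server
      curl -s "http://localhost:50111/templeton/v1/status"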

  8. Start Oozie. On the Oozie server host machine, execute the following command:

    sudo su -l oozie -c "cd $OOZIE_LOG_DIR; /usr/lib/oozie/bin/oozie-start.sh"

    where $OOZIE_LOG_DIR is the directory where Oozie log files are stored (for example: /var/log/oozie).
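
    To verify the Oozie server, you can query its system status with the Oozie client (a sketch; it assumes the default Oozie port 11000, and a healthy server reports "System mode: NORMAL"):

      # Asks the Oozie server for its system mode
      /usr/lib/oozie/bin/oozie admin -oozie http://localhost:11000/oozie -status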

  9. Start Ganglia.

    1. Execute this command on the Ganglia server host machine:

      /etc/init.d/hdp-gmetad start
    2. Execute this command on all the nodes in your Hadoop cluster:

      /etc/init.d/hdp-gmond start
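
    To confirm the Ganglia daemons are running, you can check for their processes (a sketch; gmetad should appear on the Ganglia server and gmond on every node):

      # Shows any running Ganglia collector/monitor processes
      ps -ef | grep -E "gmetad|gmond"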
  10. Start Nagios. On the Nagios server host machine, execute the following command:

    service nagios start
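
    To confirm Nagios started, you can ask the init script for its status (a sketch; it assumes the Nagios init script supports the status action, as most do):

      # Reports whether the nagios daemon is running
      service nagios status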