Starting HDP Services

Start the Hadoop services in the following order:

  • Ranger

  • Knox

  • ZooKeeper

  • HDFS

  • YARN

  • HBase

  • Hive Metastore

  • HiveServer2

  • WebHCat

  • Oozie

  • Hue

  • Storm

  • Kafka

  • Atlas

Instructions

  1. Start Ranger. Execute the following commands on the Ranger host machine:

    sudo service ranger-admin start
    sudo service ranger-usersync start
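
    Optionally, verify that both Ranger daemons are running. This is a minimal check and assumes the same init scripts support a status action:

    sudo service ranger-admin status
    sudo service ranger-usersync status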
  2. Start Knox. When starting the gateway with the script below, the process runs in the background. The log output is written to /var/log/knox and a PID (process ID) is written to /var/run/knox. Execute this command on the Knox host machine.

    su -l knox -c "/usr/hdp/current/knox-server/bin/gateway.sh start"
    Note:

    If Knox has been stopped without using gateway.sh stop, you must start the service using gateway.sh clean. The clean option removes all log files in /var/log/knox.
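
    To verify that the gateway is running, check for the PID file under /var/run/knox, or use the status action if your version of gateway.sh provides one:

    su -l knox -c "/usr/hdp/current/knox-server/bin/gateway.sh status"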

  3. Start ZooKeeper. Execute this command on the ZooKeeper host machine(s):

    su - zookeeper -c "export ZOOCFGDIR=/usr/hdp/current/zookeeper-server/conf ; export ZOOCFG=zoo.cfg; source /usr/hdp/current/zookeeper-server/conf/zookeeper-env.sh ; /usr/hdp/current/zookeeper-server/bin/zkServer.sh start"
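
    Optionally, verify each ZooKeeper server with the status action of the same script; it reports whether the node is running as leader, follower, or standalone:

    su - zookeeper -c "export ZOOCFGDIR=/usr/hdp/current/zookeeper-server/conf ; export ZOOCFG=zoo.cfg; source /usr/hdp/current/zookeeper-server/conf/zookeeper-env.sh ; /usr/hdp/current/zookeeper-server/bin/zkServer.sh status"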
  4. Start HDFS

    • If you are running NameNode HA (High Availability), start the JournalNodes by executing these commands on the JournalNode host machines:

      su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-journalnode/../hadoop/sbin/hadoop-daemon.sh start journalnode"

      where hdfs is the HDFS user for your cluster.

    • Execute this command on the NameNode host machine(s):

      su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start namenode"
    • If you are running NameNode HA, start the ZooKeeper Failover Controller (ZKFC) by executing the following command on all NameNode machines. The starting sequence of the ZKFCs determines which NameNode will become Active.

      su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start zkfc"
    • If you are not running NameNode HA, execute the following command on the Secondary NameNode host machine. If you are running NameNode HA, the Standby NameNode takes on the role of the Secondary NameNode.

      su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start secondarynamenode"
    • Execute these commands on all DataNodes:

      su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-datanode/../hadoop/sbin/hadoop-daemon.sh start datanode"
  5. Start YARN

    • Execute this command on the ResourceManager host machine(s):

      su -l yarn -c "/usr/hdp/current/hadoop-yarn-resourcemanager/sbin/yarn-daemon.sh start resourcemanager"
    • Execute this command on the History Server host machine:

      su -l mapred -c "/usr/hdp/current/hadoop-mapreduce-historyserver/sbin/mr-jobhistory-daemon.sh start historyserver"
    • Execute this command on the Timeline Server host machine:

      su -l yarn -c "/usr/hdp/current/hadoop-yarn-timelineserver/sbin/yarn-daemon.sh start timelineserver"
    • Execute this command on all NodeManagers:

      su -l yarn -c "/usr/hdp/current/hadoop-yarn-nodemanager/sbin/yarn-daemon.sh start nodemanager"
  6. Start HBase

    • Execute this command on the HBase Master host machine:

      su -l hbase -c "/usr/hdp/current/hbase-master/bin/hbase-daemon.sh start master; sleep 25"
    • Execute this command on all RegionServers:

      su -l hbase -c "/usr/hdp/current/hbase-regionserver/bin/hbase-daemon.sh start regionserver"
  7. Start the Hive Metastore. On the Hive Metastore host machine, execute the following commands:

    su $HIVE_USER
    nohup /usr/hdp/current/hive-metastore/bin/hive --service metastore > /var/log/hive/hive.out 2> /var/log/hive/hive.log &

    Where $HIVE_USER is the Hive user. For example, hive.
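
    To verify that the metastore is up, check that it is listening on its Thrift port (9083 by default; adjust if your cluster uses a different port):

    netstat -tnlp | grep 9083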

  8. Start HiveServer2. On the Hive Server2 host machine, execute the following commands:

    su $HIVE_USER
    nohup /usr/hdp/current/hive-server2/bin/hiveserver2 -hiveconf hive.metastore.uris=" " > /tmp/hiveserver2HD.out 2> /tmp/hiveserver2HD.log &

    Where $HIVE_USER is the Hive user. For example, hive.
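
    To verify HiveServer2, connect with Beeline. This assumes the default port 10000 and an unsecured cluster; adjust the JDBC URL for your environment:

    beeline -u "jdbc:hive2://localhost:10000" -e "show databases;"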

  9. Start WebHCat. On the WebHCat host machine, execute the following command:

    su -l hcat -c "/usr/hdp/current/hive-webhcat/sbin/webhcat_server.sh start"
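
    To verify WebHCat, query its status endpoint (port 50111 by default):

    curl -s 'http://localhost:50111/templeton/v1/status'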
  10. Start Oozie. Execute the following command on the Oozie host machine:

    su -l oozie -c "/usr/hdp/current/oozie-server/bin/oozied.sh start"
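
    To verify Oozie, check the system status with the Oozie client (port 11000 by default; adjust the URL for your Oozie host):

    su -l oozie -c "oozie admin -oozie http://localhost:11000/oozie -status"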
  11. Start Hue. As the root user, execute the following command on the Hue Server:

    /etc/init.d/hue start

    This command starts several subprocesses corresponding to the different Hue components. Even though the root user calls the init.d script, the actual processes run as the Hue user.
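
    To verify Hue, you can ask the same init script for its status (most init scripts support a status action):

    /etc/init.d/hue status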

  12. Start Storm services using a process controller, such as supervisord. See "Installing and Configuring Apache Storm" in the Non-Ambari Cluster Installation Guide. For example, to start the storm-nimbus service:

    sudo /usr/bin/supervisorctl
    storm-drpc RUNNING pid 9801, uptime 0:05:05
    storm-nimbus STOPPED Dec 01 06:18 PM
    storm-ui RUNNING pid 9800, uptime 0:05:05
    supervisor> start storm-nimbus
    storm-nimbus: started

    The supervisord-managed Storm services should run as $STORM_USER, the operating system user that installed Storm. For example, storm.
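
    To confirm that all Storm components are running, list them again from the supervisord controller:

    sudo /usr/bin/supervisorctl status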

  13. Start Kafka with the following commands:

    su $KAFKA_USER
    /usr/hdp/current/kafka-broker/bin/kafka start

    where $KAFKA_USER is the operating system user that installed Kafka. For example, kafka.
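
    To verify the broker, list the topics it knows about. This assumes ZooKeeper is reachable on localhost:2181; substitute your ZooKeeper connect string:

    /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --list --zookeeper localhost:2181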

  14. Start the Atlas server with the following command:

    /usr/hdp/<hdp-version>/atlas/bin/atlas_start.py --port 21000
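
    To verify that Atlas is up, query its version endpoint (this assumes the default port 21000 used above):

    curl http://localhost:21000/api/atlas/admin/version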