Start the Hadoop services in the following order:
Knox
ZooKeeper
HDFS
YARN
HBase
Hive Metastore
HiveServer2
WebHCat
Oozie
Storm
Kafka
Instructions
Start Knox. When starting the gateway with the script below, the process runs in the background. The log output is written to /var/log/knox and a PID (process ID) is written to /var/run/knox. Execute this command on the Knox host machine.
su -l knox -c "/usr/hdp/current/knox-server/bin/gateway.sh start"
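To confirm the gateway started, you can check the process against the PID recorded under /var/run/knox (the file name gateway.pid below is an assumption; the text above only states that the PID is written to that directory):

# gateway.pid is an assumed name; only the /var/run/knox directory is documented above
ps -p "$(cat /var/run/knox/gateway.pid)"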
Note: If Knox has been stopped without using gateway.sh stop, you must start the service using gateway.sh clean. The clean option removes all log files in /var/log/knox.

Start ZooKeeper. Execute this command on the ZooKeeper host machine(s):
su - zookeeper -c "export ZOOCFGDIR=/usr/hdp/current/zookeeper-server/conf ; export ZOOCFG=zoo.cfg; source /usr/hdp/current/zookeeper-server/conf/zookeeper-env.sh ; /usr/hdp/current/zookeeper-server/bin/zkServer.sh start"
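To verify each server is up and serving, you can send the standard ruok four-letter command to the client port (2181 is the ZooKeeper default; adjust to match your zoo.cfg):

# A healthy server answers "imok"
echo ruok | nc localhost 2181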
Start HDFS
If you are running NameNode HA (High Availability), start the JournalNodes by executing these commands on the JournalNode host machines:
su -l $HDFS_USER -c "/usr/hdp/current/hadoop-hdfs-journalnode/../hadoop/sbin/hadoop-daemon.sh start journalnode"
where $HDFS_USER is the HDFS user. For example, hdfs.

Execute this command on the NameNode host machine(s):
su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start namenode"
If you are running NameNode HA, start the ZooKeeper Failover Controller (ZKFC) by executing the following command on all NameNode machines. The starting sequence of the ZKFCs determines which NameNode becomes Active.
su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start zkfc"
If you are not running NameNode HA, execute the following command on the Secondary NameNode host machine. If you are running NameNode HA, the Standby NameNode takes on the role of the Secondary NameNode.
su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start secondarynamenode"
Execute these commands on all DataNodes:
su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-datanode/../hadoop/sbin/hadoop-daemon.sh start datanode"
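With the NameNode(s) and DataNodes started, you can confirm that the DataNodes have registered by pulling an HDFS admin report; on HA clusters you can also query each NameNode's state. The service ID nn1 below is a placeholder; use the NameNode IDs defined in your hdfs-site.xml:

su -l hdfs -c "hdfs dfsadmin -report"
# HA only: replace nn1 with one of your configured NameNode IDs
su -l hdfs -c "hdfs haadmin -getServiceState nn1"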
Start YARN
Execute this command on the ResourceManager host machine(s):
su -l yarn -c "/usr/hdp/current/hadoop-yarn-resourcemanager/sbin/yarn-daemon.sh start resourcemanager"
Execute this command on the History Server host machine:
su -l yarn -c "/usr/hdp/current/hadoop-mapreduce-historyserver/sbin/mr-jobhistory-daemon.sh start historyserver"
Execute this command on all NodeManagers:
su -l yarn -c "/usr/hdp/current/hadoop-yarn-nodemanager/sbin/yarn-daemon.sh start nodemanager"
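To confirm the NodeManagers have registered with the ResourceManager, you can list the cluster nodes:

su -l yarn -c "yarn node -list"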
Start HBase
Execute this command on the HBase Master host machine:
su -l hbase -c "/usr/hdp/current/hbase-master/bin/hbase-daemon.sh start master; sleep 25"
Execute this command on all RegionServers:
su -l hbase -c "/usr/hdp/current/hbase-regionserver/bin/hbase-daemon.sh start regionserver"
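A quick status command from the HBase shell confirms that the Master and RegionServers are running:

su -l hbase -c "echo status | hbase shell"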
Start the Hive Metastore. On the Hive Metastore host machine, execute the following command:
su -l $HIVE_USER -c "nohup /usr/hdp/current/hive-metastore/bin/hive --service metastore > /var/log/hive/hive.out 2> /var/log/hive/hive.log &"
where $HIVE_USER is the Hive user. For example, hive.
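The metastore's Thrift service listens on port 9083 by default; one quick way to confirm it is accepting connections (assuming the default port, and run as root so the owning PID is shown):

netstat -tlnp | grep 9083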
Start HiveServer2. On the HiveServer2 host machine, execute the following command:
su -l $HIVE_USER -c 'nohup /usr/hdp/current/hive-server2/bin/hiveserver2 -hiveconf hive.metastore.uris=" " >> /tmp/hiveserver2HD.out 2>> /tmp/hiveserver2HD.log &'
where $HIVE_USER is the Hive user. For example, hive.
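You can smoke-test HiveServer2 with Beeline against its default port, 10000 (substitute your host, port, and authentication settings):

beeline -u "jdbc:hive2://localhost:10000" -e "show databases;"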
Start WebHCat. On the WebHCat host machine, execute the following command:
su -l hcat -c "/usr/hdp/current/hive-webhcat/sbin/webhcat_server.sh start"
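WebHCat serves a status endpoint on its default port, 50111; a healthy server returns {"status":"ok","version":"v1"}:

curl http://localhost:50111/templeton/v1/status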
Start Oozie. Execute the following command on the Oozie host machine:
su -l oozie -c "/usr/hdp/current/oozie-server/bin/oozied.sh start"
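To confirm Oozie came up, query its system mode through the admin CLI (substitute your Oozie host for localhost); a healthy server reports NORMAL:

su -l oozie -c "oozie admin -oozie http://localhost:11000/oozie -status"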
Start Storm services using a process controller, such as supervisord. See Installing and Configuring Apache Storm. For example, to start the storm-nimbus service:
sudo /usr/bin/supervisorctl
storm-drpc RUNNING pid 9801, uptime 0:05:05
storm-nimbus STOPPED Dec 01 06:18 PM
storm-ui RUNNING pid 9800, uptime 0:05:05
supervisor> start storm-nimbus
storm-nimbus: started
where $STORM_USER is the operating system user that installed Storm. For example, storm.
Start Kafka with the following command:
su -l $KAFKA_USER -c "/usr/hdp/current/kafka-broker/bin/kafka start"
where $KAFKA_USER is the operating system user that installed Kafka. For example, kafka.
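To verify the broker started and registered in ZooKeeper, you can list topics with the CLI shipped under the broker's bin directory (assuming ZooKeeper at localhost:2181):

su -l $KAFKA_USER -c "/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --list --zookeeper localhost:2181"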