Start all the Hadoop services in the following order:
HDFS
MapReduce
ZooKeeper
HBase
Hive Metastore
HiveServer2
WebHCat
Oozie
Ganglia
Nagios
Instructions
Start HDFS
Execute this command on the NameNode host machine:
su -l $HDFS_USER -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode"
Execute this command on the Secondary NameNode host machine:
su -l $HDFS_USER -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start secondarynamenode"
Execute this command on all DataNodes:
su -l $HDFS_USER -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode"
where $HDFS_USER is the HDFS Service user. For example, hdfs.
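Optionally, verify that HDFS is up before continuing. A minimal check, assuming the hadoop client is on the path for $HDFS_USER:
su -l $HDFS_USER -c "hadoop dfsadmin -report"
The report should show the started DataNodes as live; newly started DataNodes can take a few moments to register.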
Start MapReduce
Execute this command on the JobTracker host machine:
su -l $MAPRED_USER -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start jobtracker; sleep 25"
Execute this command on the JobTracker host machine:
su -l $MAPRED_USER -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start historyserver"
Execute this command on all TaskTrackers:
su -l $MAPRED_USER -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start tasktracker"
where $MAPRED_USER is the MapReduce Service user. For example, mapred.
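Optionally, confirm that the JobTracker is accepting requests. A minimal check, assuming the hadoop client is configured on the JobTracker host:
su -l $MAPRED_USER -c "hadoop job -list"
A (possibly empty) job list, rather than a connection error, indicates the JobTracker is running.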
Start ZooKeeper. On the ZooKeeper host machine, execute the following command:
su - $ZOOKEEPER_USER -c "export ZOOCFGDIR=/etc/zookeeper/conf ; export ZOOCFG=zoo.cfg ; source /etc/zookeeper/conf/zookeeper-env.sh ; /usr/lib/zookeeper/bin/zkServer.sh start"
where $ZOOKEEPER_USER is the ZooKeeper Service user. For example, zookeeper.
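Optionally, verify that ZooKeeper is serving requests. A minimal check, assuming the default client port 2181 and that nc is installed:
echo ruok | nc localhost 2181
A healthy server responds with imok.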
Start HBase
Execute this command on the HBase Master host machine:
su -l $HBASE_USER -c "/usr/lib/hbase/bin/hbase-daemon.sh --config /etc/hbase/conf start master"
Execute this command on all RegionServers:
su -l $HBASE_USER -c "/usr/lib/hbase/bin/hbase-daemon.sh --config /etc/hbase/conf start regionserver"
where $HBASE_USER is the HBase Service user. For example, hbase.
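Optionally, verify that the Master and RegionServers registered correctly. A minimal check, assuming the hbase client is on the path for $HBASE_USER:
su -l $HBASE_USER -c "echo status | hbase shell"
The status output should report the expected number of live RegionServers.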
Start Hive Metastore. On the Hive Metastore host machine, execute the following command:
su -l $HIVE_USER -c "nohup hive --service metastore > $HIVE_LOG_DIR/hive.out 2> $HIVE_LOG_DIR/hive.log &"
where:
$HIVE_USER is the Hive Service user. For example, hive.
$HIVE_LOG_DIR is the directory where Hive server logs are stored (for example, /var/log/hive).
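Optionally, confirm that the metastore is listening. A minimal check, assuming the default metastore Thrift port 9083:
netstat -tln | grep 9083
If nothing is listening, check $HIVE_LOG_DIR/hive.log for startup errors.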
Start HiveServer2. On the HiveServer2 host machine, execute the following command:
sudo su $HIVE_USER -c "nohup /usr/lib/hive/bin/hiveserver2 -hiveconf hive.metastore.uris=\" \" > $HIVE_LOG_DIR/hiveServer2.out 2> $HIVE_LOG_DIR/hiveServer2.log &"
This will start both the Hive Metastore and HCatalog services.
where:
$HIVE_USER is the Hive Service user. For example, hive.
$HIVE_LOG_DIR is the directory where Hive server logs are stored (for example, /var/log/hive).
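Optionally, confirm that HiveServer2 is listening. A minimal check, assuming the default HiveServer2 port 10000:
netstat -tln | grep 10000
If the port is not open, check $HIVE_LOG_DIR/hiveServer2.log for errors.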
Start WebHCat. On the WebHCat host machine, execute the following command:
su -l $WEBHCAT_USER -c "/usr/lib/hcatalog/sbin/webhcat_server.sh start"
where $WEBHCAT_USER is the WebHCat Service user. For example, hcat.
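Optionally, verify the WebHCat REST endpoint. A minimal check, assuming the default WebHCat port 50111:
curl -s http://localhost:50111/templeton/v1/status
A running server returns a short JSON status message.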
Start Oozie. On the Oozie server host machine, execute the following command:
sudo su -l $OOZIE_USER -c "cd $OOZIE_LOG_DIR/log; /usr/lib/oozie/bin/oozie-start.sh"
where:
$OOZIE_USER is the Oozie Service user. For example, oozie.
$OOZIE_LOG_DIR is the directory where Oozie log files are stored (for example, /var/log/oozie).
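Optionally, verify that the Oozie server is in normal operating mode. A minimal check, assuming the Oozie client is on the path and the server uses its default port 11000:
su -l $OOZIE_USER -c "oozie admin -status -oozie http://localhost:11000/oozie"
A healthy server reports System mode: NORMAL.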
Start Ganglia.
Execute this command on the Ganglia server host machine:
/etc/init.d/hdp-gmetad start
Execute this command on all the nodes in your Hadoop cluster:
/etc/init.d/hdp-gmond start
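Optionally, confirm that the Ganglia daemons are running, for example by checking for the gmetad process on the Ganglia server and the gmond process on each node:
ps -ef | grep gmetad
ps -ef | grep gmond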
Start Nagios. On the Nagios server host machine, execute the following command:
service nagios start
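Optionally, confirm that Nagios started, assuming the standard init script is installed:
service nagios status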