Non-Ambari Cluster Installation Guide

Start MapReduce JobHistory Server

  1. Change the ownership and permissions of the container-executor file:

    chown -R root:hadoop /usr/hdp/current/hadoop-yarn*/bin/container-executor 
    chmod -R 6050 /usr/hdp/current/hadoop-yarn*/bin/container-executor
    Note:

    If these permissions are not set, the health check script returns an error stating that the NodeManager is UNHEALTHY.
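    As an aside, a quick local illustration (hypothetical, not part of the installation) of what mode 6050 grants: the leading 6 sets the setuid and setgid bits, and 050 allows only group read/execute, so the NodeManager (running in the hadoop group) can execute container-executor as root while other users cannot touch it:

```shell
# Hypothetical demo of mode 6050 on a throwaway file (any Linux host).
# 6 = setuid (4) + setgid (2); 0/5/0 = no owner bits, group r-x, no other bits.
f=$(mktemp)
chmod 6050 "$f"
stat -c '%a %A' "$f"   # prints: 6050 ---Sr-s---
rm -f "$f"
```

    The capital S shows setuid set without owner execute; the lowercase s shows setgid combined with group execute.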

  2. Run the following commands from the JobHistory Server host to set up the required HDFS directories:

    su $HDFS_USER
    hdfs dfs -mkdir -p /mr-history/tmp 
    hdfs dfs -mkdir -p /mr-history/done 
    
    hdfs dfs -chmod 1777 /mr-history 
    hdfs dfs -chmod 1777 /mr-history/tmp 
    hdfs dfs -chmod 1770 /mr-history/done
    
    hdfs dfs -chown $MAPRED_USER:$MAPRED_USER_GROUP /mr-history
    hdfs dfs -chown $MAPRED_USER:$MAPRED_USER_GROUP /mr-history/tmp
    hdfs dfs -chown $MAPRED_USER:$MAPRED_USER_GROUP /mr-history/done
    
    where:
    $MAPRED_USER: mapred
    $MAPRED_USER_GROUP: mapred or hadoop
     
    hdfs dfs -mkdir -p /app-logs
    hdfs dfs -chmod 1777 /app-logs
    hdfs dfs -chown $YARN_USER:$HADOOP_GROUP /app-logs
    
    where:
    $YARN_USER: yarn
    $HADOOP_GROUP: hadoop
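    The leading 1 in the modes above is the sticky bit: in a 1777 directory any user can create entries, but only an entry's owner can delete it, while 1770 additionally locks out users outside the owning group. HDFS interprets these bits the same way as a local filesystem; a hypothetical local illustration:

```shell
# Hypothetical demo of the sticky bit on a local directory.
d=$(mktemp -d)
chmod 1777 "$d"        # world-writable, but only owners may delete their entries
stat -c '%a %A' "$d"   # prints: 1777 drwxrwxrwt
chmod 1770 "$d"        # same protection, restricted to the owning group
stat -c '%a %A' "$d"   # prints: 1770 drwxrwx--T
rmdir "$d"
```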
  3. Run the following command from the JobHistory server:

    su -l $YARN_USER -c "/usr/hdp/current/hadoop-mapreduce-historyserver/sbin/mr-jobhistory-daemon.sh --config $HADOOP_CONF_DIR start historyserver"

    where $HADOOP_CONF_DIR is the directory containing the Hadoop configuration files; for example, /etc/hadoop/conf.
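    The same daemon script also stops the service (mr-jobhistory-daemon.sh accepts stop as well as start), so a small helper (hypothetical, for convenience only) can build either command:

```shell
# Hypothetical wrapper around the daemon script used in the step above.
# mr-jobhistory-daemon.sh takes "start" or "stop" plus the daemon name.
hadoop_conf_dir=${HADOOP_CONF_DIR:-/etc/hadoop/conf}
mr_jh_cmd() {  # usage: mr_jh_cmd start|stop
    echo "/usr/hdp/current/hadoop-mapreduce-historyserver/sbin/mr-jobhistory-daemon.sh --config $hadoop_conf_dir $1 historyserver"
}
mr_jh_cmd start
# On the JobHistory Server host, run it as the YARN user:
#   su -l $YARN_USER -c "$(mr_jh_cmd start)"
```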