Configuring Fault Tolerance

Deploying Hue with an HA Cluster

If you are going to use Hue with an HA cluster, make the following changes to the /etc/hue/conf/hue.ini file.

  1. Install the Hadoop HttpFS component on the Hue server.

    For RHEL/CentOS/Oracle Linux:

    yum install hadoop-httpfs

    For SLES:

    zypper install hadoop-httpfs
  2. Modify the configuration in /etc/hadoop-httpfs/conf/ to add the JDK path. Ensure that JAVA_HOME is set in the file:
    export JAVA_HOME=/usr/jdk64/jdk1.7.0_67
  3. Configure the HttpFS service script by setting up the symlink in /etc/init.d:
    ln -s /usr/hdp/{HDP2.4.x version number}/hadoop-httpfs/etc/rc.d/init.d/hadoop-httpfs /etc/init.d/hadoop-httpfs
  4. Modify /etc/hadoop-httpfs/conf/httpfs-site.xml to configure HttpFS to talk to the cluster, by confirming that the following properties are correct:
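    The property list for this step is missing from this copy of the page. A minimal sketch of the HttpFS proxy-user entries commonly used for Hue, assuming Hue runs as the hue user (substitute your own user name, and restrict the * values in production), is:

    ```xml
    <!-- httpfs-site.xml: allow the Hue user to proxy requests on behalf of
         other users (hue is an assumed user name) -->
    <property>
      <name>httpfs.proxyuser.hue.hosts</name>
      <value>*</value>
    </property>
    <property>
      <name>httpfs.proxyuser.hue.groups</name>
      <value>*</value>
    </property>
    ```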
  5. Start the HttpFS service.
    service hadoop-httpfs start
  6. Modify the core-site.xml file. On the NameNodes and all of the DataNodes, add the following properties to the $HADOOP_CONF_DIR/core-site.xml file, where $HADOOP_CONF_DIR is the directory that stores the Hadoop configuration files, for example /etc/hadoop/conf.
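    The properties themselves did not survive in this copy of the page. A sketch of the core-site.xml proxy-user entries typically required, assuming the HttpFS service runs as the httpfs user (restrict the * values in production), is:

    ```xml
    <!-- core-site.xml: allow the HttpFS service user to impersonate
         end users (httpfs is an assumed service user name) -->
    <property>
      <name>hadoop.proxyuser.httpfs.groups</name>
      <value>*</value>
    </property>
    <property>
      <name>hadoop.proxyuser.httpfs.hosts</name>
      <value>*</value>
    </property>
    ```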
  7. In the hue.ini file, under the [hadoop][[hdfs_clusters]][[[default]]] subsection, use the following properties to configure the cluster:

    Property: fs_defaultfs
    Description: NameNode URL using the logical name for the new name service. For reference, this is the dfs.nameservices property in hdfs-site.xml in your Hadoop configuration.
    Example: hdfs://<name service ID>

    Property: webhdfs_url
    Description: URL to the HttpFS server.
    Example: http://<HttpFS host>:14000/webhdfs/v1/
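    Taken together, the resulting hue.ini fragment might look like the following sketch; the name service mycluster and the host httpfs-host.example.com are placeholders, so substitute the values from your own cluster:

    ```ini
    [hadoop]
      [[hdfs_clusters]]
        [[[default]]]
          # Logical name of the HA name service (assumed: mycluster,
          # matching dfs.nameservices in hdfs-site.xml)
          fs_defaultfs=hdfs://mycluster
          # HttpFS endpoint; 14000 is the default HttpFS port
          webhdfs_url=http://httpfs-host.example.com:14000/webhdfs/v1/
    ```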

  8. Restart Hue for the changes to take effect.
    service hue restart