If you are going to use Hue with an HA cluster, make the following changes to /etc/hue/conf/hue.ini:
Install the Hadoop HttpFS component on the Hue server.
For RHEL/CentOS/Oracle Linux:
yum install hadoop-httpfs
For SLES:
zypper install hadoop-httpfs
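Both platforms use RPM packaging, so if you want to confirm that the package actually installed, one quick check is:
rpm -q hadoop-httpfs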
Modify /etc/hadoop-httpfs/conf/httpfs-env.sh to add the JDK path. In the file, ensure that JAVA_HOME is set:
export JAVA_HOME=/usr/jdk64/jdk1.7.0_67
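To confirm that the path you set points at a working JDK (the jdk1.7.0_67 path is the example above; adjust it to match your own JAVA_HOME), you can run:
/usr/jdk64/jdk1.7.0_67/bin/java -version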
Configure the HttpFS service script for use by setting up a symlink in /etc/init.d:
ln -s /usr/hdp/{HDP 2.2.x version number}/hadoop-httpfs/etc/rc.d/init.d/hadoop-httpfs /etc/init.d/hadoop-httpfs
For example, {HDP 2.2.x version number} could be 2.2.4.0-2913.
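For instance, using that sample build number, the full command would be:
ln -s /usr/hdp/2.2.4.0-2913/hadoop-httpfs/etc/rc.d/init.d/hadoop-httpfs /etc/init.d/hadoop-httpfs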
Modify /etc/hadoop-httpfs/conf/httpfs-site.xml to configure HttpFS to talk to the cluster, by confirming that the following properties are correct:

<property>
  <name>httpfs.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>httpfs.proxyuser.hue.groups</name>
  <value>*</value>
</property>
Start the HttpFS service.
service hadoop-httpfs start
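A quick way to confirm that the service is responding is to issue a WebHDFS LISTSTATUS call against the root directory on the HttpFS port. This check assumes an unsecured cluster, where the user.name query parameter is accepted:
curl "http://localhost:14000/webhdfs/v1/?op=LISTSTATUS&user.name=hue"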
Modify the core-site.xml file. On the NameNodes and all the DataNodes, add the following properties to the $HADOOP_CONF_DIR/core-site.xml file, where $HADOOP_CONF_DIR is the directory for storing the Hadoop configuration files, for example /etc/hadoop/conf:

<property>
  <name>hadoop.proxyuser.httpfs.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.httpfs.hosts</name>
  <value>*</value>
</property>
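Because proxy-user settings are read by the NameNode, this change will not take effect until HDFS is restarted or its configuration is refreshed. One way to refresh it without a full restart, assuming you run the command as the HDFS superuser, is:
hdfs dfsadmin -refreshSuperUserGroupsConfiguration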
In the hue.ini file, under the [hadoop] [[hdfs_clusters]] [[[default]]] subsection, use the following variables to configure the cluster:

fs_defaultfs
  Description: NameNode URL using the logical name for the new name service. For reference, this is the dfs.nameservices property in hdfs-site.xml in your Hadoop configuration.
  Example: hdfs://mycluster

webhdfs_url
  Description: URL to the HttpFS server.
  Example: http://c6401.apache.org:14000/webhdfs/v1/
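Put together, a minimal hue.ini fragment using the example values above might look like this (the cluster name and hostname are the documentation's samples; substitute your own):

[hadoop]
  [[hdfs_clusters]]
    [[[default]]]
      fs_defaultfs=hdfs://mycluster
      webhdfs_url=http://c6401.apache.org:14000/webhdfs/v1/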
Restart Hue for the changes to take effect.
service hue restart
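To verify that Hue came back up, you can request its web interface; this check assumes Hue is listening on its default port, 8888:
curl -I http://localhost:8888/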