Configuring an NFSv3 Gateway
The NFSv3 gateway allows a client to mount HDFS as part of the client's local file system. The gateway machine can be any host in the cluster, including the NameNode, a DataNode, or any HDFS client. The client can be any NFSv3-client-compatible machine.
After mounting HDFS to the local file system, a user can do the following (see the example after this list):
- Browse the HDFS file system as though it were part of the local file system.
- Upload files from the local file system to HDFS, and download files from HDFS to the local file system.
- Stream data directly to HDFS through the mount point.
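For example, assuming the gateway export is mounted at /hdfs_nfs_mount (the mount point also used in the upgrade steps below) and that /user/<username> exists in HDFS, a session might look like this; the file names are illustrative only:
$ ls /hdfs_nfs_mount/user
$ cp /tmp/report.txt /hdfs_nfs_mount/user/<username>/
$ cp /hdfs_nfs_mount/user/<username>/report.txt /tmp/report_copy.txt
$ gzip -c /var/log/messages > /hdfs_nfs_mount/user/<username>/messages.gz
The last command streams compressed data directly into HDFS through the mount point without creating an intermediate local file.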
The subsections that follow provide information on installing and configuring the gateway.
Upgrading from a CDH 5 Beta Release
If you are upgrading from a CDH 5 Beta release, you must first remove the hadoop-hdfs-portmap package. Proceed as follows.
- Unmount existing HDFS gateway mounts. For example, on each client, assuming the file system is mounted on /hdfs_nfs_mount:
$ umount /hdfs_nfs_mount
- Stop the services:
$ sudo service hadoop-hdfs-nfs3 stop
$ sudo service hadoop-hdfs-portmap stop
- Remove the hadoop-hdfs-portmap package.
- On a RHEL-compatible system:
$ sudo yum remove hadoop-hdfs-portmap
- On a SLES system:
$ sudo zypper remove hadoop-hdfs-portmap
- On an Ubuntu or Debian system:
$ sudo apt-get remove hadoop-hdfs-portmap
- Install the new version.
- On a RHEL-compatible system:
$ sudo yum install hadoop-hdfs-nfs3
- On a SLES system:
$ sudo zypper install hadoop-hdfs-nfs3
- On an Ubuntu or Debian system:
$ sudo apt-get install hadoop-hdfs-nfs3
- Start the system default portmapper service:
$ sudo service portmap start
- Now proceed with Starting the NFSv3 Gateway, and then remount the HDFS gateway mounts.
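To confirm the system portmapper came up before you remount, you can query it locally (an optional check, not part of the original upgrade steps):
$ rpcinfo -p localhost
The output should include portmapper entries on port 111.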
Installing the Packages for the First Time
On RHEL and similar systems:
Install the following packages on the cluster host you have chosen as the NFSv3 Gateway machine (referred to as the NFS server from here on).
- nfs-utils
- nfs-utils-lib
- hadoop-hdfs-nfs3
Use the following command:
$ sudo yum install nfs-utils nfs-utils-lib hadoop-hdfs-nfs3
On SLES:
Install nfs-utils on the cluster host you have chosen as the NFSv3 Gateway machine (referred to as the NFS server from here on):
$ sudo zypper install nfs-utils
On an Ubuntu or Debian system:
Install nfs-common on the cluster host you have chosen as the NFSv3 Gateway machine (referred to as the NFS server from here on):
$ sudo apt-get install nfs-common
Configuring the NFSv3 Gateway
Proceed as follows to configure the gateway.
- Add the following property to hdfs-site.xml on the NameNode:
<property>
  <name>dfs.namenode.accesstime.precision</name>
  <value>3600000</value>
  <description>The access time for an HDFS file is precise up to this value. The default value is 1 hour. Setting a value of 0 disables access times for HDFS.</description>
</property>
- Add the following property to hdfs-site.xml on the NFS server. The gateway uses this directory to temporarily save writes that arrive out of order before writing them to HDFS, so choose a location with enough free space:
<property>
  <name>dfs.nfs3.dump.dir</name>
  <value>/tmp/.hdfs-nfs</value>
</property>
- Configure the user running the gateway (normally the hdfs user, as in this example) to be a proxy for other users. To allow the hdfs user to be a proxy for all other users, add the following entries to core-site.xml on the NameNode (a more restrictive alternative is sketched after these steps):
<property>
  <name>hadoop.proxyuser.hdfs.groups</name>
  <value>*</value>
  <description>
    Set this to '*' to allow the gateway user to proxy any group.
  </description>
</property>
<property>
  <name>hadoop.proxyuser.hdfs.hosts</name>
  <value>*</value>
  <description>
    Set this to '*' to allow requests from any hosts to be proxied.
  </description>
</property>
- Restart the NameNode.
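If you do not want the gateway user to proxy every group from every host, the same proxyuser properties accept explicit comma-separated lists instead of '*'. A minimal sketch, assuming a hypothetical group nfsusers and gateway host nfs-gw.example.com:
<!-- nfsusers and nfs-gw.example.com are hypothetical; substitute your own group and gateway host -->
<property>
  <name>hadoop.proxyuser.hdfs.groups</name>
  <value>nfsusers</value>
</property>
<property>
  <name>hadoop.proxyuser.hdfs.hosts</name>
  <value>nfs-gw.example.com</value>
</property>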
Starting the NFSv3 Gateway
Do the following on the NFS server.
- First, stop the default NFS services, if they are running:
$ sudo service nfs stop
- Start the HDFS-specific services:
$ sudo service hadoop-hdfs-nfs3 start
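The gateway registers itself with the system portmapper when it starts. If the service fails to start, or does not appear in the rpcinfo output shown in the next section, make sure the system default portmapper is running (the same service started in the upgrade steps above):
$ sudo service portmap start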
Verifying that the NFSv3 Gateway is Working
To verify that the NFS services are running properly, you can use the rpcinfo command on any host on the local network:
$ rpcinfo -p <nfs_server_ip_address>
You should see output such as the following:
program vers proto port
100005   1   tcp   4242  mountd
100005   2   udp   4242  mountd
100005   2   tcp   4242  mountd
100000   2   tcp    111  portmapper
100000   2   udp    111  portmapper
100005   3   udp   4242  mountd
100005   1   udp   4242  mountd
100003   3   tcp   2049  nfs
100005   3   tcp   4242  mountd
To verify that the HDFS namespace is exported and can be mounted, use the showmount command.
$ showmount -e <nfs_server_ip_address>
You should see output similar to the following:
Exports list on <nfs_server_ip_address>:
/ (everyone)
Mounting HDFS on an NFS Client
To import the HDFS file system on an NFS client, use a mount command such as the following on the client:
$ mount -t nfs -o vers=3,proto=tcp,nolock <nfs_server_hostname>:/ /hdfs_nfs_mount
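To make the mount persist across reboots, you can add a corresponding entry to /etc/fstab on the client; a minimal sketch using the same options (the hostname and mount point are placeholders, as above):
<nfs_server_hostname>:/  /hdfs_nfs_mount  nfs  vers=3,proto=tcp,nolock  0 0
With the entry in place, running mount /hdfs_nfs_mount (or rebooting) mounts the HDFS export automatically.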