
Step 12: Start up the NameNode

You are now ready to start the NameNode. Use the service command to run the /etc/init.d script:

$ sudo service hadoop-hdfs-namenode start
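
On most platforms the init script also supports a status action, which you can use to confirm that the daemon came up (the exact output varies by platform):

$ sudo service hadoop-hdfs-namenode status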

You should see additional information in the logs, such as:

10/10/25 17:01:46 INFO security.UserGroupInformation:
Login successful for user hdfs/fully.qualified.domain.name@YOUR-REALM.COM using keytab file /etc/hadoop/conf/hdfs.keytab

and:

12/05/23 18:18:31 INFO http.HttpServer: Adding Kerberos (SPNEGO) filter to getDelegationToken
12/05/23 18:18:31 INFO http.HttpServer: Adding Kerberos (SPNEGO) filter to renewDelegationToken
12/05/23 18:18:31 INFO http.HttpServer: Adding Kerberos (SPNEGO) filter to cancelDelegationToken
12/05/23 18:18:31 INFO http.HttpServer: Adding Kerberos (SPNEGO) filter to fsck
12/05/23 18:18:31 INFO http.HttpServer: Adding Kerberos (SPNEGO) filter to getimage
12/05/23 18:18:31 INFO http.HttpServer: Jetty bound to port 50070
12/05/23 18:18:31 INFO mortbay.log: jetty-6.1.26
12/05/23 18:18:31 INFO server.KerberosAuthenticationHandler: Login using keytab /etc/hadoop/conf/hdfs.keytab, for principal HTTP/fully.qualified.domain.name@YOUR-REALM.COM
12/05/23 18:18:31 INFO server.KerberosAuthenticationHandler: Initialized, principal [HTTP/fully.qualified.domain.name@YOUR-REALM.COM] from keytab [/etc/hadoop/conf/hdfs.keytab]
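
To watch for these messages as the NameNode starts, you can follow the daemon log. On a package-based install the logs are typically under /var/log/hadoop-hdfs; the exact filename depends on your host name, so treat the path below as an example:

$ sudo tail -f /var/log/hadoop-hdfs/hadoop-hdfs-namenode-$(hostname).log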

You can verify that the NameNode is working properly by pointing a web browser at http://machine:50070/, where machine is the host on which the NameNode is running.
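
If the host you are testing from holds a valid Kerberos ticket, you can also fetch the page from the command line using curl's SPNEGO support (this assumes your curl build includes GSS-API/Negotiate support):

$ curl --negotiate -u : http://machine:50070/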

Cloudera also recommends testing that the NameNode is working properly by performing a metadata-only HDFS operation, which will now require correct Kerberos credentials. For example:

$ hadoop fs -ls

Information about the kinit Command

  Important:
Running the hadoop fs -ls command will fail if you do not have a valid Kerberos ticket in your credentials cache. You can examine the Kerberos tickets currently in your credentials cache by running the klist command. You can obtain a ticket by running the kinit command and either specifying a keytab file containing credentials, or entering the password for your principal. If you do not have a valid ticket, you will receive an error such as:
11/01/04 12:08:12 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException:
 GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
Bad connection to FS. command aborted. exception: Call to nn-host/10.0.0.2:8020 failed on local exception: java.io.IOException:
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
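
For example, to obtain a ticket using the keytab and principal from the examples above (substitute your own principal and keytab path) and then inspect the credentials cache:

$ kinit -kt /etc/hadoop/conf/hdfs.keytab hdfs/fully.qualified.domain.name@YOUR-REALM.COM
$ klist
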
  Note:

The kinit command must either be on the path of the user accounts running the Hadoop client, or the hadoop.kerberos.kinit.command parameter in core-site.xml must be set to the absolute path of the kinit command.
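
For example, you can set the parameter in core-site.xml as follows; the path shown is only illustrative, so use the actual location of kinit on your systems:

<property>
  <name>hadoop.kerberos.kinit.command</name>
  <value>/usr/bin/kinit</value>
</property>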

  Note:

If you are running MIT Kerberos 1.8.1 or higher, a bug in Oracle JDK 6 Update 26 and earlier causes Java to be unable to read the Kerberos credentials cache even after you have successfully obtained a Kerberos ticket using kinit. To work around this bug, run kinit -R after running kinit initially to obtain credentials. Doing so renews the ticket and rewrites the credentials cache in a format that Java can read. For more information about this problem, see Problem 2 in Appendix A - Troubleshooting.
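
For example, using the principal from the examples above:

$ kinit hdfs/fully.qualified.domain.name@YOUR-REALM.COM
$ kinit -R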
