Apache Storm Component Guide

Configuring Storm-HDFS for a Secure Cluster

To use the storm-hdfs connector in topologies that run on secure clusters:

  1. Provide your own Kerberos keytab and principal name to the connectors. The Config object that you pass into the topology must contain the storm keytab file and principal name.

  2. Specify an HdfsBolt configKey, using the method HdfsBolt.withConfigKey("somekey"). The value map of this key should have the following two properties:

    hdfs.keytab.file: "<path-to-keytab>"

    hdfs.kerberos.principal: "<principal>@<realm>"


    <path-to-keytab> specifies the path to the keytab file on the supervisor hosts.

    <principal>@<realm> specifies the user and Kerberos realm; for example, storm-admin@EXAMPLE.COM.

    For example:

    // $keytab and $principal are placeholders for your keytab file path
    // and principal name.
    Config config = new Config();
    config.put(HdfsSecurityUtil.STORM_KEYTAB_FILE_KEY, "$keytab");
    config.put(HdfsSecurityUtil.STORM_USER_NAME_KEY, "$principal");
    StormSubmitter.submitTopology("$topologyName", config, builder.createTopology());

    On worker hosts, the bolt/Trident state code uses the keytab file and principal to authenticate with the NameNode. Make sure that all workers have the keytab file stored in the same location.


    For more information about the HdfsBolt class, refer to the Apache Storm HdfsBolt API documentation.

  3. Distribute the keytab file referenced in the Config object to all supervisor nodes. This is the keytab used to authenticate to HDFS, typically the Storm service keytab for the storm user. The user ID that the Storm worker runs under must have read access to it.

    On an Ambari-managed cluster this is /etc/security/keytabs/storm.service.keytab (the "path-to-keytab"), and the worker runs as the storm user.

  4. If you set supervisor.run.worker.as.user to true (see Running Workers as Users in Configuring Storm for Kerberos over Ambari), make sure that the user the workers run as (typically storm) has read access to those keytabs. This is a manual step: an administrator must go to each supervisor node and run chmod to grant file system read permission on the keytab files.


    You do not need to create separate keytabs or principals; the general guideline is to create a principal and keytab for each group of users that requires the same access to these resources, and use that single keytab.

    All of these connectors accept topology configurations through which you can specify the keytab location on the host and the principal the connector uses to log in to that system.

  5. Configure the connector(s). Here is a sample configuration for the Storm-HDFS connector (see Writing Data to HDFS for a more extensive example):

    Map<String, Object> map = new HashMap<String, Object>();
    map.put("hdfs.keytab.file", "/etc/security/keytabs/storm.service.keytab");
    map.put("hdfs.kerberos.principal", "storm@EXAMPLE.COM");
    Config config = new Config();
    config.put("hdfs.config", map);
    HdfsBolt bolt = new HdfsBolt().withConfigKey("hdfs.config");
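Before submitting the topology, the settings from the steps above can be sanity-checked with a small pre-flight helper. The class and method below are illustrative (not part of the storm-hdfs API); they only verify that the value map names both required properties and that the keytab file actually exists on the local host:

```java
import java.io.File;
import java.util.HashMap;
import java.util.Map;

// Hypothetical pre-flight check for the value map placed under the
// HdfsBolt config key ("hdfs.config" in the sample above).
public class HdfsSecurityPreflight {

    static String validate(Map<String, Object> map) {
        // Both properties must be present and non-empty.
        for (String key : new String[] {"hdfs.keytab.file", "hdfs.kerberos.principal"}) {
            Object value = map.get(key);
            if (value == null || value.toString().isEmpty()) {
                return "missing required property: " + key;
            }
        }
        // The keytab must exist at the same path on every supervisor host.
        File keytab = new File(map.get("hdfs.keytab.file").toString());
        if (!keytab.isFile()) {
            return "keytab not found: " + keytab.getPath();
        }
        return "ok";
    }

    public static void main(String[] args) {
        Map<String, Object> map = new HashMap<String, Object>();
        map.put("hdfs.kerberos.principal", "storm@EXAMPLE.COM");
        // The keytab property was never set, so validation reports it.
        System.out.println(validate(map));
    }
}
```

Running such a check on each supervisor node before deployment catches the most common failure mode: a keytab distributed to some, but not all, hosts.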


For the Storm-HDFS connector, you must package hdfs-site.xml and core-site.xml (from your cluster configuration) in the topology .jar file.

In addition, include any configuration files for HDP components used in your Storm topology, such as hive-site.xml and hbase-site.xml. This fulfills the requirement that all related configuration files appear in the CLASSPATH of your Storm topology at runtime.
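One way to confirm that the configuration files were actually packaged is a small classpath probe; the class below is illustrative (for example, run from a bolt's prepare() method or a unit test against the built .jar):

```java
// Hypothetical runtime check: reports whether the Hadoop client
// configuration files packaged into the topology jar are visible
// on the classpath.
public class ClasspathCheck {

    static boolean onClasspath(String resource) {
        return Thread.currentThread()
                .getContextClassLoader()
                .getResource(resource) != null;
    }

    public static void main(String[] args) {
        for (String name : new String[] {"hdfs-site.xml", "core-site.xml"}) {
            System.out.println(name + ": " + (onClasspath(name) ? "found" : "NOT found"));
        }
    }
}
```

If a file reports NOT found, check that it was placed at the root of the .jar (not under a subdirectory) when the topology was packaged.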