Configuring Storm-HDFS for a Secure Cluster
To use the storm-hdfs connector in topologies that run on secure clusters:
- Provide your own Kerberos keytab and principal name to the connectors. The Config object that you pass into the topology must contain the Storm keytab file and principal name.
- Specify an HdfsBolt configKey, using the method HdfsBolt.withConfigKey("somekey"). The value map of this key should have the following two properties:

  ```
  hdfs.keytab.file: "<path-to-keytab>"
  hdfs.kerberos.principal: "<principal>@<host>"
  ```

  where <path-to-keytab> specifies the path to the keytab file on the supervisor hosts, and <principal>@<host> specifies the user and domain; for example, storm-admin@EXAMPLE.com.

  For example:

  ```java
  Config config = new Config();
  config.put(HdfsSecurityUtil.STORM_KEYTAB_FILE_KEY, "$keytab");
  config.put(HdfsSecurityUtil.STORM_USER_NAME_KEY, "$principal");
  StormSubmitter.submitTopology("$topologyName", config, builder.createTopology());
  ```
  On worker hosts, the bolt/Trident state code uses the keytab file and principal to authenticate with the NameNode. Make sure that all workers have the keytab file, stored in the same location.

  Note: For more information about the HdfsBolt class, refer to the Apache Storm HdfsBolt API documentation.

- Distribute the keytab file that the bolt uses in the Config object to all supervisor nodes. This is the keytab used to authenticate to HDFS, typically the Storm service keytab, storm. The user ID that the Storm worker runs under should have access to it.

  On an Ambari-managed cluster this is /etc/security/keytabs/storm.service.keytab (the "path-to-keytab"), and the worker runs under the storm user.

- If you set supervisor.run.worker.as.user to true, make sure that the user the workers run under (typically the storm user) has read access to those keytabs. This is a manual step; an administrator needs to go to each supervisor node and run chmod to grant file system permissions to the users on these keytab files.

  Note: You do not need to create separate keytabs or principals. The general guideline is to create a principal and keytab for each group of users that requires the same access to these resources, and to use that single keytab.
All of these connectors accept topology configurations. You can specify the keytab location on the host and the principal through which the connector will log in to that system.
- Configure the connector(s). Here is a sample configuration for the Storm-HDFS connector:

  ```java
  HdfsBolt bolt = new HdfsBolt()
          .withFsUrl("hdfs://localhost:8020")
          .withFileNameFormat(fileNameFormat)
          .withRecordFormat(format)
          .withRotationPolicy(rotationPolicy)
          .withSyncPolicy(syncPolicy)
          .withConfigKey("hdfs.config");

  Map<String, Object> map = new HashMap<String, Object>();
  map.put("hdfs.keytab.file", "/etc/security/keytabs/storm.service.keytab");
  map.put("hdfs.kerberos.principal", "storm@TEST.HORTONWORKS.COM");

  Config config = new Config();
  config.put("hdfs.config", map);

  StormSubmitter.submitTopology("$topologyName", config, builder.createTopology());
  ```
Important: For the Storm-HDFS connector, you must package hdfs-site.xml and core-site.xml (from your cluster configuration) in the topology .jar file. In addition, include any configuration files for HDP components used in your Storm topology, such as hive-site.xml and hbase-site.xml. This fulfills the requirement that all related configuration files appear in the CLASSPATH of your Storm topology at runtime.
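If you build the topology with Maven, one simple way to meet this requirement (a sketch; the project and file names are illustrative) is to place the client configuration files under src/main/resources, which Maven copies into the root of the jar, and therefore onto the topology CLASSPATH:

```
mytopology/
├── pom.xml
└── src/
    └── main/
        ├── java/...
        └── resources/
            ├── core-site.xml
            ├── hdfs-site.xml
            ├── hive-site.xml    (only if the topology uses Hive)
            └── hbase-site.xml   (only if the topology uses HBase)
```

Copy these files from your cluster's client configuration so they match the cluster the topology will run against.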