Configuring Storm-HDFS for a Secure Cluster
To use the storm-hdfs connector in topologies that run on secure clusters, complete the following steps.
Provide your own Kerberos keytab and principal name to the connectors. The Config object that you pass into the topology must contain the Storm keytab file and principal name.
Specify an HdfsBolt configKey using the method HdfsBolt.withConfigKey("somekey"). The value map of this key should have the following two properties:

    hdfs.keytab.file: "<path-to-keytab>" specifies the path to the keytab file on the supervisor hosts.

    hdfs.kerberos.principal: "<principal>@<host>" specifies the user and domain; for example, storm@TEST.HORTONWORKS.COM.
    Config config = new Config();
    config.put(HdfsSecurityUtil.STORM_KEYTAB_FILE_KEY, "$keytab");
    config.put(HdfsSecurityUtil.STORM_USER_NAME_KEY, "$principal");
    StormSubmitter.submitTopology("$topologyName", config, builder.createTopology());

On worker hosts the bolt/trident-state code will use the keytab file and principal to authenticate with the NameNode. Make sure that all workers have the keytab file, stored in the same location.

Note: For more information about the HdfsBolt class, refer to the Apache Storm HdfsBolt API documentation.
Distribute the keytab file that the Bolt is using in the Config object to all supervisor nodes. This is the keytab that is used to authenticate to HDFS, typically the Storm service keytab, storm. The user ID that the Storm worker is running under should have access to it. On an Ambari-managed cluster this is /etc/security/keytabs/storm.service.keytab (the "path-to-keytab"), where the worker runs under storm.
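A quick way to confirm that the keytab was distributed correctly is to check, on each supervisor node, that the file exists and is readable by the worker's user. The following sketch uses java.nio.file for that check; the keytab path is illustrative (the Ambari default shown above):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class KeytabCheck {
    // Returns true when the keytab exists as a regular file and the
    // current user (e.g. the Storm worker user) can read it.
    static boolean keytabReadable(String path) {
        Path p = Paths.get(path);
        return Files.isRegularFile(p) && Files.isReadable(p);
    }

    public static void main(String[] args) {
        // Illustrative path: the Ambari default Storm service keytab location.
        String keytab = "/etc/security/keytabs/storm.service.keytab";
        System.out.println(keytab + " readable: " + keytabReadable(keytab));
    }
}
```

Run this under the same user ID as the Storm worker, on every supervisor node, so the check reflects the permissions the worker will actually see.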
If you set supervisor.run.worker.as.user to true (see Running Workers as Users in Configuring Storm for Kerberos over Ambari), make sure that the users that the workers run under (typically storm) have read access on those keytabs. This is a manual step; an admin needs to go to each supervisor node and run chmod to give file system permissions to those users on the keytab files.

Note: You do not need to create separate keytabs or principals; the general guideline is to create a principal and keytab for each group of users that requires the same access to these resources, and to use that single keytab.
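The manual chmod step can also be performed programmatically. The sketch below is the java.nio equivalent of running "chmod g+r <keytab>"; it assumes a POSIX file system and that the worker users share the keytab file's group:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.util.Set;

public class KeytabPermissions {
    // Adds group read permission to a keytab file, equivalent to
    // "chmod g+r <keytab>". Throws on non-POSIX file systems.
    static void grantGroupRead(Path keytab) throws Exception {
        Set<PosixFilePermission> perms = Files.getPosixFilePermissions(keytab);
        perms.add(PosixFilePermission.GROUP_READ);
        Files.setPosixFilePermissions(keytab, perms);
    }
}
```

Keep keytab permissions as tight as possible: grant read access only to the group (or users) that the workers actually run under, never world read.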
All of these connectors accept topology configurations. You can specify the keytab location on the host and the principal through which the connector will log in to that system.
Configure the connector(s). Here is a sample configuration for the Storm-HDFS connector (see Writing Data to HDFS for a more extensive description):

    HdfsBolt bolt = new HdfsBolt()
            .withFsUrl("hdfs://localhost:8020")
            .withFileNameFormat(fileNameFormat)
            .withRecordFormat(format)
            .withRotationPolicy(rotationPolicy)
            .withSyncPolicy(syncPolicy)
            .withConfigKey("hdfs.config");

    Map<String, Object> map = new HashMap<String, Object>();
    map.put("hdfs.keytab.file", "/etc/security/keytabs/storm.service.keytab");
    map.put("hdfs.kerberos.principal", "storm@TEST.HORTONWORKS.COM");

    Config config = new Config();
    config.put("hdfs.config", map);

    StormSubmitter.submitTopology("$topologyName", config, builder.createTopology());

Important: For the Storm-HDFS connector, you must package core-site.xml (from your cluster configuration) in the topology .jar file.
In addition, include any configuration files for HDP components used in your Storm topology, such as hive-site.xml and hbase-site.xml. This fulfills the requirement that all related configuration files appear in the CLASSPATH of your Storm topology at runtime.
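One way to verify the CLASSPATH requirement at runtime is a classpath resource lookup from inside the topology (for example, in a bolt's prepare method). This is a hedged sketch; the file names are the standard Hadoop, Hive, and HBase client configuration file names mentioned above:

```java
public class ClasspathCheck {
    // Returns true when the named resource is visible on the classpath,
    // i.e. it was packaged into the topology jar or is otherwise available.
    static boolean onClasspath(String resource) {
        return Thread.currentThread().getContextClassLoader()
                     .getResource(resource) != null;
    }

    public static void main(String[] args) {
        // Configuration files your topology expects on the classpath.
        String[] required = {"core-site.xml", "hive-site.xml", "hbase-site.xml"};
        for (String f : required) {
            System.out.println(f + (onClasspath(f) ? ": found" : ": MISSING"));
        }
    }
}
```

A "MISSING" result means the file was not packaged into the topology .jar and the connector will fall back to default (insecure) client settings, so failing fast on this check can save a confusing authentication error later.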