
Configuring Encrypted Shuffle, Encrypted Web UIs, and Encrypted HDFS Transport

This section describes how to configure Encrypted Shuffle, Encrypted Web UIs, and Encrypted HDFS Transport.

Encrypted Shuffle and Encrypted Web UIs

Now that you have enabled Kerberos, which provides strong authentication, you can optionally enable network encryption. CDH 5 supports the Encrypted Shuffle and Encrypted Web UIs features, which allow encryption of the MapReduce shuffle and web server ports using HTTPS with optional client authentication (also known as bi-directional HTTPS, or HTTPS with client certificates). This support includes:

  • Hadoop configuration setting for toggling the shuffle between HTTP and HTTPS.
  • Hadoop configuration setting for toggling the Web UIs to use either HTTP or HTTPS.
  • Hadoop configuration settings for specifying the keystore and truststore properties (location, type, passwords) used by the shuffle service, the web server UIs, and the reducer tasks that fetch shuffle data.
  • A way to re-read truststores across the cluster (when a node is added or removed).

CDH 5 supports Encrypted Shuffle for both MRv1 and MRv2 (YARN), with common configuration properties used for both versions. The only configuration difference is in the parameters used to enable the features:

  • For MRv1, setting the hadoop.ssl.enabled parameter in the core-site.xml file enables both the Encrypted Shuffle and the Encrypted Web UIs features. In other words, the encryption toggle is coupled for the two features.
  • For MRv2, setting the hadoop.ssl.enabled parameter enables only the Encrypted Web UIs feature; setting the mapreduce.shuffle.ssl.enabled parameter in the mapred-site.xml file enables the Encrypted Shuffle feature.

All other configuration properties apply to both the Encrypted Shuffle and Encrypted Web UI functionality.

When the Encrypted Web UI feature is enabled, all Web UIs for Hadoop components are served over HTTPS. If you configure the systems to require client certificates, browsers must be configured with the appropriate client certificates in order to access the Web UIs.

  Important:

When the Web UIs are served over HTTPS, you must specify https:// as the protocol; there is no redirection from http://. If you attempt to access an HTTPS resource over HTTP, your browser will probably show an empty screen with no warning.
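For example, assuming the default port settings in CDH 5 (your cluster may differ), the NameNode Web UI would be reached at https://namenode.example.com:50470/ rather than http://namenode.example.com:50070/.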

Most components that run on top of MapReduce automatically use Encrypted Shuffle when it is configured.

Configuring Encrypted Shuffle and Encrypted Web UIs

To configure Encrypted Shuffle and Encrypted Web UIs, set the appropriate property/value pairs in the following files:

  • core-site.xml enables these features and defines the implementation
  • mapred-site.xml enables Encrypted Shuffle for MRv2
  • ssl-server.xml stores keystore and truststore settings for the server
  • ssl-client.xml stores keystore and truststore settings for the client

core-site.xml Properties

To configure encrypted shuffle, set the following properties in the core-site.xml files of all nodes in the cluster:

hadoop.ssl.enabled
  Default: false
  For MRv1, setting this value to true enables both the Encrypted Shuffle and the Encrypted Web UI features. For MRv2, this property enables only the Encrypted Web UI; Encrypted Shuffle is enabled with a property in the mapred-site.xml file, as described below.

hadoop.ssl.require.client.cert
  Default: false
  When this property is set to true, client certificates are required for all shuffle operations and for all browsers used to access Web UIs. Cloudera recommends that this be set to false. See Client Certificates.

hadoop.ssl.hostname.verifier
  Default: DEFAULT
  The hostname verifier to provide for HttpsURLConnections. Valid values are: DEFAULT, STRICT, STRICT_IE6, DEFAULT_AND_LOCALHOST, and ALLOW_ALL.

hadoop.ssl.keystores.factory.class
  Default: org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory
  The KeyStoresFactory implementation to use.

hadoop.ssl.server.conf
  Default: ssl-server.xml
  Resource file from which SSL server keystore information is extracted. This file is looked up in the classpath; typically it should be in the /etc/hadoop/conf/ directory.

hadoop.ssl.client.conf
  Default: ssl-client.xml
  Resource file from which SSL client keystore information is extracted. This file is looked up in the classpath; typically it should be in the /etc/hadoop/conf/ directory.

  Note:

All these properties should be marked as final in the cluster configuration files.

Example

<configuration>
    ...
    <property>
      <name>hadoop.ssl.require.client.cert</name>
      <value>false</value>
      <final>true</final>
    </property>

    <property>
      <name>hadoop.ssl.hostname.verifier</name>
      <value>DEFAULT</value>
      <final>true</final>
    </property>

    <property>
      <name>hadoop.ssl.keystores.factory.class</name>
      <value>org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory</value>
      <final>true</final>
    </property>

    <property>
      <name>hadoop.ssl.server.conf</name>
      <value>ssl-server.xml</value>
      <final>true</final>
    </property>

    <property>
      <name>hadoop.ssl.client.conf</name>
      <value>ssl-client.xml</value>
      <final>true</final>
    </property>

    <property>
      <name>hadoop.ssl.enabled</name>
      <value>true</value>
    </property>
    ...
</configuration>

The cluster should be configured to use the Linux Task Controller in MRv1 and the Linux Container Executor in MRv2 to run job tasks, so that tasks are prevented from reading the server keystore information and gaining access to the shuffle server certificates (a sample configuration follows). Refer to Appendix B - Information about Other Hadoop Security Programs for more information.
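The following is a minimal sketch of the relevant executor settings; the class names are the standard Hadoop ones, but verify the values against your own cluster configuration:

<!-- mapred-site.xml (MRv1): run tasks through the setuid task controller -->
<property>
  <name>mapred.task.tracker.task-controller</name>
  <value>org.apache.hadoop.mapred.LinuxTaskController</value>
</property>

<!-- yarn-site.xml (MRv2): run containers through the Linux container executor -->
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>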

mapred-site.xml Property (MRv2 only)

To enable Encrypted Shuffle for MRv2, set the following property in the mapred-site.xml file on every node in the cluster:

mapreduce.shuffle.ssl.enabled
  Default: false
  If this property is set to true, Encrypted Shuffle is enabled. If it is not specified, it defaults to the value of hadoop.ssl.enabled. This value can be false when hadoop.ssl.enabled is true, but cannot be true when hadoop.ssl.enabled is false.

This property should be marked as final in the cluster configuration files.

Example:

<configuration>
    ...
    <property>
      <name>mapreduce.shuffle.ssl.enabled</name>
      <value>true</value>
      <final>true</final>
    </property>
    ...
</configuration>

Keystore and Truststore Settings

FileBasedKeyStoresFactory is the only KeyStoresFactory that is currently implemented. It uses properties in the ssl-server.xml and ssl-client.xml files to configure the keystores and truststores.

ssl-server.xml (Shuffle server and Web UI) Configuration

Use the following settings to configure the keystores and truststores in the ssl-server.xml file.

  Note:

The ssl-server.xml file should be owned by the hdfs or mapred Hadoop system user, belong to the hadoop group, and have 440 permissions. Regular users should not belong to the hadoop group.
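For example, assuming the file lives in the typical /etc/hadoop/conf/ directory and the mapred user owns it:

chown mapred:hadoop /etc/hadoop/conf/ssl-server.xml
chmod 440 /etc/hadoop/conf/ssl-server.xml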

ssl.server.keystore.type
  Default: jks
  Keystore file type.

ssl.server.keystore.location
  Default: NONE
  Keystore file location. The mapred user should own this file and have exclusive read access to it.

ssl.server.keystore.password
  Default: NONE
  Keystore file password.

ssl.server.keystore.keypassword
  Default: NONE
  Key password.

ssl.server.truststore.type
  Default: jks
  Truststore file type.

ssl.server.truststore.location
  Default: NONE
  Truststore file location. The mapred user should own this file and have exclusive read access to it.

ssl.server.truststore.password
  Default: NONE
  Truststore file password.

ssl.server.truststore.reload.interval
  Default: 10000
  Truststore reload interval, in milliseconds.

Example

<configuration>
<!-- Server Certificate Store -->
<property>
  <name>ssl.server.keystore.type</name>
  <value>jks</value>
</property>
<property>
  <name>ssl.server.keystore.location</name>
  <value>${user.home}/keystores/server-keystore.jks</value>
</property>
<property>
  <name>ssl.server.keystore.password</name>
  <value>serverfoo</value>
</property>
<property>
  <name>ssl.server.keystore.keypassword</name>
  <value>serverfoo</value>
</property>

<!-- Server Trust Store -->
<property>
  <name>ssl.server.truststore.type</name>
  <value>jks</value>
</property>
<property>
  <name>ssl.server.truststore.location</name>
  <value>${user.home}/keystores/truststore.jks</value>
</property>
<property>
  <name>ssl.server.truststore.password</name>
  <value>clientserverbar</value>
</property>
<property>
  <name>ssl.server.truststore.reload.interval</name>
  <value>10000</value>
</property>
</configuration>
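The keystore and truststore referenced in this example can be created with the JDK keytool utility. The following is a minimal sketch using the example paths and passwords above (${user.home} in the XML corresponds to the user's home directory); the alias, key size, and distinguished name are illustrative assumptions:

# Generate the server key pair in the server keystore.
keytool -genkeypair -alias $(hostname -f) -keyalg RSA -keysize 2048 \
    -dname "CN=$(hostname -f), OU=Example, O=Example" \
    -keystore ${HOME}/keystores/server-keystore.jks \
    -storepass serverfoo -keypass serverfoo

# Export the server certificate and import it into the shared truststore,
# which is distributed to all nodes and clients.
keytool -exportcert -alias $(hostname -f) -file server.cer \
    -keystore ${HOME}/keystores/server-keystore.jks -storepass serverfoo
keytool -importcert -noprompt -alias $(hostname -f) -file server.cer \
    -keystore ${HOME}/keystores/truststore.jks -storepass clientserverbar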

ssl-client.xml (Reducer/Fetcher) Configuration

Use the following settings to configure the keystores and truststores in the ssl-client.xml file. This file should be owned by the mapred user for MRv1 and by the yarn user for MRv2; the file permissions should be 444 (read access for all users).

ssl.client.keystore.type
  Default: jks
  Keystore file type.

ssl.client.keystore.location
  Default: NONE
  Keystore file location. The mapred user should own this file, and it should have default permissions.

ssl.client.keystore.password
  Default: NONE
  Keystore file password.

ssl.client.keystore.keypassword
  Default: NONE
  Key password.

ssl.client.truststore.type
  Default: jks
  Truststore file type.

ssl.client.truststore.location
  Default: NONE
  Truststore file location. The mapred user should own this file, and it should have default permissions.

ssl.client.truststore.password
  Default: NONE
  Truststore file password.

ssl.client.truststore.reload.interval
  Default: 10000
  Truststore reload interval, in milliseconds.

Example

<configuration>

  <!-- Client certificate Store -->
  <property>
    <name>ssl.client.keystore.type</name>
    <value>jks</value>
  </property>
  <property>
    <name>ssl.client.keystore.location</name>
    <value>${user.home}/keystores/client-keystore.jks</value>
  </property>
  <property>
    <name>ssl.client.keystore.password</name>
    <value>clientfoo</value>
  </property>
  <property>
    <name>ssl.client.keystore.keypassword</name>
    <value>clientfoo</value>
  </property>

  <!-- Client Trust Store -->
  <property>
    <name>ssl.client.truststore.type</name>
    <value>jks</value>
  </property>
  <property>
    <name>ssl.client.truststore.location</name>
    <value>${user.home}/keystores/truststore.jks</value>
  </property>
  <property>
    <name>ssl.client.truststore.password</name>
    <value>clientserverbar</value>
  </property>
  <property>
    <name>ssl.client.truststore.reload.interval</name>
    <value>10000</value>
  </property>
</configuration>
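After creating a keystore or truststore, you can verify its contents by listing its entries; for example, using the truststore path and password from the example above:

keytool -list -v -keystore ${HOME}/keystores/truststore.jks -storepass clientserverbar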

Activating Encrypted Shuffle

When you have made the above configuration changes, activate Encrypted Shuffle by restarting all TaskTrackers in MRv1 or all NodeManagers in YARN; for example:
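A sketch using the CDH package service names (an assumption; adjust to however your daemons are managed):

# MRv1: restart the TaskTracker on every node
sudo service hadoop-0.20-mapreduce-tasktracker restart

# YARN: restart the NodeManager on every node
sudo service hadoop-yarn-nodemanager restart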

  Important:

Encrypted Shuffle has a significant performance impact. You should benchmark this before implementing it in production. In many cases, one or more additional cores are needed to maintain performance.

Client Certificates

Client Certificates are supported but they do not guarantee that the client is a reducer task for the job. The Client Certificate keystore file that contains the private key must be readable by all users who submit jobs to the cluster, which means that a rogue job could read those keystore files and use the client certificates in them to establish a secure connection with a Shuffle server. The JobToken mechanism that the Hadoop environment provides is a better protector of the data; each job uses its own JobToken to retrieve only the shuffle data that belongs to it. Unless the rogue job has a proper JobToken, it cannot retrieve Shuffle data from the Shuffle server.

  Important:

If your certificates are signed by a certificate authority (CA), you must include the complete chain of CA certificates in the keystore that has the server's key.
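For example, a minimal sketch of importing a CA-signed chain into the server keystore; the certificate file names are illustrative assumptions:

# Import the root and intermediate CA certificates, then the signed server certificate.
keytool -importcert -noprompt -trustcacerts -alias root-ca -file root-ca.pem \
    -keystore ${HOME}/keystores/server-keystore.jks -storepass serverfoo
keytool -importcert -noprompt -trustcacerts -alias intermediate-ca -file intermediate-ca.pem \
    -keystore ${HOME}/keystores/server-keystore.jks -storepass serverfoo
keytool -importcert -noprompt -alias $(hostname -f) -file server-signed.pem \
    -keystore ${HOME}/keystores/server-keystore.jks -storepass serverfoo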

Reloading Truststores

By default, each truststore reloads its configuration every 10 seconds. If a new truststore file is copied over the old one, it is re-read, and its certificates replace the old ones. This mechanism is useful for adding or removing nodes from the cluster, or for adding or removing trusted clients. In these cases, the client, TaskTracker, or NodeManager certificate is added to (or removed from) all the truststore files in the system, and the new configuration is picked up without requiring a restart of the TaskTracker (MRv1) or NodeManager (YARN) daemons.

  Note:

The keystores are not automatically reloaded. To change a keystore for a TaskTracker in MRv1 or a NodeManager in YARN, you must restart the TaskTracker or NodeManager daemon.

The reload interval is controlled by the ssl.client.truststore.reload.interval and ssl.server.truststore.reload.interval configuration properties in the ssl-client.xml and ssl-server.xml files described above.
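For example, a sketch of adding a new node's certificate to a live truststore; the alias and certificate file are illustrative assumptions. The running daemons pick up the change within the reload interval (10 seconds by default), without a restart:

keytool -importcert -noprompt -alias newnode.example.com -file newnode.cer \
    -keystore ${HOME}/keystores/truststore.jks -storepass clientserverbar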

Debugging

  Important:

Enable debugging only for troubleshooting, and then only for jobs running on small amounts of data. Debugging is very verbose and slows jobs down significantly.

To enable SSL debugging in the reducers, set -Djavax.net.debug=all in the mapred.reduce.child.java.opts property; for example:

<configuration>
    ...
    <property>
        <name>mapred.reduce.child.java.opts</name>
        <value>-Xmx200m -Djavax.net.debug=all</value>
    </property>
    ...
</configuration>

You can do this on a per-job basis, or by means of a cluster-wide setting in mapred-site.xml.
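For example, a per-job sketch that passes the option on the command line; the jar, class, and paths are placeholders, and this assumes the job driver uses ToolRunner so that generic -D options are honored:

hadoop jar my-job.jar MyJobClass \
    -Dmapred.reduce.child.java.opts="-Xmx200m -Djavax.net.debug=all" \
    <input> <output>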

To set this property in TaskTrackers for MRv1, set it in hadoop-env.sh:

HADOOP_TASKTRACKER_OPTS="-Djavax.net.debug=all $HADOOP_TASKTRACKER_OPTS"

To set this property in NodeManagers for YARN, set it in hadoop-env.sh:

YARN_OPTS="-Djavax.net.debug=all $YARN_OPTS"

HDFS Encrypted Transport

HDFS Encrypted Transport allows encryption of all HDFS data sent over the network.

To enable encryption, proceed as follows:

  1. Enable Hadoop security using Kerberos, following these instructions.
  2. Enable the optional RPC encryption by setting hadoop.rpc.protection to "privacy" in the core-site.xml file, in both client and server configurations.
      Note:

    If RPC encryption is not enabled, transmission of other HDFS data is also insecure; in particular, the keys used for data transfer encryption are themselves exchanged over RPC.

  3. Set dfs.encrypt.data.transfer to true in the hdfs-site.xml file on all server systems.
  4. Restart all daemons. A consolidated configuration sketch follows this list.
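A minimal sketch of the properties from steps 2 and 3 (hadoop.rpc.protection goes in core-site.xml on both clients and servers; dfs.encrypt.data.transfer goes in hdfs-site.xml on the servers):

<!-- core-site.xml (clients and servers) -->
<property>
  <name>hadoop.rpc.protection</name>
  <value>privacy</value>
</property>

<!-- hdfs-site.xml (servers) -->
<property>
  <name>dfs.encrypt.data.transfer</name>
  <value>true</value>
</property>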