Using a Credential Provider to Secure S3 Credentials
You can run the sqoop command without entering the access key and secret key on the command line. This prevents these credentials from being exposed in the console output, log files, configuration files, and other artifacts. Running the command this way requires that you provision a credential store to securely hold the access key and secret key. The credential store file is saved in HDFS.
- Provision the credentials by running the following commands:

hadoop credential create fs.s3a.access.key -value access_key -provider jceks://hdfs/path_to_credential_store_file
hadoop credential create fs.s3a.secret.key -value secret_key -provider jceks://hdfs/path_to_credential_store_file

For example:

hadoop credential create fs.s3a.access.key -value foobar -provider jceks://hdfs/user/alice/home/keystores/aws.jceks
hadoop credential create fs.s3a.secret.key -value barfoo -provider jceks://hdfs/user/alice/home/keystores/aws.jceks
You can omit the -value option and its value. When the option is omitted, the command prompts you to enter the value.
- Copy the contents of the /etc/hadoop/conf directory to a working directory.
- Add the following to the core-site.xml file in the working directory:

<property>
  <name>hadoop.security.credential.provider.path</name>
  <value>jceks://hdfs/path_to_credential_store_file</value>
</property>
- Set the HADOOP_CONF_DIR environment variable to the location of the working directory:

export HADOOP_CONF_DIR=path_to_working_directory
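The configuration steps above can be sketched as a single shell session. The working directory below is created with mktemp, and the keystore path reuses the example location from step 1; both are illustrative placeholders, not required locations:

```shell
# Sketch of the working-directory setup. Paths are hypothetical placeholders.
WORKDIR=$(mktemp -d)

# On a real cluster you would copy the existing client configuration:
#   cp -r /etc/hadoop/conf/* "$WORKDIR/"
# For illustration, create a minimal core-site.xml directly.
cat > "$WORKDIR/core-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>hadoop.security.credential.provider.path</name>
    <value>jceks://hdfs/user/alice/home/keystores/aws.jceks</value>
  </property>
</configuration>
EOF

# Point Hadoop clients (including sqoop) at the working directory.
export HADOOP_CONF_DIR="$WORKDIR"
echo "HADOOP_CONF_DIR=$HADOOP_CONF_DIR"
```

Because sqoop picks up core-site.xml from HADOOP_CONF_DIR, the credential provider path is resolved automatically and the keys never appear on the command line.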
After completing these steps, you can run the sqoop command without entering the access key and secret key on the command line. For example:

sqoop import --connect $CONN --username $USER --password $PWD --table $TABLENAME --target-dir s3a://example-bucket/target-directory
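The command above assumes shell variables holding the connection parameters. A minimal setup might look like the following; all values are hypothetical and must be replaced with your own database endpoint, credentials, and table:

```shell
# Hypothetical connection parameters; substitute values for your database.
CONN="jdbc:mysql://db.example.com:3306/salesdb"
USER="sqoop_user"
PWD="sqoop_password"
TABLENAME="orders"

# Compose the import command (echoed here for illustration rather than executed).
echo "sqoop import --connect $CONN --username $USER --password $PWD --table $TABLENAME --target-dir s3a://example-bucket/target-directory"
```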
You can also reference the credential store on the command line, without adding it to a copy of the core-site.xml file. With this approach, you also do not have to set HADOOP_CONF_DIR. Use the following syntax:

sqoop import -Dhadoop.security.credential.provider.path=jceks://hdfs/path_to_credential_store_file --connect $CONN --username $USER --password $PWD --table $TABLENAME --target-dir s3a://example-bucket/target-directory