Working with Third-party S3-compatible Object Stores

The S3A connector can work with third-party object stores; some vendors test the connector against their stores, and some actively collaborate in developing the connector in the open source community.

The following options typically need to be changed when connecting to a third-party store:
  • fs.s3a.endpoint: the hostname of the S3 store.
  • fs.s3a.endpoint.region: the region to use when signing requests; consult the store's documentation for the expected value.
  • fs.s3a.path.style.access: set to "true" to address the store with path-style rather than virtual-hosted-style URLs.
  • fs.s3a.signing-algorithm: if the default signing mechanism is rejected, another mechanism may work from: "QueryStringSignerType", "S3SignerType", "AWS3SignerType", "AWS4SignerType", "AWS4UnsignedPayloadSignerType" and "NoOpSignerType".
  • fs.s3a.connection.ssl.enabled: set to "false" if HTTP is used instead of HTTPS.
  • fs.s3a.multiobjectdelete.enable: set to "false" if bulk delete operations are not supported.
  • fs.s3a.list.version: set to "1" if directory listings with the default value of "2" fail with an error.
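
As an illustration, these options can be collected in core-site.xml. The endpoint hostname, port, and region below are placeholders, not recommended values; use whatever the store actually requires.

    <configuration>
      <!-- Hostname of the third-party store (placeholder value). -->
      <property>
        <name>fs.s3a.endpoint</name>
        <value>https://object-store.example.com:9000</value>
      </property>
      <!-- Region used when signing requests; use the value the store expects. -->
      <property>
        <name>fs.s3a.endpoint.region</name>
        <value>us-east-1</value>
      </property>
      <!-- Address buckets by path rather than by virtual host. -->
      <property>
        <name>fs.s3a.path.style.access</name>
        <value>true</value>
      </property>
      <!-- Set to false only if the store is served over plain HTTP;
           left here at the HTTPS default. -->
      <property>
        <name>fs.s3a.connection.ssl.enabled</name>
        <value>true</value>
      </property>
      <!-- Set to false only if the store does not support bulk delete
           (assumed here for illustration). -->
      <property>
        <name>fs.s3a.multiobjectdelete.enable</name>
        <value>false</value>
      </property>
      <!-- Set to 1 only if listing with the default version 2 fails
           (assumed here for illustration). -->
      <property>
        <name>fs.s3a.list.version</name>
        <value>1</value>
      </property>
    </configuration>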
The following aspects need to be considered when using the S3A connector with third-party object stores:
  • Third-party object stores are generally consistent. The S3A Committers will still offer better performance and should be used for MapReduce and Spark; a sample committer configuration is sketched after this list.
  • Except for versions of HBase specifically designed to work with S3 storage, HBase must not use s3a:// paths for its storage.
  • S3 cannot replace HDFS as the cluster filesystem in CDP; it can, however, be used as a source and destination of work.
  • Encryption may or may not be supported; refer to the documentation of the third-party object store.
  • Security permissions are likely to be implemented differently from the IAM role mode; again, refer to the documentation of the third-party object store to see what is available.
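
As a sketch of the committer point above, the following core-site.xml entries direct job output for s3a:// destinations through an S3A committer instead of the classic file output committer. The choice of the "magic" committer is an illustrative assumption; the directory or partitioned committers can be selected the same way, and Spark deployments usually need additional Spark-side committer settings described in the S3A committer documentation.

    <configuration>
      <!-- Route output commits for s3a:// destinations through the S3A committer factory. -->
      <property>
        <name>mapreduce.outputcommitter.factory.scheme.s3a</name>
        <value>org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory</value>
      </property>
      <!-- Illustrative choice of committer; "directory" or "partitioned" are alternatives. -->
      <property>
        <name>fs.s3a.committer.name</name>
        <value>magic</value>
      </property>
      <!-- Needed on older releases where the magic committer is disabled by default. -->
      <property>
        <name>fs.s3a.committer.magic.enabled</name>
        <value>true</value>
      </property>
    </configuration>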