Troubleshooting HDFS Encryption
This topic describes issues you might encounter when encrypting HDFS files and directories, along with their workarounds.
KMS server jute buffer exception
Description
You see the following error when the jute buffer of the KMS (acting as a ZooKeeper client) is too small to hold all the tokens:

2017-01-31 21:23:56,416 WARN org.apache.zookeeper.ClientCnxn: Session 0x259f5fb3c1000fb for server example.cloudera.com/10.172.0.1:2181, unexpected error, closing socket connection and attempting reconnect
java.io.IOException: Packet len4196356 is out of range!
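ZooKeeper's packet-size limit is controlled by the standard jute.maxbuffer Java system property. A minimal sketch of raising it for the KMS process, assuming its JVM options are passed through an environment variable (the exact variable name, KMS_JAVA_OPTS here, varies by release and is an assumption; the 8 MB value is illustrative, not a recommendation):

```shell
# Sketch: raise ZooKeeper's client-side packet limit for the KMS JVM.
# jute.maxbuffer is ZooKeeper's packet-size limit in bytes. Size it
# above the failing packet -- the log above shows "Packet len4196356",
# i.e. roughly 4 MB, so 8 MB (8388608 bytes) leaves headroom.
# KMS_JAVA_OPTS is a placeholder; use whatever mechanism your
# distribution provides for setting KMS Java options.
export KMS_JAVA_OPTS="${KMS_JAVA_OPTS} -Djute.maxbuffer=8388608"
```

Note that jute.maxbuffer must be large enough on both the client and server sides of the ZooKeeper connection for packets of that size to pass.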
Retrieval of encryption keys fails
DistCp between unencrypted and encrypted locations fails
Cannot move encrypted files to trash
NameNode - KMS communication fails after long periods of inactivity
Description
Encrypted files and encryption zones cannot be created if a long period of time (by default, 20 hours) has passed since the last time the KMS and NameNode communicated.
Solution
For earlier CDH 5 releases, there are two possible workarounds to this issue:
- You can increase the KMS authentication token validity period to a very high number. Since the default value is 10 hours, this bug is only encountered after 20 hours of no communication between the NameNode and the KMS. Add the following property to the kms-site.xml Safety Valve:
<property>
  <name>hadoop.kms.authentication.token.validity</name>
  <value>SOME VERY HIGH NUMBER</value>
</property>
- You can switch the KMS signature secret provider to the string secret provider by adding the following property to the kms-site.xml Safety Valve:
<property>
  <name>hadoop.kms.authentication.signature.secret</name>
  <value>SOME VERY SECRET STRING</value>
</property>
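For the first workaround above, note that hadoop.kms.authentication.token.validity is expressed in seconds (the shipped default of 36000 seconds corresponds to the 10-hour validity mentioned earlier). A quick sketch for turning a desired validity period into the value to put in the property (the helper name is ours, for illustration only):

```python
# hadoop.kms.authentication.token.validity takes a value in seconds.
# validity_seconds is a hypothetical helper, shown only to make the
# unit conversion explicit.

def validity_seconds(hours: int) -> int:
    """Convert a desired token validity in hours to seconds."""
    return hours * 3600

# The default 10-hour validity corresponds to 36000 seconds.
print(validity_seconds(10))           # 36000
# An example "very high number": one year of validity.
print(validity_seconds(365 * 24))     # 31536000
```

Whatever value you choose, it replaces SOME VERY HIGH NUMBER in the property shown in the first workaround.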