Configuring Kafka Security

This topic describes additional steps you can take to ensure the safety and integrity of your data stored in Kafka, with features available in Cloudera Distribution of Apache Kafka 2.0.0 and higher:

Deploying SSL for Kafka

Kafka allows clients to connect over SSL. By default, SSL is disabled, but can be turned on as needed.

Step 1. Generating Keys and Certificates for Kafka Brokers

First, generate the key and the certificate for each machine in the cluster using the Java keytool utility. See Creating Certificates.

In this command, keystore is the keystore file that stores your certificate, and validity is the number of days the certificate remains valid.

$ keytool -keystore {tmp.server.keystore.jks} -alias localhost -validity {validity} -genkey      

Make sure that the common name (CN) matches the fully qualified domain name (FQDN) of your server. The client compares the CN with the DNS domain name to ensure that it is connecting to the correct server.

Step 2. Creating Your Own Certificate Authority

You have generated a public-private key pair for each machine, and a certificate to identify the machine. However, the certificate is unsigned, so an attacker can create a certificate and pretend to be any machine. Sign certificates for each machine in the cluster to prevent unauthorized access.

A Certificate Authority (CA) is responsible for signing certificates. A CA is similar to a government that issues passports. A government stamps (signs) each passport so that the passport becomes difficult to forge. Similarly, the CA signs the certificates, and the cryptography guarantees that a signed certificate is computationally difficult to forge. If the CA is a genuine and trusted authority, the clients have high assurance that they are connecting to the authentic machines.
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
The generated CA is a public-private key pair and certificate used to sign other certificates.
Add the generated CA to the client truststores so that clients can trust this CA:
keytool -keystore {client.truststore.jks} -alias CARoot -import -file {ca-cert}
The keystore created in step 1 stores each machine’s own identity. In contrast, the truststore of a client stores all the certificates that the client should trust. Importing a certificate into a truststore means trusting all certificates that are signed by that certificate. This attribute is called the chain of trust. It is particularly useful when deploying SSL on a large Kafka cluster. You can sign all certificates in the cluster with a single CA, and have all machines share the same truststore that trusts the CA. That way, all machines can authenticate all other machines.

Step 3. Signing the Certificate

Now you can sign all certificates generated by step 1 with the CA generated in step 2.
  1. Export the certificate from the keystore:
    keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file
  2. Sign it with the CA:
    openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days {validity} -CAcreateserial -passin pass:{ca-password}
  3. Import both the certificate of the CA and the signed certificate into the keystore:
    keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert
    keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed
    The definitions of the variables are as follows:
    • keystore: the location of the keystore
    • ca-cert: the certificate of the CA
    • ca-key: the private key of the CA
    • ca-password: the passphrase of the CA
    • cert-file: the exported, unsigned certificate of the server
    • cert-signed: the signed certificate of the server
The following Bash script demonstrates the steps described above. One of the commands assumes a password of test1234, so either use that password or edit the command before running it.
#!/bin/bash
#Step 1
keytool -keystore server.keystore.jks -alias localhost -validity 365 -genkey
#Step 2
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert
keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert
#Step 3
keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial -passin pass:test1234
keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed  
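As a quick sanity check of the chain of trust described above, you can reproduce the sign-and-verify cycle with openssl alone. The subject names and one-day validity below are throwaway values used only for the demonstration:

```shell
# Create a throwaway CA (private key plus self-signed certificate), non-interactively.
openssl req -new -x509 -nodes -keyout demo-ca-key -out demo-ca-cert -days 1 -subj "/CN=demo-ca"
# Create a server key and a certificate signing request (CSR).
openssl req -new -nodes -keyout demo-server-key -out demo-csr -subj "/CN=demo-server"
# Sign the CSR with the CA.
openssl x509 -req -CA demo-ca-cert -CAkey demo-ca-key -in demo-csr -out demo-cert -days 1 -CAcreateserial
# Verify that the signed certificate chains back to the CA.
openssl verify -CAfile demo-ca-cert demo-cert
```

If the chain is intact, the last command reports the certificate as OK; a broken chain produces a verification error, which is the same failure mode a client sees when its truststore does not contain the signing CA.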

Step 4. Configuring Kafka Brokers

Kafka Brokers support listening for connections on multiple ports. You must configure the listeners property in server.properties, with one or more comma-separated values. If SSL is not enabled for inter-broker communication (see below for how to enable it), both PLAINTEXT and SSL ports are required. For example:
listeners=PLAINTEXT://host.name:port,SSL://host.name:port
Kafka CSD auto-generates listeners for Kafka brokers, depending on your SSL and Kerberos configuration. In order to enable SSL for Kafka installations, do the following:
  1. Turn on SSL for the Kafka service by turning on the ssl_enabled configuration for the Kafka CSD.
  2. Set security.inter.broker.protocol to SSL if Kerberos is disabled; otherwise, set it to SASL_SSL.
The following SSL configurations are required on each broker. Each of these values can be set in Cloudera Manager. See Modifying Configuration Properties Using Cloudera Manager:
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=test1234

Other configuration settings might also be needed, depending on your requirements:

  • ssl.client.auth=none: The other options for client authentication are required and requested; with requested, clients without certificates can still connect. Using requested is discouraged because it provides a false sense of security: misconfigured clients can still connect.
  • ssl.cipher.suites: A cipher suite is a named combination of authentication, encryption, MAC, and a key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. This list is empty by default.
  • ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1: Provide a list of SSL protocols that your brokers accept from clients.
  • ssl.keystore.type=JKS
  • ssl.truststore.type=JKS
To enable SSL for inter-broker communication, add the following line to the broker properties file. The default value is PLAINTEXT. See Using Kafka Supported Protocols.
security.inter.broker.protocol=SSL
Due to import regulations in some countries, the Oracle implementation limits the strength of cryptographic algorithms available by default. If you need stronger algorithms (for example, AES with 256-bit keys), you must obtain the JCE Unlimited Strength Jurisdiction Policy Files and install them in the JDK/JRE. For more information, see the JCA Providers Documentation.
Once you start the broker, you should see the following message in the server.log:
with addresses: PLAINTEXT -> EndPoint(192.168.64.1,9092,PLAINTEXT),SSL -> EndPoint(192.168.64.1,9093,SSL)
To check whether the server keystore and truststore are set up properly, run the following command:
openssl s_client -debug -connect localhost:9093 -tls1
In the output of this command, you should see the server certificate:
-----BEGIN CERTIFICATE-----
{variable sized random bytes}
-----END CERTIFICATE-----
subject=/C=US/ST=CA/L=Santa Clara/O=org/OU=org/CN=John Smith
issuer=/C=US/ST=CA/L=Santa Clara/O=org/OU=org/CN=kafka/emailAddress=test@test.com
If the certificate does not appear, or if there are any other error messages, your keystore is not set up properly.

Step 5. Configuring Kafka Clients

SSL is supported only for the new Kafka Producer and Consumer APIs. The configurations for SSL are the same for both the producer and consumer.

If client authentication is not required in the broker, the following shows a minimal configuration example:

security.protocol=SSL
ssl.truststore.location=/var/private/ssl/kafka.client.truststore.jks
ssl.truststore.password=test1234

If client authentication is required, a keystore must be created as in step 1, and you must also configure the following properties:

ssl.keystore.location=/var/private/ssl/kafka.client.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234
Other configuration settings might also be needed, depending on your requirements and the broker configuration:
  • ssl.provider (Optional). The name of the security provider used for SSL connections. Default is the default security provider of the JVM.
  • ssl.cipher.suites (Optional). A cipher suite is a named combination of authentication, encryption, MAC, and a key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol.
  • ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1. This property should list at least one of the protocols configured on the broker side.
  • ssl.truststore.type=JKS
  • ssl.keystore.type=JKS
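Putting the client settings together, a minimal client-ssl.properties file might look like the following sketch, using the example paths and passwords from above; the keystore lines are only needed when the broker requires client authentication:

```properties
security.protocol=SSL
ssl.truststore.location=/var/private/ssl/kafka.client.truststore.jks
ssl.truststore.password=test1234
# The following are required only when the broker sets ssl.client.auth=required:
ssl.keystore.location=/var/private/ssl/kafka.client.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234
```

You can then smoke-test the connection with the console clients, for example kafka-console-producer --broker-list host.name:9093 --topic test --producer.config client-ssl.properties (flag names vary slightly across Kafka versions).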

Using Kafka Supported Protocols

Kafka can expose multiple communication endpoints, each supporting a different protocol. Supporting multiple communication endpoints enables you to use different communication protocols for client-to-broker communications and broker-to-broker communications. Set the Kafka inter-broker communication protocol using the security.inter.broker.protocol property. Use this property primarily for the following scenarios:
  • Enabling SSL encryption for client-broker communication but keeping broker-broker communication as PLAINTEXT. Because SSL has performance overhead, you might want to keep inter-broker communication as PLAINTEXT if your Kafka brokers are behind a firewall and not susceptible to network snooping.
  • Migrating from a non-secure Kafka configuration to a secure Kafka configuration without requiring downtime. Use a rolling restart and keep security.inter.broker.protocol set to a protocol that is supported by all brokers until all brokers are updated to support the new protocol.
    For example, if you have a Kafka cluster that needs to be configured to enable Kerberos without downtime, follow these steps:
    1. Set security.inter.broker.protocol to PLAINTEXT.
    2. Update the Kafka service configuration to enable Kerberos.
    3. Perform a rolling restart.
    4. Set security.inter.broker.protocol to SASL_PLAINTEXT.
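During such a migration, the broker configuration passes through an intermediate state. The fragments below sketch the setting before and after the final rolling restart:

```properties
# During the first rolling restart: brokers keep talking PLAINTEXT to each
# other while the Kerberos-enabled listeners are rolled out.
security.inter.broker.protocol=PLAINTEXT

# Once every broker supports SASL, switch inter-broker traffic over and
# perform another rolling restart.
security.inter.broker.protocol=SASL_PLAINTEXT
```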
Kafka 2.0 supports the following combinations of protocols:

  Protocol        SSL  Kerberos
  PLAINTEXT       No   No
  SSL             Yes  No
  SASL_PLAINTEXT  No   Yes
  SASL_SSL        Yes  Yes
These protocols can be defined for broker-to-client interaction and for broker-to-broker interaction. security.inter.broker.protocol allows the broker-to-broker communication protocol to differ from the broker-to-client protocol. It was added to ease the upgrade from non-secure to secure clusters while allowing rolling upgrades.

In most cases, set security.inter.broker.protocol to the protocol you are using for broker-to-client communication. Set it to a different protocol only when you are performing a rolling upgrade from a non-secure to a secure Kafka cluster.

Enabling Kerberos Authentication

Kafka 2.0 supports Kerberos authentication. If you already have a Kerberos server, you can add Kafka to your current configuration. If you do not have a Kerberos server, install it before proceeding. See Enabling Kerberos Authentication Using the Wizard.

To enable Kerberos authentication for Kafka:

  1. Generate keytabs for your Kafka brokers. See Step 6: Get or Create a Kerberos Principal for Each User Account.
  2. In Cloudera Manager, go to the Kafka service configuration page and select the Enable Kerberos Authentication checkbox.
  3. Set security.inter.broker.protocol to SASL_SSL if SSL is enabled; otherwise, set it to SASL_PLAINTEXT.
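Under the hood, Kerberos authentication in Kafka relies on a JAAS configuration. When you enable Kerberos through Cloudera Manager, this file is typically generated for you, but a broker entry looks roughly like the following sketch; the keytab path, principal, and realm are assumptions to adjust for your environment:

```
// Hypothetical jaas.conf entry for a Kafka broker; point keyTab and
// principal at the keytab generated in step 1 and your own realm.
KafkaServer {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/kafka/conf/kafka.keytab"
  principal="kafka/broker-host.example.com@EXAMPLE.COM";
};
```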

Enabling Encryption at Rest

Encrypting data at rest protects your Kafka data even if the disks that store it are compromised, lost, or stolen.

Follow these steps to encrypt Kafka data that is not in active use.
  1. Stop the Kafka service.
  2. Archive the Kafka data to an alternate location, using TAR or another archive tool.
  3. Unmount the affected drives.
  4. Install and configure Navigator Encrypt.
  5. Expand the TAR archive into the encrypted directories.
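Steps 2 and 5 above can be sketched as follows; the data directory and archive paths are assumptions for your deployment:

```shell
# Step 2: archive the Kafka data directory to a safe alternate location.
# /var/local/kafka/data and /tmp are example paths.
tar czf /tmp/kafka-data.tar.gz -C /var/local/kafka/data .

# (Steps 3-4: unmount the affected drive and set up Navigator Encrypt on it.)

# Step 5: expand the archive back into the now-encrypted mount point.
tar xzf /tmp/kafka-data.tar.gz -C /var/local/kafka/data
```

Using -C keeps the archive paths relative to the data directory, so the contents restore cleanly regardless of where the encrypted volume is mounted.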