Appendix: Kafka Configuration Options
Server.properties key-value pairs
Ambari configures the following Kafka values during the installation process. Settings are stored as key-value pairs in an underlying server.properties configuration file.
listeners
A comma-separated list of URIs that Kafka will listen on, and their protocols.
Required property with three parts:
<protocol>:<hostname>:<port>
Set <protocol> to SASL_PLAINTEXT to specify the protocol over which the broker accepts connections. SASL authentication will be used over a plaintext channel. Once SASL authentication is established between client and server, the session will have the client's principal as an authenticated user. The broker can only accept SASL (Kerberos) connections, and there is no wire encryption applied. (Note: For a non-secure cluster, <protocol> should be set to PLAINTEXT.)
Set hostname to the hostname associated with the node you are installing. Kerberos uses this value and "principal" to construct the Kerberos service name. Specify hostname as 0.0.0.0 to bind to all interfaces; leave hostname empty to bind to the default interface.
Set port to the Kafka service port. When Kafka is installed using Ambari, the default port number is 6667.
Examples of legal listener lists:
listeners=SASL_PLAINTEXT://kafka1.host1.com:6667
listeners=PLAINTEXT://myhost:9092, TRACE://:9091, SASL_PLAINTEXT://0.0.0.0:9093
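To see how the hostname value described above feeds into Kerberos, consider an illustrative case (the hostname and realm here are example values, not defaults): with serviceName "kafka" and a listener hostname of kafka1.host1.com, the broker is expected to authenticate with a service principal of the form:
kafka/kafka1.host1.com@EXAMPLE.COM
This matches the pattern of the principal shown in the broker JAAS configuration later in this appendix.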
advertised.listeners
A list of listeners to publish to ZooKeeper for clients to use, if different from the listeners specified in the preceding section. In IaaS environments, this value might need to be different from the interface to which the broker binds. If advertised.listeners is not set, the value for listeners will be used.
Required value with three parts:
<protocol>:<hostname>:<port>
Set <protocol> to SASL_PLAINTEXT to specify the protocol over which the broker accepts connections. SASL authentication will be used over a plaintext channel. Once SASL authentication is established between client and server, the session will have the client's principal as an authenticated user. The broker can only accept SASL (Kerberos) connections, and there is no wire encryption applied. (Note: For a non-secure cluster, <protocol> should be set to PLAINTEXT.)
Set hostname to the hostname associated with the node you are installing. Kerberos uses this value and "principal" to construct the Kerberos service name.
Set port to the Kafka service port. When Kafka is installed using Ambari, the default port number is 6667.
For example:
advertised.listeners=SASL_PLAINTEXT://kafka1.host1.com:6667
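As a hedged sketch of the IaaS case mentioned above (the hostname and port are example values): the broker can bind to all interfaces while publishing a routable name for clients:
listeners=SASL_PLAINTEXT://0.0.0.0:6667
advertised.listeners=SASL_PLAINTEXT://kafka1.host1.com:6667
Clients then connect to kafka1.host1.com:6667, the address published in ZooKeeper, rather than to the bind address.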
security.inter.broker.protocol
Specifies the inter-broker communication protocol. In a Kerberized cluster, brokers are required to communicate over SASL. (This approach supports replication of topic data.) Set the value to SASL_PLAINTEXT:
security.inter.broker.protocol=SASL_PLAINTEXT
authorizer.class.name
Configures the authorizer class.
Set this value to kafka.security.auth.SimpleAclAuthorizer:
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
For more information, see "Authorizing Access when Kerberos is Enabled."
principal.to.local.class
Transforms Kerberos principals to their local Unix usernames.
Set this value to kafka.security.auth.KerberosPrincipalToLocal:
principal.to.local.class=kafka.security.auth.KerberosPrincipalToLocal
super.users
Specifies a list of user accounts that will have all cluster permissions. By default, these super users have all permissions that would otherwise need to be added through the kafka-acls.sh script (a sample invocation is sketched after the example below). Note, however, that their permissions do not include the ability to create topics through kafka-topics.sh, as this involves direct interaction with ZooKeeper.
Set this value to a list of user:<account> pairs separated by semicolons. Note that Ambari adds user:kafka when Kerberos is enabled.
Here is an example:
super.users=user:bob;user:alice
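For comparison, a user who is not listed in super.users needs explicit ACLs added with the kafka-acls.sh script. The following invocation is only a sketch; the principal, topic name, and ZooKeeper address are assumptions, and option support can vary by Kafka version:
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:bob \
  --operation Read --operation Write \
  --topic test-topic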
JAAS Configuration File for the Kafka Server
The Java Authentication and Authorization Service (JAAS) API supplies user authentication and authorization services for Java applications.
After enabling Kerberos, Ambari sets up a JAAS login configuration file for the Kafka server. This file is used to authenticate the Kafka broker against Kerberos. The file is stored at:
/usr/hdp/current/kafka-broker/config/kafka_server_jaas.conf
Ambari adds the following settings to the file. (Note: serviceName="kafka" is required for connections from other brokers.)
KafkaServer {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   keyTab="/etc/security/keytabs/kafka.service.keytab"
   storeKey=true
   useTicketCache=false
   serviceName="kafka"
   principal="kafka/c6401.ambari.apache.org@EXAMPLE.COM";
};
Client { // used for zookeeper connection
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   keyTab="/etc/security/keytabs/kafka.service.keytab"
   storeKey=true
   useTicketCache=false
   serviceName="zookeeper"
   principal="kafka/c6401.ambari.apache.org@EXAMPLE.COM";
};
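The broker JVM locates this file through the standard java.security.auth.login.config system property. Ambari normally wires this up during Kerberos enablement; the line below is only an illustration of the underlying mechanism (the exact environment variable used by the startup scripts may differ by version):
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/hdp/current/kafka-broker/config/kafka_server_jaas.conf"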
Configuration Setting for the Kafka Producer
After enabling Kerberos, Ambari sets the following key-value pair in the server.properties file:
security.protocol=SASL_PLAINTEXT
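A complete client-side properties file combines this setting with the broker address. The snippet below is a minimal sketch; the broker hostname and port are example values taken from the listener examples earlier in this appendix:
bootstrap.servers=kafka1.host1.com:6667
security.protocol=SASL_PLAINTEXT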
JAAS Configuration File for the Kafka Client
After enabling Kerberos, Ambari sets up a JAAS login configuration file for the Kafka client. Settings in this file will be used for any client (consumer, producer) that connects to a Kerberos-enabled Kafka cluster. The file is stored at:
/usr/hdp/current/kafka-broker/config/kafka_client_jaas.conf
Ambari adds the following settings to the file. (Note: serviceName=kafka is required for connections from other brokers.)
Note: For command-line utilities like kafka-console-producer and kafka-console-consumer, use the kafka_client_jaas.conf settings described in this section.
Kafka client configuration with keytab, for producers:
KafkaClient {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   keyTab="/etc/security/keytabs/storm.service.keytab"
   storeKey=true
   useTicketCache=false
   serviceName="kafka"
   principal="storm@EXAMPLE.COM";
};
Kafka client configuration without keytab, for producers:
KafkaClient {
   com.sun.security.auth.module.Krb5LoginModule required
   useTicketCache=true
   renewTicket=true
   serviceName="kafka";
};
Kafka client configuration for consumers:
KafkaClient {
   com.sun.security.auth.module.Krb5LoginModule required
   useTicketCache=true
   renewTicket=true
   serviceName="kafka";
};
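To use these settings from the command-line utilities, the client JVM must also load the JAAS file. The commands below are a sketch only; the topic name, broker address, and client.properties file (containing security.protocol=SASL_PLAINTEXT, as in the producer section above) are assumptions, and option names can vary between Kafka versions:
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/hdp/current/kafka-broker/config/kafka_client_jaas.conf"
bin/kafka-console-producer.sh --broker-list kafka1.host1.com:6667 --topic test-topic --producer.config client.properties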