Configuring High Availability and Consistency for Kafka
To achieve high availability and consistency targets, adjust the following parameters to meet your requirements:
Replication Factor
The default replication factor for new topics is one. For high availability production systems, Cloudera recommends setting the replication factor to at least three. This requires at least three Kafka brokers.
To change the replication factor, navigate to the Kafka service configuration in Cloudera Manager. Set Replication factor to 3, click Save Changes, and restart the Kafka service.
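If you create topics from the command line instead, you can set the replication factor per topic at creation time. A minimal sketch, reusing the zk01.example.com:2181 ZooKeeper address from the example later in this section (topicname and the partition count are placeholders):

/usr/bin/kafka-topics --create --zookeeper zk01.example.com:2181 --topic topicname \
  --partitions 3 --replication-factor 3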
Preferred Leader Election
Kafka is designed with failure in mind. Sooner or later, network connections or storage resources fail. When a broker goes offline, one of the replicas becomes the new leader for each of that broker's partitions. When the broker comes back online, it has no leader partitions. Kafka keeps track of which broker is configured to be the preferred leader for each partition. Once the original broker is back up and in a good state, Kafka restores the information it missed in the interim and makes the broker the partition leader once more.
Preferred Leader Election is enabled by default and occurs automatically unless you actively disable the feature. Typically, the leader is restored within five minutes of coming back online. If the preferred leader was offline for a very long time, though, it might need additional time to restore its required information from the current leader.
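If you need to trigger the election manually (for example, after disabling automatic preferred leader election), the stock Kafka distribution ships a tool for it; a hedged sketch, assuming the script is installed at the usual path:

/usr/bin/kafka-preferred-replica-election --zookeeper zk01.example.com:2181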
There is a small possibility that some messages might be lost when switching back to the preferred leader. You can minimize the chance of lost data by setting the producer's request.required.acks property to -1. See Acknowledgements.
Unclean Leader Election
Enable unclean leader election to allow an out-of-sync replica to become the leader and preserve the availability of the partition. With unclean leader election, messages that were not synced to the new leader are lost. This trades consistency (guaranteed message delivery) for availability. With unclean leader election disabled, if a broker containing the leader replica for a partition becomes unavailable, and no in-sync replica exists to replace it, the partition remains unavailable until the leader replica or another in-sync replica comes back online.
To enable unclean leader election, navigate to the Kafka service configuration in Cloudera Manager. Check the box labeled Enable unclean leader election, click Save Changes, and restart the Kafka service.
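On brokers that are not managed through Cloudera Manager, the same behavior is controlled by a broker property in server.properties; a minimal sketch:

# server.properties: allow an out-of-sync replica to take over partition leadership
unclean.leader.election.enable=true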
Acknowledgements
When writing or configuring a Kafka producer, you can choose how many replicas commit a new message before the message is acknowledged using the request.required.acks property (see Kafka Sink Properties for details).
Set request.required.acks to 0 (acknowledge the message immediately, without waiting for any broker to commit it), 1 (acknowledge after the leader commits the message), or -1 (acknowledge after all in-sync replicas have committed the message), according to your requirements. Setting request.required.acks to -1 provides the highest consistency guarantee at the expense of slower writes to the cluster.
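For a quick end-to-end test of these settings, you can pass the acknowledgement level to the console producer. A hedged sketch, assuming the console producer in this release supports the --request-required-acks flag; broker01.example.com:9092 is a placeholder broker address:

/usr/bin/kafka-console-producer --broker-list broker01.example.com:9092 --topic topicname \
  --request-required-acks -1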
Minimum In-sync Replicas
You can set the minimum number of in-sync replicas (ISRs) that must be available for the producer to successfully send messages to a partition using the min.insync.replicas setting. If min.insync.replicas is set to 2 and request.required.acks is set to -1, each message must be written successfully to at least two replicas. This guarantees that the message is not lost unless both hosts crash.
It also means that if one of the hosts crashes, the partition is no longer available for writes. Similar to the unclean leader election configuration, setting min.insync.replicas is a balance between higher consistency (requiring writes to more than one broker) and higher availability (allowing writes when fewer brokers are available).
The leader is considered one of the in-sync replicas. It is included in the count of total min.insync.replicas. However, leaders are special, in that producers and consumers can only interact with leaders in a Kafka cluster.
To configure min.insync.replicas at the cluster level, navigate to the Kafka service configuration in Cloudera Manager. Set Minimum number of replicas in ISR to the desired value, click Save Changes, and restart the Kafka service.
To configure the minimum at the topic level, set min.insync.replicas.per.topic=topic_name_1:value,topic_name_2:value in the broker configuration.
Replace topic_name_n with the topic names, and replace value with the desired minimum number of in-sync replicas.
You can also set min.insync.replicas for an individual topic with the kafka-topics command:

/usr/bin/kafka-topics --alter --zookeeper zk01.example.com:2181 --topic topicname \
  --config min.insync.replicas=2
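To verify that the override took effect, describe the topic and check its configuration overrides; the exact output format varies by Kafka version:

/usr/bin/kafka-topics --describe --zookeeper zk01.example.com:2181 --topic topicname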
Kafka MirrorMaker
Kafka mirroring enables maintaining a replica of an existing Kafka cluster. You can configure MirrorMaker directly in Cloudera Manager 5.4 and higher.
The most important configuration setting is Destination broker list. This is a list of brokers on the destination cluster. You should list more than one broker to support high availability, but you do not need to list all brokers in the cluster.
MirrorMaker requires that you specify a Topic whitelist that represents the exclusive set of topics to replicate. The Topic blacklist setting has been removed in CDK 2.0 and higher Powered By Apache Kafka.
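Outside Cloudera Manager, the same tool can be run directly. A minimal sketch, assuming consumer.properties points at the source cluster and producer.properties lists the destination brokers; the topic1|topic2 whitelist is a placeholder regular expression:

/usr/bin/kafka-mirror-maker --consumer.config consumer.properties \
  --producer.config producer.properties --whitelist 'topic1|topic2'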