Log Settings
Review the following settings in the Kafka Broker category, and modify as needed:
- log.roll.hours
  The maximum time, in hours, before a new log segment is rolled out. The default value is 168 hours (seven days).
  This setting controls the period of time after which Kafka forces the log to roll, even if the segment file is not full, ensuring that the retention process is able to delete or compact old data.
- log.retention.hours
  The number of hours to keep a log file before deleting it. The default value is 168 hours (seven days).
  When setting this value, take into account your available disk space and how long you need messages to be available. An active consumer can read messages quickly and deliver them to their destination.
  The higher the retention setting, the longer the data is preserved. Higher settings retain more log data on disk, so increasing this setting reduces the storage available for other uses.
- log.dirs
  A comma-separated list of directories in which log data is kept. If you have multiple disks, list the directories under each disk.
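For example, these Kafka Broker settings might look like the following in a broker's server.properties file (the values and directory paths are illustrative; substitute your own mount points):

    # Roll a new segment at least every 7 days, even if the
    # current segment file is not full.
    log.roll.hours=168

    # Keep closed segments for 7 days before they become
    # eligible for deletion.
    log.retention.hours=168

    # Spread partition logs across two data disks.
    log.dirs=/data1/kafka-logs,/data2/kafka-logs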
Review the following settings in the Advanced kafka-broker category, and modify as needed:
- log.retention.bytes
  The amount of data to retain in the log for each topic partition. By default, log size is unlimited.
  Note that this is the limit for each partition, so multiply this value by the number of partitions to calculate the total data retained for the topic.
  If log.retention.hours and log.retention.bytes are both set, Kafka deletes a segment when either limit is exceeded.
- log.segment.bytes
  The log for a topic partition is stored as a directory of segment files. This setting controls the maximum size of a segment file before a new segment is rolled over in the log. The default is 1 GB.
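As an illustration of the per-partition arithmetic, the following values would allow a 12-partition topic to retain up to roughly 12 GB per replica (1 GB x 12 partitions); the values shown are examples, not recommendations:

    # Retain at most ~1 GB per partition; with 12 partitions the
    # topic can hold up to ~12 GB per replica before old segments
    # become eligible for deletion.
    log.retention.bytes=1073741824

    # Roll to a new segment file once the current one reaches 1 GB
    # (this is the default).
    log.segment.bytes=1073741824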
Log Flush Management
Kafka writes topic messages to a log file immediately upon receipt, but the data is initially buffered in page cache. A log flush forces Kafka to flush topic messages from page cache, writing the messages to disk.
We recommend using the default flush settings, which rely on background flushes done by Linux and Kafka. Default settings provide high throughput and low latency, and they guarantee recovery through the use of replication.
If you decide to specify your own flush settings, you can force a flush after a period of time, or after a specified number of messages, or both (whichever limit is reached first). You can set property values globally and override them on a per-topic basis.
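For example, you might set a global flush policy in server.properties and then tighten it for a single latency-sensitive topic. The topic name below is hypothetical, and the kafka-configs.sh invocation assumes a release that accepts the --zookeeper flag; newer releases use --bootstrap-server instead:

    # server.properties: flush after 50,000 messages accumulate or
    # 60 seconds elapse, whichever limit is reached first.
    log.flush.interval.messages=50000
    log.flush.interval.ms=60000

    # Per-topic override: flush the "payments" topic more aggressively.
    bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
        --entity-type topics --entity-name payments \
        --add-config flush.messages=10000,flush.ms=1000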
There are several important considerations related to log file flushing:
- Durability: unflushed data is at greater risk of loss in the event of a crash. A failed broker can recover topic partitions from its replicas, but if a follower does not issue a fetch request or consume from the leader's log-end offset within the time specified by replica.lag.time.max.ms (10 seconds by default), the leader removes the follower from the set of in-sync replicas (ISR). When this happens there is a slight chance of message loss if you do not explicitly set log.flush.interval.messages: a lagging follower remains in the ISR for up to those 10 seconds, so if the leader fails during that window the follower can still be elected leader, and messages not yet replicated to it can be lost during the transition.
- Increased latency: data is not available to consumers until it is flushed (the fsync implementation in most Linux filesystems blocks writes to the file system).
- Throughput: a flush operation is typically an expensive operation.
- Disk usage patterns are less efficient.
- Page-level locking in background flushing is much more granular.
log.flush.interval.messages specifies the number of messages to accumulate on a log partition before Kafka forces a flush of data to disk.

log.flush.scheduler.interval.ms specifies the amount of time (in milliseconds) after which Kafka checks to see if a log needs to be flushed to disk.

log.segment.bytes specifies the maximum size of a log segment file. Kafka flushes the log file to disk whenever a segment file reaches its maximum size.

log.roll.hours specifies the maximum length of time (in hours) before a new log segment is rolled out; this value is secondary to log.roll.ms. Kafka flushes the log file to disk whenever a log file reaches this time limit.
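Taken together, a hand-tuned flush configuration might look like the following in server.properties (the values are illustrative; the defaults leave flushing to background flushes by Linux and Kafka):

    # Force a flush after 20,000 messages accumulate on a partition.
    log.flush.interval.messages=20000

    # Check every 2 seconds whether any log needs to be flushed.
    log.flush.scheduler.interval.ms=2000

    # Flush and roll when a segment reaches 1 GB...
    log.segment.bytes=1073741824

    # ...or when it is 24 hours old, whichever comes first
    # (log.roll.ms, if set, takes precedence over log.roll.hours).
    log.roll.hours=24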