Sizing and performance considerations

Learn about ways you can size your deployment for optimal performance.

Kafka broker performance primarily depends on the IO bandwidth of the nodes and disks. Because of this, Cloudera recommends using SSDs with high IOPS and throughput for large workloads. JBOD can also lead to throughput improvements when the node IO bandwidth can support multiple disks.
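As a sketch of how multiple disks could be attached per broker, the Strimzi `Kafka` custom resource supports a `jbod` storage type. The volume sizes and the `fast-ssd` storage class below are illustrative assumptions, not recommendations:

```yaml
# Illustrative Strimzi Kafka CR fragment.
# Sizes and the "fast-ssd" StorageClass are assumptions for this example.
spec:
  kafka:
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 1Ti
          class: fast-ssd      # hypothetical StorageClass backed by high-IOPS SSDs
          deleteClaim: false
        - id: 1
          type: persistent-claim
          size: 1Ti
          class: fast-ssd
          deleteClaim: false
```

With JBOD, Kafka distributes partition log directories across the configured volumes, so aggregate throughput can scale with the number of disks as long as the node's IO bandwidth is not the bottleneck.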

For large workloads, HDDs and storage replication services such as Longhorn might add significant performance overhead.

Depending on the characteristics of the workload, brokers might require a large memory pool to serve fetch requests from the operating system's page cache. Brokers might also require an increased CPU allocation to support compressed messages.
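A minimal sketch of how broker CPU, memory, and heap might be declared in a Strimzi `Kafka` resource; the values are illustrative assumptions, not recommendations. Keeping the JVM heap well below the container memory request leaves the remainder available to the page cache:

```yaml
# Illustrative Strimzi Kafka CR fragment; all values are assumptions.
spec:
  kafka:
    resources:
      requests:
        cpu: "8"
        memory: 20Gi
      limits:
        memory: 20Gi
    jvmOptions:
      "-Xms": 6g   # fixed heap; the remaining ~14 GiB stays available for the page cache
      "-Xmx": 6g
```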

ZooKeeper and KRaft controllers require only a small resource pool for most workloads, and scaling them beyond three nodes usually provides no benefit.
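For KRaft-based clusters, controllers can be declared with a Strimzi `KafkaNodePool` resource capped at three replicas. The cluster name `my-cluster`, storage size, and resource values below are illustrative assumptions:

```yaml
# Illustrative KafkaNodePool for KRaft controllers; names and sizes are assumptions.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: controller
  labels:
    strimzi.io/cluster: my-cluster   # hypothetical Kafka cluster name
spec:
  replicas: 3          # three controllers; more rarely helps
  roles:
    - controller
  storage:
    type: persistent-claim
    size: 20Gi
  resources:
    requests:
      cpu: "1"
      memory: 4Gi
```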

Recommended minimum setup

Cloudera recommends the following cluster sizing as a baseline for small and medium workloads.

| Container | Count | CPU (m) | Memory (MiB) | Notes |
| --- | --- | --- | --- | --- |
| Strimzi Cluster Operator | 2 | 1000 | 384 | Required. |
| Kafka Broker | 3 | 8000 | 20480 | Required for Kafka workloads. |
| KRaft Controller or ZooKeeper | 3 | 4000 | 4096 | Required for Kafka workloads. |
| Cruise Control | 3 | 4000 | 4096 | Required for rebalance operations. |
| Topic Operator | 1 | 500 | 256 | Required if you want to manage topics with KafkaTopic resources. |
| User Operator | 1 | 500 | 256 | Required if you want to manage Kafka users with KafkaUser resources. |
| Kafka Exporter | 1 | 500 | 256 | Required if you want to have additional broker and client metrics available. |
| Kafka Connect | 3 | 4000 | 4096 | Required if you want to use Kafka Connect and related functionality. |
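To estimate the aggregate capacity the cluster must provide, the rows above can be summed (count times per-container allocation). This sketch assumes all components are deployed, including the conditionally required ones:

```python
# Baseline sizing rows from the table above: (container, count, cpu_m, memory_mib).
baseline = [
    ("Strimzi Cluster Operator", 2, 1000, 384),
    ("Kafka Broker", 3, 8000, 20480),
    ("KRaft Controller or ZooKeeper", 3, 4000, 4096),
    ("Cruise Control", 3, 4000, 4096),
    ("Topic Operator", 1, 500, 256),
    ("User Operator", 1, 500, 256),
    ("Kafka Exporter", 1, 500, 256),
    ("Kafka Connect", 3, 4000, 4096),
]

# Multiply each row's per-container allocation by its container count and sum.
total_cpu_m = sum(count * cpu for _, count, cpu, _ in baseline)
total_mem_mib = sum(count * mem for _, count, _, mem in baseline)

print(f"Total CPU: {total_cpu_m} m ({total_cpu_m / 1000:.1f} cores)")
print(f"Total memory: {total_mem_mib} MiB ({total_mem_mib / 1024:.1f} GiB)")
# Total CPU: 63500 m (63.5 cores)
# Total memory: 99840 MiB (97.5 GiB)
```

Omit the optional rows that do not apply to your deployment before summing.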