Storage configuration

Learn about storage configuration, available storage types, and storage configuration recommendations for Kafka and ZooKeeper in CSM Operator.

Kafka and ZooKeeper storage are configured in separate resources. Kafka storage is configured in the KafkaNodePool resource with the spec.storage property. ZooKeeper storage is configured in the Kafka resource with the spec.zookeeper.storage property.

#...
kind: KafkaNodePool
spec:
  storage:
    type: persistent-claim
    size: 100Gi
    deleteClaim: true
This configuration snippet defines 100 GiB of persistent storage with the default storage class for Kafka in a KafkaNodePool resource. The deleteClaim property specifies whether the persistent volume claim is deleted when the cluster is undeployed.
#...
kind: Kafka
spec:
  zookeeper:
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
This configuration snippet defines 100 GiB of persistent storage with the default storage class for ZooKeeper in a Kafka resource. The deleteClaim property specifies whether the persistent volume claim is deleted when the cluster is undeployed.

CSM Operator supports multiple types of storage depending on the platform. The supported storage types are as follows:

  • Ephemeral
  • Persistent
  • JBOD (Just a Bunch of Disks)

The storage type is configured with the storage.type property. The property accepts three values: ephemeral, persistent-claim, and jbod. Each value corresponds to its respective storage type. JBOD (jbod) is only supported for Kafka.

The following sections provide a more in-depth look at each storage type, and collect Cloudera recommendations on storage.

Ephemeral storage

Learn about ephemeral storage.

When using ephemeral storage, data is retained only as long as the pod that uses it is running, and it is lost when the pod is deleted. Ephemeral storage can be used for both Kafka brokers and ZooKeeper servers. Because this storage type does not preserve data in the long run, it is not recommended for production and should only be used for development and test clusters.

To use ephemeral storage, set storage.type to ephemeral.

#...
kind: KafkaNodePool
spec:
  storage:
    type: ephemeral
#...
kind: Kafka
spec:
  zookeeper:
    storage:
      type: ephemeral
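
Ephemeral storage is backed by emptyDir volumes on the host node. If you need to cap the disk space these volumes can use, the ephemeral storage type also accepts an optional sizeLimit property, which is applied to the underlying emptyDir volume. The following is a minimal sketch that uses an arbitrary 10Gi limit:

#...
kind: KafkaNodePool
spec:
  storage:
    type: ephemeral
    sizeLimit: 10Gi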

The available configuration options are listed in the Strimzi documentation.

Persistent storage

Learn about persistent storage, which is the storage type recommended by Cloudera for Kafka and ZooKeeper clusters.

When using persistent storage, data is retained even in case of a system disruption. Because of this, persistent storage is the storage type recommended by Cloudera for production environments. When using this configuration, a single persistent storage volume is defined. Persistent storage can be used for both Kafka brokers and ZooKeeper servers.

To use persistent storage, set storage.type to persistent-claim.

#...
kind: KafkaNodePool
spec:
  storage:
    type: persistent-claim
#...
kind: Kafka
spec:
  zookeeper:
    storage:
      type: persistent-claim

Custom storage classes

Storage classes define storage profiles and dynamically provision persistent volumes based on those profiles. If there is no default storage class, or you do not want to use the default, you can specify your own storage class by setting the storage.class property.

#...
kind: KafkaNodePool
spec:
  storage:
    type: persistent-claim
    class: custom-storage-class
#...
kind: Kafka
spec:
  zookeeper:
    storage:
      type: persistent-claim
      class: custom-storage-class

These examples configure a custom storage class for all pods of the cluster they are applied to. Custom storage classes can also be configured at a more granular level with storage overrides.

Storage overrides

Persistent volumes can be configured on a per-broker and per-ZooKeeper-server basis by specifying the Kubernetes storage class for each volume with storage overrides. Storage overrides can be used to influence the storage parameters and pod scheduling constraints of each broker and ZooKeeper server.

#...
kind: KafkaNodePool
spec:
  storage:
    type: persistent-claim
    overrides:
      - broker: 0
        class: storageclass1
      - broker: 1
        class: storageclass2
#...
kind: Kafka
spec:
  zookeeper:
    storage:
      type: persistent-claim
      overrides:
        - broker: 0
          class: storageclass1
        - broker: 1
          class: storageclass2

The available configuration options for persistent storage are listed in the Strimzi documentation.

JBOD storage

Just a bunch of disks (JBOD) refers to a system configuration where disks are used independently rather than being organized into redundant arrays. Learn how you can configure JBOD storage for Kafka.

JBOD storage allows you to configure your Kafka cluster to use multiple volumes. This approach provides increased data storage capacity for Kafka nodes, and can lead to performance improvements. A JBOD configuration is defined by one or more volumes, each of which can be either ephemeral or persistent. JBOD is only applicable to the Kafka storage in the KafkaNodePool resource.

To use JBOD storage, set the storage.type to jbod and specify the volumes.

#...
kind: KafkaNodePool
spec:
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
      - id: 1
        type: persistent-claim
        size: 100Gi
        deleteClaim: false

This example uses the jbod storage type with two attached persistent volumes. Each volume must be identified by a unique ID.

You can always increase or decrease the number of disks or increase the volume size by modifying the KafkaNodePool resource and reapplying the changes. However, you cannot change the IDs once volumes are created.
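
For example, assuming the cluster was created with the two 100Gi volumes shown above, the following sketch adds a third volume and grows the existing ones to 200Gi. Growing existing volumes only works if the backing storage class supports volume expansion, and the sizes and the new ID used here are example values.

#...
kind: KafkaNodePool
spec:
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 200Gi
        deleteClaim: false
      - id: 1
        type: persistent-claim
        size: 200Gi
        deleteClaim: false
      - id: 2
        type: persistent-claim
        size: 200Gi
        deleteClaim: false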

The available configuration options are listed in the Strimzi documentation.

Storage recommendations

Cloudera recommends using persistent storage to store Kafka and ZooKeeper data. Ephemeral storage is only suitable for short-lived test clusters. Consider the following when using persistent storage.

Local storage

Using local storage makes the deployment similar to a bare-metal deployment in terms of scheduling and availability. It provides good throughput as both Kafka and ZooKeeper storage operations have less overhead when replication and network hops are not necessary.

However, the Kafka and ZooKeeper pods become bound to the node where the backing volume is located. This means that the pods cannot be scheduled to a different node, which impacts availability.
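
For example, local persistent volumes are typically exposed through a storage class that uses the kubernetes.io/no-provisioner provisioner and delays volume binding until a pod is scheduled. The following is a minimal sketch; the local-storage class name is an example, and the local persistent volumes themselves must be provisioned separately.

#...
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
#...
kind: KafkaNodePool
spec:
  storage:
    type: persistent-claim
    class: local-storage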

Distributed storage

Using distributed storage with synchronous replication lets you leverage the flexibility of Kubernetes pod scheduling. Both Kafka and ZooKeeper pods can be migrated across nodes because the same storage is available on different nodes. This improves the availability of the Kafka cluster: node failures do not bring down Kafka brokers and ZooKeeper servers permanently.

However, distributed storage reduces throughput in the Kafka cluster. The synchronous replication of storage adds extra overhead to disk writes. Additionally, if the backing storage class does not support data locality, reads and writes require extra network hops.
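
As with any persistent storage, a distributed storage solution is referenced through its storage class. The class name in the following sketch is hypothetical and stands in for whatever replicated storage class your platform provides:

#...
kind: KafkaNodePool
spec:
  storage:
    type: persistent-claim
    class: replicated-storage-class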