Cloudera Runtime provides different types of storage components that you can use depending on your data requirements. Apache Kudu completes Apache Hadoop’s storage layer, enabling fast analytics on fast data. Apache Hadoop HDFS is a distributed file system for storing large volumes of data.
Describes the configuration parameters used to access data stored in the cloud.
Apache Hadoop HDFS
Provides information about optimizing data storage, APIs and services for accessing data, and managing data across clusters.
Provides information about configuring data protection on a Hadoop cluster.
Describes the procedure to configure Access Control Lists (ACLs) on Apache Hadoop HDFS.
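As a brief illustration of what an HDFS ACL looks like in practice, the sketch below uses the Hadoop `FileSystem` API to add a named-user entry to a directory. It assumes ACLs are enabled on the cluster (`dfs.namenode.acls.enabled=true`), that `fs.defaultFS` points at a reachable NameNode, and that the path `/data/sales` and the user `analyst` are hypothetical placeholders.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;
import org.apache.hadoop.fs.permission.AclEntryType;
import org.apache.hadoop.fs.permission.FsAction;

import java.util.Collections;

public class HdfsAclSketch {
    public static void main(String[] args) throws Exception {
        // Assumes fs.defaultFS in the loaded configuration points at an
        // HDFS cluster with dfs.namenode.acls.enabled=true
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path dir = new Path("/data/sales"); // hypothetical path

        // Grant the user "analyst" (hypothetical) read-execute access
        // without changing the directory's base permission bits
        AclEntry entry = new AclEntry.Builder()
                .setScope(AclEntryScope.ACCESS)
                .setType(AclEntryType.USER)
                .setName("analyst")
                .setPermission(FsAction.READ_EXECUTE)
                .build();
        fs.modifyAclEntries(dir, Collections.singletonList(entry));

        // Print the resulting ACL for verification
        System.out.println(fs.getAclStatus(dir));
        fs.close();
    }
}
```

The same change can be made from the command line with `hdfs dfs -setfacl -m user:analyst:r-x /data/sales` and inspected with `hdfs dfs -getfacl /data/sales`.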
Describes the procedure to configure HDFS high availability on a cluster.
Describes common Apache Kudu configuration tasks.
Describes common Apache Kudu management tasks and workflows.
Provides information about how to configure and manage security for Apache Kudu.
Provides information about how to back up and recover Apache Kudu tables.
Provides reference examples of using the C++ and Java client APIs to develop applications with Apache Kudu.
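To give a flavor of the Java client API, the sketch below creates a hash-partitioned Kudu table and inserts one row. It is a minimal example, not a production pattern: the master address `kudu-master:7051` and the table name `example_table` are placeholder assumptions, and the code requires the Kudu client jar and a reachable cluster.

```java
import org.apache.kudu.ColumnSchema;
import org.apache.kudu.Schema;
import org.apache.kudu.Type;
import org.apache.kudu.client.CreateTableOptions;
import org.apache.kudu.client.Insert;
import org.apache.kudu.client.KuduClient;
import org.apache.kudu.client.KuduException;
import org.apache.kudu.client.KuduSession;
import org.apache.kudu.client.KuduTable;

import java.util.Arrays;
import java.util.Collections;

public class KuduClientSketch {
    public static void main(String[] args) throws KuduException {
        // Hypothetical master address; replace with your cluster's Kudu masters
        try (KuduClient client =
                 new KuduClient.KuduClientBuilder("kudu-master:7051").build()) {

            // Two-column schema: a 32-bit integer primary key and a string value
            Schema schema = new Schema(Arrays.asList(
                new ColumnSchema.ColumnSchemaBuilder("id", Type.INT32)
                    .key(true).build(),
                new ColumnSchema.ColumnSchemaBuilder("name", Type.STRING)
                    .build()));

            // Hash-partition on the key column into four tablets
            CreateTableOptions options = new CreateTableOptions()
                .addHashPartitions(Collections.singletonList("id"), 4);
            client.createTable("example_table", schema, options);

            // Open the table and insert a single row through a session
            KuduTable table = client.openTable("example_table");
            KuduSession session = client.newSession();
            Insert insert = table.newInsert();
            insert.getRow().addInt("id", 1);
            insert.getRow().addString("name", "first-row");
            session.apply(insert);
            session.close();
        }
    }
}
```

The C++ client follows the same create-table/open-table/session flow with analogous class names.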
Provides information about how to use Apache Kudu as a storage layer for Apache Impala.
Provides information about how to monitor Kudu metrics and cluster health, and how to collect diagnostics information.