Cloudera Private Cloud Base
7.3.1
(on premises • latest)
▶︎
Cloudera
Reference Architectures
▶︎
Cloudera Public Cloud
Getting Started
Patterns
Preview Features
Data Catalog
Data Engineering
DataFlow
Data Hub
Data Warehouse
Data Warehouse Runtime
Cloudera AI
Management Console
Operational Database
Replication Manager
DataFlow for Data Hub
Runtime
▼
Cloudera Private Cloud
Data Services
Getting Started
Cloudera Manager
Management Console
Replication Manager
Data Catalog
Data Engineering
Data Warehouse
Data Warehouse Runtime
Cloudera AI
Base
Getting Started
Runtime
Upgrade
Storage
Flow Management
Streaming Analytics
Flow Management Operator
Streaming Analytics Operator
Streams Messaging Operator
▶︎
Cloudera Manager
Cloudera Manager
▶︎
Applications
Cloudera Streaming Community Edition
Data Science Workbench
Data Visualization
Edge Management
Observability SaaS
Observability on premises
Workload XM On-Prem
▶︎
Legacy
Cloudera Enterprise
Flow Management
Stream Processing
HDP
HDF
Streams Messaging Manager
Streams Replication Manager
Cloudera Base on Premises
▶︎
Cloudera Runtime Release Notes
Overview
▶︎
What's New
Atlas
Cloud Connectors
Cruise Control
HBase
Hive
Hue
Iceberg
Impala
Kafka
Kerberos
Livy
Navigator Encrypt
Oozie
Phoenix
Ranger
Ranger KMS
Schema Registry
Solr
Spark
SMM
SRM
YARN and YARN Queue Manager
Unaffected Components in this release
What's new in Platform Support
Cloudera Runtime Component Versions
▶︎
Using the Cloudera Runtime Maven repository 7.3.1
Cloudera Runtime 7.3.1.0-197
▶︎
Fixed Issues In Cloudera Runtime 7.3.1
Atlas
Avro
Cloud Connectors
Cruise Control
Hadoop
HDFS
HBase
Hive
Hue
Impala
Iceberg
Kafka
Kudu
Knox
Livy
Navigator Encrypt
Oozie
Ozone
Parquet
Phoenix
Ranger
Ranger KMS
Schema Registry
Solr
Spark
SMM
SRM
Tez
YARN and YARN Queue Manager
ZooKeeper
▶︎
Known Issues In Cloudera Runtime 7.3.1
Atlas
Avro
Cloud Connectors
Cruise Control
Hadoop
HBase
HDFS
Hive
Hue
Iceberg
Impala
Kafka
Kerberos
Knox
Kudu
Navigator Encrypt
Oozie
Ozone
Parquet
Phoenix
Ranger
Ranger KMS
Schema Registry
Solr
Spark
Sqoop
SMM
SRM
MapReduce, YARN and YARN Queue Manager
ZooKeeper
▶︎
Behavioral Changes In Cloudera Runtime 7.3.1
Atlas
Hive
Impala
Knox
Livy
Ranger
Spark
▶︎
Deprecation Notices In Cloudera Runtime 7.3.1
Platform and OS
Kafka
Livy
KTS
Oozie
Spark 2
Zeppelin
Fixed Common Vulnerabilities and Exposures 7.3.1
▶︎
Concepts
▶︎
Virtual Clusters on premises and Cloudera SDX
▶︎
Introduction to Virtual Private Clusters and Cloudera SDX
Advantages of Separating Compute and Data Resources
Architecture
Performance Trade-Offs
Compatibility Considerations for Virtual Private Clusters
Networking Considerations for Virtual Private Clusters
▶︎
Storage
▶︎
Apache Hadoop HDFS Overview
▶︎
Introduction
Overview of HDFS
▶︎
NameNodes
▶︎
Moving NameNode roles
Moving highly available NameNode, failover controller, and JournalNode roles using the Migrate Roles wizard
Moving a NameNode to a different host using Cloudera Manager
▶︎
Sizing NameNode heap memory
Environment variables for sizing NameNode heap memory
Monitoring heap memory usage
Files and directories
Disk space versus namespace
Replication
Examples of estimating NameNode heap memory
Remove or add storage directories for NameNode data directories
▶︎
DataNodes
How NameNode manages blocks on a failed DataNode
Replace a disk on a DataNode host
Remove a DataNode
Fixing block inconsistencies
Add storage directories using Cloudera Manager
Remove storage directories using Cloudera Manager
▶︎
Configuring storage balancing for DataNodes
Configure storage balancing for DataNodes using Cloudera Manager
Perform a disk hot swap for DataNodes using Cloudera Manager
▶︎
JournalNodes
Moving the JournalNode edits directory for a role group using Cloudera Manager
Moving the JournalNode edits directory for a role instance using Cloudera Manager
Synchronizing the contents of JournalNodes
▶︎
Multiple NameNodes overview
Multiple NameNode configurations
Known issue and its workaround
Adding multiple NameNodes using the HDFS service
▶︎
Apache Ozone Overview
▶︎
Introduction to Ozone
Ozone architecture
Ozone security architecture
Ozone containers
How Ozone manages read operations
How Ozone manages write operations
How Ozone manages delete operations
▶︎
Apache HBase Overview
Apache HBase overview
▶︎
Apache Kudu Overview
Kudu introduction
Kudu architecture in a Cloudera Base on premises deployment
Kudu network architecture
Kudu-Impala integration
Example use cases
Kudu concepts
▶︎
Apache Kudu usage limitations
Schema design limitations
Partitioning limitations
Scaling recommendations and limitations
Server management limitations
Cluster management limitations
Impala integration limitations
Spark integration limitations
Kudu security limitations
Other known issues
More Resources
▶︎
Apache Kudu Background Operations
Maintenance manager
Flushing data to disk
Compacting on-disk data
Write-ahead log garbage collection
Tablet history garbage collection and the ancient history mark
▶︎
Apache Hadoop YARN Overview
Introduction
YARN Features
Understanding YARN architecture
▶︎
Data Access
▶︎
Hue Overview
Hue overview
About Hue Query Processor
About the Hue SQL AI Assistant
▶︎
Cloudera Search Overview
What is Cloudera Search
How Cloudera Search works
Cloudera Search and Cloudera
Search and other Cloudera Runtime components
Cloudera Search architecture
Local file system support
Cloudera Search tasks and processes
Backing up and restoring data
▶︎
Data Warehousing
▶︎
Apache Hive Metastore Overview
Introduction to Hive metastore
▶︎
Apache Hive Overview
Apache Hive features
Spark integration with Hive
Hive on Tez introduction
Hive unsupported interfaces and features
Apache Hive 3 architectural overview
▶︎
Installing Hive on Tez and adding a HiveServer role
Adding a HiveServer role
Changing the Hive warehouse location
Apache Hive content roadmap
▶︎
Apache Iceberg Overview
Iceberg overview
▶︎
Apache Impala Overview
Apache Impala Overview
Components of Impala
▶︎
Open Data Lakehouse
What is Open Data Lakehouse?
Benefits of Open Data Lakehouse
▶︎
Operational Database
▶︎
Operational Database Overview
▶︎
Introduction to Operational Database
Introduction to Apache HBase
▶︎
Introduction to Apache Phoenix
Apache Phoenix and SQL
▶︎
Operational Database powered by Apache Accumulo Overview
Release notes
Operational Database overview
CLI tool support
System requirements
▶︎
Introduction to HBase Multi-cluster Client
▶︎
Introduction to HBase Multi-cluster Client
HBase MCC Usage with Kerberos
HBase MCC Usage in Spark with Scala
HBase MCC Usage in Spark with Java
ZooKeeper Configurations
HBase MCC Configurations
HBase MCC Restrictions
▶︎
Data Engineering
▶︎
Apache Spark Overview
Apache Spark Overview
Unsupported Apache Spark Features
▶︎
Cloudera Security Overview
▶︎
Introduction
What is Cloudera on premises?
Importance of a Secure Cluster
Secure by Design
▶︎
Pillars of Security
Authentication
Authorization
Encryption
Identity Management
Security Management Model
▶︎
Security Levels
Choosing the Sufficient Security Level for Your Environment
Logical Architecture
SDX
Security Terms
▶︎
Governance
▶︎
Governance Overview
Using metadata for cluster governance
Data Stewardship with Apache Atlas
Apache Atlas dashboard tour
Apache Atlas metadata collection overview
Atlas metadata model overview
▶︎
Controlling Data Access with Tags
Atlas classifications drive Ranger policies
When to use Atlas classifications for access control
▶︎
How tag-based access control works
Propagation of tags as deferred actions
Examples of controlling data access using classifications
▶︎
Extending Atlas to Manage Metadata from Additional Sources
Top-down process for adding a new metadata source
▶︎
Streams Messaging
▶︎
Apache Kafka Overview
Kafka Introduction
▶︎
Kafka Architecture
Brokers
Topics
Records
Partitions
Record order and assignment
Logs and log segments
Kafka brokers and ZooKeeper
Leader positions and in-sync replicas
Kafka stretch clusters
Kafka disaster recovery
Kafka rack awareness
Kafka KRaft [Technical Preview]
▶︎
Kafka FAQ
Basics
Use cases
▶︎
Cruise Control Overview
Kafka cluster load balancing using Cruise Control
▶︎
Streams Messaging Manager Overview
Introduction to Streams Messaging Manager
▶︎
Streams Replication Manager Overview
Overview
Key Features
Main Use Cases
Use case architectures
▶︎
Streams Replication Manager Architecture
▶︎
Streams Replication Manager Driver
Connect workers
Connectors
Task architecture and load-balancing
Driver inter-node coordination
▶︎
Streams Replication Manager Service
Remote Querying
Monitoring and metrics
REST API
Replication flows and replication policies
Remote topic discovery
Automatic group offset synchronization
Understanding co-located and external clusters
Understanding Streams Replication Manager properties, their configuration and hierarchy
▶︎
Schema Registry Overview
▶︎
Schema Registry overview
Examples of interacting with Schema Registry
▶︎
Schema Registry use cases
Registering and querying a schema for a Kafka topic
Deserializing and serializing data from and to a Kafka topic
Dataflow management with schema-based routing
Schema Registry component architecture
▶︎
Schema Registry concepts
Schema entities
Compatibility policies
Importance of logical types in Avro
▶︎
Planning
▶︎
Deployment Planning for Cloudera Search
Planning overview
Dimensioning guidelines
Schemaless mode overview and best practices
Advantages of defining a schema for production use
▶︎
Planning for Infra Solr
Calculating Infra Solr resource needs
▶︎
Planning for Apache Impala
Guidelines for Schema Design
User Account Requirements
▶︎
Planning for Apache Kudu
▶︎
Kudu schema design
The perfect schema
▶︎
Column design
Decimal type
Varchar type
Column encoding
Column compression
▶︎
Primary key design
Primary key index
Non-unique primary key index
Considerations for backfill inserts
▶︎
Partitioning
▶︎
Range partitioning
Adding and Removing Range Partitions
Hash partitioning
Multilevel partitioning
Partition pruning
▶︎
Partitioning examples
Range partitioning
Hash partitioning
Hash and range partitioning
Hash and hash partitioning
Schema alterations
Schema design limitations
Partitioning limitations
▶︎
Kudu transaction semantics
Single tablet write operations
Writing to multiple tablets
Read operations (scans)
▶︎
Known issues and limitations
Writes
Reads (scans)
▶︎
Scaling Kudu
Terms
Example workload
▶︎
Memory
Verifying if a memory limit is sufficient
File descriptors
Threads
Scaling recommendations and limitations
▶︎
Planning for Streams Replication Manager
Streams Replication Manager requirements
Recommended deployment architecture
▶︎
Planning for Apache Kafka
Stretch cluster reference architecture
▶︎
Installation & Upgrade
▶︎
Installing Cloudera Base on Premises
Cloudera Base on premises Installation Guide
▶︎
Version and Download Information
Cloudera Runtime Version Information
Cloudera Runtime Download Information
Cloudera Manager Support Matrix
Cloudera Base on premises Trial Download Information
▶︎
System Requirements
▶︎
Hardware Requirements
▶︎
Cloudera Runtime
Atlas
HDFS
HBase
Hive
Hue
Impala
Kafka
Ranger KMS
Kudu
Oozie
Ozone
Phoenix
Ranger
Solr
Spark
Livy
Zeppelin
YARN
ZooKeeper
Operating System Requirements
Database Requirements
Java Requirements
Networking and Security Requirements
Data at Rest Encryption Requirements
Third-party filesystems
▶︎
Trial Installation
▶︎
Installing a Trial Cluster
Before You Begin a Trial Installation
Download the Trial version of Cloudera Base on premises
Run the Cloudera Manager Server Installer
Install Cloudera Runtime
Set Up a Cluster Using the Wizard
Stopping the Embedded PostgreSQL Database
Starting the Embedded PostgreSQL Database
Changing Embedded PostgreSQL Database Passwords
▶︎
Migrating from the Cloudera Manager Embedded PostgreSQL Database Server to an External PostgreSQL Database
Prerequisites
Identify Roles that Use the Embedded Database Server
Migrate Databases from the Embedded Database Server to the External PostgreSQL Database Server
▶︎
Installing and Configuring Cloudera with FIPS
Overview
Prerequisites for using FIPS
Configure Cloudera Manager for FIPS
Install and configure additional required components
▶︎
Production Installation
▶︎
Before You Install
▶︎
Install and Configure Databases
Required Databases
▶︎
Install and Configure PostgreSQL for Cloudera Base on premises
Installing Postgres JDBC Driver
Installing PostgreSQL Server
Installing the psycopg2 Python package for PostgreSQL database
Installing psycopg2 from source (FIPS - RHEL 8)
Configuring and Starting the PostgreSQL Server
Install and Configure MySQL for Cloudera Software
Install and Configure MariaDB for Cloudera Software
▶︎
Configure Oracle Database
Configuring the Hue Server to Store Data in the Oracle database
▶︎
Enabling TLS 1.2 on Database Server
Enable TLS 1.2 for MySQL
Enable TCPS for Oracle
Enable TLS 1.2 for MariaDB
Enable TLS 1.2 for PostgreSQL
Enable Kerberos for MariaDB
▶︎
Configuring a database for Ranger or Ranger KMS
Configuring a Ranger or Ranger KMS Database: MySQL/MariaDB
Configuring a Ranger or Ranger KMS Database: Oracle
Configuring a Ranger or Ranger KMS Database: Oracle using /ServiceName format
Configuring a PostgreSQL Database for Ranger or Ranger KMS
Configure Ranger with SSL/TLS enabled PostgreSQL Database
Enable HA for a Ranger Postgres database
▶︎
Configuring the Database for Streaming Components
Configure PostgreSQL for Streaming Components
Configuring MySQL for Streaming Components
Configuring Oracle for Streaming Components
Configure Network Names
Setting SELinux Mode
Disabling the Firewall
Enable an NTP Service
Impala Requirements
Cloudera Runtime Cluster Hosts and Role Assignments
▶︎
Configuring Local Package and Parcel Repositories
▶︎
Understanding Package Management
Repository Configuration Files
Listing Repositories
Installing Cloudera Manager
Installing Cloudera Runtime
▶︎
Set Up a Cluster Using the Wizard
Select Services
Assign Roles
▶︎
Setup database
▶︎
Database setup details for cluster services for TLS 1.2/TCPS-enabled databases
Database setup details for Hue for TLS 1.2/TCPS-enabled databases
Database setup details for Ranger KMS for TLS 1.2/TCPS-enabled databases
Database setup details for Ranger for TLS 1.2/TCPS-enabled databases
Database setup details for Oozie for TLS 1.2/TCPS-enabled databases
Database setup details for Streams Messaging Manager for TLS 1.2/TCPS-enabled databases
Database setup details for Schema Registry for TLS 1.2/TCPS-enabled databases
Database setup details for Hive Metastore for TLS 1.2/TCPS-enabled databases
Enter Required Parameters
Review Changes
Configure Kerberos
Command Details
Summary
(optional) Enable high availability for Cloudera Manager
(Recommended) Enable Auto-TLS
(Recommended) Enable Kerberos
Additional Steps for Apache Ranger
▶︎
Installing Apache Knox
Apache Knox install role parameters
▶︎
Setting Up Data at Rest Encryption for HDFS
Installing Ranger KMS backed by a Database and HA
Installing Cloudera Navigator Encrypt
Installing Ranger RMS
▶︎
Installation Reference
▶︎
Ports
Ports Used by Cloudera Manager
Ports Used by Cloudera Runtime Components
Ports Used by DistCp
Ports Used by Third-Party Components
Service Dependencies in Cloudera Manager
Cloudera Manager sudo command options
Introduction to Parcels
▶︎
After You Install
Deploying Clients
Initializing Solr and creating HDFS home directory
Testing the Installation
Checking Host Heartbeats
Running a MapReduce Job
Testing with Hue
Deploying Atlas service
Secure Your Cluster
Installing the GPL Extras Parcel
Configuring HDFS properties to optimize log collection
Migrating from H2 to PostgreSQL database in YARN Queue Manager
Troubleshooting Installation Problems
▶︎
Uninstalling Cloudera Manager and Managed Software
Record User Data Paths
Stop all Services
Deactivate and Remove Parcels
Delete the Cluster
Uninstall the Cloudera Manager Server
Uninstall Cloudera Manager Agent and Managed Software
Remove Cloudera Manager, User Data, and Databases
Uninstalling a Cloudera Runtime Component From a Single Host
▶︎
Quick Start Deployment for a Streams Cluster
Create a Streams Cluster on Cloudera Base on premises
▶︎
Before You Install
System Requirements for POC Streams Cluster
Disable the Firewall
Enable an NTP Service
▶︎
Installing a Trial Streaming Cluster
Download the Trial version of Cloudera Base on premises
Run the Cloudera Manager Server Installer
Install Cloudera Runtime
Set Up a Streaming Cluster
▶︎
Getting Started on your Streams Cluster
Create a Kafka Topic to Store your Events
Write a few Events into the Topic
Read the Events
Monitor your Cluster from the Streams Messaging Manager UI
After Evaluating Trial Software
▶︎
Installing Operational Database powered by Apache Accumulo
▶︎
Installing Accumulo Parcel 2.1.2
▶︎
Install Operational Database
Install Operational Database CSD file
Install Cloudera
▶︎
Install Operational Database parcel
Install Operational Database parcel using Local Parcel Repository
Install Operational Database parcel using Remote Parcel Repository
▶︎
Add Accumulo on Cloudera service
Add unsecure Accumulo on Cloudera service to your cluster
Add secure Accumulo on Cloudera service to your cluster
Verify your Operational Database installation
▶︎
Installing Accumulo Parcel 1.1.0
▶︎
Install Operational Database
Install Operational Database CSD file
Install Cloudera
▶︎
Install Operational Database parcel
Install Operational Database parcel using Local Parcel Repository
Install Operational Database parcel using Remote Parcel Repository
▶︎
Add Accumulo on Cloudera service
Add unsecure Accumulo on Cloudera service to your cluster
Add secure Accumulo on Cloudera service to your cluster
Verify your Operational Database installation
▶︎
Installing Accumulo Parcel 1.10
▶︎
Install Accumulo
Install Accumulo CSD file
Install Cloudera
▶︎
Install Accumulo 1.10 parcel
Install Accumulo parcel using Local Parcel Repository
Install Accumulo parcel using Remote Parcel Repository
▶︎
Add Accumulo on Cloudera service
Add unsecure Accumulo on Cloudera service to your cluster
Add secure Accumulo on Cloudera service to your cluster
Customizing Kerberos Principals
Creating a trace user in an unsecure Accumulo deployment
Check trace table
Provide user permissions
Verify your Accumulo installation
▶︎
Upgrading Accumulo from 1.10.3 to 2.1.2
Removing Accumulo 1.10.3
Removing and updating Accumulo parcels
Adding Accumulo service (unsecure)
Adding Accumulo service (secure)
▶︎
Upgrading Accumulo from 1.1.0 to 2.1.2
Removing Accumulo 1.1.0
Removing and updating Accumulo parcels
Adding Accumulo service (unsecure)
Adding Accumulo service (secure)
Getting Started with CDP Upgrade and Migration
In-Place Upgrade CDH 6 to CDP Private Cloud Base
In-Place Upgrade CDH 5 to CDP Private Cloud Base
In-Place Upgrade HDP3 to CDP Private Cloud Base
In-Place Upgrade HDP2 to CDP Private Cloud Base
In-Place Upgrade of Cloudera Base on premises
▶︎
Managing Clusters
▶︎
Pausing a Cluster in AWS
Shutting Down and Starting Up the Cluster
▶︎
Managing Cloudera Runtime Services
▶︎
Adding a Service
Prerequisites for installing Atlas
Installing Atlas using Add Service
Installing Ranger using Add Service
Comparing configurations for a service between clusters
Starting a Cloudera Runtime service on all hosts
Stopping a Cloudera Runtime Service on All Hosts
Restarting a Cloudera Runtime Service
Rolling Restart
Aborting a Pending Command
Deleting Services
Renaming a service
Configuring Maximum File Descriptors
▶︎
Extending Cloudera Manager
Add-on Services
Configuring Services to Use LZO Compression
▶︎
How to: Next-Gen Storage
▶︎
Storing Data Using Ozone
▶︎
Upgrading Ozone overview
Preparing Ozone for upgrade
Backing up Ozone
Upgrading Ozone parcels
▶︎
Ozone S3 Multitenancy overview (Technical Preview)
Prerequisites to enable S3 Multitenancy
Enabling S3 Multitenancy
Tenant Commands
▶︎
Multi Protocol Aware System overview
Upgrading this feature
Files and Objects together
▶︎
Bucket Layout
▶︎
Ozone FS namespace optimization with prefix
Metadata layout format
Delete and Rename Operations
Interoperability Between S3 and FS APIs
OBS as Pure Object Store
Configuration to create bucket with default layout
▶︎
Performing Bucket Layout operations in Apache Ozone using CLI
▶︎
FSO operations
Multi Protocol Access operations using AWS Client
Object Store operations using AWS client
Ozone Ranger policy
▶︎
Ozone Ranger Integration
Configuring a resource-based policy using Ranger
▶︎
Snapshot support in Ozone
Cluster and hardware configuration in snapshot deployment
▶︎
Erasure Coding overview
Enabling EC replication configuration cluster-wide
Enabling EC replication configuration on bucket
Enabling EC replication configuration on keys or files
▶︎
Master node decommissioning in Ozone
▶︎
SCM decommissioning
Decommissioning SCM
▶︎
OM decommissioning
Decommissioning OM Node
Adding new Ozone Manager node
▶︎
Ozone recon heatmap
Accessing Ozone Recon Web UI
Ozone recon heatmap
▶︎
Container Balancer overview
Container balancer CLI commands
▶︎
Determining the threshold
Choosing an appropriate value for the threshold
Configuring container balancer service
Activating container balancer using Cloudera Manager
▶︎
Ozone Cloudera Replication Manager overview
▶︎
Ozone Cloudera Replication Manager throttling of tasks
Replicate container commands
Delete container replica commands
EC reconstruction commands
Configurations for throttling of tasks
▶︎
Managing Ozone quota
▶︎
Understanding quota
Storage Space level quota considerations
Namespace quota considerations
Additional quota considerations
▶︎
Commands for managing volumes and buckets
Commands for managing volumes
Commands for managing buckets
▶︎
Managing storage elements by using the command line interface
▶︎
Commands for managing volumes
Assigning administrator privileges to users
Commands for managing buckets
Commands for managing keys
▶︎
Using Ozone S3 Gateway to work with storage elements
Configuration to expose buckets under non-default volumes
REST endpoints supported on Ozone S3 Gateway
Configuring Ozone to work as a pure object store
▶︎
Access Ozone S3 Gateway using the S3A filesystem
Accessing Ozone S3 using S3A FileSystem
Examples of using the S3A filesystem with Ozone S3 Gateway
Configuring Spark access for S3A
Configuring Hive access for S3A
Configuring Impala access for S3A
▶︎
Using the AWS CLI with Ozone S3 Gateway
Configuring an https endpoint in Ozone S3 Gateway to work with AWS CLI
Examples of using the AWS CLI for Ozone S3 Gateway
▶︎
Accessing Ozone object store with Amazon Boto3 client
Obtaining resources to Ozone
Obtaining client to Ozone through session
▶︎
List of APIs verified
Create a bucket
List buckets
Head a bucket
Delete a bucket
Upload a file
Download a file
Head an object
Delete Objects
Multipart upload
▶︎
Working with Ozone File System (ofs)
Setting up ofs
Volume and bucket management using ofs
Key management using ofs
▶︎
Working with Ozone File System (o3fs)
Setting up o3fs
▶︎
Ozone configuration options to work with Cloudera components
Configuration options for Spark to work with Ozone File System (ofs)
Configuration options to store Hive managed tables on Ozone
Configuration options for Impala to work with Ozone File System
Configuration options for Oozie to work with Ozone storage
▶︎
Overview of the Ozone Manager in High Availability
Considerations for configuring High Availability on the Ozone Manager
▶︎
Ozone Manager nodes in High Availability
Read and write requests with Ozone Manager in High Availability
▶︎
Overview of Storage Container Manager in High Availability
Considerations for configuring High Availability on Storage Container Manager
Storage Container Manager operations in High Availability
Offloading Application Logs to Ozone
▶︎
Removing Ozone DataNodes from the cluster
Decommissioning Ozone DataNodes
Placing Ozone DataNodes in offline mode
Configuring the number of storage container copies for a DataNode
Recommissioning an Ozone DataNode
Handling datanode disk failure
Multi-Raft configuration for efficient write performance
▶︎
Working with the Recon web user interface
Access the Recon web user interface
▶︎
Elements of the Recon web user interface
Overview page
DataNodes page
Pipelines page
Missing Containers page
Configuring Ozone to work with Prometheus
Ozone trash overview
Configuring the Ozone trash checkpoint values
▶︎
Ozone topology awareness
Topology hierarchy
RATIS/THREE Data
Erasure Coding data
▶︎
Ozone Placement Policy
Placement Policy for Ratis Containers
Placement Policy for Erasure Coded Containers
Ozone volume scanner
▶︎
Ozone OMDBInsights
Accessing Recon Web UI
OMDBInsights
▶︎
Configuring Ozone Security
Using Ranger with Ozone
▶︎
Changing temporary path for Ozone services and CLI tools
Changing /tmp directory for Ozone services
Changing /tmp directory for CLI tools
▶︎
Kerberos configuration for Ozone
Security tokens in Ozone
Kerberos principal and keytab properties for Ozone service daemons
Securing DataNodes
Configure S3 credentials for working with Ozone
Configuring custom Kerberos principal for Ozone
Configuring Transparent Data Encryption for Ozone
Configuring TLS/SSL encryption manually for Ozone
Configuration for enabling mTLS in Ozone
▶︎
Configuring security for Storage Container Managers in High Availability
Considerations for enabling SCM HA security
▶︎
Configuring Ozone
Configuring Ozone services
Performance tuning for Ozone
Node maintenance
▶︎
How to: Storage
▶︎
Managing Data Storage
▶︎
Optimizing data storage
▶︎
Balancing data across disks of a DataNode
▶︎
Plan the data movement across disks
Parameters to configure the Disk Balancer
Run the Disk Balancer plan
Disk Balancer commands
▶︎
Erasure coding overview
Understanding erasure coding policies
Comparing replication and erasure coding
Best practices for rack and node setup for EC
Prerequisites for enabling erasure coding
Limitations of erasure coding
Using erasure coding for existing data
Using erasure coding for new data
Advanced erasure coding configuration
Erasure coding CLI command
Erasure coding examples
▶︎
Increasing storage capacity with HDFS compression
Enable GZipCodec as the default compression codec
Use GZipCodec with a one-time job
▶︎
Set HDFS quotas
Setting HDFS quotas in Cloudera Manager
▶︎
Configuring heterogeneous storage in HDFS
HDFS storage types
HDFS storage policies
Commands for configuring storage policies
Set up a storage policy for HDFS
Set up SSD storage using Cloudera Manager
Configure archival storage
The HDFS mover command
▶︎
Balancing data across an HDFS cluster
Why HDFS data becomes unbalanced
▶︎
Configurations and CLI options for the HDFS Balancer
Properties for configuring the Balancer
Balancer commands
Recommended configurations for the Balancer
▶︎
Configuring and running the HDFS balancer using Cloudera Manager
Configuring the balancer threshold
Configuring concurrent moves
Recommended configurations for the balancer
Running the balancer
Configuring block size
▶︎
Cluster balancing algorithm
Storage group classification
Storage group pairing
Block move scheduling
Block move execution
Exit statuses for the HDFS Balancer
HDFS
▶︎
Optimizing performance
▶︎
Improving performance with centralized cache management
Benefits of centralized cache management in HDFS
Use cases for centralized cache management
Centralized cache management architecture
Caching terminology
Properties for configuring centralized caching
Commands for using cache pools and directives
▶︎
Specifying racks for hosts
Viewing racks assigned to cluster hosts
Editing rack assignments for hosts
▶︎
Customizing HDFS
Customize the HDFS home directory
Properties to set the size of the NameNode edits directory
▶︎
Optimizing NameNode disk space with Hadoop archives
Overview of Hadoop archives
Hadoop archive components
Creating a Hadoop archive
List files in Hadoop archives
Format for using Hadoop archives with MapReduce
▶︎
Detecting slow DataNodes
Enable disk IO statistics
Enable detection of slow DataNodes
▶︎
Allocating DataNode memory as storage
HDFS storage types
LAZY_PERSIST memory storage policy
Configure DataNode memory as storage
▶︎
Improving performance with short-circuit local reads
Prerequisites for configuring short-circuit local reads
Properties for configuring short-circuit local reads on HDFS
▶︎
Configure mountable HDFS
Add HDFS system mount
Optimize mountable HDFS
Configuring Proxy Users to Access HDFS
▶︎
Using DistCp to copy files
Using DistCp
DistCp syntax and examples
Using DistCp with Highly Available remote clusters
▶︎
Using DistCp with Amazon S3
Using a credential provider to secure S3 credentials
Examples of DistCp commands using the S3 protocol and hidden credentials
Kerberos setup guidelines for DistCp between secure clusters
▶︎
DistCp between secure clusters in different Kerberos realms
Configure source and destination realms in krb5.conf
Configure HDFS RPC protection
Specify truststore properties
Set HADOOP_CONF to the destination cluster
Launch DistCp
Copying data between a secure and an insecure cluster using DistCp and WebHDFS
Post-migration verification
Using DistCp between HA clusters using Cloudera Manager
▶︎
Using the NFS Gateway for accessing HDFS
Install the NFS Gateway
▶︎
Start and stop the NFS Gateway services
Start the NFS Gateway services
Stop the NFS Gateway services
Verify validity of the NFS services
▶︎
Access HDFS from the NFS Gateway
How NFS Gateway authenticates and maps users
▶︎
APIs for accessing HDFS
Set up WebHDFS on a secure cluster
▶︎
Using HttpFS to provide access to HDFS
Add the HttpFS role
Using Load Balancer with HttpFS
▶︎
HttpFS authentication
Use curl to access a URL protected by Kerberos HTTP SPNEGO
▶︎
Data storage metrics
Using JMX for accessing HDFS metrics
HDFS Metrics
▶︎
Using HdfsFindTool to find files
Downloading HdfsFindTool from the CDH archives
▶︎
Configuring Data Protection
▶︎
Data protection
▶︎
Backing up HDFS metadata
▶︎
Introduction to HDFS metadata files and directories
▶︎
Files and directories
NameNodes
JournalNodes
DataNodes
▶︎
HDFS commands for metadata files and directories
Configuration properties
▶︎
Back up HDFS metadata
Prepare to back up the HDFS metadata
Backing up NameNode metadata
Back up HDFS metadata using Cloudera Manager
Restoring NameNode metadata
Restore HDFS metadata from a backup using Cloudera Manager
Perform a backup of the HDFS metadata
▶︎
Configuring HDFS trash
Trash behavior with HDFS Transparent Encryption enabled
Enabling and disabling trash
Setting the trash interval
▶︎
Using HDFS snapshots for data protection
Considerations for working with HDFS snapshots
Enable snapshot creation on a directory
Create snapshots on a directory
Recover data from a snapshot
Options to determine differences between contents of snapshots
CLI commands to perform snapshot operations
▶︎
Managing snapshot policies using Cloudera Manager
Create a snapshot policy
Edit or delete a snapshot policy
Enable and disable snapshot creation using Cloudera Manager
Create snapshots using Cloudera Manager
Delete snapshots using Cloudera Manager
Preventing inadvertent deletion of directories
▶︎
Accessing Cloud Data
Cloud storage connectors overview
The Cloud Storage Connectors
▶︎
Working with Amazon S3
Limitations of Amazon S3
▶︎
Configuring Access to S3
▶︎
Configuring Access to S3 in Cloudera Base on premises
Using Configuration Properties to Authenticate
Using Per-Bucket Credentials to Authenticate
Using Environment Variables to Authenticate
Using EC2 Instance Metadata to Authenticate
Referencing S3 Data in Applications
▶︎
Configuring Per-Bucket Settings
Customizing Per-Bucket Secrets Held in Credential Files
Configuring Per-Bucket Settings to Access Data Around the World
▶︎
Encrypting Data on S3
▶︎
SSE-S3: Amazon S3-Managed Encryption Keys
Enabling SSE-S3
▶︎
SSE-KMS: Amazon S3-KMS Managed Encryption Keys
Enabling SSE-KMS
IAM Role permissions for working with SSE-KMS
▶︎
SSE-C: Server-Side Encryption with Customer-Provided Encryption Keys
Enabling SSE-C
▶︎
CSE-KMS: Amazon S3-KMS managed encryption keys
Enabling CSE-KMS
Configuring Encryption for Specific Buckets
Encrypting an S3 Bucket with Amazon S3 Default Encryption
Performance Impact of Encryption
▶︎
Safely Writing to S3 Through the S3A Committers
Introducing the S3A Committers
Configuring Directories for Intermediate Data
Using the Directory Committer in MapReduce
Verifying That an S3A Committer Was Used
Cleaning up after failed jobs
▶︎
Advanced Committer Configuration
Enabling Speculative Execution
Using Unique Filenames to Avoid File Update Inconsistency
Speeding up Job Commits by Increasing the Number of Threads
Securing the S3A Committers
The S3A Committers and Third-Party Object Stores
Limitations of the S3A Committers
Troubleshooting the S3A Committers
Security Model and Operations on S3
S3A and Checksums (Advanced Feature)
A List of S3A Configuration Properties
Working with versioned S3 buckets
Working with Third-party S3-compatible Object Stores
▶︎
Improving Performance for S3A
Working with S3 buckets in the same AWS region
▶︎
Configuring and tuning S3A block upload
Tuning S3A Uploads
Thread Tuning for S3A Data Upload
Optimizing S3A read performance for different file types
S3 Performance Checklist
Troubleshooting S3
▶︎
Working with Google Cloud Storage
▶︎
Configuring Access to Google Cloud Storage
Create a GCP Service Account
Create a Custom Role
Modify GCS Bucket Permissions
Configure Access to GCS from Your Cluster
▶︎
Manifest committer for ABFS and GCS
Using the manifest committer
Spark Dynamic Partition overwriting
Job summaries in _SUCCESS files
Job cleanup
Working with Google Cloud Storage
Advanced topics
Additional Configuration Options for GCS
▶︎
Working with the ABFS Connector
▶︎
Introduction to Azure Storage and the ABFS Connector
Feature Comparisons
Setting up and configuring the ABFS connector
▶︎
Configuring the ABFS Connector
▶︎
Authenticating with ADLS Gen2
Configuring Access to Azure in Cloudera Base on premises
ADLS Proxy Setup
▶︎
Manifest committer for ABFS and GCS
Using the manifest committer
Spark Dynamic Partition overwriting
Job summaries in _SUCCESS files
Job cleanup
Working with Azure ADLS Gen2 storage
Advanced topics
▶︎
Performance and Scalability
Hierarchical namespaces vs. non-namespaces
Flush options
▶︎
Using ABFS using CLI
Hadoop File System commands
Create a table in Hive
Accessing Azure Storage account container from spark-shell
Copying data with Hadoop DistCp
DistCp and Proxy Settings
ADLS Trash Folder Behavior
Troubleshooting ABFS
▶︎
Configuring HDFS ACLs
HDFS ACLs
Configuring ACLs on HDFS
Using CLI commands to create and list ACLs
ACL examples
ACLs on HDFS features
Use cases for ACLs on HDFS
▶︎
Enable authorization for HDFS web UIs
Enable authorization for additional HDFS web UIs
Configuring HSTS for HDFS Web UIs
▶︎
Configuring Fault Tolerance
▶︎
High Availability on HDFS clusters
▶︎
Configuring HDFS High Availability
NameNode architecture
Preparing the hardware resources for HDFS High Availability
▶︎
Using Cloudera Manager to manage HDFS HA
Enabling HDFS HA
Prerequisites for enabling HDFS HA using Cloudera Manager
Enabling High Availability and automatic failover
Disabling and redeploying HDFS HA
▶︎
Configuring other components to use HDFS HA
Configuring HBase to use HDFS HA
Configuring the Hive Metastore to use HDFS HA
Configuring Impala to work with HDFS HA
Configuring Oozie to use HDFS HA
Changing a nameservice name for Highly Available HDFS using Cloudera Manager
Manually failing over to the standby NameNode
Additional HDFS haadmin commands to administer the cluster
Turning safe mode on HA NameNodes
Converting from an NFS-mounted shared edits directory to Quorum-Based Storage
Administrative commands
▶︎
Configuring Apache Kudu
▶︎
Configure Kudu processes
Experimental flags
Configuring the Kudu master
Configuring tablet servers
Rack awareness (Location awareness)
▶︎
Directory configurations
Changing directory configuration
▶︎
Managing Apache Kudu
▶︎
Limitations
Server management limitations
Cluster management limitations
Start and stop Kudu processes
▶︎
Orchestrate a rolling restart with no downtime
Minimize cluster disruption during planned downtime
▶︎
Kudu web interfaces
Kudu master web interface
Kudu tablet server web interface
Common web interface pages
Best practices when adding new tablet servers
Decommission or remove a tablet server
Use cluster names in the kudu command line tool
Migrate Kudu data from one directory to another on the same host
Migrate to a multiple Kudu master configuration
▶︎
Change master hostnames
Prepare for master hostname changes
Perform master hostname changes
▶︎
Removing Kudu masters through Cloudera Manager
Recommissioning Kudu masters through Cloudera Manager
▶︎
Remove Kudu masters through CLI
Prepare for removal
Perform the removal
How Range-aware replica placement in Kudu works
▶︎
Run the tablet rebalancing tool
Run a tablet rebalancing tool on a rack-aware cluster
Run a tablet rebalancing tool in Cloudera Manager
Run a tablet rebalancing tool in command line
▶︎
Managing Kudu tables with range-specific hash schemas
Range-specific hash schemas example: Using impala-shell
Range-specific hash schemas example: Using Kudu C++ client API
Range-specific hash schemas example: Using Kudu Java client API
▶︎
Managing Apache Kudu Security
▶︎
Kudu security considerations
Proxied RPCs in Kudu
Kudu security limitations
▶︎
Kudu authentication
Kudu authentication with Kerberos
Kudu authentication tokens
Client authentication to secure Kudu clusters
▶︎
JWT authentication for Kudu
Configuring server side JWT authentication for Kudu
Configuring client side JWT authentication for Kudu
Kudu coarse-grained authorization
▶︎
Kudu fine-grained authorization
Kudu and Apache Ranger integration
Kudu authorization tokens
Specifying trusted users
Kudu authorization policies
Ranger policies for Kudu
Disabling redaction
▶︎
Configuring a secure Kudu cluster using Cloudera Manager
Enabling Kerberos authentication and RPC encryption
Configuring custom Kerberos principal for Kudu
Configuring coarse-grained authorization with ACLs
Configuring TLS/SSL encryption for Kudu using Cloudera Manager
Enabling Ranger authorization
Configuring HTTPS encryption
Configuring data at rest encryption
▶︎
Backing up and Recovering Apache Kudu
▶︎
Kudu backup
Back up tables
Backup tools
Generate a table list
Backup directory structure
Physical backups of an entire node
▶︎
Kudu recovery
Restore tables from backups
Recover from disk failure
Recover from full disks
Bring a tablet that has lost a majority of replicas back online
Rebuild a Kudu filesystem layout
▶︎
Developing Applications with Apache Kudu
View the API documentation
Kudu example applications
Maven artifacts
Kudu Python client
▶︎
Kudu integration with Spark
Spark integration known issues and limitations
Spark integration best practices
Upsert option in Kudu Spark
Use Spark with a secure Kudu cluster
Spark tuning
▶︎
Using Hive Metastore with Apache Kudu
Integrating the Hive Metastore with Apache Kudu
Databases and Table Names
Administrative tools for Hive Metastore integration
Upgrading existing Kudu tables for Hive Metastore integration
Enabling the Hive Metastore integration
▶︎
Using Apache Impala with Apache Kudu
▶︎
Understanding Impala integration with Kudu
Impala database containment model
Internal and external Impala tables
Verifying the Impala dependency on Kudu
Impala integration limitations
▶︎
Using Impala to query Kudu tables
Query an existing Kudu table from Impala
Create a new Kudu table from Impala
Use CREATE TABLE AS SELECT
▶︎
Partitioning tables
Basic partitioning
Advanced partitioning
Non-covering range partitions
Partitioning guidelines
Optimize performance for evaluating SQL predicates
Insert data
INSERT and primary key uniqueness violations
Update data
Upsert a row
Alter a table
Delete data
Failures during INSERT, UPDATE, UPSERT, and DELETE operations
Drop a Kudu table
▶︎
Monitoring Apache Kudu
▶︎
Kudu metrics
Listing available metrics
Collecting metrics through HTTP
Diagnostics logging
Monitor cluster health with ksck
Report Kudu crashes using breakpad
Enable core dump for the Kudu service
Use the Charts Library with the Kudu service
▶︎
How to: Compute
▶︎
Using YARN Web UI and CLI
Accessing the YARN Web User Interface
Viewing the Cluster Overview
Viewing nodes and node details
Viewing queues and queue details
▶︎
Viewing all applications
Searching applications
Viewing application details
UI Tools
Using the YARN CLI to view logs for applications
▶︎
Configuring Apache Hadoop YARN Security
Linux Container Executor
▶︎
Managing Access Control Lists
YARN ACL rules
YARN ACL syntax
▶︎
YARN ACL types
Admin ACLs
Queue ACLs
▶︎
Application ACLs
Application ACL evaluation
MapReduce Job ACLs
Spark Job ACLs
Application logs' ACLs
▶︎
Configuring TLS/SSL for Core Hadoop Services
Configuring TLS/SSL for HDFS
Configuring TLS/SSL for YARN
Enable HTTPS communication
Configuring Cross-Origin Support for YARN UIs and REST APIs
Configuring YARN Security for Long-Running Applications
▶︎
YARN Ranger authorization support
YARN Ranger authorization support compatibility matrix
Enabling YARN Ranger authorization support
Disabling YARN Ranger authorization support
Enabling custom Kerberos principal support in YARN
Enabling custom Kerberos principal support in a Queue Manager cluster
▶︎
Configuring Apache Hadoop YARN High Availability
▶︎
YARN ResourceManager high availability
YARN ResourceManager high availability architecture
Configuring YARN ResourceManager high availability
Using the yarn rmadmin tool to administer ResourceManager high availability
Migrating ResourceManager to another host
▶︎
Work preserving recovery for YARN components
Configuring work preserving recovery on ResourceManager
Configuring work preserving recovery on NodeManager
Example: Configuration for work preserving recovery
▶︎
Managing and Allocating Cluster Resources using Capacity Scheduler
▶︎
Resource scheduling and management
YARN resource allocation of multiple resource-types
Hierarchical queue characteristics
Scheduling among queues
Application reservations
Resource distribution workflow
Resource allocation overview
▶︎
Use CPU scheduling
Configure CPU scheduling and isolation
Use CPU scheduling with distributed shell
▶︎
Use GPU scheduling
Configure GPU scheduling and isolation
Use GPU scheduling with distributed shell
▶︎
Use FPGA scheduling
Configure FPGA scheduling and isolation
Use FPGA with distributed shell
▶︎
Limit CPU usage with Cgroups
Use Cgroups
Enable Cgroups
▶︎
Managing YARN Queue Manager
Configuring YARN Queue Manager dependency
Updating YARN Queue Manager Database Password
Accessing the YARN Queue Manager UI
Providing read-only access to Queue Manager UI
Configuring the embedded Jetty Server in Queue Manager
▶︎
Managing queues
Adding queues using YARN Queue Manager UI
Configuring cluster capacity with queues
Configuring the resource capacity of root queue
▶︎
Mixed resource allocation mode (Technical Preview)
Setting capacity using mixed resource allocation mode (Technical Preview)
Changing resource allocation mode
Starting and stopping queues
Deleting queues
Setting queue priorities
▶︎
Configuring scheduler properties at the global level
Setting global maximum application priority
Configuring preemption
Enabling Intra-Queue preemption
Enabling LazyPreemption
Setting global application limits
Setting default Application Master resource limit
Enabling asynchronous scheduler
Configuring queue mapping to use the user name from the application tag using Cloudera Manager
Configuring NodeManager heartbeat
Configuring data locality
▶︎
Setting Maximum Parallel Applications
Setting maximum parallel application limits
▶︎
Configuring per queue properties
Setting user limits within a queue
Setting Maximum Application limit for a specific queue
Setting Application-Master resource-limit for a specific queue
Setting maximum parallel application limits for a specific queue
Controlling access to queues using ACLs
Enabling preemption for a specific queue
Enabling Intra-Queue Preemption for a specific queue
▶︎
Setting ordering policies within a specific queue
Configure queue ordering policies
▶︎
Autoscaling clusters
Autoscaling behavior
Configuring autoscaling
▶︎
Dynamic Queue Scheduling
Creating a new Dynamic Configuration
Managing Dynamic Configurations
How to read the Configurations table
Handling Dynamic Configuration conflicts
Revalidating Dynamic Configurations
Dynamic Configurations execution log
▶︎
Managing placement rules
Placement rule policies
How to read the Placement Rules table
▶︎
Creating placement rules
Example - Placement rules creation
Reordering placement rules
Editing placement rules
Deleting placement rules
Enabling override of default queue mappings
▶︎
Managing dynamic queues
Managed Parent Queues
Converting a queue to a Managed Parent Queue
Enabling dynamic child creation in weight mode
Disabling dynamic child creation in weight mode
Managing dynamic child creation enabled parent queues
Managing dynamically created child queues
▶︎
Deleting dynamically created child queues
Disabling auto queue deletion globally
Disabling queue auto removal on a queue level
Configuring the queue auto removal expiration time
Deleting dynamically created child queues manually
▶︎
Partition configuration
Enabling node labels on a cluster to configure partition
Creating partitions
Assigning or unassigning a node to a partition
Viewing partitions
Associating partitions with queues
Disassociating partitions from queues
Deleting partitions
Setting a default partition expression
Using partitions when submitting a job
▶︎
Managing Apache Hadoop YARN Services
▶︎
Configuring YARN Services API to manage long-running applications
MapReduce Job History Server
Configuring YARN Services using Cloudera Manager
Configuring node attribute for application master placement
Migrating database configuration to a new location
▶︎
Running YARN Services
Deploying and managing services on YARN
Launching a YARN service
Saving a YARN service definition
▶︎
Creating new YARN services using UI
Creating a standard YARN service
Creating a custom YARN service
Managing the YARN service life cycle through the REST API
YARN services API examples
▶︎
Managing YARN Docker Containers
▶︎
Running Dockerized Applications on YARN
Docker on YARN example: MapReduce job
Docker on YARN example: DistributedShell
Docker on YARN example: Spark-on-Docker-on-YARN
▶︎
Configuring Apache Hadoop YARN Log Aggregation
YARN log aggregation overview
Log aggregation file controllers
Configuring log aggregation
Log aggregation properties
Configuring debug delay
▶︎
Managing Apache ZooKeeper
Add a ZooKeeper service
Use multiple ZooKeeper services
Replace a ZooKeeper disk
Replace a ZooKeeper role with ZooKeeper service downtime
Replace a ZooKeeper role without ZooKeeper service downtime
Replace a ZooKeeper role on an unmanaged cluster
Confirm the election status of a ZooKeeper service
▶︎
Configuring Apache ZooKeeper
Enable the AdminServer
Configure four-letter-word commands in ZooKeeper
▶︎
Managing Apache ZooKeeper Security
▶︎
ZooKeeper Authentication
Configure ZooKeeper server for Kerberos authentication
Configure ZooKeeper client shell for Kerberos authentication
Verify the ZooKeeper authentication
Enable server-server mutual authentication
Use Digest Authentication Provider
Configure ZooKeeper TLS/SSL using Cloudera Manager
▶︎
ZooKeeper ACLs Best Practices
ZooKeeper ACLs Best Practices: Atlas
ZooKeeper ACLs Best Practices: Cruise Control
ZooKeeper ACLs Best Practices: HBase
ZooKeeper ACLs Best Practices: HDFS
ZooKeeper ACLs Best Practices: Kafka
ZooKeeper ACLs Best Practices: Oozie
ZooKeeper ACLs Best Practices: Ranger
ZooKeeper ACLs best practices: Search
ZooKeeper ACLs Best Practices: YARN
ZooKeeper ACLs Best Practices: ZooKeeper
▶︎
How to: Data Access
▶︎
Using Hue
▶︎
About using Hue
Accessing and using Hue
▶︎
Viewing Hive query details
Viewing Hive query history
Viewing Hive query information
Viewing explain plan for a Hive query
Viewing Hive query timeline
Viewing configurations for a Hive query
Viewing DAG information for a Hive query
▶︎
Viewing Impala query details
Viewing Impala query history
Viewing Impala query information
Viewing the Impala query execution plan
Viewing the Impala query metrics
Viewing Impala profiles in Hue
Terminating Hive queries
Comparing Hive and Impala queries in Hue
▶︎
Start SQL AI Assistant
Generate SQL from NQL
Edit query in natural language
Explain query in natural language
Optimize SQL query
Fixing a query in Hue
Generate comment for a SQL query
Enable stored procedures in Hue
Run stored procedure from Hue
Using SQL to query HBase from Hue
Querying existing HBase tables
Enabling the SQL editor autocompleter
▶︎
Using governance-based data discovery
Searching metadata tags
Creating tables in Hue by importing files
Supported non-ASCII and special characters in Hue
Options to rerun Oozie workflows in Hue
Creating Iceberg tables using Hue
Unsupported features in Hue
Known limitations in Hue
▶︎
Administering Hue
Reference architecture
Hue configuration files
Hue configurations in Cloudera Runtime
Hue Advanced Configuration Snippet
▶︎
Set up SQL AI Assistant
▶︎
Prerequisites for configuring SQL AI Assistant
Open approach for passing token
Configure SQL AI Assistant using Cloudera AI Workbench
Configure SQL AI Assistant using the Cloudera AI Inference service
Configure SQL AI Assistant using the Microsoft Azure OpenAI service
Configure SQL AI Assistant using the Amazon Bedrock Service
Configure SQL AI Assistant using the OpenAI platform
Configure SQL AI Assistant using vLLM
List of model-related configurations
▶︎
Hue logs
Standard stream logs
Hue service Django logs
Enabling DEBUG logging for Hue logs
Enabling httpd log rotation for Hue
Hue supported browsers
Enabling cache-control HTTP headers when using Hue
Adding a Hue service with Cloudera Manager
Adding a Hue role instance with Cloudera Manager
Setting up a Hue service account with a custom name
Options to restart the Hue service
▶︎
Customizing the Hue web interface
Adding a custom banner in Hue
Changing the page logo in Hue
Adding a splash screen in Hue
Setting the cache timeout
Enabling or disabling anonymous usage data collection
Configuring the number of objects displayed in Hue
▶︎
Using Oracle database with Hue
Creating Hue Schema in Oracle database
Downloading, staging, and activating the Oracle Instant Client parcel
Configuring Oracle as backend database for Hue
Configuring high availability support for Oracle RAC database
▶︎
Using MySQL database with Hue
Downloading and installing MySQL database
Configuring MySQL server
Installing and configuring MySQL on RHEL 8
Installing MySQL client for MySQL databases
Creating the Hue database
Configuring MySQL as the backend database for Hue
Configuring TLSv1.2-enforced MySQL server
▶︎
Using MariaDB database with Hue
Downloading and installing MariaDB database
Configuring MariaDB server
Installing and configuring MariaDB on RHEL 8
Installing MySQL client for MariaDB databases
Creating the Hue database
Configuring MariaDB as the backend database for Hue
▶︎
Using PostgreSQL database with Hue
Download and install PostgreSQL
Configure the PostgreSQL server
Configure PostgreSQL as the backend database for Hue
Disabling the share option in Hue
Enabling Hue applications with Cloudera Manager
Running shell commands
Downloading and exporting data from Hue
Backing up the Hue database
Enabling a multi-threaded environment for Hue
▶︎
Moving the Hue service to a different host
Migrating Hue service using Add Service wizard
Migrating Hue service by adding new role instances
Adding Query Processor service to a cluster
Removing Query Processor service from cluster
Enabling the Query Processor service in Hue
Adding Query Processor admin users and groups
Cleaning up old queries
Downloading debug bundles
Enabling DSL search for Hue
Configuring Hue to handle HS2 failover
Enabling Spark 3 engine in Hue
Enabling the Phoenix SQL editor in Hue
Using Hue scripts
Configurations for submitting a Hive query to a dedicated queue
Enabling browsing Ozone from Hue
Limitations in browsing Ozone from Hue
Configuring timezone for Hue
▶︎
Securing Hue
▶︎
User management in Hue
Understanding Hue users and groups
Finding the list of Hue superusers
Creating a Hue user
Restricting user login
▶︎
LDAP import and sync options
Import and sync LDAP users and groups
Locking an account after invalid login attempts
Unlocking locked out user accounts in Hue
Creating a group in Hue
Managing Hue permissions
Resetting Hue user password
Assigning superuser status to an LDAP user
Configuring file and directory permissions for Hue
▶︎
User authentication in Hue
Authentication using Kerberos
▶︎
Authentication using LDAP
Configuring authentication with LDAP and Search Bind
Configuring authentication with LDAP and Direct Bind
Multi-server LDAP/AD authentication
Testing the LDAP configuration
Configuring group permissions
Enabling LDAP authentication with HiveServer2 and Impala
LDAP properties
Configuring LDAP on unmanaged clusters
▶︎
Authentication using SAML
Configuring SAML authentication on managed clusters
Manually configuring SAML authentication
Integrating your identity provider's SAML server with Hue
SAML properties
Troubleshooting SAML authentication
Authentication using Knox SSO
Authentication using PAM
Applications and permissions reference
Securing Hue passwords with scripts
Directory permissions when using PAM authentication backend
▶︎
Configuring TLS/SSL for Hue
Creating a truststore file in PEM format
Configuring Hue as a TLS/SSL client
Enabling Hue as a TLS/SSL client
Configuring Hue as a TLS/SSL server
Enabling Hue as a TLS/SSL server using Cloudera Manager
Enabling TLS/SSL for Hue Load Balancer
Enabling TLS/SSL communication with HiveServer2
Enabling TLS/SSL communication with Impala
Securing database connections with TLS/SSL
Disabling CA Certificate validation from Hue
Securing sessions
Specifying HTTP request methods
Restricting supported ciphers for Hue
Specifying domains or pages to which Hue can redirect users
Securing Hue from CWE-16
Setting Oozie permissions
Configuring secure access between Solr and Hue
▶︎
Tuning Hue
Adding a load balancer
▶︎
Configuring high availability for Hue
Configuring Hive and Impala for high availability with Hue
Configuring for HDFS high availability
Configuring dedicated Impala coordinator
Configuring the Hue Query Processor scan frequency
▶︎
Search Tutorial
Tutorial
▶︎
Validating the Cloudera Search deployment
Create a test collection
Index sample data
Query sample data
▶︎
Indexing sample tweets with Cloudera Search
Create a collection for tweets
Copy sample tweets to HDFS
▶︎
Using MapReduce batch indexing to index sample Tweets
Batch indexing into online Solr servers using GoLive
Batch indexing into offline Solr shards
▶︎
Securing Cloudera Search
Cloudera Search security aspects
Configure TLS/SSL encryption for Solr
Using a load balancer
Cloudera Search authentication
▶︎
Set proxy server authentication for clusters using Kerberos
Configure Kerberos authentication for Solr
Enable Kerberos authentication in Solr
Overview of proxy usage and load balancing for Search
Configuring custom Kerberos principals for Solr
Enable LDAP authentication in Solr
Enabling Solr clients to authenticate with a secure Solr
Creating a JAAS configuration file
Manage Ranger authorization in Solr
Configuring Ranger authorization
Enable document-level authorization
▶︎
Tuning Cloudera Search
Solr server tuning categories
Setting Java system properties for Solr
Enable multi-threaded faceting
Tuning garbage collection
Enable garbage collector logging
Solr and HDFS - the block cache
▶︎
Tuning replication
Adjust the Solr replication factor for index files stored in HDFS
▶︎
Managing Cloudera Search
Viewing and modifying log levels for Cloudera Search and related services
▶︎
Viewing and modifying Solr configuration using Cloudera Manager
Setting the Solr Critical State Cores Percentage parameter
Setting the Solr Recovering Cores Percentage parameter
▶︎
Managing collection configuration
Cloudera Search config templates
Generating collection configuration using configs
Securing configs with ZooKeeper ACLs and Ranger
Generating Solr collection configuration using instance directories
Modifying a collection configuration generated using an instance directory
Converting instance directories to configs
Using custom JAR files with Cloudera Search
Retrieving the clusterstate.json file
▶︎
Managing collections
Creating a Solr collection
Viewing existing collections
Deleting all documents in a collection
Deleting a collection
Updating the schema in a collection
Creating a replica of an existing shard
Migrating Solr replicas
Splitting a shard on HDFS
Backing up a collection from HDFS
Backing up a collection from local file system
Restoring a collection
Defining a backup target in solr.xml
Cloudera Search log files
Cloudera Search configuration files
▶︎
Cloudera Search ETL
ETL with Cloudera Morphlines
Using Morphlines to index Avro
Using Morphlines with Syslog
▶︎
Indexing Data Using Morphlines
Indexing data
▶︎
Lily HBase NRT indexing
Adding the Lily HBase indexer service
Starting the Lily HBase NRT indexer service
▶︎
Using the Lily HBase NRT indexer service
Enable replication on HBase column families
Create a Collection in Cloudera Search
Creating a Lily HBase Indexer Configuration File
Creating a Morphline Configuration File
Understanding the extractHBaseCells Morphline Command
Registering a Lily HBase Indexer Configuration with the Lily HBase Indexer Service
Verifying that Indexing Works
Using the indexer HTTP interface
▶︎
Configuring Lily HBase Indexer Security
Configure Lily HBase Indexer to use TLS/SSL
Configure Lily HBase Indexer Service to use Kerberos authentication
▶︎
Batch indexing using Morphlines
Spark indexing using morphlines
▶︎
MapReduce indexing
▶︎
MapReduceIndexerTool
MapReduceIndexerTool input splits
MapReduceIndexerTool metadata
MapReduceIndexerTool usage syntax
Indexing data with MapReduceIndexerTool in Solr backup format
▶︎
Lily HBase batch indexing for Cloudera Search
Populating an HBase Table
Create a Collection in Cloudera Search
Creating a Lily HBase Indexer Configuration File
Creating a Morphline Configuration File
Understanding the extractHBaseCells Morphline Command
Running the HBaseMapReduceIndexerTool
HBaseMapReduceIndexerTool command line reference
Using --go-live with SSL or Kerberos
Understanding --go-live and HDFS ACLs
▶︎
Indexing Data Using Spark-Solr Connector
▶︎
Batch indexing to Solr using SparkApp framework
Create indexer Maven project
Run the spark-submit job
▶︎
Migrating Data Using Sqoop
Data migration to Apache Hive
Setting Up Sqoop
Atlas Hook for Sqoop
▶︎
Sqoop enhancements to the Hive import process
Configuring custom Beeline arguments
Configuring custom Hive JDBC arguments
Configuring a custom Hive CREATE TABLE statement
Configuring custom Hive table properties
▶︎
Secure options to provide Hive password during a Sqoop import
Providing the Hive password through a prompt
Providing the Hive password through a file
Providing the Hive password through an alias
Providing the Hive password through an alias in a file
Saving the password to Hive Metastore
▶︎
Imports into Hive
Creating a Sqoop import command
Importing RDBMS data into Hive
▶︎
HDFS to Apache Hive data migration
Importing RDBMS data to HDFS
Converting an HDFS file to ORC
Incrementally updating an imported table
Import command options
▶︎
Application Access
Application Access
Services that support client RPMs for CDP Private Cloud Base 7.3.1
Prerequisites
Support for packages
Package management tools
Repository configuration files
▶︎
Select the repository strategy
Option 1
Option 2
▶︎
Option 3
Creating an internal yum repository
▶︎
Downloading and configuring the client packages
RHEL
SLES
Ubuntu
Installing the client packages
▶︎
How to: Data Warehousing
▶︎
Working with Apache Hive Metastore
HMS table storage
Configuring HMS for high availability
HWC authorization
Authorizing external tables
Configure HMS properties for authorization
Filter HMS results
▶︎
Setting up the metastore database
▶︎
Setting up the backend Hive metastore database
Set up MariaDB or MySQL database
Set up a PostgreSQL database
Set up an Oracle database
Configuring metastore database properties
Configuring metastore location and HTTP mode
Setting up a JDBC URL connection override
Tuning the metastore
Hive Metastore leader election
▶︎
Starting Apache Hive
Start Hive on an insecure cluster
Start Hive using a password
Run a Hive command
Running a query
Converting Hive CLI scripts to Beeline
Configuring graceful shutdown property for HiveServer
▶︎
Using Apache Hive
▶︎
Apache Hive 3 tables
Locating Hive tables and changing the location
Refer to a table using dot notation
Understanding CREATE TABLE behavior
Creating a CRUD transactional table
Creating an insert-only transactional table
Creating, using, and dropping an external table
Creating an Ozone-based external table
Accessing Hive files in Ozone
Recommended Hive configurations when using Ozone
Dropping an external table along with data
Converting a managed non-transactional table to external
▶︎
External tables based on a non-default schema
Using your schema in MariaDB
Using your schema in MS SQL
Using your schema in Oracle
Using your schema in PostgreSQL
Using constraints
Determining the table type
Apache Hive 3 ACID transactions
▶︎
Apache Hive query basics
Querying the information_schema database
Inserting data into a table
Updating data in a table
Merging data in tables
Deleting data from a table
▶︎
Creating a temporary table
Configuring temporary table storage
▶︎
Using a subquery
Subquery restrictions
Use wildcards with SHOW DATABASES
Aggregating and grouping data
Querying correlated data
▶︎
Using common table expressions
Use a CTE in a query
Comparing tables using ANY/SOME/ALL
Escaping an invalid identifier
CHAR data type support
ORC vs Parquet formats
Creating a default directory for managed tables
Generating surrogate keys
▶︎
Partitions and performance
Creating partitions dynamically
▶︎
Partition refresh and configuration
Automating partition discovery and repair
Managing partition retention time
Repairing partitions manually using MSCK repair
▶︎
Query scheduling
Enabling scheduled queries
Enabling all scheduled queries
Periodically rebuilding a materialized view
Getting scheduled query information and monitoring the query
Lateral View
▶︎
Materialized views
▶︎
Creating and using a materialized view
Creating the tables and view
Verifying use of a query rewrite
Using optimizations from a subquery
Dropping a materialized view
Showing materialized views
Describing a materialized view
Managing query rewrites
Purposely using a stale materialized view
Creating and using a partitioned materialized view
▶︎
Cloudera Data Warehouse HPL/SQL stored procedures
Setting up a Cloudera Data Warehouse client
Setting up a Hive client
Creating a function
Using the cursor to return record sets
Stored procedure examples
Using JdbcStorageHandler to query RDBMS
▶︎
Using functions
Reloading, viewing, and filtering functions
▶︎
Create a user-defined function
Setting up the development environment
Creating the UDF class
Building the project and uploading the JAR
Registering the UDF
Calling the UDF in a query
▶︎
Managing Apache Hive
▶︎
ACID operations
Configuring partitions for transactions
Options to monitor transactions
Options to monitor transaction locks
▶︎
Data compaction
Compaction tasks
Initiating automatic compaction in Cloudera Manager
Starting compaction manually
Options to monitor compactions
Disabling automatic compaction
Configuring compaction using table properties
Configuring compaction in Cloudera Manager
Configuring the compaction check interval
Compactor properties
▶︎
Compaction observability in Cloudera Manager
Configuring compaction health monitoring
Monitoring compaction health in Cloudera Manager
Hive ACID metric properties for compaction observability
▶︎
Query vectorization
Vectorization default
Query vectorization properties
Checking query execution
Tracking Hive on Tez query execution
Tracking an Apache Hive query in YARN
Application not running message
▶︎
Configuring Apache Hive
Understanding CREATE TABLE behavior
Configuring legacy CREATE TABLE behavior
Limiting concurrent connections
Hive on Tez configurations
▶︎
Configuring HiveServer high availability using a load balancer
Configuring the Hive Delegation Token Store
Adding a HiveServer role
Configuring the HiveServer load balancer
Achieving cross-cluster availability through Hive Load Balancer failover
Configuring HiveServer high availability using ZooKeeper
▶︎
Generating Hive statistics
Setting up the cost-based optimizer and statistics
Generating and viewing Hive statistics
Statistics generation and viewing commands
Configuring query audit logs to include caller context
Removing scratch directories
▶︎
Securing Apache Hive
Hive access authorization
Transactional table access
External table access
Accessing Hive files in Ozone
▶︎
Configuring access to Hive on YARN
Configuring HiveServer for ETL using YARN queues
Managing YARN queue users
Configuring queue mapping to use the user name from the application tag using Cloudera Manager
Disabling impersonation (doas)
Connecting to an Apache Hive endpoint through Apache Knox
HWC authorization
▶︎
Hive authentication
Securing HiveServer using LDAP
Client connections to HiveServer
Pluggable authentication modules in HiveServer
JDBC connection string syntax
▶︎
Communication encryption
Enabling TLS/SSL for HiveServer
Enabling SASL in HiveServer
▶︎
Securing an endpoint under AutoTLS
Securing Hive metastore
Token-based authentication for Cloudera Data Warehouse integrations
Activating the Hive web UI
▶︎
Integrating Apache Hive with Apache Spark and BI
▶︎
Hive Warehouse Connector for accessing Apache Spark data
Setting up HWC with build systems
HWC limitations
▶︎
Reading data through HWC
Direct Reader mode introduction
Using Direct Reader mode
Direct Reader configuration properties
Direct Reader limitations
Secure access mode introduction
Setting up secure access mode
Using secure access mode
Configuring caching for secure access mode
JDBC read mode introduction
Using JDBC read mode
JDBC mode configuration properties
JDBC mode limitations
Kerberos configurations for HWC
Writing data through HWC
Apache Spark executor task statistics
▶︎
HWC and DataFrame APIs
HWC and DataFrame API limitations
HWC supported types mapping
Catalog operations
Read and write operations
Committing a transaction for Direct Reader
Closing HiveWarehouseSession operations
Using HWC for streaming
Hive Warehouse Connector streaming for transactional tables
Managing streaming with Hive Warehouse Connector
HWC API Examples
Hive Warehouse Connector Interfaces
Submitting a Scala or Java application
Examples of writing data in various file formats
▶︎
HWC integration with pyspark, sparklyr, and Zeppelin
Submitting a Python app
Reading and writing Hive tables in R
Livy interpreter configuration
Reading and writing Hive tables in Zeppelin
▶︎
Apache Hive-Kafka integration
Creating a table for a Kafka stream
▶︎
Querying Kafka data
Querying live data from Kafka
Perform ETL by ingesting data from Kafka into Hive
▶︎
Writing data to Kafka
Writing transformed Hive data to Kafka
Setting consumer and producer table properties
Kafka storage handler and table properties
▶︎
Connecting Hive to BI tools using a JDBC/ODBC driver
Getting the JDBC driver
Getting the ODBC driver
Configuring the BI tool
Specify the JDBC connection string
JDBC connection string syntax
Using JdbcStorageHandler to query RDBMS
Setting up JdbcStorageHandler for Postgres
▶︎
Apache Hive Performance Tuning
Query results cache
Managing high partition workloads
Best practices for performance tuning
▶︎
ORC file format
Advanced ORC properties
Performance improvement using partitions
Apache Tez and Hive LLAP
Bucketed tables in Hive
▶︎
Migrating Data Using Sqoop
Data migration to Apache Hive
Setting Up Sqoop
Atlas Hook for Sqoop
▶︎
Sqoop enhancements to the Hive import process
Configuring custom Beeline arguments
Configuring custom Hive JDBC arguments
Configuring a custom Hive CREATE TABLE statement
Configuring custom Hive table properties
▶︎
Secure options to provide Hive password during a Sqoop import
Providing the Hive password through a prompt
Providing the Hive password through a file
Providing the Hive password through an alias
Providing the Hive password through an alias in a file
Saving the password to Hive Metastore
▶︎
Imports into Hive
Creating a Sqoop import command
Importing RDBMS data into Hive
▶︎
HDFS to Apache Hive data migration
Importing RDBMS data to HDFS
Converting an HDFS file to ORC
Incrementally updating an imported table
Import command options
▶︎
Using Apache Iceberg
▶︎
Apache Iceberg features
Alter table feature
Create table feature
Create table as select feature
Create partitioned table as select feature
Create table … like feature
Delete data feature
Describe table metadata feature
Drop table feature
Expire snapshots feature
Insert table data feature
Load data inpath feature
Load or replace partition data feature
Materialized view feature
Materialized view rebuild feature
Merge feature
▶︎
Migrate Hive table to Iceberg feature
Changing the table metadata location
▶︎
Flexible partitioning
Partition evolution feature
Partition transform feature
Query metadata tables feature
Rollback table feature
Select Iceberg data feature
Schema evolution feature
Schema inference feature
Snapshot management
Time travel feature
Truncate table feature
▶︎
Best practices for Iceberg in Cloudera
Making row-level changes on V2 tables only
▶︎
Performance tuning
Caching manifest files
Configuring manifest caching in Cloudera Manager
Unsupported features and limitations
▶︎
Accessing Iceberg tables
Opening Ranger in Cloudera Data Hub
Editing a storage handler policy to access Iceberg files on the file system
Creating a SQL policy to query an Iceberg table
Accessing Iceberg files in Ozone
Creating an Iceberg table
Creating an Iceberg partitioned table
Expiring snapshots
Inserting data into a table
Migrating a Hive table to Iceberg
Selecting an Iceberg table
Running time travel queries
Updating an Iceberg partition
Test driving Iceberg from Impala
Hive demo data
Test driving Iceberg from Hive
Iceberg data types
Iceberg table properties
▶︎
Starting and Stopping Apache Impala
Modifying Impala Startup Options
▶︎
Configuring Client Access to Impala
Impala Startup Options for Client Connections
▶︎
Impala Shell Tool
Impala Shell Configuration Options
Impala Shell Configuration File
Connecting to Impala Daemon in Impala Shell
Running Commands and SQL Statements in Impala Shell
Impala Shell Command Reference
Connecting to a kerberized Impala daemon
Configuring ODBC for Impala
Configuring JDBC for Impala
Configuring Impyla for Impala
Configuring Delegation for Clients
Spooling Query Results
Shut Down Impala
▶︎
Setting Timeouts in Impala
Setting Timeout and Retries for Thrift Connections to Backend Client
Increasing StateStore Timeout
Setting the Idle Query and Idle Session Timeouts
Adjusting Heartbeat TCP Timeout Interval
▶︎
Securing Apache Impala
Securing Impala
Configuring Impala TLS/SSL
▶︎
Impala Authentication
Configuring Kerberos Authentication for Impala
▶︎
Configuring LDAP Authentication
Enabling LDAP in Hue
Enabling LDAP Authentication for impala-shell
▶︎
Configuring JWT Authentication
Enabling JWT Authentication for impala-shell
▶︎
Impala Authorization
Configuring Authorization
Row-level filtering in Impala with Ranger policies
▶︎
Configuring Apache Impala
Configuring Impala
▶︎
Configuring Impala for High Availability
Enabling Catalog and StateStore High Availability (HA)
Disabling Catalog and StateStore High Availability
Failure detection for Catalog and StateStore
Configuring Load Balancer for Impala
Migrating Impala Catalog to another host
▶︎
Tuning Apache Impala
Setting Up HDFS Caching
Setting up Data Cache for Remote Reads
Configuring Dedicated Coordinators and Executors
▶︎
Managing Apache Impala
▶︎
ACID Operations
Concepts Used in FULL ACID v2 Tables
Key Differences between INSERT-ONLY and FULL ACID Tables
Compaction of Data in FULL ACID Transactional Table
▶︎
Managing Resources in Impala
Estimating memory limits
Admission Control and Query Queuing
Enabling Admission Control
Creating Static Pools
Configuring Dynamic Resource Pool
Dynamic Resource Pool Settings
Admission Control Sample Scenario
Cancelling a Query
▶︎
Managing Metadata in Impala
On-demand Metadata
Automatic Invalidation of Metadata Cache
Automatic Invalidation/Refresh of Metadata
▶︎
Impala fault tolerance mechanisms
Transparent query retries in Impala
Node blacklisting in Impala
▶︎
Monitoring Apache Impala
▶︎
Impala Logs
Managing Logs
Impala lineage
▶︎
Web User Interface for Debugging
Debug Web UI for Impala Daemon
Debug Web UI for StateStore
Debug Web UI for Catalog Server
Configuring Impala Web UI
Debug Web UI for Query Timeline
▶︎
How to: Operational Database
▶︎
Configuring Apache HBase
Using DNS with HBase
Use the Network Time Protocol (NTP) with HBase
Configure the graceful shutdown timeout property
▶︎
Setting user limits for HBase
Configure ulimit for HBase using Cloudera Manager
Configuring ulimit for HBase
Configure ulimit using Pluggable Authentication Modules using the Command Line
Using dfs.datanode.max.transfer.threads with HBase
Configure encryption in HBase
▶︎
Using hedged reads
Enable hedged reads for HBase
Monitor the performance of hedged reads
▶︎
Understanding HBase garbage collection
Configure HBase garbage collection
Disable the BoundedByteBufferPool
▶︎
Configuring edge node on AWS
Prerequisites
▶︎
Configuring network line-of-sight
Reuse the subnets created for Cloudera
Verify the network line-of-sight
Configure DNS
Verify the DNS configuration
Configure Kerberos
▶︎
Configuring edge node on Azure
Prerequisites
▶︎
Configuring network line-of-sight
Reuse the subnets created for Cloudera
Verify the network line-of-sight
Configure DNS
Verify the DNS configuration
Configure Kerberos
Configuring edge node on GCP
Configure the HBase canary
Configuring auto split policy in an HBase table
▶︎
Using HBase blocksize
Configure the blocksize for a column family
▶︎
Configuring HBase BlockCache
Contents of the BlockCache
Size the BlockCache
Decide to use the BucketCache
▶︎
About the Off-heap BucketCache
Off-heap BucketCache
BucketCache IO engine
Configure BucketCache IO engine
Configure the off-heap BucketCache using Cloudera Manager
Configure the off-heap BucketCache using the command line
Cache eviction priorities
Bypass the BlockCache
Monitor the BlockCache
▶︎
HBase persistent BucketCache
Configuring HBase persistent BucketCache
Configuration details
▶︎
Using quota management
Configuring quotas
General Quota Syntax
▶︎
Throttle quotas
Throttle quota examples
Space quotas
Quota enforcement
Quota violation policies
▶︎
Impact of quota violation policy
Live write access
Bulk write access
Read access
Metrics and insight
Examples of overlapping quota policies
Number-of-Tables Quotas
Number-of-Regions Quotas
▶︎
Using HBase scanner heartbeat
Configure the scanner heartbeat using Cloudera Manager
▶︎
Storing medium objects (MOBs)
Prerequisites
Configure columns to store MOBs
Configure the MOB cache using Cloudera Manager
Test MOB storage and retrieval performance
MOB cache properties
▶︎
Limiting the speed of compactions
Configure the compaction speed using Cloudera Manager
Enable HBase indexing
▶︎
Using HBase coprocessors
Add a custom coprocessor
Disable loading of coprocessors
▶︎
Configuring HBase MultiWAL
Configuring MultiWAL support using Cloudera Manager
▶︎
Configuring the storage policy for the Write-Ahead Log (WAL)
Configure the storage policy for WALs using Cloudera Manager
Configure the storage policy for WALs using the Command Line
▶︎
Using RegionServer grouping
Enable RegionServer grouping using Cloudera Manager
Configure RegionServer grouping
Monitor RegionServer grouping
Remove a RegionServer from RegionServer grouping
Enabling ACL for RegionServer grouping
Best practices when using RegionServer grouping
Disable RegionServer grouping
▶︎
HBase load balancer
▶︎
HBase cache-aware load balancer configuration
Overview
Components of cache-aware load balancer
Configuration details
▶︎
HBase stochastic load balancer configuration
Introduction to the HBase stochastic load balancer
Components of stochastic load balancer
Configuration details
▶︎
Optimizing HBase I/O
HBase I/O components
Advanced configuration for write-heavy workloads
Enabling HBase META Replicas
Enabling ZooKeeper-less connection registry for HBase client
▶︎
Managing Apache HBase Security
▶︎
HBase authentication
Configuring HBase servers to authenticate with a secure HDFS cluster
Configuring secure HBase replication
Configure the HBase client TGT renewal period
Disabling Kerberos authentication for HBase clients
HBase authorization
▶︎
Configuring TLS/SSL for HBase
Prerequisites to configure TLS/SSL for HBase
Configuring TLS/SSL for HBase Web UIs
Configuring TLS/SSL for HBase REST Server
Configuring TLS/SSL for HBase Thrift Server
Configuring HSTS for HBase Web UIs
▶︎
Accessing Apache HBase
▶︎
Use the HBase shell
Virtual machine options for HBase Shell
Script with HBase Shell
Use the HBase command-line utilities
Use the HBase APIs for Java
▶︎
Use the HBase REST server
Installing the REST Server using Cloudera Manager
Using the REST API
Using the REST proxy API
▶︎
Using the Apache Thrift Proxy API
Preparing a Thrift server and client
List of Thrift API and HBase configurations
Example for using the THttpClient API in a secure cluster
Example for using the THttpClient API in an insecure cluster
Example for using the TSaslClientTransport API in a secure cluster without HTTP
▶︎
Using Apache HBase Hive integration
Configure Hive to use with HBase
Using HBase Hive integration
▶︎
Using the HBase-Spark connector
Configuring the HBase-Spark connector when both are on the same cluster
Configuring the HBase-Spark connector when HBase is on a remote cluster
Example: Using the HBase-Spark connector
▶︎
Use the Hue HBase app
Configure the HBase thrift server role
▶︎
Managing Apache HBase
▶︎
Starting and stopping HBase using Cloudera Manager
Start HBase
Stop HBase
▶︎
Graceful HBase shutdown
Gracefully shut down an HBase RegionServer
Gracefully shut down the HBase service
▶︎
Importing data into HBase
Choose the right import method
Use snapshots
Use CopyTable
▶︎
Use BulkLoad
Use cases for BulkLoad
Use cluster replication
Use Sqoop
Use Spark
Use a custom MapReduce job
▶︎
Use HashTable and SyncTable Tool
HashTable/SyncTable tool configuration
Synchronize table data using HashTable/SyncTable tool
▶︎
Writing data to HBase
Variations on Put
Versions
Deletion
Examples
▶︎
Reading data from HBase
Perform scans using HBase Shell
▶︎
HBase filtering
Dynamically loading a custom filter
Logical operators, comparison operators and comparators
Compound operators
Filter types
HBase Shell example
Java API example
HBase online merge
Move HBase Master Role to another host
Expose HBase metrics to a Ganglia server
▶︎
HBase metrics
Using JMX for accessing HBase metrics
Accessing HBase metrics in Prometheus format
▶︎
Configuring Apache HBase High Availability
Enable HBase high availability using Cloudera Manager
HBase read replicas
Timeline consistency
Keep replicas current
Read replica properties
Configure read replicas using Cloudera Manager
▶︎
Using rack awareness for read replicas
Create a topology map
Create a topology script
Activate read replicas on a table
Request a timeline-consistent read
▶︎
Using Apache HBase Backup and Disaster Recovery
HBase backup and disaster recovery strategies
▶︎
Configuring HBase snapshots
About HBase snapshots
▶︎
Manage HBase snapshots using COD CLI
Create a snapshot
List snapshots
Restore a snapshot
List restored snapshots
Delete snapshots
▶︎
Manage HBase snapshots using the HBase shell
Shell commands
Take a snapshot using a shell script
Export a snapshot to another cluster
Information and debugging
▶︎
Using HBase replication
Common replication topologies
Notes about replication
Replication requirements
▶︎
Deploy HBase replication
Replication across three or more clusters
Enable replication on a specific table
Configure secure replication
▶︎
Configure bulk load replication
Enable bulk load replication using Cloudera Manager
Create empty table on the destination cluster
Disable replication at the peer level
Stop replication in an emergency
▶︎
Initiate replication when data already exists
Replicate pre-existing data in an active-active deployment
Using the CldrCopyTable utility to copy data
Effects of WAL rolling on replication
Configuring secure HBase replication
Restore data from a replica
Verify that replication works
Replication caveats
▶︎
Configuring Apache HBase for Apache Phoenix
Configure HBase for use with Phoenix
▶︎
Using Apache Phoenix to Store and Access Data
▶︎
Mapping Apache Phoenix schemas to Apache HBase namespaces
Enable namespace mapping
▶︎
Associating tables of a schema to a namespace
Associate table in a customized Kerberos environment
Associate a table in a non-customized environment without Kerberos
▶︎
Using secondary indexing
Use strongly consistent indexing
Migrate to strongly consistent indexing
▶︎
Using transactions
Configure transaction support
Use transactions with tables
▶︎
Using JDBC API
Connecting to PQS using JDBC
Connect to Phoenix Query Server
Connect to Phoenix Query Server through Apache Knox
Launching Apache Phoenix Thin Client
▶︎
Using the Phoenix JDBC Driver
▶︎
Configuring the Phoenix classpath
Adding the Phoenix JDBC driver jar
Adding the HBase or Hadoop configuration files
Understanding the Phoenix JDBC URL
Using non-JDBC drivers
▶︎
Using Apache Phoenix-Spark connector
Configure Phoenix-Spark connector
Phoenix-Spark connector usage examples
▶︎
Using Apache Phoenix-Hive connector
Configure Phoenix-Hive connector
Apache Phoenix-Hive usage examples
Limitations of Phoenix-Hive connector
▶︎
Managing Apache Phoenix Security
Phoenix is FIPS compliant
Managing Apache Phoenix security
Enable Phoenix ACLs
Configure TLS encryption manually for Phoenix Query Server
▶︎
Managing Operational Database powered by Apache Accumulo
Change root user password
Find the latest Operational Database keytab
Relax WAL durability
▼
How to: Data Engineering
▶︎
Configuring Apache Spark
▶︎
Configuring dynamic resource allocation
Customize dynamic resource allocation settings
Configure a Spark job for dynamic resource allocation
Dynamic resource allocation properties
▶︎
Spark security
Enabling Spark authentication
Enabling Spark Encryption
Running Spark applications on secure clusters
Configuring HSTS for Spark
Accessing compressed files in Spark
▶︎
Using Spark History Servers with high availability
Limitation for Spark History Server with high availability
Configuring high availability for Spark History Server with an external load balancer
Configuring high availability for Spark History Server with an internal load balancer
Configuring high availability for Spark History Server with multiple Knox Gateways
How to access Spark files on Ozone
▼
Upgrading Apache Spark
▼
Upgrading Spark 2 to Spark 3 for Cloudera on premises 7.3.1
▶︎
Upgrading from 7.1.9 SP1
Upgrade from Spark 2.4.8
Upgrade from Spark 2.4.8 (with CDS 3.3.2)
Upgrade from Spark 3.3.2 (CDS)
▶︎
Upgrading from 7.1.8
Upgrade from Spark 2.4.8
Upgrade from Spark 2.4.8 (with connectors)
Upgrade from Spark 2.4.8 (with CDS 3.3.x)
Upgrade from Spark 2.4.8 (with CDS 3.3.x and connectors)
Upgrade from Spark 3.3.x (CDS)
▼
Upgrading from 7.1.7
Upgrade from Spark 2.4.7
Upgrade from Spark 2.4.7 (with connectors)
Upgrade from Spark 2.4.7 (with CDS 3.2.3)
Upgrade from Spark 2.4.7 (with CDS 3.2.3 and connectors)
Upgrade from Spark 3.2.3 (CDS)
Migrating Spark applications
▶︎
Developing Apache Spark Applications
Introduction
Spark application model
Spark execution model
Developing and running an Apache Spark WordCount application
Using the Spark DataFrame API
▶︎
Building Spark Applications
Best practices for building Apache Spark applications
Building reusable modules in Apache Spark applications
Packaging different versions of libraries with an Apache Spark application
▶︎
Using Spark SQL
SQLContext and HiveContext
Querying files into a DataFrame
Spark SQL example
Interacting with Hive views
Performance and storage considerations for Spark SQL DROP TABLE PURGE
TIMESTAMP compatibility for Parquet files
Accessing Spark SQL through the Spark shell
Calling Hive user-defined functions (UDFs)
▶︎
Using Spark Streaming
Spark Streaming and Dynamic Allocation
Spark Streaming Example
Enabling fault-tolerant processing in Spark Streaming
Configuring authentication for long-running Spark Streaming jobs
Building and running a Spark Streaming application
Sample pom.xml file for Spark Streaming with Kafka
▶︎
Accessing external storage from Spark
▶︎
Accessing data stored in Amazon S3 through Spark
Examples of accessing Amazon S3 data from Spark
Accessing Hive from Spark
Accessing HDFS Files from Spark
▶︎
Accessing ORC Data in Hive Tables
Accessing ORC files from Spark
Predicate push-down optimization
Loading ORC data into DataFrames using predicate push-down
Optimizing queries using partition pruning
Enabling vectorized query execution
Reading Hive ORC tables
Accessing Avro data files from Spark SQL applications
Accessing Parquet files from Spark SQL applications
▶︎
Using Spark MLlib
Running a Spark MLlib example
Enabling Native Acceleration For MLlib
Using custom libraries with Spark
▶︎
Running Apache Spark Applications
Introduction
Apache Spark 3.4 Requirements
Running Spark 3.4 Applications
Running your first Spark application
Running sample Spark applications
▶︎
Configuring Spark Applications
Configuring Spark application properties in spark-defaults.conf
Configuring Spark application logging properties
▶︎
Submitting Spark applications
spark-submit command options
Spark cluster execution overview
Canary test for pyspark command
Fetching Spark Maven dependencies
Accessing the Spark History Server
▶︎
Running Spark applications on YARN
Spark on YARN deployment modes
Submitting Spark Applications to YARN
Monitoring and Debugging Spark Applications
Example: Running SparkPi on YARN
Configuring Spark on YARN Applications
Dynamic allocation
▶︎
Submitting Spark applications using Livy
Using the Livy API to run Spark jobs
▶︎
Running an interactive session with the Livy REST API
Livy objects for interactive sessions
Setting Python path variables for Livy
Livy API reference for interactive sessions
▶︎
Submitting batch applications using the Livy REST API
Livy batch object
Livy API reference for batch jobs
Configuring the Livy Thrift Server
Connecting to the Apache Livy Thrift Server
Using Livy with Spark
▶︎
Using PySpark
Running PySpark in a virtual environment
Running Spark Python applications
Automating Spark Jobs with Oozie Spark Action
▶︎
Tuning Apache Spark
Introduction
Check Job Status
Check Job History
Improving Software Performance
▶︎
Tuning Apache Spark Applications
Tuning Spark Shuffle Operations
Choosing Transformations to Minimize Shuffles
When Shuffles Do Not Occur
When to Add a Shuffle Transformation
Secondary Sort
Tuning Resource Allocation
Resource Tuning Example
Tuning the Number of Partitions
Reducing the Size of Data Structures
Choosing Data Formats
▶︎
Apache Spark integration with Schema Registry
Apache Spark 3 integration with Schema Registry
▶︎
Using Apache Iceberg with Spark
▶︎
Using Apache Iceberg with Spark
Prerequisites and limitations for using Iceberg in Spark
▶︎
Accessing Iceberg tables
Editing a storage handler policy to access Iceberg files on the file system
Creating a SQL policy to query an Iceberg table
Creating a new Iceberg table from Spark 3
Configuring Hive Metastore for Iceberg column changes
Importing and migrating Iceberg table in Spark 3
Importing and migrating Iceberg table format v2
Configuring Catalog
Loading data into an unpartitioned table
Querying data in an Iceberg table
Updating Iceberg table data
Iceberg library dependencies for Spark applications
▶︎
Apache Zeppelin (unsupported)
Zeppelin Overview
▶︎
Installing Apache Zeppelin
Reinstall Apache Zeppelin in 7.3.1
▶︎
Enabling HDFS and Configuration Storage for Zeppelin Notebooks in HDP-2.6.3+
Overview
Enable HDFS Storage when Upgrading to HDP-2.6.3+
Use Local Storage when Upgrading to HDP-2.6.3+
▶︎
Configuring Apache Zeppelin
Introduction
Configuring Zeppelin caching
Configuring Livy
Livy high availability support
Configure User Impersonation for Access to Hive
Configure User Impersonation for Access to Phoenix
▶︎
Enabling Access Control for Zeppelin Elements
Enable Access Control for Interpreter, Configuration, and Credential Settings
Enable Access Control for Notebooks
Enable Access Control for Data
▶︎
Shiro Settings: Reference
Active Directory Settings
LDAP Settings
General Settings
shiro.ini Example
▶︎
Using Apache Zeppelin
Introduction
Launch Zeppelin
▶︎
Working with Zeppelin Notes
Create and Run a Note
Import a Note
Export a Note
Using the Note Toolbar
Import External Packages
▶︎
Configuring and Using Zeppelin Interpreters
Modify interpreter settings
Using Zeppelin Interpreters
Customize interpreter settings in a note
Use the JDBC interpreter to access Hive
Use the Livy interpreter to access Spark
Using Spark Hive Warehouse and HBase Connector Client .jar files with Livy
▶︎
How to: Security
▶︎
Configuring Authentication in Cloudera Manager
Overview
Kerberos Security Artifacts Overview
Kerberos Configuration Strategies for Cloudera
▶︎
Configuring Authentication in Cloudera Manager
Cloudera Manager user accounts
▶︎
Configuring external authentication and authorization for Cloudera Manager
Configuring PAM authentication with LDAP and SSSD
Configuring PAM authentication with Linux users
Configuring PAM authentication using Apache Knox
Configure authentication using Active Directory
Configure authentication using an LDAP-compliant identity service
Configure authentication using Kerberos (SPNEGO)
Configure authentication using an external program
Configure authentication using SAML
▶︎
Enabling Kerberos Authentication for Cloudera
Step 1: Install Cloudera Manager and Cloudera
Step 2: Create the Kerberos Principal for Cloudera Manager Server
Step 3: Enable Kerberos using the wizard
Step 4: Create the HDFS superuser
Step 5: Get or create a Kerberos principal for each user account
Step 6: Prepare the cluster for each user
Step 7: Verify that Kerberos security is working
Step 8: (Optional) Enable authentication for HTTP web consoles for Hadoop roles
Kerberos authentication for non-default users
▶︎
Customizing Kerberos Principals and System Users
Enabling feature flag for Custom Kerberos Principals and System Users
Customizing Kerberos Principals and System Users (Recommended)
▶︎
Customizing only Kerberos Principals
Configuring custom Kerberos principal for Atlas
Configuring custom Kerberos principal for Cruise Control
Configuring custom Kerberos principal for Apache Flink
Configuring custom Kerberos principal for HBase
Configuring custom Kerberos principal for HDFS
Configuring custom Kerberos principal for Hive and Hive-on-Tez
Configuring custom Kerberos principal for HttpFS
Configuring custom Kerberos principal for Hue
Configuring Kerberos Authentication for Impala
Configuring custom Kerberos principal for Kafka
Configuring custom Kerberos principal for Knox
Configuring custom Kerberos principal for Kudu
Configuring custom Kerberos principal for Livy
Configuring custom Kerberos principal for NiFi and NiFi Registry
Configuring custom Kerberos principal for Omid
Configuring custom Kerberos principal for Oozie
Configuring custom Kerberos principal for Ozone
Configuring custom Kerberos principal for Phoenix
Configuring custom Kerberos principal for Ranger
Configuring Custom Kerberos Principal for Ranger KMS
Configuring custom Kerberos principal for Schema Registry
Configuring custom Kerberos principals for Solr
Configuring custom Kerberos principal for Spark
Configuring custom Kerberos principal for Streams Messaging Manager
Configuring custom Kerberos principal for Cloudera SQL Stream Builder
Configuring custom Kerberos principal for Streams Replication Manager
Enabling custom Kerberos principal support in YARN
Enabling custom Kerberos principal support in a Queue Manager cluster
Configuring custom Kerberos principal for Zeppelin
Configuring custom Kerberos principal for ZooKeeper
Managing Kerberos credentials using Cloudera Manager
Using a custom Kerberos keytab retrieval script
Adding trusted realms to the cluster
Using auth-to-local rules to isolate cluster users
Configuring a dedicated MIT KDC for cross-realm trust
Integrating MIT Kerberos and Active Directory
Hadoop Users (user:group) and Kerberos Principals
Mapping Kerberos Principals to Short Names
Using a custom Kerberos configuration path
▶︎
Cloudera Authorization
Overview
Configuring LDAP Group Mappings
Using Ranger to Provide Authorization in Cloudera
▶︎
Encrypting Data in Transit
Encrypting Data in Transit
Understanding Keystores and Truststores
Disabling TLS protocols on JMX ports
Choosing manual TLS or Auto-TLS
SAN Certificates
▶︎
Configuring TLS Encryption for Cloudera Manager Using Auto-TLS
Use case 1: Use Cloudera Manager to generate internal CA and corresponding certificates
▶︎
Use case 2: Enabling Auto-TLS with an intermediate CA signed by an existing Root CA
Certmanager Options - Using Cloudera Manager's GenerateCMCA API
Use case 3: Enabling Auto-TLS with Existing Certificates
▶︎
Manually Configuring TLS Encryption for Cloudera Manager
Disable weak ciphers for TLS servers
▶︎
Configuring TLS/SSL encryption manually for Cloudera Services
Configuring TLS encryption manually for Apache Atlas
Enable security for Cruise Control
Configuring TLS/SSL encryption manually for DAS using Cloudera Manager
Enabling security for Apache Flink
▶︎
Configuring TLS/SSL for HBase
Prerequisites to configure TLS/SSL for HBase
Configuring TLS/SSL for HBase Web UIs
Configuring TLS/SSL for HBase REST Server
Configuring TLS/SSL for HBase Thrift Server
Enabling TLS/SSL for HiveServer
▶︎
Configuring TLS/SSL for Hue
Creating a truststore file in PEM format
Configuring Hue as a TLS/SSL client
Enabling Hue as a TLS/SSL client
Configuring Hue as a TLS/SSL server
Enabling Hue as a TLS/SSL server using Cloudera Manager
Enabling TLS/SSL for Hue Load Balancer
Enabling TLS/SSL communication with HiveServer2
Enabling TLS/SSL communication with Impala
Securing database connections with TLS/SSL
Configuring Impala TLS/SSL
▶︎
Channel encryption
Configure Kafka brokers
Configure Kafka MirrorMaker
Configuring TLS/SSL encryption
Configure Kafka clients
Configure ZooKeeper TLS/SSL support for Kafka
▶︎
Authentication
▶︎
TLS/SSL client authentication
Configure Kafka brokers
Configure Kafka clients
Principal name mapping
Inter-broker security
Configuring multiple listeners
▶︎
Configuring TLS/SSL encryption manually for Apache Knox
Knox Properties for TLS
Configuring TLS/SSL encryption for Kudu using Cloudera Manager
Configure Lily HBase Indexer to use TLS/SSL
Configuring TLS/SSL encryption manually for Livy
▶︎
Configuring TLS/SSL manually
Requirements and recommendations
Configuring TLS/SSL encryption manually
NiFi TLS/SSL properties
NiFi Registry TLS/SSL properties
Configure TLS/SSL for Oozie
Configure TLS encryption manually for Phoenix Query Server
Configure TLS/SSL encryption manually for Apache Ranger
▶︎
Configure TLS/SSL encryption manually for Ranger KMS
Overriding custom keystore alias on a Ranger KMS Server
Configure TLS/SSL encryption manually for Ranger RMS
Configuring TLS encryption manually for Schema Registry
▶︎
Configure TLS/SSL encryption for Solr
Using a load balancer
Configuring TLS/SSL encryption manually for Spark
Encryption in Cloudera SQL Stream Builder
Enabling TLS/SSL for the SRM service
▶︎
Enabling TLS Encryption for Streams Messaging Manager on Cloudera on premises
TLS/SSL settings for Streams Messaging Manager
▶︎
Configuring TLS/SSL for Core Hadoop Services
Configuring TLS/SSL for HDFS
Configuring TLS/SSL for YARN
Configuring TLS/SSL encryption manually for Zeppelin
Configure ZooKeeper TLS/SSL using Cloudera Manager
Manually Configuring TLS Encryption on the Agent Listening Port
▶︎
Enabling TLS 1.2 for database connections
▶︎
Enabling TLS 1.2 on Database server
Enabling TLS 1.2 on MySQL database
Enabling TCPS on Oracle database
Enabling TLS 1.2 on MariaDB
Enabling TLS 1.2 on PostgreSQL
▶︎
Configuring TLS 1.2 for Cloudera Manager
▶︎
Enabling TLS 1.2 on Cloudera Manager Server
Setting up the certificate in Cloudera Manager
Modifying Cloudera Manager Server database configuration file
Configuring TLS 1.2 for Reports Manager
▶︎
Configuring Cloudera Runtime services to connect to TLS 1.2/TCPS-enabled databases
Configuring Hue to connect to TLS 1.2/TCPS-enabled databases
Configuring Ranger to connect to TLS 1.2/TCPS-enabled databases
Configuring Ranger KMS to connect to TLS 1.2/TCPS-enabled databases
Configuring Oozie to connect to TLS 1.2/TCPS-enabled databases
Configuring Streams Messaging Manager to connect to TLS 1.2/TCPS-enabled databases
Configuring Schema Registry to connect to TLS 1.2/TCPS-enabled databases
Configuring Hive to connect to TLS 1.2/TCPS-enabled databases
How to connect Cloudera components to a TCPS-enabled Oracle database
▶︎
Encrypting Data at Rest
Encrypting Data at Rest
Data at Rest Encryption Reference Architecture
Data at Rest Encryption Requirements
Resource Planning for Data at Rest Encryption
▶︎
HDFS Transparent Encryption
▶︎
Key Concepts and Architecture
Data Encryption Components and Solutions
Encryption Zones and Keys
Accessing Files Within an Encryption Zone
Optimizing Performance for HDFS Transparent Encryption
▶︎
Managing Encryption Keys and Zones
Validating Hadoop Key Operations
Creating Encryption Zones
Adding Files to an Encryption Zone
Deleting Encryption Zones
Backing Up Encryption Keys
Rolling Encryption Keys
Deleting Encryption Zone Keys
▶︎
Re-encrypting Encrypted Data Encryption Keys (EDEKs)
Benefits and Capabilities
Prerequisites and Assumptions
Limitations
Re-encrypting an EDEK
Managing Re-encryption Operations
▶︎
Configuring Cloudera Services for HDFS Encryption
Transparent Encryption Recommendations for HBase
▶︎
Transparent Encryption Recommendations for Hive
Changed Behavior after HDFS Encryption is Enabled
KMS ACL Configuration for Hive
Transparent Encryption Recommendations for Hue
Transparent Encryption Recommendations for Impala
Transparent Encryption Recommendations for MapReduce and YARN
Transparent Encryption Recommendations for Search
Transparent Encryption Recommendations for Spark
Transparent Encryption Recommendations for Sqoop
▶︎
Ranger KMS
Ranger KMS overview
▶︎
Using the Ranger Key Management Service
Accessing the Ranger KMS Web UI
List and Create Keys
Roll Over an Existing Key
Delete a Key
▶︎
Securing the Key Management System (KMS)
Enabling Kerberos Authentication for the KMS
Configuring TLS/SSL for the KMS
▶︎
Migrating Ranger Key Management Server Role Instances to a New Host
Migrate the Ranger Admin role instance to a new host
Migrate the Ranger KMS DB role instance to a new host
▶︎
Working with an HSM
Set up Luna 7 HSM for Ranger KMS
Set up Luna 10.5 HSM Client for Ranger KMS
Integrating Ranger KMS DB with Google Cloud HSM
Integrating Ranger KMS DB with CipherTrust Manager HSM
Integrating Ranger KMS DB with SafeNet Keysecure HSM
Migrating the Master Key from Ranger KMS DB to Luna HSM
Migrating the Master Key from HSM to Ranger KMS DB
▶︎
Navigator Encrypt
Navigator Encrypt Overview
Registering Cloudera Navigator Encrypt
Preparing for Encryption Using Cloudera Navigator Encrypt
Encrypting and Decrypting Data Using Cloudera Navigator Encrypt
Managing Navigator Encrypt Access Control List
Maintaining Cloudera Navigator Encrypt
Generating Kerberos keytab file for Navigator Encrypt
▶︎
Apache Ranger Access Control and Auditing
▶︎
Apache Ranger APIs
▶︎
Ranger API Overview
Ranger Admin Metrics API
Ranger REST API documentation
▶︎
Apache Ranger Auditing
Audit Overview
▶︎
Managing Auditing with Ranger
Viewing audit details
Viewing audit metrics
Creating a read-only Admin user (Auditor)
Configuring Ranger audit properties for Solr
Configuring Ranger audit properties for HDFS
Triggering HDFS audit files rollover
Configuring Ranger audit log storage to a local file
▶︎
Ranger Audit Filters
Default Ranger audit filters
Configuring a Ranger audit filter policy
How to set audit filters in Ranger Admin Web UI
Filter service access logs from Ranger UI
Configuring audit spool alert notifications
Ranger audit log event summarization
Charting spool alert metrics
Excluding audits for specific users, groups, and roles
Changing Ranger audit storage location and migrating data
Configuring Ranger audits to show actual client IP address
▶︎
Apache Ranger Authorization
Using Ranger to Provide Authorization in CDP
▶︎
Ranger plugin overview
Ranger Hive Plugin
Ranger Kafka Plugin
Ranger-HBase Plugin
Ranger special entities
Enabling Ranger HDFS plugin manually on a Cloudera Data Hub
▶︎
Ranger Policies Overview
Ranger tag-based policies
Tags and policy evaluation
Ranger access conditions
▶︎
Using the Ranger Admin Web UI
Accessing the Ranger Admin Web UI
Ranger console navigation
▶︎
Resource-based Services and Policies
▶︎
Configuring resource-based services
Configure a resource-based service: Atlas
Configure a resource-based service: HBase
Configure a resource-based service: HDFS
Configure a resource-based service: HadoopSQL
Configure a resource-based service: Kafka
Configure a resource-based service: Knox
Configure a resource-based service: NiFi
Configure a resource-based service: NiFi Registry
Configure a resource-based service: Solr
Configure a resource-based service: YARN
▶︎
Configuring resource-based policies
Configure a resource-based policy: Atlas
Configure a resource-based policy: HBase
Configure a resource-based policy: HDFS
Configure a resource-based policy: HadoopSQL
Configure a resource-based storage handler policy: HadoopSQL
Configure a resource-based policy: Kafka
Configure a resource-based policy: Knox
Configure a resource-based policy: NiFi
Configure a resource-based policy: NiFi Registry
Configure a resource-based policy: S3
Configure a resource-based policy: Solr
Configure a resource-based policy: YARN
Wildcards and variables in resource-based policies
Adding a policy condition to a resource-based policy
Adding a policy label to a resource-based policy
Preloaded resource-based services and policies
▶︎
Importing and exporting resource-based policies
Import resource-based policies for a specific service
Import resource-based policies for all services
Export resource-based policies for a specific service
Export all resource-based policies for all services
▶︎
Row-level filtering and column masking in Hive
Row-level filtering in Hive with Ranger policies
Dynamic resource-based column masking in Hive with Ranger policies
▶︎
Dynamic tag-based column masking in Hive with Ranger policies
Apply custom transformation to a column
▶︎
Tag-based Services and Policies
Adding a tag-based service
▶︎
Adding tag-based policies
Using tag attributes and values in Ranger tag-based policy conditions
Adding a policy condition to a tag-based policy
Adding a tag-based PII policy
Default EXPIRES ON tag policy
▶︎
Importing and exporting tag-based policies
Import tag-based policies
Export tag-based policies
Create a time-bound policy
Create a Hive authorizer URL policy
Showing Role|Grant definitions from Ranger HiveAuthorizer
▶︎
Ranger Security Zones
Security Zones Administration
Security Zones Example Use Cases
Adding a Ranger security zone
▶︎
Administering Ranger Reports
View Ranger reports
Search Ranger reports
Export Ranger reports
Using Ranger client libraries
Using session cookies to validate Ranger policies
Configure optimized rename and recursive delete operations in Ranger Ozone plugin
How to optimally configure Ranger RAZ client performance
▶︎
Apache Ranger User Management
▶︎
Administering Ranger Users, Groups, Roles, and Permissions
Adding a user
Editing a user
Deleting a user
Adding a group
Editing a group
Deleting a group
Adding a role through Ranger
Adding a role through Hive
Editing a role
Deleting a role
Adding or editing module permissions
Deleting users or groups in bulk
▶︎
Ranger Usersync
Adding default service users and roles for Ranger
Configuring Usersync assignment of Admin users
Configuring nested group hierarchies
Configuring Ranger Usersync for Deleted Users and Groups
Configuring Ranger Usersync for invalid usernames
Setting credentials for Ranger Usersync custom keystore
Enabling Ranger Usersync search to generate internally
Configuring Usersync to sync directly with LDAP/AD
Configure SASL Bind in Ranger Usersync
Force deletion of external users and groups from the Ranger database
▶︎
Configuring Ranger Authentication with UNIX, LDAP, or AD
▶︎
Configuring Ranger Authentication with UNIX, LDAP, AD, or PAM
Configure Ranger authentication for UNIX
Configure Ranger authentication for AD
Configure Ranger authentication for LDAP
Configure Ranger authentication for PAM
▶︎
Ranger AD Integration
Ranger UI authentication
Ranger UI authorization
▶︎
Configuring Advanced Security Options for Apache Ranger
Configuring the server work directory path for a Ranger service
Configuring session inactivity timeout for Ranger Admin Web UI
Configure Kerberos authentication for Apache Ranger
Configure TLS/SSL encryption manually for Apache Ranger
Configure TLS/SSL encryption manually for Ranger KMS
Configure TLS/SSL encryption manually for Ranger RMS
▶︎
Configuring Apache Ranger High Availability
Configure Ranger Admin High Availability
Configure Ranger Admin High Availability with a Load Balancer
Configuring Ranger Usersync and Tagsync High Availability
Migrating Ranger Usersync and Tagsync role groups
Configuring JVM options and system properties for Ranger services
How to pass JVM options to Ranger KMS services
How to clear Ranger Admin access logs
Configuring purge of x_auth_sess data
Enable Ranger Admin login using Kerberos authentication
How to configure Ranger HDFS plugin configs per (NameNode) Role Group
How to add a coarse URI check for Hive agent
How to suppress database connection notifications
How to change the password for Ranger users
▶︎
How to manage log rotation for Ranger Services
Managing logging properties for Ranger services
Enabling selective debugging for Ranger Admin
Enabling selective debugging for RAZ
▶︎
Configuring and Using Ranger RMS Hive-HDFS ACL Sync
Introduction to Ranger RMS
▶︎
About Ranger RMS for Ozone
Ranger RMS Assumptions and Limitations
Installing/Verifying RMS for Ozone configuration
Enabling RMS for Ozone authorization
Understanding Ranger policies with RMS
How to full sync the Ranger RMS database
Configuring High Availability for Ranger RMS (Hive-HDFS ACL-Sync)
Configuring Ranger RMS (Hive-HDFS / Hive-OZONE ACL Sync)
Configuring HDFS plugin to view permissions through getfacl interface
Ranger RMS (Hive-HDFS ACL-Sync) Use Cases
▶︎
Configuring and Using Ranger KMS
▶︎
Configuring Ranger KMS High Availability
Configure High Availability for Ranger KMS with DB
Rotating Ranger KMS access log files
▶︎
Apache Knox Authentication
▶︎
Apache Knox overview
Dynamically generating Knox topology files
Securing access to Hadoop cluster: Apache Knox
Apache Knox Gateway overview
Knox Supported Services Matrix
Knox Topology Management in Cloudera Manager
Considerations for Knox
Proxy Cloudera Manager through Apache Knox
▶︎
Installing Apache Knox
Apache Knox Install Role Parameters
▶︎
Management of Knox shared providers in Cloudera Manager
Configure Apache Knox authentication for PAM
▶︎
Configure Apache Knox authentication for AD/LDAP
Use advanced LDAP authentication
Knox CLI testing tools
Configure Apache Knox Authentication for SAML
Add a new shared provider configuration
TLS Mutual Authentication
▶︎
Management of existing Apache Knox shared providers
Add a new provider in an existing provider configuration
Modify a provider in an existing provider configuration
Disable a provider in an existing provider configuration
Remove a provider parameter in an existing provider configuration
Saving aliases
Configuring Kerberos authentication in Apache Knox shared providers
Configuring group mapping in Knox
▶︎
Management of services for Apache Knox through Cloudera Manager
Enable proxy for a known service in Apache Knox
Disable proxy for a known service in Apache Knox
Add custom service to existing descriptor in Apache Knox Proxy
Add a custom descriptor to Apache Knox
▶︎
Management of Service Parameters for Apache Knox via Cloudera Manager
Add custom service parameter to descriptor
Modify custom service parameter in descriptor
Remove custom service parameter from descriptor
▶︎
Load balancing for Apache Knox
Generate and configure a signing keystore for Knox in HA
▶︎
Knox Gateway token integration
Overview
Token configurations
Generate tokens
Manage Knox Gateway tokens
Knox Token API
Manage Knox metadata
Knox SSO Cookie Invalidation
Concurrent session verification (Tech Preview)
▶︎
Additional Security Topics
How to Add Root and Intermediate CAs to Truststore for TLS/SSL
Amazon S3 Security
How to Authenticate Kerberos Principals Using Java
Check Cluster Security Settings
Configure Antivirus Software on Cloudera Hosts
Configure Browser-based Interfaces to Require Authentication (SPNEGO)
Configure Browsers for Kerberos Authentication (SPNEGO)
Configure Cluster to Use Kerberos Authentication
Convert DER, JKS, PEM Files for TLS/SSL Artifacts
Configure Authentication for Amazon S3
Configure Encryption for Amazon S3
Configure AWS Credentials
Enable Sensitive Data Redaction
Log a Security Support Case
Obtain and Deploy Keys and Certificates for TLS/SSL
Renew and Redistribute Certificates
How to Set Up a Gateway Host to Restrict Access to the Cluster
Set Up Access to Cloudera EDH (Microsoft Azure Marketplace)
Use Self-Signed Certificates for TLS
▶︎
Configuring Infra Solr
Configure Ranger authorization for Infra Solr
Configuring custom Kerberos principals for Solr
▶︎
How to: Governance
▶︎
Searching with Metadata
▶︎
Using Basic search
Basic search enhancement
Using Relationship Search
Using Search filters
▶︎
Ability to download search results from Atlas UI
How to download results using Basic and Advanced search options
Using Free-text Search
Enhancements with search query
▶︎
Ignore or Prune pattern to filter Hive metadata entities
How Ignore and Prune feature works
Using Ignore and Prune patterns
Saving searches
Using advanced search
Atlas index repair configuration
▶︎
Working with Classifications and Labels
▶︎
Working with Atlas classifications and labels
Text-editor for Atlas parameters
▶︎
Creating classifications
Example for finding parent object for assigned classification or term
Creating labels
Adding attributes to classifications
▶︎
Support for validating the AttributeName in parent and child TypeDef
Validations for parent types
Case for implementing backward compatibility
Associating classifications with entities
Propagating classifications through lineage
Searching for entities using classifications
▶︎
Exploring using Lineage
Lineage overview
Viewing lineage
Lineage lifecycle
Support for On-Demand lineage
▶︎
HDFS lineage data extraction in Atlas
Prerequisites for HDFS lineage extraction
▶︎
HDFS lineage commands
Running HDFS lineage commands
Inclusion and exclusion operation for HDFS files
Supported HDFS entities and their hierarchies
▶︎
Leveraging Business Metadata
Business Metadata overview
Creating Business Metadata
Adding attributes to Business Metadata
Associating Business Metadata attributes with entities
Importing Business Metadata associations in bulk
Searching for entities using Business Metadata attributes
▶︎
Managing Business Terms with Atlas Glossaries
Glossaries overview
Creating glossaries
Creating terms
Associating terms with entities
Defining related terms
Creating categories
Assigning terms to categories
Searching using terms
▶︎
Importing Glossary terms in bulk
Enhancements related to bulk glossary terms import
Glossary performance improvements
▶︎
Setting up Atlas High Availability
About Atlas High Availability
Prerequisites for setting up Atlas HA
Installing Atlas in HA using Cloudera Base on premises cluster
▶︎
Auditing Atlas Entities
▶︎
Audit Operations
Atlas Type Definitions
▶︎
Atlas Export and Import operations
Exporting data using Connected type
Atlas Server Operations
Audit enhancements
Examples of Audit Operations
▶︎
Storage reduction for Atlas
▶︎
Using audit aging
Enabling audit aging
Using default audit aging
Using Sweep out configurations
Using custom audit aging
Aging patterns
Audit aging reference configurations
Audit aging using REST API
▶︎
Using custom audit filters
Supported operators
Rule configurations
Use cases and sample payloads
▶︎
Securing Atlas
Securing Atlas
Configuring TLS/SSL for Apache Atlas
▶︎
Configuring Atlas Authentication
Configure Kerberos authentication for Apache Atlas
Configure Atlas authentication for AD
Configure Atlas authentication for LDAP
Configure Atlas PAM authentication
Configure Atlas file-based authentication
▶︎
Configuring Atlas Authorization
Restricting classifications based on user permission
Configuring Ranger Authorization for Atlas
Configuring Atlas Authorization using Ranger
Configuring Simple Authorization in Atlas
▶︎
Iceberg for Atlas
▶︎
Iceberg support for Atlas
How Atlas works with Iceberg
Using the Spark shell
Using the Impala shell
▶︎
Configuring Atlas using Cloudera Manager
▶︎
Configuring and Monitoring Atlas
Showing Atlas Server status
Accessing Atlas logs
▶︎
Integrating Atlas with Ozone
About Apache Ozone integration with Apache Atlas
How Integration works
▶︎
Using import utility tools with Atlas
▶︎
Importing Hive Metadata using Command-Line (CLI) utility
Bulk and migration import of Hive metadata
Using Atlas-Hive import utility with Ozone entities
Setting up Atlas Kafka import tool
▶︎
How to: Jobs Management
Overview of Oozie
Adding the Oozie service using Cloudera Manager
Considerations for Oozie to work with AWS
▶︎
Adding file system credentials to an Oozie workflow
Credentials for token delegation
File System Credentials
Setting file system credentials through hadoop properties
Setting default credentials using Cloudera Manager
Advanced settings: Overriding default configurations
Modifying the workflow file manually
Hue Limitation
User authorization configuration for Oozie
▶︎
Redeploying the Oozie ShareLib
Redeploying the Oozie sharelib using Cloudera Manager
▶︎
Oozie configurations with Cloudera services
▶︎
Using Sqoop actions with Oozie
Deploying and configuring Oozie Sqoop1 Action JDBC drivers
Configuring Oozie Sqoop1 Action workflow JDBC drivers
Configuring Oozie to enable MapReduce jobs to read or write from Amazon S3
Configuring Oozie to use HDFS HA
▶︎
Using Oozie with Ozone
Uploading Oozie ShareLib to Ozone
Enabling Oozie workflows that access Ozone storage
Using Hive Warehouse Connector with Oozie Spark Action
Oozie and client configurations
▶︎
Spark 3 support in Oozie
Enable Spark actions
Use Spark actions with a custom Python executable
Spark 3 Oozie action schema
Differences between Spark and Spark 3 actions
Use Spark 3 actions with a custom Python executable
Spark 3 compatibility action executor
Spark 3 examples with Python or Java application
Shell action for Spark 3
Migration of Spark 2 applications
Hue support for Oozie
▶︎
Oozie High Availability
Requirements for Oozie High Availability
▶︎
Configuring Oozie High Availability using Cloudera Manager
Oozie Load Balancer configuration
Enabling Oozie High Availability
Disabling Oozie High Availability
▶︎
Scheduling in Oozie using cron-like syntax
Oozie scheduling examples
▶︎
Configuring an external database for Oozie
Configuring PostgreSQL for Oozie
Configuring MariaDB for Oozie
Configuring MySQL 5 for Oozie
Configuring MySQL 8 for Oozie
Configuring Oracle for Oozie
▶︎
Working with the Oozie server
Starting the Oozie server
Stopping the Oozie server
Accessing the Oozie server with the Oozie Client
Accessing the Oozie server with a browser
Adding schema to Oozie using Cloudera Manager
Enabling the Oozie web console on managed clusters
Enabling Oozie SLA with Cloudera Manager
Disabling Oozie UI using Cloudera Manager
Moving the Oozie service to a different host
▶︎
Oozie database configurations
Configuring Oozie data purge settings using Cloudera Manager
Loading the Oozie database
Dumping the Oozie database
Setting the Oozie database timezone
▶︎
Fine-tuning Oozie's database connection
Assembling a secure JDBC URL for Oozie
Oracle TCPS
OpenJPA upgrade
Prerequisites for configuring TLS/SSL for Oozie
Configure TLS/SSL for Oozie
Oozie Java-based actions with Java 17
Oozie security enhancements
Additional considerations when configuring TLS/SSL for Oozie HA
Configure Oozie client when TLS/SSL is enabled
Configuring custom Kerberos principal for Oozie
▶︎
How to: Streams Messaging
▶︎
Configuring Apache Kafka
Operating system requirements
Performance considerations
Quotas
▶︎
JBOD
JBOD setup
JBOD Disk migration
Setting user limits for Kafka
▶︎
Rolling restart checks
Configuring rolling restart checks
Configuring the client configuration used for rolling restart checks
▶︎
Cluster discovery with multiple Apache Kafka clusters
▶︎
Cluster discovery using DNS records
A records and round robin DNS
client.dns.lookup property options for client
CNAME records configuration
Connection to the cluster with configured DNS aliases
▶︎
Cluster discovery using load balancers
Setup for SASL with Kerberos
Setup for TLS/SSL encryption
Connecting to the Kafka cluster using load balancer
Configuring Kafka ZooKeeper chroot
Rack awareness
▶︎
Securing Apache Kafka
▶︎
Channel encryption
Configure Kafka brokers
Configure Kafka clients
Configure Kafka MirrorMaker
Configure ZooKeeper TLS/SSL support for Kafka
▶︎
Authentication
▶︎
TLS/SSL client authentication
Configure Kafka brokers
Configure Kafka clients
Principal name mapping
▶︎
Kerberos authentication
Enable Kerberos authentication
Configuring custom Kerberos principal for Kafka
▶︎
Delegation token based authentication
Enable or disable authentication with delegation tokens
Manage individual delegation tokens
Rotate the master key/secret
▶︎
Client authentication using delegation tokens
Configure clients on a producer or consumer level
Configure clients on an application level
▶︎
LDAP authentication
Configure Kafka brokers
Configure Kafka clients
▶︎
PAM authentication
Configure Kafka brokers
Configure Kafka clients
▶︎
OAuth2 authentication
Configuring Kafka brokers
Configuring Kafka clients
▶︎
Authorization
▶︎
Ranger
Enable authorization in Kafka with Ranger
Configure the resource-based Ranger service used for authorization
Kafka ACL APIs support in Ranger
▶︎
Governance
Importing Kafka entities into Atlas
Configuring the Atlas hook in Kafka
Inter-broker security
Configuring multiple listeners
▶︎
Kafka security hardening with ZooKeeper ACLs
Restricting access to Kafka metadata in ZooKeeper
Unlocking access to Kafka metadata in ZooKeeper
▶︎
Tuning Apache Kafka Performance
Handling large messages
▶︎
Cluster sizing
Sizing estimation based on network and disk message throughput
Choosing the number of partitions for a topic
▶︎
Broker Tuning
JVM and garbage collection
Network and I/O threads
ISR management
Log cleaner
▶︎
System Level Broker Tuning
File descriptor limits
Filesystems
Virtual memory handling
Networking parameters
Configure JMX ephemeral ports
Kafka-ZooKeeper performance tuning
▶︎
Managing Apache Kafka
▶︎
Management basics
Broker log management
Record management
Broker garbage collection log configuration
Client and broker compatibility across Kafka versions
▶︎
Managing topics across multiple Kafka clusters
Set up MirrorMaker in Cloudera Manager
Settings to avoid data loss
Scaling Kafka brokers
▶︎
Broker migration
Migrate brokers by modifying broker IDs in meta.properties
Use rsync to copy files from one broker to another
▶︎
Disk management
Monitoring
▶︎
Handling disk failures
Disk Replacement
Disk Removal
Reassigning replicas between log directories
Retrieving log directory replica assignment information
▶︎
Metrics
Building Cloudera Manager charts with Kafka metrics
Essential metrics to monitor
▶︎
Command Line Tools
Unsupported command line tools
kafka-topics
kafka-cluster
kafka-configs
kafka-console-producer
kafka-console-consumer
kafka-consumer-groups
kafka-features
kafka-reassign-partitions
kafka-log-dirs
zookeeper-security-migration
kafka-delegation-tokens
kafka-*-perf-test
Configuring Kafka command line tools in FIPS clusters
Configuring log levels for command line tools
Understanding the kafka-run-class Bash Script
▶︎
Developing Apache Kafka Applications
Kafka producers
▶︎
Kafka consumers
Subscribing to a topic
Groups and fetching
Protocol between consumer and broker
Rebalancing partitions
Retries
Kafka clients and ZooKeeper
▶︎
Java client
▶︎
Client examples
Simple Java consumer
Simple Java producer
Security examples
▶︎
.NET client
▶︎
Client examples
Simple .NET consumer
Simple .NET producer
Performant .NET producer
Simple .NET consumer using Schema Registry
Simple .NET producer using Schema Registry
Security examples
Kafka Streams
Kafka public APIs
Recommendations for client development
▶︎
Kafka Connect
Kafka Connect Overview
Setting up Kafka Connect
▶︎
Using Kafka Connect
Configuring the Kafka Connect Role
Managing, Deploying and Monitoring Connectors
▶︎
Writing Kafka data to Ozone with Kafka Connect
Writing data in an unsecured cluster
Writing data in a Kerberos and TLS/SSL enabled cluster
Using the AvroConverter
Configuring EOS for source connectors
▶︎
Securing Kafka Connect
▶︎
Kafka Connect to Kafka broker security
Configuring TLS/SSL encryption
Configuring Kerberos authentication
▶︎
Kafka Connect REST API security
▶︎
Authentication
Configuring TLS/SSL client authentication
Configuring SPNEGO authentication and trusted proxies
▶︎
Authorization
Authorization model
Ranger integration
▶︎
Kafka Connect connector configuration security
▶︎
Kafka Connect Secrets Storage
Terms and concepts
Managing secrets using the REST API
Re-encrypting secrets
Configuring connector JAAS configuration and Kerberos principal overrides
Configuring a Nexus repository allow list
▶︎
Single Message Transforms
Configuring an SMT chain
ConvertFromBytes
ConvertToBytes
▶︎
Connectors
Installing connectors
Debezium Db2 Source
Debezium MySQL Source
Debezium Oracle Source
Debezium PostgreSQL Source
Debezium SQL Server Source
HTTP Source
JDBC Source
JMS Source
MQTT Source
SFTP Source
▶︎
Stateless NiFi Source and Sink
Dataflow development best practices
Kafka Connect worker assignment
Kafka Connect log files
Kafka Connect tasks
Developing a dataflow
Deploying a dataflow
Downloading and viewing predefined dataflows
Configuring flow.snapshot
Tutorial: developing and deploying a JDBC Source dataflow
Syslog TCP Source
Syslog UDP Source
ADLS Sink
Amazon S3 Sink
HDFS Sink
HDFS Stateless Sink
HTTP Sink
InfluxDB Sink
JDBC Sink
Kudu Sink
S3 Sink
▶︎
Kafka KRaft [Technical Preview]
KRaft setup
Extracting KRaft metadata
Securing KRaft
▶︎
Configuring Cruise Control
Adding Cruise Control as a service
Setting capacity estimations and goals
Configuring Metrics Reporter in Cruise Control
▶︎
Enabling self-healing in Cruise Control
Changing the Anomaly Notifier Class value to self-healing
Enabling self-healing for all or individual anomaly types
Adding self-healing goals to Cruise Control in Cloudera Manager
▶︎
Securing Cruise Control
▶︎
Enable security for Cruise Control
Configuring custom Kerberos principal for Cruise Control
▶︎
Managing Cruise Control
▶︎
Rebalancing with Cruise Control
Rebalance after adding Kafka broker
Rebalance after demoting Kafka broker
Rebalance after removing Kafka broker
Cruise Control REST API endpoints
▶︎
Configuring Streams Messaging Manager
Installing Streams Messaging Manager in Cloudera Base on premises
▶︎
Setting up Prometheus for Streams Messaging Manager
▶︎
Prometheus configuration for Streams Messaging Manager
Prerequisites for Prometheus configuration
Prometheus properties configuration
Streams Messaging Manager property configuration in Cloudera Manager for Prometheus
Kafka property configuration in Cloudera Manager for Prometheus
Kafka Connect property configuration in Cloudera Manager for Prometheus
Start Prometheus
▶︎
Secure Prometheus for Streams Messaging Manager
▶︎
Nginx proxy configuration over Prometheus
Nginx installation
Nginx configuration for Prometheus
▶︎
Setting up TLS for Prometheus
Configuring Streams Messaging Manager to recognize Prometheus's TLS certificate
▶︎
Setting up basic authentication with TLS for Prometheus
Configuring Nginx for basic authentication
Configuring Streams Messaging Manager for basic authentication
Setting up mTLS for Prometheus
Prometheus for Streams Messaging Manager limitations
Troubleshooting Prometheus for Streams Messaging Manager
Performance comparison between Cloudera Manager and Prometheus
▶︎
Using Streams Messaging Manager
▶︎
Monitoring Kafka
Monitoring Kafka clusters
Monitoring Kafka producers
Monitoring Kafka topics
Monitoring Kafka brokers
Monitoring Kafka consumers
Monitoring log size information
Monitoring lineage information
▶︎
Managing Kafka topics
Creating a Kafka topic
Modifying a Kafka topic
Deleting a Kafka topic
▶︎
Managing Alert Policies and Notifiers
Creating a notifier
Updating a notifier
Deleting a notifier
Creating an alert policy
Updating an alert policy
Enabling an alert policy
Disabling an alert policy
Deleting an alert policy
Component types and metrics for alert policies
▶︎
Monitoring end-to-end latency
Enabling interceptors
Monitoring end-to-end latency for a Kafka topic
End-to-end latency use case
▶︎
Monitoring Kafka cluster replications (Streams Replication Manager)
▶︎
Viewing Kafka cluster replication details
Searching Kafka cluster replications by source
Monitoring Kafka cluster replications by quick ranges
Monitoring status of the clusters to be replicated
▶︎
Monitoring topics to be replicated
Searching by topic name
Monitoring throughput for cluster replication
Monitoring replication latency for cluster replication
Monitoring checkpoint latency for cluster replication
Monitoring replication throughput and latency by values
▶︎
Managing and monitoring Kafka Connect
The Kafka Connect UI
Deploying and managing connectors
▶︎
Managing and monitoring Cruise Control rebalance
Cruise Control dashboard in Streams Messaging Manager UI
Using the Rebalance Wizard in Cruise Control
▶︎
Securing Streams Messaging Manager
Securing Streams Messaging Manager
Verifying the setup
▶︎
Configuring Streams Replication Manager
Add Streams Replication Manager to an existing cluster
Enable high availability
Enabling prefixless replication
▶︎
Defining and adding clusters for replication
Defining external Kafka clusters
Defining co-located Kafka clusters using a service dependency
Defining co-located Kafka clusters using Kafka credentials
Adding clusters to Streams Replication Manager's configuration
Configuring replications
Configuring the driver role target clusters
Configuring the service role target cluster
Configuring properties not exposed in Cloudera Manager
Configuring replication specific REST servers
▶︎
Configuring Remote Querying
Enabling Remote Querying
Configuring the advertised information of the Streams Replication Manager Service role
Configuring Streams Replication Manager Driver retry behavior
Configuring Streams Replication Manager Driver heartbeat emission
Configuring automatic group offset synchronization
Configuring Streams Replication Manager Driver for performance tuning
New topic and consumer group discovery
▶︎
Configuration examples
Bidirectional replication example of two active clusters
Cross data center replication example of multiple clusters
▶︎
Using Streams Replication Manager
▶︎
Streams Replication Manager Command Line Tools
▶︎
srm-control
▶︎
Configuring srm-control
Configuring the Streams Replication Manager client's secure storage
Configuring TLS/SSL properties
Configuring Kerberos properties
Configuring properties for non-Kerberos authentication mechanisms
Setting the secure storage password as an environment variable
Configuring srm-control in FIPS clusters
Topics and Groups Subcommand
Offsets Subcommand
Monitoring Replication with Streams Messaging Manager
Replicating Data
▶︎
How to Set up Failover and Failback
Configure Streams Replication Manager for Failover and Failback
Migrating Consumer Groups Between Clusters
▶︎
Securing Streams Replication Manager
Security overview
Enabling TLS/SSL for the Streams Replication Manager service
Enabling Kerberos for the Streams Replication Manager service
Configuring custom Kerberos principal for Streams Replication Manager
▶︎
Configuring Basic Authentication for the Streams Replication Manager service
Enabling Basic Authentication for the Streams Replication Manager service
Configuring Basic Authentication for Remote Querying
Streams Replication Manager security example
▶︎
Integrating with Schema Registry
▶︎
Integrating Schema Registry with NiFi
NiFi record-based Processors and Controller Services
Configuring Schema Registry instance in NiFi
Setting schema access strategy in NiFi
Adding and configuring record-enabled Processors
Integrating Schema Registry with Kafka
Integrating Schema Registry with Flink and Cloudera SQL Stream Builder
Integrating Schema Registry with Atlas
Improving performance in Schema Registry
▶︎
Using Schema Registry
Adding a new schema
Querying a schema
Evolving a schema
Deleting a schema
Importing Confluent Schema Registry schemas into Schema Registry
▶︎
Exporting and importing schemas
Exporting schemas using Schema Registry API
Importing schemas using Schema Registry API
▶︎
ID ranges in Schema Registry
Setting a Schema Registry ID range
▶︎
Load balancer in front of Schema Registry instances
Configurations required to use load balancer with Kerberos enabled
Configurations required to use load balancer with SSL enabled
▶︎
Securing Schema Registry
▶︎
TLS encryption for Schema Registry
TLS certificate requirements and recommendations
Configuring TLS encryption manually for Schema Registry
Schema Registry TLS properties
Configuring mutual TLS for Schema Registry
▶︎
Schema Registry authorization through Ranger access policies
Predefined access policies for Schema Registry
Adding the user or group to a predefined access policy
Creating a custom access policy
▶︎
Schema Registry authentication through OAuth2 JWT tokens
JWT algorithms
Public key and secret storage
Authentication using OAuth2 with Kerberos
Schema Registry server configuration
Configuring the Schema Registry client
Configuring custom Kerberos principal for Schema Registry
▶︎
Troubleshooting
▶︎
Troubleshooting Security Issues
Troubleshooting Security Issues
Error Messages and Various Failures
Ranger RMS field issues - HDFS latency
Authentication and Kerberos Issues
HDFS Encryption Issues
TLS/SSL Issues
▶︎
YARN, MRv1, and Linux OS Security
TaskController Error Codes (MRv1)
ContainerExecutor Error Codes (YARN)
▶︎
Troubleshooting Apache Atlas
Atlas index repair configuration
▶︎
Troubleshooting Apache HDFS
▶︎
Handling Rollback issues with ZDU
▶︎
Step 1: Recover files appended during ZDU
Solution
▶︎
Step 2: Recover previous files Hsync'ed during ZDU
Solution
▶︎
Step 3: Recover open files in corrupt state
Solution
Summary
▶︎
Troubleshooting Apache Hive
HeapDumpPath (/tmp) in Hive data nodes gets full due to .hprof files
Query fails with "Counters limit exceeded" error message
Managing high partition workloads
HiveServer is unresponsive due to large queries running in parallel
Whitelisting Configurations at the Session Level
▶︎
Troubleshooting Apache Impala
Troubleshooting common issues in Impala
Using Breakpad Minidumps for Crash Reporting
▶︎
Troubleshooting Apache Hadoop YARN
Troubleshooting Docker on YARN
Troubleshooting on YARN
▶︎
YARN Queue Manager UI behavior in mixed resource allocation mode
Troubleshooting for mixed resource allocation mode in YARN Queue Manager
Troubleshooting Linux Container Executor
▶︎
Troubleshooting Apache HBase
Troubleshooting HBase
▶︎
Using the HBCK2 tool to remediate HBase clusters
Running the HBCK2 tool
Finding issues
Fixing issues
HBCK2 tool command reference
Thrift Server crashes after receiving invalid data
HBase is using more disk space than expected
Troubleshoot RegionServer grouping
▶︎
Troubleshooting Apache Kudu
▶︎
Issues starting or restarting the master or the tablet server
Errors during hole punching test
Already present: FS layout already exists
Troubleshooting NTP stability problems
Disk space usage issue
▶︎
Performance issues
▶︎
Kudu tracing
Accessing the tracing web interface
RPC timeout traces
Kernel stack watchdog traces
Memory limits
Block cache size
Heap sampling
Slow name resolution and nscd
▶︎
Usability issues
ClassNotFoundException: com.cloudera.kudu.hive.KuduStorageHandler
Runtime error: Could not create thread: Resource temporarily unavailable (error 11)
Tombstoned or STOPPED tablet replicas
Corruption: checksum error on CFile block
Symbolizing stack traces
▶︎
Recover from a dead Kudu master
Prepare for the recovery
Perform the recovery
▶︎
Troubleshooting Operational Database powered by Apache Accumulo
Under-replicated block exceptions or cluster failure occurs on small clusters
▶︎
HDFS storage demands due to retained HDFS trash
Change the HDFS trash settings in Cloudera Manager
Disable Operational Database's use of HDFS trash
▶︎
Troubleshooting Cloudera Search
Identifying problems
Troubleshooting
▶︎
Troubleshooting Hue
The Hue load balancer not distributing users evenly across various Hue servers
Unable to authenticate users in Hue using SAML
Cleaning up old data to improve performance
Unable to connect to database with provided credential
Activating Hive query editor on Hue UI
Completed Hue query shows executing on Cloudera Manager
Finding the list of Hue superusers
Knox Gateway UI: incorrect username or password
HTTP 403 error while accessing Hue
'Type' error while accessing Hue from Knox Gateway
Unable to access Hue from Knox Gateway UI
Referer checking failed
Unable to view Snappy-compressed files
"Unknown Attribute Name" exception
Invalid query handle
Load balancing between Hue and Impala
Services backed by PostgreSQL fail or stop responding
Error validating LDAP user in Hue
502 Proxy Error while accessing Hue from the Load Balancer
Invalid method name: 'GetLog' error
Authorization Exception error
Cannot alter compressed tables in Hue
Connection failed error when accessing the Search app (Solr) from Hue
Downloading query results from Hue takes time
Hue Load Balancer does not start
Unable to terminate Hive queries from Job Browser
Unable to view or create Oozie workflows
MySQL: 1040, 'Too many connections' exception
Unable to connect Oracle database to Hue using SCAN
Increasing the maximum number of processes for Oracle database
Fixing authentication issues between HBase and Hue
Lengthy BalancerMember Route length
Enabling access to HBase browser from Hue
Unable to use pip command in Cloudera
Hue load balancer does not start after enabling TLS
Unable to log into Hue with Knox
LDAP search fails with invalid credentials error
Unable to execute queries due to atomic block
Hue service does not start after a fresh installation or upgrade
Query Process fails to start intermittently due to access issues in Java 9 and later
Unable to access Hue after upgrading
Unable to run the freeze command
Disabling the web metric collection for Hue
Resolving "The user authorized on the connection does not match the session username" error
Requirements for compressing and extracting files using Hue File Browser
Fixing a warning related to accessing non-optimized Hue
Fixing incorrect start time and duration on Hue Job Browser
▶︎
Troubleshooting Apache Spark
Spark jobs failing with memory issues
▶︎
Troubleshooting Apache Sqoop
Unable to read Sqoop metastore created by an older HSQLDB version
Merge process stops during Sqoop incremental imports
Sqoop Hive import stops when HS2 does not use Kerberos authentication
▶︎
Reference
▶︎
Apache Hadoop YARN Reference
▶︎
Tuning Apache Hadoop YARN
YARN tuning overview
Step 1: Worker host configuration
Step 2: Worker host planning
Step 3: Cluster size
Steps 4 and 5: Verify settings
Step 6: Verify container settings on cluster
Step 6A: Cluster container capacity
Step 6B: Container parameters checking
Step 7: MapReduce configuration
Step 7A: MapReduce settings checking
Set properties in Cloudera Manager
Configure memory settings
YARN Configuration Properties
Use the YARN REST APIs to manage applications
▶︎
Comparison of Fair Scheduler with Capacity Scheduler
Why one scheduler?
Scheduler performance improvements
Feature comparison
Migration from Fair Scheduler to Capacity Scheduler
▶︎
Configuring and using Queue Manager REST API
Limitations
Using the REST API
Prerequisites
Start Queue
Stop Queue
Add Queue
Change Queue Capacities
Change Queue Properties
Delete Queue
▶︎
Data Access
▶︎
Apache Hive Materialized View Commands
ALTER MATERIALIZED VIEW REBUILD
ALTER MATERIALIZED VIEW REWRITE
CREATE MATERIALIZED VIEW
DESCRIBE EXTENDED and DESCRIBE FORMATTED
DROP MATERIALIZED VIEW
SHOW MATERIALIZED VIEWS
▶︎
Apache Hive Reference
▶︎
Apache Impala Reference
▶︎
Data Durability Considerations
Erasure Coding Overview
Verify the Related Query Option
Enable EC Replication
Verify the EC Policies
Understand Query Performance
Monitor EC Metrics
▶︎
Performance Considerations
Recommended configurations
Performance Best Practices
Query Join Performance
▶︎
Table and Column Statistics
Generating Table and Column Statistics
Runtime Filtering
Min/Max Filtering
Bloom Filtering
Late Materialization of Columns
▶︎
Partitioning
Partition Pruning for Queries
HDFS Caching
HDFS Block Skew
Understanding Performance using EXPLAIN Plan
Understanding Performance using SUMMARY Report
Understanding Performance using Query Profile
Planner changes for CPU usage
▶︎
Scalability Considerations
Scaling Limits and Guidelines
Dedicated Coordinator
▶︎
Hadoop File Formats Support
Using Text Data Files
Using Parquet Data Files
Using ORC Data Files
Using Avro Data Files
Using RCFile Data Files
Using SequenceFile Data Files
▶︎
Storage Systems Support
▶︎
Impala with HDFS
Configure Impala Daemon to spill to HDFS
▶︎
Impala with Kudu
Configuring for Kudu Tables
▶︎
Impala DDL for Kudu
Partitioning for Kudu Tables
Creating External Table
Impala DML for Kudu Tables
Impala with HBase
Impala with Azure Data Lake Store (ADLS)
▶︎
Impala with Amazon S3
Specifying Impala Credentials to Access S3
▶︎
Impala with Ozone
Configure Impala Daemon to spill to Ozone
Ports Used by Impala
Migration Guide
Setting up Data Cache for Remote Reads
▶︎
Managing Metadata in Impala
On-demand Metadata
Automatic Invalidation of Metadata Cache
Automatic Invalidation/Refresh of Metadata
Transactions
▶︎
Apache Impala SQL Reference
Apache Impala SQL Overview
▶︎
Schema objects
Impala aliases
Databases
Functions
Identifiers
Impala tables
Views
▶︎
Data types
ARRAY complex type
BIGINT data type
BINARY data type
BOOLEAN data type
CHAR data type
DATE data type
DECIMAL data type
DOUBLE data type
FLOAT data type
INT data type
MAP complex type
REAL data type
SMALLINT data type
STRING data type
STRUCT complex type
▶︎
TIMESTAMP data type
Customizing time zones
TINYINT data type
VARCHAR data type
▶︎
Complex types
Querying arrays
Zipping unnest on arrays from views
Literals
Operators
Comments
▶︎
SQL statements
ROLE statements in Impala integrated with Ranger
DDL statements
DML statements
ALTER DATABASE statement
ALTER TABLE statement
ALTER VIEW statement
COMMENT statement
COMPUTE STATS statement
CREATE DATABASE statement
CREATE FUNCTION statement
CREATE ROLE statement
CREATE TABLE statement
CREATE VIEW statement
DELETE statement
DESCRIBE statement
DROP DATABASE statement
DROP FUNCTION statement
DROP ROLE statement
DROP STATS statement
DROP TABLE statement
DROP VIEW statement
EXPLAIN statement
GRANT statement
GRANT ROLE statement
INSERT statement
INVALIDATE METADATA statement
LOAD DATA statement
REFRESH statement
REFRESH AUTHORIZATION statement
REFRESH FUNCTIONS statement
REVOKE statement
REVOKE ROLE statement
▶︎
SELECT statement
Joins in Impala SELECT statements
ORDER BY clause
GROUP BY clause
HAVING clause
LIMIT clause
OFFSET clause
UNION clause
UNION, INTERSECT, and EXCEPT clauses
Subqueries in Impala SELECT statements
TABLESAMPLE clause
WITH clause
DISTINCT operator
SET statement
SHOW statement
SHOW ROLES statement
SHOW CURRENT ROLES statement
SHOW ROLE GRANT GROUP statement
SHUTDOWN statement
TRUNCATE TABLE statement
UPDATE statement
UPSERT statement
USE statement
VALUES statement
Optimizer hints
Query options
Virtual column
▶︎
Built-in functions
Mathematical functions
Bit functions
Conversion functions
Date and time functions
Conditional functions
Impala string functions
Miscellaneous functions
▶︎
Aggregate functions
APPX_MEDIAN function
AVG function
COUNT function
GROUPING() and GROUPING_ID() functions
GROUP_CONCAT function
MAX function
MIN function
NDV function
STDDEV, STDDEV_SAMP, STDDEV_POP functions
SUM function
VARIANCE, VARIANCE_SAMP, VARIANCE_POP, VAR_SAMP, VAR_POP functions
▶︎
Analytic functions
OVER
WINDOW
AVG
COUNT
CUME_DIST
DENSE_RANK
FIRST_VALUE
LAG
LAST_VALUE
LEAD
MAX
MIN
NTILE
PERCENT_RANK
RANK
ROW_NUMBER
SUM
Impala hash functions
▶︎
Creating an Impala user-defined function
UDF concepts
Runtime environment for UDFs
Installing the UDF development package
Writing UDFs
Writing user-defined aggregate functions (UDAFs)
Building and deploying UDFs
Performance considerations for UDFs
Examples of creating and using UDFs
Security considerations for UDFs
Limitations and restrictions for Impala UDFs
Transactions
Multi-row transactions
Impala reserved words
Impala SQL and Hive SQL
SQL migration to Impala
UTF-8 Support
▶︎
Cloudera Search solrctl Reference
solrctl Reference
Using solrctl with an HTTP proxy
▶︎
Cloudera Search Morphlines Reference
Implementing your own Custom Command
Morphline commands overview
kite-morphlines-core-stdio
kite-morphlines-core-stdlib
kite-morphlines-avro
kite-morphlines-json
kite-morphlines-hadoop-core
kite-morphlines-hadoop-parquet-avro
kite-morphlines-hadoop-rcfile
kite-morphlines-hadoop-sequencefile
kite-morphlines-maxmind
kite-morphlines-metrics-servlets
kite-morphlines-protobuf
kite-morphlines-tika-core
kite-morphlines-tika-decompress
kite-morphlines-saxon
kite-morphlines-solr-core
kite-morphlines-solr-cell
kite-morphlines-useragent
▶︎
Operational Database
▶︎
Apache Phoenix Frequently Asked Questions
Frequently asked questions
▶︎
Apache Phoenix Performance Tuning
Performance tuning
▶︎
Apache Phoenix Command Reference
Apache Phoenix SQL command reference
▶︎
Operational Database powered by Apache Accumulo Reference
Default ports of Operational Database
▶︎
Apache Atlas Reference
Apache Atlas Advanced Search language reference
Apache Atlas Statistics reference
Apache Atlas metadata attributes
▶︎
Dynamic handling of failure in updating index
Configurations used for index recovery
Defining Apache Atlas enumerations
▶︎
Purging deleted entities
Auditing purged entities
PUT /admin/purge/ API
POST /admin/audits/ API
▶︎
Apache Atlas technical metadata migration reference
System metadata migration
HDFS entity metadata migration
Hive entity metadata migration
Impala entity metadata migration
Spark entity metadata migration
AWS S3 entity metadata migration
▶︎
NiFi metadata collection
How Lineage strategy works
Understanding the data that flow into Atlas
NiFi lineage
Atlas NiFi relationships
Atlas NiFi audit entries
How the reporting task runs in a NiFi cluster
Analysing events
Limitations of Atlas-NiFi integration
▶︎
HiveServer metadata collection
HiveServer actions that produce Atlas entities
HiveServer entities created in Atlas
HiveServer relationships
HiveServer lineage
HiveServer audit entries
▶︎
HBase metadata collection
HBase actions that produce Atlas entities
HBase entities created in Atlas
Changing the column family compression type
HBase lineage
HBase audit entries
▶︎
Schema Registry metadata collection
Configuring Atlas and Schema Registry
Schema Registry actions that produce Atlas entities
Schema relationships
Schema Registry audit entries
Troubleshooting Schema Registry
▶︎
Impala metadata collection
Impala actions that produce Atlas entities
Impala entities created in Atlas
Impala lineage
Impala audit entries
▶︎
Kafka metadata collection
Kafka actions that produce Atlas entities
Kafka relationships
Kafka lineage
Kafka audit entries
▶︎
Spark metadata collection
Spark actions that produce Atlas entities
Spark entities created in Apache Atlas
Spark lineage
Spark relationships
Spark audit entries
Spark Connector configuration in Apache Atlas
Spark troubleshooting
▶︎
Streams Messaging
▶︎
Schema Registry Reference
SchemaRegistryClient properties reference
KafkaAvroSerializer properties reference
KafkaAvroDeserializer properties reference
▶︎
Streams Replication Manager Reference
srm-control Options Reference
Configuration Properties Reference for Properties not Available in Cloudera Manager
Kafka credentials property reference
Streams Replication Manager Service data traffic reference
Cruise Control REST API Reference
Kafka Connect REST API Reference
Schema Registry REST API Reference
Streams Messaging Manager REST API Reference
Streams Replication Manager REST API Reference
Ranger REST API Reference
▶︎
Security
▶︎
Authorization
Migrating from Sentry to Ranger
Consolidating policies created by Authzmigrator
Customizing authorization-migration-site.xml
Check MySQL isolation configuration
Ranger audit schema reference
Ranger database schema reference
Ranger policies allowing create privilege for Hadoop_SQL databases
Ranger policies allowing create privilege for Hadoop_SQL tables
Access required to Read/Write on Hadoop_SQL tables using SQL
Mapping Sentry permissions for Solr to Ranger policies
▶︎
Encryption
Auto-TLS Requirements and Limitations
Rotate Auto-TLS Certificate Authority and Host Certificates
Auto-TLS Agent File Locations
"Unknown Attribute Name" exception
'Type' error while accessing Hue from Knox Gateway
(Optional) Enable high availability for Cloudera Manager
(Recommended) Enable Auto-TLS
(Recommended) Enable Kerberos
.NET client
502 Proxy Error while accessing Hue from the Load Balancer
A List of S3A Configuration Properties
A records and round robin DNS
Ability to download search results from Atlas UI
Aborting a Pending Command
About Apache Ozone integration with Apache Atlas
About Atlas High Availability
About HBase snapshots
About Hue Query Processor
About Ranger RMS for Ozone
About the Hue SQL AI Assistant
About the Off-heap BucketCache
About using Hue
Access HDFS from the NFS Gateway
Access Ozone S3 Gateway using the S3A filesystem
Access required to Read/Write on Hadoop_SQL tables using SQL
Access the Recon web user interface
Accessing and using Hue
Accessing Apache HBase
Accessing Atlas logs
Accessing Avro data files from Spark SQL applications
Accessing Azure Storage account container from spark-shell
Accessing Cloud Data
Accessing compressed files in Spark
Accessing data stored in Amazon S3 through Spark
Accessing external storage from Spark
Accessing Files Within an Encryption Zone
Accessing HBase metrics in Prometheus format
Accessing HDFS Files from Spark
Accessing Hive files in Ozone
Accessing Hive files in Ozone
Accessing Hive from Spark
Accessing Iceberg files in Ozone
Accessing Iceberg tables
Accessing Iceberg tables
Accessing ORC Data in Hive Tables
Accessing ORC files from Spark
Accessing Ozone object store with Amazon Boto3 client
Accessing Ozone Recon Web UI
Accessing Ozone S3 using S3A FileSystem
Accessing Parquet files from Spark SQL applications
Accessing Recon Web UI
Accessing Spark SQL through the Spark shell
Accessing the Oozie server with a browser
Accessing the Oozie server with the Oozie Client
Accessing the Ranger Admin Web UI
Accessing the Ranger KMS Web UI
Accessing the Spark History Server
Accessing the tracing web interface
Accessing the YARN Queue Manager UI
Accessing the YARN Web User Interface
Achieving cross-cluster availability through Hive Load Balancer failover
ACID Operation
ACID operations
ACL examples
ACLS on HDFS features
Activate read replicas on a table
Activating container balancer using Cloudera Manager
Activating Hive query editor on Hue UI
Activating the Hive web UI
Active Directory Settings
Add a custom coprocessor
Add a custom descriptor to Apache Knox
Add a new provider in an existing provider configuration
Add a new shared provider configuration
Add a ZooKeeper service
Add Accumulo on Cloudera service
Add Accumulo on Cloudera service
Add Accumulo on Cloudera service
Add custom service parameter to descriptor
Add custom service to existing descriptor in Apache Knox Proxy
Add HDFS system mount
Add Queue
Add secure Accumulo on Cloudera service to your cluster
Add secure Accumulo on Cloudera service to your cluster
Add secure Accumulo on Cloudera service to your cluster
Add storage directories using Cloudera Manager
Add Streams Replication Manager to an existing cluster
Add the HttpFS role
Add unsecure Accumulo on Cloudera service to your cluster
Add unsecure Accumulo on Cloudera service to your cluster
Add unsecure Accumulo on Cloudera service to your cluster
Add-on Services
Adding a custom banner in Hue
Adding a group
Adding a HiveServer role
Adding a HiveServer role
Adding a Hue role instance with Cloudera Manager
Adding a Hue service with Cloudera Manager
Adding a load balancer
Adding a new schema
Adding a policy condition to a resource-based policy
Adding a policy condition to a tag-based policy
Adding a policy label to a resource-based policy
Adding a Ranger security zone
Adding a role through Hive
Adding a role through Ranger
Adding a Service
Adding a splash screen in Hue
Adding a tag-based PII policy
Adding a tag-based service
Adding a user
Adding Accumulo service (secure)
Adding Accumulo service (secure)
Adding Accumulo service (unsecure)
Adding Accumulo service (unsecure)
Adding and configuring record-enabled Processors
Adding and Removing Range Partitions
Adding attributes to Business Metadata
Adding attributes to classifications
Adding clusters to Streams Replication Manager's configuration
Adding Cruise Control as a service
Adding default service users and roles for Ranger
Adding file system credentials to an Oozie workflow
Adding Files to an Encryption Zone
Adding multiple namenodes using the HDFS service
Adding new Ozone Manager node
Adding or editing module permissions
Adding Query Processor admin users and groups
Adding Query Processor service to a cluster
Adding queues using YARN Queue Manager UI
Adding schema to Oozie using Cloudera Manager
Adding self-healing goals to Cruise Control in Cloudera Manager
Adding tag-based policies
Adding the HBase or Hadoop configuration files
Adding the Lily HBase indexer service
Adding the Oozie service using Cloudera Manager
Adding the Phoenix JDBC driver jar
Adding the user or group to a predefined access policy
Adding trusted realms to the cluster
Additional Configuration Options for GCS
Additional considerations when configuring TLS/SSL for Oozie HA
Additional HDFS haadmin commands to administer the cluster
Additional quota considerations
Additional Security Topics
Additional Steps for Apache Ranger
Adjust the Solr replication factor for index files stored in HDFS
Adjusting Heartbeat TCP Timeout Interval
ADLS Proxy Setup
ADLS Sink
ADLS Trash Folder Behavior
Admin ACLs
Administering Hue
Administering Ranger Reports
Administering Ranger Users, Groups, Roles, and Permissions
Administrative commands
Administrative tools for Hive Metastore integration
Admission Control and Query Queuing
Admission Control Sample Scenario
Advanced Committer Configuration
Advanced configuration for write-heavy workloads
Advanced erasure coding configuration
Advanced ORC properties
Advanced partitioning
Advanced settings: Overriding default configurations
Advanced topics
Advanced topics
Advantages of defining a schema for production use
Advantages of Separating Compute and Data Resources
After Evaluating Trial Software
After Evaluating Trial Software
After You Install
Aggregate functions
Aggregating and grouping data
Aging patterns
Allocating DataNode memory as storage
Already present: FS layout already exists
Alter a table
ALTER DATABASE statement
ALTER MATERIALIZED VIEW REBUILD
ALTER MATERIALIZED VIEW REWRITE
Alter table feature
ALTER TABLE statement
ALTER VIEW statement
Amazon S3 Security
Amazon S3 Sink
Analysing events
Analytic functions
Apache Atlas Advanced Search language reference
Apache Atlas dashboard tour
Apache Atlas metadata attributes
Apache Atlas metadata collection overview
Apache Atlas Reference
Apache Atlas Statistics reference
Apache Atlas technical metadata migration reference
Apache Hadoop HDFS Overview
Apache Hadoop YARN Overview
Apache Hadoop YARN Reference
Apache HBase Overview
Apache HBase overview
Apache Hive 3 ACID transactions
Apache Hive 3 architectural overview
Apache Hive 3 tables
Apache Hive content roadmap
Apache Hive features
Apache Hive Materialized View Commands
Apache Hive Metastore Overview
Apache Hive Overview
Apache Hive Performance Tuning
Apache Hive query basics
Apache Hive Reference
Apache Hive-Kafka integration
Apache Iceberg features
Apache Iceberg Overview
Apache Impala Overview
Apache Impala Overview
Apache Impala Reference
Apache Impala SQL Overview
Apache Impala SQL Reference
Apache Kafka Overview
Apache Knox Authentication
Apache Knox Gateway overview
Apache Knox install role parameters
Apache Knox Install Role Parameters
Apache Knox overview
Apache Kudu Background Operations
Apache Kudu Overview
Apache Kudu usage limitations
Apache Ozone Overview
Apache Phoenix and SQL
Apache Phoenix Command Reference
Apache Phoenix Frequently Asked Questions
Apache Phoenix Performance Tuning
Apache Phoenix SQL command reference
Apache Phoenix-Hive usage examples
Apache Ranger Access Control and Auditing
Apache Ranger APIs
Apache Ranger Auditing
Apache Ranger Authorization
Apache Ranger User Management
Apache Spark 3 integration with Schema Registry
Apache Spark 3.4 Requirements
Apache Spark executor task statistics
Apache Spark integration with Schema Registry
Apache Spark Overview
Apache Spark Overview
Apache Tez and Hive LLAP
Apache Zeppelin (unsupported)
APIs for accessing HDFS
Application Access
Application Access
Application ACL evaluation
Application ACLs
Application logs' ACLs
Application not running message
Application reservations
Applications and permissions reference
Apply custom transformation to a column
APPX_MEDIAN function
Architecture
ARRAY complex type
Assembling a secure JDBC URL for Oozie
Assign Roles
Assigning administrator privileges to users
Assigning or unassigning a node to a partition
Assigning superuser status to an LDAP user
Assigning terms to categories
Associate a table in a non-customized environment without Kerberos
Associate table in a customized Kerberos environment
Associating Business Metadata attributes with entities
Associating classifications with entities
Associating partitions with queues
Associating tables of a schema to a namespace
Associating terms with entities
Atlas
Atlas
Atlas
Atlas
Atlas
Atlas classifications drive Ranger policies
Atlas Export and Import operations
Atlas Hook for Sqoop
Atlas Hook for Sqoop
Atlas index repair configuration
Atlas index repair configuration
Atlas metadata model overview
Atlas NiFi audit entries
Atlas NiFi relationships
Atlas Server Operations
Atlas Type Definitions
Audit aging reference configurations
Audit aging using REST API
Audit enhancements
Audit Operations
Audit Overview
Auditing Atlas Entities
Auditing purged entities
Authenticating with ADLS Gen2
Authentication
Authentication
Authentication
Authentication
Authentication and Kerberos Issues
Authentication using Kerberos
Authentication using Knox SSO
Authentication using LDAP
Authentication using OAuth2 with Kerberos
Authentication using PAM
Authentication using SAML
Authorization
Authorization
Authorization
Authorization
Authorization Exception error
Authorization model
Authorizing external tables
Auto-TLS Agent File Locations
Auto-TLS Requirements and Limitations
Automatic group offset synchronization
Automatic Invalidation of Metadata Cache
Automatic Invalidation of Metadata Cache
Automatic Invalidation/Refresh of Metadata
Automatic Invalidation/Refresh of Metadata
Automating partition discovery and repair
Automating Spark Jobs with Oozie Spark Action
Autoscaling behavior
Autoscaling clusters
AVG
AVG function
Avro
Avro
AWS S3 entity metadata migration
Back up HDFS metadata
Back up HDFS metadata using Cloudera Manager
Back up tables
Backing up a collection from HDFS
Backing up a collection from local file system
Backing up and Recovering Apache Kudu
Backing up and restoring data
Backing Up Encryption Keys
Backing up HDFS metadata
Backing up NameNode metadata
Backing up Ozone
Backing up the Hue database
Backup directory structure
Backup tools
Balancer commands
Balancing data across an HDFS cluster
Balancing data across disks of a DataNode
Basic partitioning
Basic search enhancement
Basics
Batch indexing into offline Solr shards
Batch indexing into online Solr servers using GoLive
Batch indexing to Solr using SparkApp framework
Batch indexing using Morphlines
Before You Begin a Trial Installation
Before You Install
Before You Install
Before You Install
Behavioral Changes In Cloudera Runtime 7.3.1
Benefits and Capabilities
Benefits of centralized cache management in HDFS
Benefits of Open Data Lakehouse
Best practices for building Apache Spark applications
Best practices for Iceberg in Cloudera
Best practices for performance tuning
Best practices for rack and node setup for EC
Best practices when adding new tablet servers
Best practices when using RegionServer grouping
Bidirectional replication example of two active clusters
BIGINT data type
BINARY data type
Bit functions
Block cache size
Block move execution
Block move scheduling
Bloom Filtering
BOOLEAN data type
Bring a tablet that has lost a majority of replicas back online
Broker garbage collection log configuration
Broker log management
Broker migration
Broker Tuning
Brokers
Bucket Layout
BucketCache IO engine
Bucketed tables in Hive
Building and deploying UDFs
Building and running a Spark Streaming application
Building Cloudera Manager charts with Kafka metrics
Building reusable modules in Apache Spark applications
Building Spark Applications
Building the project and uploading the JAR
Built-in functions
Bulk and migration import of Hive metadata
Bulk Write Access
Business Metadata overview
Bypass the BlockCache
Cache eviction priorities
Caching manifest files
Caching terminology
Calculating Infra Solr resource needs
Calling Hive user-defined functions (UDFs)
Calling the UDF in a query
Canary test for pyspark command
Cancelling a Query
Cannot alter compressed tables in Hue
Case for implementing backward compatibility
Catalog operations
Centralized cache management architecture
Certmanager Options - Using Cloudera Manager's GenerateCMCA API
Change master hostnames
Change Queue Capacities
Change Queue Properties
Change root user password
Change the HDFS trash settings in Cloudera Manager
Changed Behavior after HDFS Encryption is Enabled
Changing /tmp directory for CLI tools
Changing /tmp directory for Ozone services
Changing a nameservice name for Highly Available HDFS using
Changing directory configuration
Changing Embedded PostgreSQL Database Passwords
Changing Ranger audit storage location and migrating data
Changing resource allocation mode
Changing temporary path for Ozone services and CLI tools
Changing the Anomaly Notifier Class value to self-healing
Changing the column family compression type
Changing the Hive warehouse location
Changing the page logo in Hue
Changing the table metadata location
Channel encryption
Channel encryption
CHAR data type
CHAR data type support
Charting spool alert metrics
Check Cluster Security Settings
Check Job History
Check Job Status
Check MySQL isolation configuration
Check trace table
Checking Host Heartbeats
Checking query execution
Choose the right import method
Choosing an appropriate value for the threshold
Choosing Data Formats
Choosing manual TLS or Auto-TLS
Choosing the number of partitions for a topic
Choosing the Sufficient Security Level for Your Environment
Choosing Transformations to Minimize Shuffles
ClassNotFoundException: com.cloudera.kudu.hive.KuduStorageHandler
Cleaning up after failed jobs
Cleaning up old data to improve performance
Cleaning up old queries
CLI commands to perform snapshot operations
CLI tool support
Client and broker compatibility across Kafka versions
Client authentication to secure Kudu clusters
Client authentication using delegation tokens
Client connections to HiveServer
Client examples
Client examples
client.dns.lookup property options for client
Closing HiveWarehouseSession operations
Cloud Connectors
Cloud Connectors
Cloud Connectors
Cloud storage connectors overview
Cloudera Authorization
Cloudera Base on Premises
Cloudera Base on premises Installation Guide
Cloudera Base on premises Trial Download Information
Cloudera Data Warehouse HPL/SQL stored procedures
Cloudera Manager sudo command options
Cloudera Manager Support Matrix
Cloudera Manager user accounts
Cloudera Runtime
Cloudera Runtime 7.3.1.0-197
Cloudera Runtime Cluster Hosts and Role Assignments
Cloudera Runtime Component Versions
Cloudera Runtime Download Information
Cloudera Runtime Release Notes
Cloudera Runtime Version Information
Cloudera Search and Cloudera
Cloudera Search architecture
Cloudera Search authentication
Cloudera Search config templates
Cloudera Search configuration files
Cloudera Search ETL
Cloudera Search log files
Cloudera Search Morphlines Reference
Cloudera Search Overview
Cloudera Search security aspects
Cloudera Search solrctl Reference
Cloudera Search tasks and processes
Cloudera Security Overview
Cluster and hardware configuration in snapshot deployment
Cluster balancing algorithm
Cluster discovery using DNS records
Cluster discovery using load balancers
Cluster discovery with multiple Apache Kafka clusters
Cluster management limitations
Cluster management limitations
Cluster sizing
CNAME records configuration
Collecting metrics through HTTP
Column compression
Column design
Column encoding
Command Details
Command Line Tools
Commands for configuring storage policies
Commands for managing buckets
Commands for managing buckets
Commands for managing keys
Commands for managing volumes
Commands for managing volumes
Commands for managing volumes and buckets
Commands for using cache pools and directives
COMMENT statement
Comments
Committing a transaction for Direct Reader
Common replication topologies
Common web interface pages
Communication encryption
Compacting on-disk data
Compaction observability in Cloudera Manager
Compaction of Data in FULL ACID Transactional Table
Compaction tasks
Compactor properties
Comparing configurations for a service between clusters
Comparing Hive and Impala queries in Hue
Comparing replication and erasure coding
Comparing tables using ANY/SOME/ALL
Comparison of Fair Scheduler with Capacity Scheduler
Compatibility Considerations for Virtual Private Clusters
Compatibility policies
Completed Hue query shows as executing on Cloudera Manager
Complex types
Component types and metrics for alert policies
Components of cache-aware load balancer
Components of Impala
Components of stochastic load balancer
Compound operators
COMPUTE STATS statement
Concepts Used in FULL ACID v2 Tables
Concurrent session verification (Tech Preview)
Conditional functions
Configuration details
Configuration details
Configuration details
Configuration examples
Configuration for enabling mTLS in Ozone
Configuration options for Impala to work with Ozone File System
Configuration options for Oozie to work with Ozone storage
Configuration options for Spark to work with Ozone File System (ofs)
Configuration options to store Hive managed tables on Ozone
Configuration properties
Configuration Properties Reference for Properties not Available in Cloudera Manager
Configuration to create bucket with default layout
Configuration to expose buckets under non-default volumes
Configurations and CLI options for the HDFS Balancer
Configurations for submitting a Hive query to a dedicated queue
Configurations for throttling of tasks
Configurations required to use load balancer with Kerberos enabled
Configurations required to use load balancer with SSL enabled
Configurations used for index recovery
Configure a resource-based policy: Atlas
Configure a resource-based policy: HadoopSQL
Configure a resource-based policy: HBase
Configure a resource-based policy: HDFS
Configure a resource-based policy: Kafka
Configure a resource-based policy: Knox
Configure a resource-based policy: NiFi
Configure a resource-based policy: NiFi Registry
Configure a resource-based policy: S3
Configure a resource-based policy: Solr
Configure a resource-based policy: YARN
Configure a resource-based service: Atlas
Configure a resource-based service: HadoopSQL
Configure a resource-based service: HBase
Configure a resource-based service: HDFS
Configure a resource-based service: Kafka
Configure a resource-based service: Knox
Configure a resource-based service: NiFi
Configure a resource-based service: NiFi Registry
Configure a resource-based service: Solr
Configure a resource-based service: YARN
Configure a resource-based storage handler policy: HadoopSQL
Configure a Spark job for dynamic resource allocation
Configure Access to GCS from Your Cluster
Configure Antivirus Software on Cloudera Hosts
Configure Apache Knox authentication for AD/LDAP
Configure Apache Knox authentication for PAM
Configure Apache Knox Authentication for SAML
Configure archival storage
Configure Atlas authentication for AD
Configure Atlas authentication for LDAP
Configure Atlas file-based authentication
Configure Atlas PAM authentication
Configure Authentication for Amazon S3
Configure authentication using Active Directory
Configure authentication using an external program
Configure authentication using an LDAP-compliant identity service
Configure authentication using Kerberos (SPNEGO)
Configure authentication using SAML
Configure AWS Credentials
Configure Browser-based Interfaces to Require Authentication (SPNEGO)
Configure Browsers for Kerberos Authentication (SPNEGO)
Configure BucketCache IO engine
Configure bulk load replication
Configure clients on a producer or consumer level
Configure clients on an application level
Configure Cloudera Manager for FIPS
Configure Cluster to Use Kerberos Authentication
Configure columns to store MOBs
Configure CPU scheduling and isolation
Configure DataNode memory as storage
Configure DNS
Configure DNS
Configure Encryption for Amazon S3
Configure encryption in HBase
Configure four-letter-word commands in ZooKeeper
Configure FPGA scheduling and isolation
Configure GPU scheduling and isolation
Configure HBase for use with Phoenix
Configure HBase garbage collection
Configure HDFS RPC protection
Configure High Availability for Ranger KMS with DB
Configure Hive to use with HBase
Configure HMS properties for authorization
Configure Impala Daemon to spill to HDFS
Configure Impala Daemon to spill to Ozone
Configure JMX ephemeral ports
Configure Kafka brokers
Configure Kafka brokers
Configure Kafka brokers
Configure Kafka brokers
Configure Kafka brokers
Configure Kafka brokers
Configure Kafka clients
Configure Kafka clients
Configure Kafka clients
Configure Kafka clients
Configure Kafka clients
Configure Kafka clients
Configure Kafka MirrorMaker
Configure Kafka MirrorMaker
Configure Kerberos
Configure Kerberos
Configure Kerberos
Configure Kerberos authentication for Apache Atlas
Configure Kerberos authentication for Apache Ranger
Configure Kerberos authentication for Solr
Configure Kudu processes
Configure Lily HBase Indexer Service to use Kerberos authentication
Configure Lily HBase Indexer to use TLS/SSL
Configure Lily HBase Indexer to use TLS/SSL
Configure memory settings
Configure mountable HDFS
Configure Network Names
Configure Oozie client when TLS/SSL is enabled
Configure optimized rename and recursive delete operations in Ranger Ozone plugin
Configure Oracle Database
Configure Phoenix-Hive connector
Configure Phoenix-Spark connector
Configure PostgreSQL as the backend database for Hue
Configure PostgreSQL for Streaming Components
Configure queue ordering policies
Configure Ranger Admin High Availability
Configure Ranger Admin High Availability with a Load Balancer
Configure Ranger authentication for AD
Configure Ranger authentication for LDAP
Configure Ranger authentication for PAM
Configure Ranger authentication for UNIX
Configure Ranger authorization for Infra Solr
Configure Ranger with SSL/TLS enabled PostgreSQL Database
Configure read replicas using Cloudera Manager
Configure RegionServer grouping
Configure S3 credentials for working with Ozone
Configure SASL Bind in Ranger Usersync
Configure secure replication
Configure source and destination realms in krb5.conf
Configure SQL AI Assistant using Cloudera AI Workbench
Configure SQL AI Assistant using the Amazon Bedrock Service
Configure SQL AI Assistant using the Cloudera AI Inference service
Configure SQL AI Assistant using the Microsoft Azure OpenAI service
Configure SQL AI Assistant using the OpenAI platform
Configure SQL AI Assistant using vLLM
Configure storage balancing for DataNodes using Cloudera Manager
Configure Streams Replication Manager for Failover and Failback
Configure the blocksize for a column family
Configure the compaction speed using Cloudera Manager
Configure the graceful shutdown timeout property
Configure the HBase canary
Configure the HBase client TGT renewal period
Configure the HBase thrift server role
Configure the MOB cache using Cloudera Manager
Configure the off-heap BucketCache using Cloudera Manager
Configure the off-heap BucketCache using the command line
Configure the PostgreSQL server
Configure the resource-based Ranger service used for authorization
Configure the scanner heartbeat using Cloudera Manager
Configure the storage policy for WALs using Cloudera Manager
Configure the storage policy for WALs using the Command Line
Configure TLS encryption manually for Phoenix Query Server
Configure TLS encryption manually for Phoenix Query Server
Configure TLS/SSL encryption for Solr
Configure TLS/SSL encryption for Solr
Configure TLS/SSL encryption manually for Apache Ranger
Configure TLS/SSL encryption manually for Apache Ranger
Configure TLS/SSL encryption manually for Ranger KMS
Configure TLS/SSL encryption manually for Ranger KMS
Configure TLS/SSL encryption manually for Ranger RMS
Configure TLS/SSL encryption manually for Ranger RMS
Configure TLS/SSL for Oozie
Configure TLS/SSL for Oozie
Configure transaction support
Configure ulimit for HBase using Cloudera Manager
Configure ulimit using Pluggable Authentication Modules using the Command Line
Configure User Impersonation for Access to Hive
Configure User Impersonation for Access to Phoenix
Configure ZooKeeper client shell for Kerberos authentication
Configure ZooKeeper server for Kerberos authentication
Configure ZooKeeper TLS/SSL support for Kafka
Configure ZooKeeper TLS/SSL support for Kafka
Configure ZooKeeper TLS/SSL using Cloudera Manager
Configure ZooKeeper TLS/SSL using Cloudera Manager
Configuring a custom Hive CREATE TABLE statement
Configuring a custom Hive CREATE TABLE statement
Configuring a database for Ranger or Ranger KMS
Configuring a dedicated MIT KDC for cross-realm trust
Configuring a Nexus repository allow list
Configuring a PostgreSQL Database for Ranger or Ranger KMS
Configuring a Ranger audit filter policy
Configuring a Ranger or Ranger KMS Database: MySQL/MariaDB
Configuring a Ranger or Ranger KMS Database: Oracle
Configuring a Ranger or Ranger KMS Database: Oracle using /ServiceName format
Configuring a resource-based policy using Ranger
Configuring a secure Kudu cluster using Cloudera Manager
Configuring Access to Azure in Cloudera Base on premises
Configuring Access to Google Cloud Storage
Configuring access to Hive on YARN
Configuring Access to S3
Configuring Access to S3 in Cloudera Base on premises
Configuring ACLs on HDFS
Configuring Advanced Security Options for Apache Ranger
Configuring an external database for Oozie
Configuring an https endpoint in Ozone S3 Gateway to work with AWS CLI
Configuring an SMT chain
Configuring and Monitoring Atlas
Configuring and running the HDFS balancer using Cloudera Manager
Configuring and Starting the PostgreSQL Server
Configuring and tuning S3A block upload
Configuring and using Queue Manager REST API
Configuring and Using Ranger KMS
Configuring and Using Ranger RMS Hive-HDFS ACL Sync
Configuring and Using Zeppelin Interpreters
Configuring Apache Hadoop YARN High Availability
Configuring Apache Hadoop YARN Log Aggregation
Configuring Apache Hadoop YARN Security
Configuring Apache HBase
Configuring Apache HBase for Apache Phoenix
Configuring Apache HBase High Availability
Configuring Apache Hive
Configuring Apache Impala
Configuring Apache Kafka
Configuring Apache Kudu
Configuring Apache Ranger High Availability
Configuring Apache Spark
Configuring Apache Zeppelin
Configuring Apache ZooKeeper
Configuring Atlas and Schema Registry
Configuring Atlas Authentication
Configuring Atlas Authorization
Configuring Atlas Authorization using Ranger
Configuring Atlas using Cloudera Manager
Configuring audit spool alert notifications
Configuring authentication for long-running Spark Streaming jobs
Configuring Authentication in Cloudera Manager
Configuring Authentication in Cloudera Manager
Configuring authentication with LDAP and Direct Bind
Configuring authentication with LDAP and Search Bind
Configuring Authorization
Configuring auto split policy in an HBase table
Configuring automatic group offset synchronization
Configuring autoscaling
Configuring Basic Authentication for Remote Querying
Configuring Basic Authentication for the Streams Replication Manager service
Configuring block size
Configuring caching for secure access mode
Configuring Catalog
Configuring Client Access to Impala
Configuring client side JWT authentication for Kudu
Configuring Cloudera Runtime services to connect to TLS 1.2/TCPS-enabled databases
Configuring Cloudera Services for HDFS Encryption
Configuring cluster capacity with queues
Configuring coarse-grained authorization with ACLs
Configuring compaction health monitoring
Configuring compaction in Cloudera Manager
Configuring compaction using table properties
Configuring concurrent moves
Configuring connector JAAS configuration and Kerberos principal overrides
Configuring container balancer service
Configuring Cross-Origin Support for YARN UIs and REST APIs
Configuring Cruise Control
Configuring custom Beeline arguments
Configuring custom Beeline arguments
Configuring custom Hive JDBC arguments
Configuring custom Hive JDBC arguments
Configuring custom Hive table properties
Configuring custom Hive table properties
Configuring custom Kerberos principal for Apache Flink
Configuring custom Kerberos principal for Atlas
Configuring custom Kerberos principal for Cloudera SQL Stream Builder
Configuring custom Kerberos principal for Cruise Control
Configuring custom Kerberos principal for Cruise Control
Configuring custom Kerberos principal for HBase
Configuring custom Kerberos principal for HDFS
Configuring custom Kerberos principal for Hive and Hive-on-Tez
Configuring custom Kerberos principal for HttpFS
Configuring custom Kerberos principal for Hue
Configuring custom Kerberos principal for Kafka
Configuring custom Kerberos principal for Kafka
Configuring custom Kerberos principal for Knox
Configuring custom Kerberos principal for Kudu
Configuring custom Kerberos principal for Kudu
Configuring custom Kerberos principal for Livy
Configuring custom Kerberos principal for NiFi and NiFi Registry
Configuring custom Kerberos principal for Omid
Configuring custom Kerberos principal for Oozie
Configuring custom Kerberos principal for Oozie
Configuring custom Kerberos principal for Ozone
Configuring custom Kerberos principal for Ozone
Configuring custom Kerberos principal for Phoenix
Configuring custom Kerberos principal for Ranger
Configuring Custom Kerberos Principal for Ranger KMS
Configuring custom Kerberos principal for Schema Registry
Configuring custom Kerberos principal for Schema Registry
Configuring custom Kerberos principal for Spark
Configuring custom Kerberos principal for Streams Messaging Manager
Configuring custom Kerberos principal for Streams Replication Manager
Configuring custom Kerberos principal for Streams Replication Manager
Configuring custom Kerberos principal for Zeppelin
Configuring custom Kerberos principal for ZooKeeper
Configuring custom Kerberos principals for Solr
Configuring custom Kerberos principals for Solr
Configuring custom Kerberos principals for Solr
Configuring data at rest encryption
Configuring data locality
Configuring Data Protection
Configuring debug delay
Configuring Dedicated Coordinators and Executors
Configuring dedicated Impala coordinator
Configuring Delegation for Clients
Configuring Directories for Intermediate Data
Configuring dynamic resource allocation
Configuring Dynamic Resource Pool
Configuring edge node on AWS
Configuring edge node on Azure
Configuring edge node on GCP
Configuring Encryption for Specific Buckets
Configuring EOS for source connectors
Configuring external authentication and authorization for Cloudera Manager
Configuring Fault Tolerance
Configuring file and directory permissions for Hue
Configuring flow.snapshot
Configuring for HDFS high availability
Configuring for Kudu Tables
Configuring graceful shutdown property for HiveServer
Configuring group mapping in Knox
Configuring group permissions
Configuring HBase BlockCache
Configuring HBase MultiWAL
Configuring HBase persistent BucketCache
Configuring HBase servers to authenticate with a secure HDFS cluster
Configuring HBase snapshots
Configuring HBase to use HDFS HA
Configuring HBase-Spark connector when both are on same cluster
Configuring HBase-Spark connector when HBase is on remote cluster
Configuring HDFS ACLs
Configuring HDFS High Availability
Configuring HDFS plugin to view permissions through getfacl interface
Configuring HDFS properties to optimize log collection
Configuring HDFS trash
Configuring heterogeneous storage in HDFS
Configuring high availability for Hue
Configuring High Availability for Ranger RMS (Hive-HDFS ACL-Sync)
Configuring high availability for Spark History Server with an external load balancer
Configuring high availability for Spark History Server with an internal load balancer
Configuring high availability for Spark History Server with multiple Knox Gateways
Configuring high availability support for Oracle RAC database
Configuring Hive access for S3A
Configuring Hive and Impala for high availability with Hue
Configuring Hive Metastore for Iceberg column changes
Configuring Hive to connect to TLS 1.2/TCPS-enabled databases
Configuring HiveServer for ETL using YARN queues
Configuring HiveServer high availability using a load balancer
Configuring HiveServer high availability using ZooKeeper
Configuring HMS for high availability
Configuring HSTS for HBase Web UIs
Configuring HSTS for HDFS Web UIs
Configuring HSTS for Spark
Configuring HTTPS encryption
Configuring Hue as a TLS/SSL client
Configuring Hue as a TLS/SSL client
Configuring Hue as a TLS/SSL server
Configuring Hue as a TLS/SSL server
Configuring Hue to connect to TLS 1.2/TCPS-enabled databases
Configuring Hue to handle HS2 failover
Configuring Impala
Configuring Impala access for S3A
Configuring Impala for High Availability
Configuring Impala TLS/SSL
Configuring Impala TLS/SSL
Configuring Impala to work with HDFS HA
Configuring Impala Web UI
Configuring Impyla for Impala
Configuring Infra Solr
Configuring JDBC for Impala
Configuring JVM options and system properties for Ranger services
Configuring JWT Authentication
Configuring Kafka brokers
Configuring Kafka clients
Configuring Kafka command line tools in FIPS clusters
Configuring Kafka ZooKeeper chroot
Configuring Kerberos authentication
Configuring Kerberos Authentication for Impala
Configuring Kerberos Authentication for Impala
Configuring Kerberos authentication in Apache Knox shared providers
Configuring Kerberos properties
Configuring LDAP Authentication
Configuring LDAP Group Mappings
Configuring LDAP on unmanaged clusters
Configuring legacy CREATE TABLE behavior
Configuring Lily HBase Indexer Security
Configuring Livy
Configuring Load Balancer for Impala
Configuring Local Package and Parcel Repositories
Configuring log aggregation
Configuring log levels for command line tools
Configuring manifest caching in Cloudera Manager
Configuring MariaDB as the backend database for Hue
Configuring MariaDB for Oozie
Configuring MariaDB server
Configuring Maximum File Descriptors
Configuring metastore database properties
Configuring metastore location and HTTP mode
Configuring Metrics Reporter in Cruise Control
Configuring multiple listeners
Configuring multiple listeners
Configuring MultiWAL support using Cloudera Manager
Configuring mutual TLS for Schema Registry
Configuring MySQL 5 for Oozie
Configuring MySQL 8 for Oozie
Configuring MySQL as the backend database for Hue
Configuring MySQL for Streaming Components
Configuring MySQL server
Configuring nested group hierarchies
Configuring network line-of-sight
Configuring network line-of-sight
Configuring Nginx for basic authentication
Configuring node attribute for application master placement
Configuring NodeManager heartbeat
Configuring ODBC for Impala
Configuring Oozie data purge settings using Cloudera Manager
Configuring Oozie High Availability using Cloudera Manager
Configuring Oozie Sqoop1 Action workflow JDBC drivers
Configuring Oozie to connect to TLS 1.2/TCPS-enabled databases
Configuring Oozie to enable MapReduce jobs to read or write from Amazon S3
Configuring Oozie to use HDFS HA
Configuring Oozie to use HDFS HA
Configuring Oracle as backend database for Hue
Configuring Oracle for Oozie
Configuring Oracle for Streaming Components
Configuring other components to use HDFS HA
Configuring Ozone
Configuring Ozone Security
Configuring Ozone services
Configuring Ozone to work as a pure object store
Configuring Ozone to work with Prometheus
Configuring PAM authentication using Apache Knox
Configuring PAM authentication with LDAP and SSSD
Configuring PAM authentication with Linux users
Configuring partitions for transactions
Configuring per queue properties
Configuring Per-Bucket Settings
Configuring Per-Bucket Settings to Access Data Around the World
Configuring PostgreSQL for Oozie
Configuring preemption
Configuring properties for non-Kerberos authentication mechanisms
Configuring properties not exposed in Cloudera Manager
Configuring Proxy Users to Access HDFS
Configuring purge of x_auth_sess data
Configuring query audit logs to include caller context
Configuring queue mapping to use the user name from the application tag using Cloudera Manager
Configuring queue mapping to use the user name from the application tag using Cloudera Manager
Configuring quotas
Configuring Ranger audit log storage to a local file
Configuring Ranger audit properties for HDFS
Configuring Ranger audit properties for Solr
Configuring Ranger audits to show actual client IP address
Configuring Ranger Authentication with UNIX, LDAP, AD, or PAM
Configuring Ranger Authentication with UNIX, LDAP, or AD
Configuring Ranger authorization
Configuring Ranger Authorization for Atlas
Configuring Ranger KMS High Availability
Configuring Ranger KMS to connect to TLS 1.2/TCPS-enabled databases
Configuring Ranger RMS (Hive-HDFS / Hive-OZONE ACL Sync)
Configuring Ranger to connect to TLS 1.2/TCPS-enabled databases
Configuring Ranger Usersync and Tagsync High Availability
Configuring Ranger Usersync for Deleted Users and Groups
Configuring Ranger Usersync for invalid usernames
Configuring Remote Querying
Configuring replication specific REST servers
Configuring replications
Configuring resource-based policies
Configuring resource-based services
Configuring rolling restart checks
Configuring SAML authentication on managed clusters
Configuring scheduler properties at the global level
Configuring Schema Registry instance in NiFi
Configuring Schema Registry to connect to TLS 1.2/TCPS-enabled databases
Configuring secure access between Solr and Hue
Configuring secure HBase replication
Configuring secure HBase replication
Configuring security for Storage Container Managers in High Availability
Configuring server side JWT authentication for Kudu
Configuring Services to Use LZO Compression
Configuring session inactivity timeout for Ranger Admin Web UI
Configuring Simple Authorization in Atlas
Configuring Spark access for S3A
Configuring Spark application logging properties
Configuring Spark application properties in spark-defaults.conf
Configuring Spark Applications
Configuring Spark on YARN Applications
Configuring SPNEGO authentication and trusted proxies
Configuring srm-control
Configuring srm-control in FIPS clusters
Configuring storage balancing for DataNodes
Configuring Streams Messaging Manager
Configuring Streams Messaging Manager for basic authentication
Configuring Streams Messaging Manager to connect to TLS 1.2/TCPS-enabled databases
Configuring Streams Messaging Manager to recognize Prometheus's TLS certificate
Configuring Streams Replication Manager
Configuring Streams Replication Manager Driver for performance tuning
Configuring Streams Replication Manager Driver heartbeat emission
Configuring Streams Replication Manager Driver retry behaviour
Configuring tablet servers
Configuring temporary table storage
Configuring the ABFS Connector
Configuring the advertised information of the Streams Replication Manager Service role
Configuring the Atlas hook in Kafka
Configuring the balancer threshold
Configuring the BI tool
Configuring the client configuration used for rolling restart checks
Configuring the compaction check interval
Configuring the Database for Streaming Components
Configuring the driver role target clusters
Configuring the embedded Jetty Server in Queue Manager
Configuring the Hive Delegation Token Store
Configuring the Hive Metastore to use HDFS HA
Configuring the HiveServer load balancer
Configuring the Hue Query Processor scan frequency
Configuring the Hue Server to Store Data in the Oracle database
Configuring the Kafka Connect Role
Configuring the Kudu master
Configuring the Livy Thrift Server
Configuring the number of objects displayed in Hue
Configuring the number of storage container copies for a DataNode
Configuring the Ozone trash checkpoint values
Configuring the Phoenix classpath
Configuring the queue auto removal expiration time
Configuring the resource capacity of root queue
Configuring the Schema Registry client
Configuring the server work directory path for a Ranger service
Configuring the service role target cluster
Configuring the storage policy for the Write-Ahead Log (WAL)
Configuring the Streams Replication Manager client's secure storage
Configuring timezone for Hue
Configuring TLS 1.2 for Cloudera Manager
Configuring TLS 1.2 for Reports Manager
Configuring TLS Encryption for Cloudera Manager Using Auto-TLS
Configuring TLS encryption manually for Apache Atlas
Configuring TLS encryption manually for Schema Registry
Configuring TLS encryption manually for Schema Registry
Configuring TLS/SSL client authentication
Configuring TLS/SSL encryption
Configuring TLS/SSL encryption
Configuring TLS/SSL encryption for Kudu using Cloudera Manager
Configuring TLS/SSL encryption manually
Configuring TLS/SSL encryption manually for Apache Knox
Configuring TLS/SSL encryption manually for Cloudera Services
Configuring TLS/SSL encryption manually for DAS using Cloudera Manager
Configuring TLS/SSL encryption manually for Livy
Configuring TLS/SSL encryption manually for Ozone
Configuring TLS/SSL encryption manually for Spark
Configuring TLS/SSL encryption manually for Zeppelin
Configuring TLS/SSL for Apache Atlas
Configuring TLS/SSL for Core Hadoop Services
Configuring TLS/SSL for Core Hadoop Services
Configuring TLS/SSL for HBase
Configuring TLS/SSL for HBase
Configuring TLS/SSL for HBase REST Server
Configuring TLS/SSL for HBase REST Server
Configuring TLS/SSL for HBase Thrift Server
Configuring TLS/SSL for HBase Thrift Server
Configuring TLS/SSL for HBase Web UIs
Configuring TLS/SSL for HBase Web UIs
Configuring TLS/SSL for HDFS
Configuring TLS/SSL for HDFS
Configuring TLS/SSL for Hue
Configuring TLS/SSL for Hue
Configuring TLS/SSL for the KMS
Configuring TLS/SSL for YARN
Configuring TLS/SSL for YARN
Configuring TLS/SSL manually
Configuring TLS/SSL properties
Configuring TLSv1.2-enforced MySQL server
Configuring Transparent Data Encryption for Ozone
Configuring ulimit for HBase
Configuring Usersync assignment of Admin users
Configuring Usersync to sync directly with LDAP/AD
Configuring work preserving recovery on NodeManager
Configuring work preserving recovery on ResourceManager
Configuring YARN Queue Manager dependency
Configuring YARN ResourceManager high availability
Configuring YARN Security for Long-Running Applications
Configuring YARN Services API to manage long-running applications
Configuring YARN Services using Cloudera Manager
Configuring Zeppelin caching
Confirm the election status of a ZooKeeper service
Connect to Phoenix Query Server
Connect to Phoenix Query Server through Apache Knox
Connect workers
Connecting Hive to BI tools using a JDBC/ODBC driver
Connecting to a kerberized Impala daemon
Connecting to an Apache Hive endpoint through Apache Knox
Connecting to Impala Daemon in Impala Shell
Connecting to PQS using JDBC
Connecting to the Apache Livy Thrift Server
Connecting to the Kafka cluster using load balancer
Connection failed error when accessing the Search app (Solr) from Hue
Connection to the cluster with configured DNS aliases
Connectors
Connectors
Considerations for backfill inserts
Considerations for configuring High Availability on Storage Container Manager
Considerations for configuring High Availability on the Ozone Manager
Considerations for enabling SCM HA security
Considerations for Knox
Considerations for Oozie to work with AWS
Considerations for working with HDFS snapshots
Consolidating policies created by Authzmigrator
Container balancer CLI commands
Container Balancer overview
ContainerExecutor Error Codes (YARN)
Contents of the BlockCache
Controlling access to queues using ACLs
Controlling Data Access with Tags
Conversion functions
Convert DER, JKS, PEM Files for TLS/SSL Artifacts
ConvertFromBytes
Converting a managed non-transactional table to external
Converting a queue to a Managed Parent Queue
Converting an HDFS file to ORC
Converting an HDFS file to ORC
Converting from an NFS-mounted shared edits directory to Quorum-Based Storage
Converting Hive CLI scripts to Beeline
Converting instance directories to configs
ConvertToBytes
Copy sample tweets to HDFS
Copying data between a secure and an insecure cluster using DistCp and WebHDFS
Copying data with Hadoop DistCp
Corruption: checksum error on CFile block
COUNT
COUNT function
Create a bucket
Create a collection for tweets
Create a Collection in Cloudera Search
Create a Collection in Cloudera Search
Create a Custom Role
Create a GCP Service Account
Create a Hive authorizer URL policy
Create a Kafka Topic to Store your Events
Create a Kafka Topic to Store your Events
Create a new Kudu table from Impala
Create a snapshot
Create a snapshot policy
Create a Streams Cluster on Cloudera Base on premises
Create a Streams Cluster on Cloudera Base on premises
Create a table in Hive
Create a test collection
Create a time-bound policy
Create a topology map
Create a topology script
Create a user-defined function
Create and Run a Note
CREATE DATABASE statement
Create empty table on the destination cluster
CREATE FUNCTION statement
Create indexer Maven project
CREATE MATERIALIZED VIEW
Create partitioned table as select feature
CREATE ROLE statement
Create snapshots on a directory
Create snapshots using Cloudera Manager
Create table as select feature
Create table feature
CREATE TABLE statement
Create table … like feature
CREATE VIEW statement
Creating a CRUD transactional table
Creating a custom access policy
Creating a custom YARN service
Creating a default directory for managed tables
Creating a function
Creating a group in Hue
Creating a Hadoop archive
Creating a Hue user
Creating a JAAS configuration file
Creating a Kafka topic
Creating a Lily HBase Indexer Configuration File
Creating a Lily HBase Indexer Configuration File
Creating a Morphline Configuration File
Creating a Morphline Configuration File
Creating a new Dynamic Configuration
Creating a new Iceberg table from Spark 3
Creating a notifier
Creating a read-only Admin user (Auditor)
Creating a replica of an existing shard
Creating a Solr collection
Creating a SQL policy to query an Iceberg table
Creating a SQL policy to query an Iceberg table
Creating a Sqoop import command
Creating a Sqoop import command
Creating a standard YARN service
Creating a table for a Kafka stream
Creating a temporary table
Creating a trace user in an unsecured Accumulo deployment
Creating a truststore file in PEM format
Creating a truststore file in PEM format
Creating an alert policy
Creating an Iceberg partitioned table
Creating an Iceberg table
Creating an Impala user-defined function
Creating an insert-only transactional table
Creating an internal yum repository
Creating an Ozone-based external table
Creating and using a materialized view
Creating and using a partitioned materialized view
Creating Business Metadata
Creating categories
Creating classifications
Creating Encryption Zones
Creating External Table
Creating glossaries
Creating Hue Schema in Oracle database
Creating Iceberg tables using Hue
Creating labels
Creating new YARN services using UI
Creating partitions
Creating partitions dynamically
Creating placement rules
Creating Static Pools
Creating tables in Hue by importing files
Creating terms
Creating the Hue database
Creating the Hue database
Creating the tables and view
Creating the UDF class
Creating, using, and dropping an external table
Credentials for token delegation
Cross data center replication example of multiple clusters
Cruise Control
Cruise Control
Cruise Control
Cruise Control dashboard in Streams Messaging Manager UI
Cruise Control Overview
Cruise Control REST API endpoints
CSE-KMS: Amazon S3-KMS managed encryption keys
CUME_DIST
Customize dynamic resource allocation settings
Customize interpreter settings in a note
Customize the HDFS home directory
Customizing authorization-migration-site.xml
Customizing HDFS
Customizing Kerberos Principals
Customizing Kerberos Principals and System Users
Customizing Kerberos Principals and System Users (Recommended)
Customizing only Kerberos Principals
Customizing Per-Bucket Secrets Held in Credential Files
Customizing the Hue web interface
Customizing time zones
Data Access
Data Access
Data at Rest Encryption Reference Architecture
Data at Rest Encryption Requirements
Data at Rest Encryption Requirements
Data compaction
Data Durability Considerations
Data Encryption Components and Solutions
Data Engineering
Data migration to Apache Hive
Data migration to Apache Hive
Data protection
Data Stewardship with Apache Atlas
Data storage metrics
Data types
Data Warehousing
Database Requirements
Database setup details for cluster services for TLS 1.2/TCPS-enabled databases
Database setup details for Hive Metastore for TLS 1.2/TCPS-enabled databases
Database setup details for Hue for TLS 1.2/TCPS-enabled databases
Database setup details for Oozie for TLS 1.2/TCPS-enabled databases
Database setup details for Ranger for TLS 1.2/TCPS-enabled databases
Database setup details for Ranger KMS for TLS 1.2/TCPS-enabled databases
Database setup details for Schema Registry for TLS 1.2/TCPS-enabled databases
Database setup details for Streams Messaging Manager for TLS 1.2/TCPS-enabled databases
Databases
Databases and Table Names
Dataflow development best practices
Dataflow management with schema-based routing
DataNodes
DataNodes
DataNodes page
Date and time functions
DATE data type
DDL statements
Deactivate and Remove Parcels
Debezium Db2 Source
Debezium MySQL Source
Debezium Oracle Source
Debezium PostgreSQL Source
Debezium SQL Server Source
Debug Web UI for Catalog Server
Debug Web UI for Impala Daemon
Debug Web UI for Query Timeline
Debug Web UI for StateStore
Decide to use the BucketCache
DECIMAL data type
Decimal type
Decommission or remove a tablet server
Decommissioning OM Node
Decommissioning Ozone DataNodes
Decommissioning SCM
Dedicated Coordinator
Default EXPIRES ON tag policy
Default ports of Operational Database
Default Ranger audit filters
Defining a backup target in solr.xml
Defining and adding clusters for replication
Defining Apache Atlas enumerations
Defining co-located Kafka clusters using a service dependency
Defining co-located Kafka clusters using Kafka credentials
Defining external Kafka clusters
Defining related terms
Delegation token based authentication
Delete a bucket
Delete a Key
Delete and Rename Operation
Delete container replica commands
Delete data
Delete data feature
Delete Objects
Delete Queue
Delete snapshots
Delete snapshots using Cloudera Manager
DELETE statement
Delete the Cluster
Deleting a collection
Deleting a group
Deleting a Kafka topic
Deleting a notifier
Deleting a role
Deleting a schema
Deleting a user
Deleting all documents in a collection
Deleting an alert policy
Deleting data from a table
Deleting dynamically created child queues
Deleting dynamically created child queues manually
Deleting Encryption Zone Keys
Deleting Encryption Zones
Deleting partitions
Deleting placement rules
Deleting queues
Deleting Services
Deleting users or groups in bulk
Deletion
DENSE_RANK
Deploy HBase replication
Deploying a dataflow
Deploying and configuring Oozie Sqoop1 Action JDBC drivers
Deploying and managing connectors
Deploying and managing services on YARN
Deploying Atlas service
Deploying Clients
Deployment Planning for Cloudera Search
Deprecation Notices In Cloudera Runtime 7.3.1
DESCRIBE EXTENDED and DESCRIBE FORMATTED
DESCRIBE statement
Describe table metadata feature
Describing a materialized view
Deserializing and serializing data from and to a Kafka topic
Detecting slow DataNodes
Determining the table type
Determining the threshold
Developing a dataflow
Developing and running an Apache Spark WordCount application
Developing Apache Kafka Applications
Developing Apache Spark Applications
Developing Applications with Apache Kudu
Diagnostics logging
Differences between Spark and Spark 3 actions
Dimensioning guidelines
Direct Reader configuration properties
Direct Reader limitations
Direct Reader mode introduction
Directory configurations
Directory permissions when using PAM authentication backend
Disable a provider in an existing provider configuration
Disable loading of coprocessors
Disable Operational Database's use of HDFS trash
Disable proxy for a known service in Apache Knox
Disable RegionServer grouping
Disable replication at the peer level
Disable the BoundedByteBufferPool
Disable the Firewall
Disable the Firewall
Disable weak ciphers for TLS servers
Disabling an alert policy
Disabling and redeploying HDFS HA
Disabling auto queue deletion globally
Disabling automatic compaction
Disabling CA Certificate validation from Hue
Disabling Catalog and StateStore High Availability
Disabling dynamic child creation in weight mode
Disabling impersonation (doas)
Disabling Kerberos authentication for HBase clients
Disabling Oozie High Availability
Disabling Oozie UI using Cloudera Manager
Disabling queue auto removal on a queue level
Disabling redaction
Disabling the Firewall
Disabling the share option in Hue
Disabling the web metric collection for Hue
Disabling TLS protocols on JMX ports
Disabling YARN Ranger authorization support
Disassociating partitions from queues
Disk Balancer commands
Disk management
Disk Removal
Disk Replacement
Disk space usage issue
Disk space versus namespace
DistCp and Proxy Settings
DistCp between secure clusters in different Kerberos realms
DistCp syntax and examples
DISTINCT operator
DML statements
Docker on YARN example: DistributedShell
Docker on YARN example: MapReduce job
Docker on YARN example: Spark-on-Docker-on-YARN
DOUBLE data type
Download a file
Download and install PostgreSQL
Download the Trial version of Cloudera Base on premises
Download the Trial version of Cloudera Base on premises
Download the Trial version of Cloudera Base on premises
Downloading and configuring the client packages
Downloading and exporting data from Hue
Downloading and installing MariaDB database
Downloading and installing MySQL database
Downloading and viewing predefined dataflows
Downloading debug bundles
Downloading Hdfsfindtool from the CDH archives
Downloading query results from Hue takes time
Downloading, staging, and activating the Oracle Instant Client parcel
Driver inter-node coordination
Drop a Kudu table
DROP DATABASE statement
DROP FUNCTION statement
DROP MATERIALIZED VIEW
DROP ROLE statement
DROP STATS statement
Drop table feature
DROP TABLE statement
DROP VIEW statement
Dropping a materialized view
Dropping an external table along with data
Dumping the Oozie database
Dynamic allocation
Dynamic Configurations execution log
Dynamic handling of failure in updating index
Dynamic Queue Scheduling
Dynamic resource allocation properties
Dynamic Resource Pool Settings
Dynamic resource-based column masking in Hive with Ranger policies
Dynamic tag-based column masking in Hive with Ranger policies
Dynamically generating Knox topology files
Dynamically loading a custom filter
EC reconstruction commands
Edit or delete a snapshot policy
Edit query in natural language
Editing a group
Editing a role
Editing a storage handler policy to access Iceberg files on the file system
Editing a storage handler policy to access Iceberg files on the file system
Editing a user
Editing placement rules
Editing rack assignments for hosts
Effects of WAL rolling on replication
Elements of the Recon web user interface
Enable Access Control for Data
Enable Access Control for Interpreter, Configuration, and Credential Settings
Enable Access Control for Notebooks
Enable an NTP Service
Enable an NTP Service
Enable an NTP Service
Enable and disable snapshot creation using Cloudera Manager
Enable authorization for additional HDFS web UIs
Enable authorization for HDFS web UIs
Enable authorization in Kafka with Ranger
Enable bulk load replication using Cloudera Manager
Enable Cgroups
Enable core dump for the Kudu service
Enable detection of slow DataNodes
Enable disk IO statistics
Enable document-level authorization
Enable EC Replication
Enable garbage collector logging
Enable GZipCodec as the default compression codec
Enable HA for a Ranger Postgres database
Enable HBase high availability using Cloudera Manager
Enable HBase indexing
Enable HDFS Storage when Upgrading to HDP-2.6.3+
Enable hedged reads for HBase
Enable high availability
Enable HTTPS communication
Enable Kerberos authentication
Enable Kerberos authentication in Solr
Enable Kerberos for MariaDB
Enable LDAP authentication in Solr
Enable multi-threaded faceting
Enable namespace mapping
Enable or disable authentication with delegation tokens
Enable Phoenix ACLs
Enable proxy for a known service in Apache Knox
Enable Ranger Admin login using kerberos authentication
Enable RegionServer grouping using Cloudera Manager
Enable replication on a specific table
Enable replication on HBase column families
Enable security for Cruise Control
Enable security for Cruise Control
Enable Sensitive Data Redaction
Enable server-server mutual authentication
Enable snapshot creation on a directory
Enable Spark actions
Enable stored procedures in Hue
Enable TCPS for Oracle
Enable the AdminServer
Enable TLS 1.2 for MariaDB
Enable TLS 1.2 for MySQL
Enable TLS 1.2 for PostgreSQL
Enabling a multi-threaded environment for Hue
Enabling Access Control for Zeppelin Elements
Enabling access to HBase browser from Hue
Enabling ACL for RegionServer grouping
Enabling Admission Control
Enabling all scheduled queries
Enabling an alert policy
Enabling and disabling trash
Enabling asynchronous scheduler
Enabling audit aging
Enabling Basic Authentication for the Streams Replication Manager service
Enabling browsing Ozone from Hue
Enabling cache-control HTTP headers when using Hue
Enabling Catalog and StateStore High Availability (HA)
Enabling CSE-KMS
Enabling custom Kerberos principal support in a Queue Manager cluster
Enabling custom Kerberos principal support in a Queue Manager cluster
Enabling custom Kerberos principal support in YARN
Enabling custom Kerberos principal support in YARN
Enabling DEBUG logging for Hue logs
Enabling DSL search for Hue
Enabling dynamic child creation in weight mode
Enabling EC replication configuration cluster-wide
Enabling EC replication configuration on bucket
Enabling EC replication configuration on keys or files
Enabling fault-tolerant processing in Spark Streaming
Enabling feature flag for Custom Kerberos Principals and System Users
Enabling HBase META Replicas
Enabling HDFS and Configuration Storage for Zeppelin Notebooks in HDP-2.6.3+
Enabling HDFS HA
Enabling High Availability and automatic failover
Enabling httpd log rotation for Hue
Enabling Hue applications with Cloudera Manager
Enabling Hue as a TLS/SSL client
Enabling Hue as a TLS/SSL client
Enabling Hue as a TLS/SSL server using Cloudera Manager
Enabling Hue as a TLS/SSL server using Cloudera Manager
Enabling interceptors
Enabling Intra-Queue preemption
Enabling Intra-Queue Preemption for a specific queue
Enabling JWT Authentication for impala-shell
Enabling Kerberos authentication and RPC encryption
Enabling Kerberos Authentication for Cloudera
Enabling Kerberos Authentication for the KMS
Enabling Kerberos for the Streams Replication Manager service
Enabling LazyPreemption
Enabling LDAP Authentication for impala-shell
Enabling LDAP authentication with HiveServer2 and Impala
Enabling LDAP in Hue
Enabling Native Acceleration For MLlib
Enabling node labels on a cluster to configure partition
Enabling Oozie High Availability
Enabling Oozie SLA with Cloudera Manager
Enabling Oozie workflows that access Ozone storage
Enabling or disabling anonymous usage data collection
Enabling override of default queue mappings
Enabling preemption for a specific queue
Enabling prefixless replication
Enabling Ranger authorization
Enabling Ranger HDFS plugin manually on a Cloudera Data Hub
Enabling Ranger Usersync search to generate internally
Enabling Remote Querying
Enabling RMS for Ozone authorization
Enabling S3 Multi-Tenancy
Enabling SASL in HiveServer
Enabling scheduled queries
Enabling security for Apache Flink
Enabling selective debugging for Ranger Admin
Enabling selective debugging for RAZ
Enabling self-healing for all or individual anomaly types
Enabling self-healing in Cruise Control
Enabling Solr clients to authenticate with a secure Solr
Enabling Spark 3 engine in Hue
Enabling Spark authentication
Enabling Spark Encryption
Enabling Speculative Execution
Enabling SSE-C
Enabling SSE-KMS
Enabling SSE-S3
Enabling TCPS on Oracle database
Enabling the Hive Metastore integration
Enabling the Oozie web console on managed clusters
Enabling the Phoenix SQL editor in Hue
Enabling the Query Processor service in Hue
Enabling the SQL editor autocompleter
Enabling TLS 1.2 for database connections
Enabling TLS 1.2 on Cloudera Manager Server
Enabling TLS 1.2 on Database Server
Enabling TLS 1.2 on Database Server
Enabling TLS 1.2 on MariaDB
Enabling TLS 1.2 on MySQL database
Enabling TLS 1.2 on PostgreSQL
Enabling TLS Encryption for Streams Messaging Manager on Cloudera on premises
Enabling TLS/SSL communication with HiveServer2
Enabling TLS/SSL communication with HiveServer2
Enabling TLS/SSL communication with Impala
Enabling TLS/SSL communication with Impala
Enabling TLS/SSL for HiveServer
Enabling TLS/SSL for HiveServer
Enabling TLS/SSL for Hue Load Balancer
Enabling TLS/SSL for Hue Load Balancer
Enabling TLS/SSL for the SRM service
Enabling TLS/SSL for the Streams Replication Manager service
Enabling vectorized query execution
Enabling YARN Ranger authorization support
Enabling ZooKeeper-less connection registry for HBase client
Encrypting an S3 Bucket with Amazon S3 Default Encryption
Encrypting and Decrypting Data Using Cloudera Navigator Encrypt
Encrypting Data at Rest
Encrypting Data at Rest
Encrypting Data in Transit
Encrypting Data in Transit
Encrypting Data on S3
Encryption
Encryption
Encryption in Cloudera SQL Stream Builder
Encryption Zones and Keys
End to end latency use case
Enhancements related to bulk glossary terms import
Enhancements with search query
Enter Required Parameters
Environment variables for sizing NameNode heap memory
Erasure coding CLI command
Erasure Coding data
Erasure coding examples
Erasure Coding overview
Erasure coding overview
Erasure Coding Overview
Error Messages and Various Failures
Error validating LDAP user in Hue
Errors during hole punching test
Escaping an invalid identifier
Essential metrics to monitor
Estimating memory limits
ETL with Cloudera Morphlines
Evolving a schema
Example - Placement rules creation
Example for finding parent object for assigned classification or term
Example for using THttpClient API in secure cluster
Example for using THttpClient API in unsecured cluster
Example for using TSaslClientTransport API in secure cluster without HTTP
Example use cases
Example workload
Example: Configuration for work preserving recovery
Example: Running SparkPi on YARN
Example: Using the HBase-Spark connector
Examples
Examples of accessing Amazon S3 data from Spark
Examples of Audit Operations
Examples of controlling data access using classifications
Examples of creating and using UDFs
Examples of DistCp commands using the S3 protocol and hidden credentials
Examples of estimating NameNode heap memory
Examples of interacting with Schema Registry
Examples of overlapping quota policies
Examples of using the AWS CLI for Ozone S3 Gateway
Examples of using the S3A filesystem with Ozone S3 Gateway
Examples of writing data in various file formats
Excluding audits for specific users, groups, and roles
Exit statuses for the HDFS Balancer
Experimental flags
Expire snapshots feature
Expiring snapshots
Explain query in natural language
EXPLAIN statement
Exploring using Lineage
Export a Note
Export a snapshot to another cluster
Export all resource-based policies for all services
Export Ranger reports
Export resource-based policies for a specific service
Export tag-based policies
Exporting and importing schemas
Exporting data using Connected type
Exporting schemas using Schema Registry API
Expose HBase metrics to a Ganglia server
Extending Atlas to Manage Metadata from Additional Sources
Extending Cloudera Manager
External table access
External tables based on a non-default schema
Extracting KRaft metadata
Failure detection for Catalog and StateStore
Failures during INSERT, UPDATE, UPSERT, and DELETE operations
Feature comparison
Feature Comparisons
Fetching Spark Maven dependencies
File descriptor limits
File descriptors
File System Credentials
Files and directories
Files and directories
Files and Objects together
Filesystems
Filter HMS results
Filter service access logs from Ranger UI
Filter types
Find latest Operational Database keytab
Finding issues
Finding the list of Hue superusers
Finding the list of Hue superusers
Fine-tuning Oozie's database connection
FIRST_VALUE
Fixed Common Vulnerabilities and Exposures 7.3.1
Fixed Issues In Cloudera Runtime 7.3.1
Fixing a query in Hue
Fixing a warning related to accessing non-optimized Hue
Fixing authentication issues between HBase and Hue
Fixing block inconsistencies
Fixing incorrect start time and duration on Hue Job Browser
Fixing issues
Flexible partitioning
FLOAT data type
Flush options
Flushing data to disk
Force deletion of external users and groups from the Ranger database
Format for using Hadoop archives with MapReduce
Frequently asked questions
FSO operations
Functions
General Quota Syntax
General Settings
Generate a table list
Generate and configure a signing keystore for Knox in HA
Generate comment for a SQL query
Generate SQL from NQL
Generate tokens
Generating and viewing Hive statistics
Generating collection configuration using configs
Generating Hive statistics
Generating Kerberos keytab file for Navigator Encrypt
Generating Solr collection configuration using instance directories
Generating surrogate keys
Generating Table and Column Statistics
Getting scheduled query information and monitoring the query
Getting Started on your Streams Cluster
Getting Started on your Streams Cluster
Getting the JDBC driver
Getting the ODBC driver
Glossaries overview
Glossary performance improvements
Governance
Governance
Governance Overview
Graceful HBase shutdown
Gracefully shut down an HBase RegionServer
Gracefully shut down the HBase service
GRANT ROLE statement
GRANT statement
GROUP BY clause
GROUPING() and GROUPING_ID() functions
Groups and fetching
GROUP_CONCAT function
Guidelines for Schema Design
Hadoop
Hadoop
Hadoop archive components
Hadoop File Formats Support
Hadoop File System commands
Hadoop Users (user:group) and Kerberos Principals
Handling datanode disk failure
Handling disk failures
Handling Dynamic Configuration conflicts
Handling large messages
Handling Rollback issues with ZDU
Hardware Requirements
Hash and hash partitioning
Hash and range partitioning
Hash partitioning
Hash partitioning
HashTable/SyncTable tool configuration
HAVING clause
HBase
HBase
HBase
HBase
HBase actions that produce Atlas entities
HBase audit entries
HBase authentication
HBase authorization
HBase backup and disaster recovery strategies
HBase cache-aware load balancer configuration
HBase entities created in Atlas
HBase filtering
HBase I/O components
HBase is using more disk space than expected
HBase lineage
HBase load balancer
HBase MCC Configurations
HBase MCC Restrictions
HBase MCC Usage in Spark with Java
HBase MCC Usage in Spark with Scala
HBase MCC Usage with Kerberos
HBase metadata collection
HBase metrics
HBase online merge
HBase persistent BucketCache
HBase read replicas
HBase Shell example
HBase stochastic load balancer configuration
HBaseMapReduceIndexerTool command line reference
HBCK2 tool command reference
HDFS
HDFS
HDFS
HDFS
HDFS ACLs
HDFS Block Skew
HDFS Caching
HDFS commands for metadata files and directories
HDFS Encryption Issues
HDFS entity metadata migration
HDFS lineage commands
HDFS lineage data extraction in Atlas
HDFS Metrics
HDFS Sink
HDFS Stateless Sink
HDFS storage demands due to retained HDFS trash
HDFS storage policies
HDFS storage types
HDFS storage types
HDFS to Apache Hive data migration
HDFS to Apache Hive data migration
HDFS Transparent Encryption
Head a bucket
Head an object
Heap sampling
HeapDumpPath (/tmp) in Hive data nodes gets full due to .hprof files
Hierarchical namespaces vs. non-namespaces
Hierarchical queue characteristics
High Availability on HDFS clusters
Hive
Hive
Hive
Hive
Hive
Hive access authorization
Hive ACID metric properties for compaction observability
Hive authentication
Hive demo data
Hive entity metadata migration
Hive Metastore leader election
Hive on Tez configurations
Hive on Tez introduction
Hive unsupported interfaces and features
Hive Warehouse Connector for accessing Apache Spark data
Hive Warehouse Connector Interfaces
Hive Warehouse Connector streaming for transactional tables
HiveServer actions that produce Atlas entities
HiveServer audit entries
HiveServer entities created in Atlas
HiveServer is unresponsive due to large queries running in parallel
HiveServer lineage
HiveServer metadata collection
HiveServer relationships
HMS table storage
How Atlas works with Iceberg
How Cloudera Search works
How the Ignore and Prune feature works
How Integration works
How Lineage strategy works
How NameNode manages blocks on a failed DataNode
How NFS Gateway authenticates and maps users
How Ozone manages delete operations
How Ozone manages read operations
How Ozone manages write operations
How Range-aware replica placement in Kudu works
How tag-based access control works
How the reporting task runs in a NiFi cluster
How to access Spark files on Ozone
How to add a coarse URI check for Hive agent
How to Add Root and Intermediate CAs to Truststore for TLS/SSL
How to Authenticate Kerberos Principals Using Java
How to change the password for Ranger users
How to clear Ranger Admin access logs
How to configure Ranger HDFS plugin configs per (NameNode) Role Group
How to connect Cloudera components to a TCPS-enabled Oracle database
How to download results using Basic and Advanced search options
How to full sync the Ranger RMS database
How to manage log rotation for Ranger Services
How to optimally configure Ranger RAZ client performance
How to pass JVM options to Ranger KMS services
How to read the Configurations table
How to read the Placement Rules table
How to set audit filters in Ranger Admin Web UI
How to Set Up a Gateway Host to Restrict Access to the Cluster
How to Set up Failover and Failback
How to suppress database connection notifications
How to: Compute
How to: Data Access
How to: Data Engineering
How to: Data Warehousing
How to: Governance
How to: Jobs Management
How to: Next-Gen Storage
How to: Operational Database
How to: Security
How to: Storage
How to: Streams Messaging
HTTP 403 error while accessing Hue
HTTP Sink
HTTP Source
HttpFS authentication
Hue
Hue
Hue
Hue
Hue Advanced Configuration Snippet
Hue configuration files
Hue configurations in Cloudera Runtime
Hue Limitation
Hue Load Balancer does not start
Hue load balancer does not start after enabling TLS
Hue logs
Hue Overview
Hue overview
Hue service Django logs
Hue service does not start after a fresh installation or upgrade
Hue support for Oozie
Hue supported browsers
HWC and DataFrame API limitations
HWC and DataFrame APIs
HWC API Examples
HWC authorization
HWC authorization
HWC integration pyspark, sparklyr, and Zeppelin
HWC limitations
HWC supported types mapping
IAM Role permissions for working with SSE-KMS
Iceberg
Iceberg
Iceberg
Iceberg data types
Iceberg for Atlas
Iceberg library dependencies for Spark applications
Iceberg overview
Iceberg support for Atlas
Iceberg table properties
ID ranges in Schema Registry
Identifiers
Identify Roles that Use the Embedded Database Server
Identifying problems
Identity Management
Ignore or Prune pattern to filter Hive metadata entities
Impact of quota violation policy
Impala
Impala
Impala
Impala
Impala
Impala actions that produce Atlas entities
Impala aliases
Impala audit entries
Impala Authentication
Impala Authorization
Impala database containment model
Impala DDL for Kudu
Impala DML for Kudu Tables
Impala entities created in Atlas
Impala entity metadata migration
Impala fault tolerance mechanisms
Impala hash functions
Impala integration limitations
Impala integration limitations
Impala lineage
Impala lineage
Impala Logs
Impala metadata collection
Impala Requirements
Impala reserved words
Impala Shell Command Reference
Impala Shell Configuration File
Impala Shell Configuration Options
Impala Shell Tool
Impala SQL and Hive SQL
Impala Startup Options for Client Connections
Impala string functions
Impala tables
Impala with Amazon S3
Impala with Azure Data Lake Store (ADLS)
Impala with HBase
Impala with HDFS
Impala with Kudu
Impala with Ozone
Implementing your own Custom Command
Import a Note
Import and sync LDAP users and groups
Import command options
Import command options
Import External Packages
Import resource-based policies for a specific service
Import resource-based policies for all services
Import tag-based policies
Importance of a Secure Cluster
Importance of logical types in Avro
Importing and exporting resource-based policies
Importing and exporting tag-based policies
Importing and migrating Iceberg table format v2
Importing and migrating Iceberg table in Spark 3
Importing Business Metadata associations in bulk
Importing Confluent Schema Registry schemas into Schema Registry
Importing data into HBase
Importing Glossary terms in bulk
Importing Hive Metadata using Command-Line (CLI) utility
Importing Kafka entities into Atlas
Importing RDBMS data into Hive
Importing RDBMS data into Hive
Importing RDBMS data to HDFS
Importing RDBMS data to HDFS
Importing schemas using Schema Registry API
Imports into Hive
Imports into Hive
Improving Performance for S3A
Improving performance in Schema Registry
Improving performance with centralized cache management
Improving performance with short-circuit local reads
Improving Software Performance
Inclusion and exclusion operation for HDFS files
Increasing StateStore Timeout
Increasing storage capacity with HDFS compression
Increasing the maximum number of processes for Oracle database
Incrementally updating an imported table
Incrementally updating an imported table
Index sample data
Indexing data
Indexing Data Using Morphlines
Indexing Data Using Spark-Solr Connector
Indexing data with MapReduceIndexerTool in Solr backup format
Indexing sample tweets with Cloudera Search
InfluxDB Sink
Information and debugging
Initializing Solr and creating HDFS home directory
Initiate replication when data already exists
Initiating automatic compaction in Cloudera Manager
INSERT and primary key uniqueness violations
Insert data
INSERT statement
Insert table data feature
Inserting data into a table
Inserting data into a table
Install Accumulo
Install Accumulo 1.10 parcel
Install Accumulo CSD file
Install Accumulo parcel using Local Parcel Repository
Install Accumulo using Remote Parcel Repository
Install and configure additional required components
Install and Configure Databases
Install and Configure MariaDB for Cloudera Software
Install and Configure MySQL for Cloudera Software
Install and Configure PostgreSQL for Cloudera Base on premises
Install Cloudera
Install Cloudera
Install Cloudera
Install Cloudera Runtime
Install Cloudera Runtime
Install Cloudera Runtime
Install Operational Database
Install Operational Database
Install Operational Database CSD file
Install Operational Database CSD file
Install Operational Database parcel
Install Operational Database parcel
Install Operational Database parcel using Local Parcel Repository
Install Operational Database parcel using Local Parcel Repository
Install Operational Database parcel using Remote Parcel Repository
Install Operational Database parcel using Remote Parcel Repository
Install the NFS Gateway
Installation Reference
Installing a Trial Cluster
Installing a Trial Streaming Cluster
Installing a Trial Streaming Cluster
Installing Accumulo Parcel 1.1.0
Installing Accumulo Parcel 1.10
Installing Accumulo Parcel 2.1.2
Installing and Configuring Cloudera with FIPS
Installing and configuring MariaDB on RHEL 8
Installing and configuring MySQL on RHEL 8
Installing Apache Knox
Installing Apache Knox
Installing Apache Zeppelin
Installing Atlas in HA using Cloudera Base on premises cluster
Installing Atlas using Add Service
Installing Cloudera Base on Premises
Installing Cloudera Manager
Installing Cloudera Navigator Encrypt
Installing Cloudera Runtime
Installing connectors
Installing Hive on Tez and adding a HiveServer role
Installing MySQL client for MariaDB databases
Installing MySQL client for MySQL databases
Installing Operational Database powered by Apache Accumulo
Installing Postgres JDBC Driver
Installing PostgreSQL Server
Installing psycopg2 from source (FIPS - RHEL 8)
Installing Ranger KMS backed by a Database and HA
Installing Ranger RMS
Installing Ranger using Add Service
Installing Streams Messaging Manager in Cloudera Base on premises
Installing the client packages
Installing the GPL Extras Parcel
Installing the psycopg2 Python package for PostgreSQL database
Installing the REST Server using Cloudera Manager
Installing the UDF development package
Installing/Verifying RMS for Ozone configuration
INT data type
Integrating Apache Hive with Apache Spark and BI
Integrating Atlas with Ozone
Integrating MIT Kerberos and Active Directory
Integrating Ranger KMS DB with CipherTrust Manager HSM
Integrating Ranger KMS DB with Google Cloud HSM
Integrating Ranger KMS DB with SafeNet Keysecure HSM
Integrating Schema Registry with Atlas
Integrating Schema Registry with Flink and Cloudera SQL Stream Builder
Integrating Schema Registry with Kafka
Integrating Schema Registry with NiFi
Integrating the Hive Metastore with Apache Kudu
Integrating with Schema Registry
Integrating your identity provider's SAML server with Hue
Inter-broker security
Inter-broker security
Interacting with Hive views
Internal and external Impala tables
Interoperability Between S3 and FS APIs
Introducing the S3A Committers
Introduction
Introduction
Introduction
Introduction
Introduction
Introduction
Introduction
Introduction
Introduction to Apache HBase
Introduction to Apache Phoenix
Introduction to Azure Storage and the ABFS Connector
Introduction to HBase Multi-cluster Client
Introduction to HBase Multi-cluster Client
Introduction to HDFS metadata files and directories
Introduction to Hive metastore
Introduction to Operational Database
Introduction to Ozone
Introduction to Parcels
Introduction to Ranger RMS
Introduction to Streams Messaging Manager
Introduction to the HBase stochastic load balancer
Introduction to Virtual Private Clusters and Cloudera SDX
Invalid method name: 'GetLog' error
Invalid query handle
INVALIDATE METADATA statement
ISR management
Issues starting or restarting the master or the tablet server
Java API example
Java client
Java Requirements
JBOD
JBOD Disk migration
JBOD setup
JDBC connection string syntax
JDBC connection string syntax
JDBC mode configuration properties
JDBC mode limitations
JDBC read mode introduction
JDBC Sink
JDBC Source
JMS Source
Job cleanup
Job cleanup
Job summaries in _SUCCESS files
Job summaries in _SUCCESS files
Joins in Impala SELECT statements
JournalNodes
JournalNodes
JVM and garbage collection
JWT algorithms
JWT authentication for Kudu
Kafka
Kafka
Kafka
Kafka
Kafka
Kafka ACL APIs support in Ranger
Kafka actions that produce Atlas entities
Kafka Architecture
Kafka audit entries
Kafka brokers and Zookeeper
Kafka clients and ZooKeeper
Kafka cluster load balancing using Cruise Control
Kafka Connect
Kafka Connect connector configuration security
Kafka Connect log files
Kafka Connect Overview
Kafka Connect property configuration in Cloudera Manager for Prometheus
Kafka Connect REST API security
Kafka Connect Secrets Storage
Kafka Connect tasks
Kafka Connect to Kafka broker security
Kafka Connect worker assignment
Kafka consumers
Kafka credentials property reference
Kafka disaster recovery
Kafka FAQ
Kafka Introduction
Kafka KRaft [Technical Preview]
Kafka KRaft [Technical Preview]
Kafka lineage
Kafka metadata collection
Kafka producers
Kafka property configuration in Cloudera Manager for Prometheus
Kafka public APIs
Kafka rack awareness
Kafka relationships
Kafka security hardening with Zookeeper ACLs
Kafka storage handler and table properties
Kafka Streams
Kafka stretch clusters
kafka-*-perf-test
kafka-cluster
kafka-configs
kafka-console-consumer
kafka-console-producer
kafka-consumer-groups
kafka-delegation-tokens
kafka-features
kafka-log-dirs
kafka-reassign-partitions
kafka-topics
Kafka-ZooKeeper performance tuning
KafkaAvroDeserializer properties reference
KafkaAvroSerializer properties reference
Keep replicas current
Kerberos
Kerberos
Kerberos authentication
Kerberos authentication for non-default users
Kerberos configuration for Ozone
Kerberos Configuration Strategies for Cloudera
Kerberos configurations for HWC
Kerberos principal and keytab properties for Ozone service daemons
Kerberos Security Artifacts Overview
Kerberos setup guidelines for Distcp between secure clusters
Kernel stack watchdog traces
Key Concepts and Architecture
Key Differences between INSERT-ONLY and FULL ACID Tables
Key Features
Key management using ofs
kite-morphlines-avro
kite-morphlines-core-stdio
kite-morphlines-core-stdlib
kite-morphlines-hadoop-core
kite-morphlines-hadoop-parquet-avro
kite-morphlines-hadoop-rcfile
kite-morphlines-hadoop-sequencefile
kite-morphlines-json
kite-morphlines-maxmind
kite-morphlines-metrics-servlets
kite-morphlines-protobuf
kite-morphlines-saxon
kite-morphlines-solr-cell
kite-morphlines-solr-core
kite-morphlines-tika-core
kite-morphlines-tika-decompress
kite-morphlines-useragent
KMS ACL Configuration for Hive
Known issue and its workaround
Known issues and limitations
Known Issues In Cloudera Runtime 7.3.1
Known limitations in Hue
Knox
Knox
Knox
Knox CLI testing tools
Knox Gateway token integration
Knox Gateway UI: incorrect username or password
Knox Properties for TLS
Knox SSO Cookie Invalidation
Knox Supported Services Matrix
Knox Token API
Knox Topology Management in Cloudera Manager
KRaft setup
KTS
Kudu
Kudu
Kudu
Kudu and Apache Ranger integration
Kudu architecture in a Cloudera Base on premises deployment
Kudu authentication
Kudu authentication tokens
Kudu authentication with Kerberos
Kudu authorization policies
Kudu authorization tokens
Kudu backup
Kudu coarse-grained authorization
Kudu concepts
Kudu example applications
Kudu fine-grained authorization
Kudu integration with Spark
Kudu introduction
Kudu master web interface
Kudu metrics
Kudu network architecture
Kudu Python client
Kudu recovery
Kudu schema design
Kudu security considerations
Kudu security limitations
Kudu security limitations
Kudu Sink
Kudu tablet server web interface
Kudu tracing
Kudu transaction semantics
Kudu web interfaces
Kudu-Impala integration
LAG
LAST_VALUE
Late Materialization of Columns
Lateral View
Launch distcp
Launch Zeppelin
Launching a YARN service
Launching Apache Phoenix Thin Client
LAZY_PERSIST memory storage policy
LDAP authentication
LDAP import and sync options
LDAP properties
LDAP search fails with invalid credentials error
LDAP Settings
LEAD
Leader positions and in-sync replicas
Lengthy BalancerMember Route length
Leveraging Business Metadata
Lily HBase batch indexing for Cloudera Search
Lily HBase NRT indexing
LIMIT clause
Limit CPU usage with Cgroups
Limitation for Spark History Server with high availability
Limitations
Limitations
Limitations
Limitations and restrictions for Impala UDFs
Limitations in browsing Ozone from Hue
Limitations of Amazon S3
Limitations of Atlas-NiFi integration
Limitations of erasure coding
Limitations of Phoenix-Hive connector
Limitations of the S3A Committers
Limiting concurrent connections
Limiting the speed of compactions
Lineage lifecycle
Lineage overview
Linux Container Executor
List and Create Keys
List buckets
List files in Hadoop archives
List of APIs verified
List of model-related configurations
List of Thrift API and HBase configurations
List restored snapshots
List snapshots
Listing available metrics
Listing Repositories
Literals
Live write access
Livy
Livy
Livy
Livy
Livy
Livy API reference for batch jobs
Livy API reference for interactive sessions
Livy batch object
Livy high availability support
Livy interpreter configuration
Livy objects for interactive sessions
Load balancer in front of Schema Registry instances
Load balancing between Hue and Impala
Load balancing for Apache Knox
Load data inpath feature
LOAD DATA statement
Load or replace partition data feature
Loading data into an unpartitioned table
Loading ORC data into DataFrames using predicate push-down
Loading the Oozie database
Local file system support
Locating Hive tables and changing the location
Locking an account after invalid login attempts
Log a Security Support Case
Log aggregation file controllers
Log aggregation properties
Log cleaner
Logical Architecture
Logical operators, comparison operators and comparators
Logs and log segments
Main Use Cases
Maintaining Cloudera Navigator Encrypt
Maintenance manager
Making row-level changes on V2 tables only
Manage HBase snapshots using COD CLI
Manage HBase snapshots using the HBase shell
Manage individual delegation tokens
Manage Knox Gateway tokens
Manage Knox metadata
Manage Ranger authorization in Solr
Managed Parent Queues
Management basics
Management of existing Apache Knox shared providers
Management of Knox shared providers in Cloudera Manager
Management of Service Parameters for Apache Knox via Cloudera Manager
Management of services for Apache Knox through Cloudera Manager
Managing Access Control Lists
Managing Alert Policies and Notifiers
Managing and Allocating Cluster Resources using Capacity Scheduler
Managing and monitoring Cruise Control rebalance
Managing and monitoring Kafka Connect
Managing Apache Hadoop YARN Services
Managing Apache HBase
Managing Apache HBase Security
Managing Apache Hive
Managing Apache Impala
Managing Apache Kafka
Managing Apache Kudu
Managing Apache Kudu Security
Managing Apache Phoenix Security
Managing Apache Phoenix security
Managing Apache ZooKeeper
Managing Apache ZooKeeper Security
Managing Auditing with Ranger
Managing Business Terms with Atlas Glossaries
Managing Cloudera Runtime Services
Managing Cloudera Search
Managing Clusters
Managing collection configuration
Managing collections
Managing Cruise Control
Managing Data Storage
Managing dynamic child creation enabled parent queues
Managing Dynamic Configurations
Managing dynamic queues
Managing dynamically created child queues
Managing Encryption Keys and Zones
Managing high partition workloads
Managing high partition workloads
Managing Hue permissions
Managing Kafka topics
Managing Kerberos credentials using Cloudera Manager
Managing Kudu tables with range-specific hash schemas
Managing logging properties for Ranger services
Managing Logs
Managing Metadata in Impala
Managing Metadata in Impala
Managing Navigator Encrypt Access Control List
Managing Operational Database powered by Apache Accumulo
Managing Ozone quota
Managing partition retention time
Managing placement rules
Managing query rewrites
Managing queues
Managing Re-encryption Operations
Managing Resources in Impala
Managing secrets using the REST API
Managing snapshot policies using Cloudera Manager
Managing storage elements by using the command line interface
Managing streaming with Hive Warehouse Connector
Managing the YARN service life cycle through the REST API
Managing topics across multiple Kafka clusters
Managing YARN Docker Containers
Managing YARN Queue Manager
Managing YARN queue users
Managing, Deploying and Monitoring Connectors
Manifest committer for ABFS and GCS
Manifest committer for ABFS and GCS
Manually configuring SAML authentication
Manually Configuring TLS Encryption for Cloudera Manager
Manually Configuring TLS Encryption on the Agent Listening Port
Manually failing over to the standby NameNode
MAP complex type
Mapping Apache Phoenix schemas to Apache HBase namespaces
Mapping Kerberos Principals to Short Names
Mapping Sentry permissions for Solr to Ranger policies
MapReduce indexing
MapReduce Job ACLs
MapReduce Job History Server
MapReduce, YARN and YARN Queue Manager
MapReduceIndexerTool
MapReduceIndexerTool input splits
MapReduceIndexerTool metadata
MapReduceIndexerTool usage syntax
Master node decommissioning in Ozone
Materialized view feature
Materialized view rebuild feature
Materialized views
Mathematical functions
Maven artifacts
MAX
MAX function
Memory
Memory limits
Merge feature
Merge process stops during Sqoop incremental imports
Merging data in tables
Metadata layout format
Metrics
Metrics and Insight
Migrate brokers by modifying broker IDs in meta.properties
Migrate Databases from the Embedded Database Server to the External PostgreSQL Database Server
Migrate Hive table to Iceberg feature
Migrate Kudu data from one directory to another on the same host
Migrate the Ranger Admin role instance to a new host
Migrate the Ranger KMS DB role instance to a new host
Migrate to a multiple Kudu master configuration
Migrate to strongly consistent indexing
Migrating a Hive table to Iceberg
Migrating Consumer Groups Between Clusters
Migrating Data Using Sqoop
Migrating Data Using Sqoop
Migrating database configuration to a new location
Migrating from H2 to PostgreSQL database in YARN Queue Manager
Migrating from Sentry to Ranger
Migrating from the Cloudera Manager Embedded PostgreSQL Database Server to an External PostgreSQL Database
Migrating Hue service by adding new role instances
Migrating Hue service using Add Service wizard
Migrating Impala Catalog to another host
Migrating Ranger Key Management Server Role Instances to a New Host
Migrating Ranger Usersync and Tagsync role groups
Migrating ResourceManager to another host
Migrating Solr replicas
Migrating Spark applications
Migrating the Master Key from HSM to Ranger KMS DB
Migrating the Master Key from Ranger KMS DB to Luna HSM
Migration from Fair Scheduler to Capacity Scheduler
Migration Guide
Migration of Spark 2 applications
MIN
MIN function
Min/Max Filtering
Minimize cluster disruption during planned downtime
Miscellaneous functions
Missing Containers page
Mixed resource allocation mode (Technical Preview)
MOB cache properties
Modify a provider in an existing provider configuration
Modify custom service parameter in descriptor
Modify GCS Bucket Permissions
Modify interpreter settings
Modifying a collection configuration generated using an instance directory
Modifying a Kafka topic
Modifying Cloudera Manager Server database configuration file
Modifying Impala Startup Options
Modifying the workflow file manually
Monitor cluster health with ksck
Monitor EC Metrics
Monitor RegionServer grouping
Monitor the BlockCache
Monitor the performance of hedged reads
Monitor your Cluster from the Streams Messaging Manager UI
Monitor your Cluster from the Streams Messaging Manager UI
Monitoring
Monitoring and Debugging Spark Applications
Monitoring and metrics
Monitoring Apache Impala
Monitoring Apache Kudu
Monitoring checkpoint latency for cluster replication
Monitoring compaction health in Cloudera Manager
Monitoring end to end latency for Kafka topic
Monitoring end-to-end latency
Monitoring heap memory usage
Monitoring Kafka
Monitoring Kafka brokers
Monitoring Kafka cluster replications (Streams Replication Manager)
Monitoring Kafka cluster replications by quick ranges
Monitoring Kafka clusters
Monitoring Kafka consumers
Monitoring Kafka producers
Monitoring Kafka topics
Monitoring lineage information
Monitoring log size information
Monitoring replication latency for cluster replication
Monitoring replication throughput and latency by values
Monitoring Replication with Streams Messaging Manager
Monitoring status of the clusters to be replicated
Monitoring throughput for cluster replication
Monitoring topics to be replicated
More Resources
Morphline commands overview
Move HBase Master Role to another host
Moving a NameNode to a different host using Cloudera Manager
Moving highly available NameNode, failover controller, and JournalNode roles using the Migrate Roles wizard
Moving NameNode roles
Moving the Hue service to a different host
Moving the JournalNode edits directory for a role group using Cloudera Manager
Moving the JournalNode edits directory for a role instance using Cloudera Manager
Moving the Oozie service to a different host
MQTT Source
Multi Protocol Access operations using AWS Client
Multi Protocol Aware System overview
Multi-Raft configuration for efficient write performances
Multi-row transactions
Multi-server LDAP/AD authentication
Multilevel partitioning
Multipart upload
Multiple NameNodes configurations
Multiple NameNodes overview
MySQL: 1040, 'Too many connections' exception
NameNode architecture
NameNodes
NameNodes
Namespace quota considerations
Navigator Encrypt
Navigator Encrypt
Navigator Encrypt
Navigator Encrypt
Navigator Encrypt Overview
NDV function
Network and I/O threads
Networking and Security Requirements
Networking Considerations for Virtual Private Clusters
Networking parameters
New topic and consumer group discovery
Nginx configuration for Prometheus
Nginx installation
Nginx proxy configuration over Prometheus
NiFi lineage
NiFi metadata collection
NiFi record-based Processors and Controller Services
NiFi Registry TLS/SSL properties
NiFi TLS/SSL properties
Node blacklisting in Impala
Node maintenance
Non-covering range partitions
Non-unique primary key index
Notes about replication
NTILE
Number-of-Regions Quotas
Number-of-Tables Quotas
OAuth2 authentication
Object Store operations using AWS client
OBS as Pure Object Store
Obtain and Deploy Keys and Certificates for TLS/SSL
Obtaining client to Ozone through session
Obtaining resources to Ozone
Off-heap BucketCache
Offloading Application Logs to Ozone
OFFSET clause
Offsets Subcommand
OM decommissioning
OMDBInsights
On-demand Metadata
On-demand Metadata
Oozie
Oozie
Oozie
Oozie
Oozie
Oozie and client configurations
Oozie configurations with Cloudera services
Oozie database configurations
Oozie High Availability
Oozie Java-based actions with Java 17
Oozie Load Balancer configuration
Oozie scheduling examples
Oozie security enhancements
Open approach for passing token
Open Data Lakehouse
Opening Ranger in Cloudera Data Hub
OpenJPA upgrade
Operating System Requirements
Operating system requirements
Operational Database
Operational Database
Operational Database Overview
Operational Database overview
Operational Database powered by Apache Accumulo Overview
Operational Database powered by Apache Accumulo Reference
Operators
Optimize mountable HDFS
Optimize performance for evaluating SQL predicates
Optimize SQL query
Optimizer hints
Optimizing data storage
Optimizing HBase I/O
Optimizing NameNode disk space with Hadoop archives
Optimizing performance
Optimizing Performance for HDFS Transparent Encryption
Optimizing queries using partition pruning
Optimizing S3A read performance for different file types
Option 1
Option 2
Option 3
Options to determine differences between contents of snapshots
Options to monitor compactions
Options to monitor transaction locks
Options to monitor transactions
Options to rerun Oozie workflows in Hue
Options to restart the Hue service
Oracle TCPS
ORC file format
ORC vs Parquet formats
Orchestrate a rolling restart with no downtime
ORDER BY clause
Other known issues
OVER
Overriding custom keystore alias on a Ranger KMS Server
Overview
Overview
Overview
Overview
Overview
Overview
Overview
Overview
Overview of Hadoop archives
Overview of HDFS
Overview of Oozie
Overview of proxy usage and load balancing for Search
Overview of Storage Container Manager in High Availability
Overview of the Ozone Manager in High Availability
Overview page
Ozone
Ozone
Ozone
Ozone architecture
Ozone Cloudera Replication Manager overview
Ozone Cloudera Replication Manager throttling of tasks
Ozone configuration options to work with Cloudera components
Ozone containers
Ozone FS namespace optimization with prefix
Ozone Manager nodes in High Availability
Ozone OMDBInsights
Ozone Placement Policy
Ozone Ranger Integration
Ozone Ranger policy
Ozone recon heatmap
Ozone recon heatmap
Ozone S3 Multitenancy overview (Technical Preview)
Ozone security architecture
Ozone topology awareness
Ozone trash overview
Ozone volume scanner
Package management tools
Packaging different versions of libraries with an Apache Spark application
PAM authentication
Parameters to configure the Disk Balancer
Parquet
Parquet
Partition configuration
Partition evolution feature
Partition pruning
Partition Pruning for Queries
Partition refresh and configuration
Partition transform feature
Partitioning
Partitioning
Partitioning examples
Partitioning for Kudu Tables
Partitioning guidelines
Partitioning limitations
Partitioning limitations
Partitioning tables
Partitions
Partitions and performance
Pausing a Cluster in AWS
PERCENT_RANK
Perform a backup of the HDFS metadata
Perform a disk hot swap for DataNodes using Cloudera Manager
Perform ETL by ingesting data from Kafka into Hive
Perform master hostname changes
Perform scans using HBase Shell
Perform the recovery
Perform the removal
Performance and Scalability
Performance and storage considerations for Spark SQL DROP TABLE PURGE
Performance Best Practices
Performance comparison between Cloudera Manager and Prometheus
Performance considerations
Performance Considerations
Performance considerations for UDFs
Performance Impact of Encryption
Performance improvement using partitions
Performance issues
Performance Trade Offs
Performance tuning
Performance tuning
Performance tuning for Ozone
Performant .NET producer
Performing Bucket Layout operations in Apache Ozone using CLI
Periodically rebuilding a materialized view
Phoenix
Phoenix
Phoenix
Phoenix
Phoenix is FIPS compliant
Phoenix-Spark connector usage examples
Physical backups of an entire node
Pillars of Security
Pipelines page
Placement Policy for Erasure Coded Containers
Placement Policy for Ratis Containers
Placement rule policies
Placing Ozone DataNodes in offline mode
Plan the data movement across disks
Planner changes for CPU usage
Planning for Apache Impala
Planning for Apache Kafka
Planning for Apache Kudu
Planning for Infra Solr
Planning for Streams Replication Manager
Planning overview
Platform and OS
Pluggable authentication modules in HiveServer
Populating an HBase Table
Ports
Ports Used by Cloudera Manager
Ports Used by Cloudera Runtime Components
Ports Used by DistCp
Ports Used by Impala
Ports Used by Third-Party Components
POST /admin/audits/ API
Post-migration verification
Predefined access policies for Schema Registry
Predicate push-down optimization
Preloaded resource-based services and policies
Prepare for master hostname changes
Prepare for removal
Prepare for the recovery
Prepare to back up the HDFS metadata
Preparing a thrift server and client
Preparing for Encryption Using Cloudera Navigator Encrypt
Preparing Ozone for upgrade
Preparing the hardware resources for HDFS High Availability
Prerequisites
Prerequisites
Prerequisites
Prerequisites
Prerequisites
Prerequisites
Prerequisites and Assumptions
Prerequisites and limitations for using Iceberg in Spark
Prerequisites for configuring short-circuit local reads
Prerequisites for configuring SQL AI Assistant
Prerequisites for configuring TLS/SSL for Oozie
Prerequisites for enabling erasure coding
Prerequisites for enabling HDFS HA using
Prerequisites for HDFS lineage extraction
Prerequisites for installing Atlas
Prerequisites for Prometheus configuration
Prerequisites for setting up Atlas HA
Prerequisites for using FIPS
Prerequisites to configure TLS/SSL for HBase
Prerequisites to configure TLS/SSL for HBase
Prerequisites to enable S3 Multitenancy
Preventing inadvertent deletion of directories
Primary key design
Primary key index
Principal name mapping
Principal name mapping
Production Installation
Prometheus configuration for Streams Messaging Manager
Prometheus for Streams Messaging Manager limitations
Prometheus properties configuration
Propagating classifications through lineage
Propagation of tags as deferred actions
Properties for configuring centralized caching
Properties for configuring short-circuit local reads on HDFS
Properties for configuring the Balancer
Properties to set the size of the NameNode edits directory
Protocol between consumer and broker
Provide user permissions
Providing read-only access to Queue Manager UI
Providing the Hive password through a file
Providing the Hive password through a file
Providing the Hive password through a prompt
Providing the Hive password through a prompt
Providing the Hive password through an alias
Providing the Hive password through an alias
Providing the Hive password through an alias in a file
Providing the Hive password through an alias in a file
Proxied RPCs in Kudu
Proxy Cloudera Manager through Apache Knox
Public key and secret storage
Purging deleted entities
Purposely using a stale materialized view
PUT /admin/purge/ API
Query an existing Kudu table from Impala
Query fails with "Counters limit exceeded" error message
Query Join Performance
Query metadata tables feature
Query options
Query Process fails to start intermittently due to access issues in Java 9 and later
Query results cache
Query sample data
Query scheduling
Query vectorization
Query vectorization properties
Querying a schema
Querying arrays
Querying correlated data
Querying data in an Iceberg table
Querying existing HBase tables
Querying files into a DataFrame
Querying Kafka data
Querying live data from Kafka
Querying the information_schema database
Queue ACLs
Quick Start Deployment for a Streams Cluster
Quota enforcement
Quota violation policies
Quotas
Rack awareness
Rack awareness (Location awareness)
Range partitioning
Range partitioning
Range-specific hash schemas example: Using impala-shell
Range-specific hash schemas example: Using Kudu C++ client API
Range-specific hash schemas example: Using Kudu Java client API
Ranger
Ranger
Ranger
Ranger
Ranger
Ranger
Ranger access conditions
Ranger AD Integration
Ranger Admin Metrics API
Ranger API Overview
Ranger Audit Filters
Ranger audit log event summarization
Ranger audit schema reference
Ranger console navigation
Ranger database schema reference
Ranger Hive Plugin
Ranger integration
Ranger Kafka Plugin
Ranger KMS
Ranger KMS
Ranger KMS
Ranger KMS
Ranger KMS
Ranger KMS overview
Ranger plugin overview
Ranger policies allowing create privilege for Hadoop_SQL databases
Ranger policies allowing create privilege for Hadoop_SQL tables
Ranger policies for Kudu
Ranger Policies Overview
Ranger REST API documentation
Ranger RMS (Hive-HDFS ACL-Sync) Use Cases
Ranger RMS Assumptions and Limitations
Ranger RMS field issues - HDFS latency
Ranger Security Zones
Ranger special entities
Ranger tag-based policies
Ranger UI authentication
Ranger UI authorization
Ranger Usersync
Ranger-HBase Plugin
RANK
RATIS/THREE Data
Re-encrypting an EDEK
Re-encrypting Encrypted Data Encryption Keys (EDEKs)
Re-encrypting secrets
Read access
Read and write operations
Read and write requests with Ozone Manager in High Availability
Read operations (scans)
Read replica properties
Read the Events
Read the Events
Reading and writing Hive tables in R
Reading and writing Hive tables in Zeppelin
Reading data from HBase
Reading data through HWC
Reading Hive ORC tables
Reads (scans)
REAL data type
Reassigning replicas between log directories
Rebalance after adding Kafka broker
Rebalance after demoting Kafka broker
Rebalance after removing Kafka broker
Rebalancing partitions
Rebalancing with Cruise Control
Rebuild a Kudu filesystem layout
Recommendations for client development
Recommended configurations
Recommended configurations for the Balancer
Recommended configurations for the balancer
Recommended deployment architecture
Recommended Hive configurations when using Ozone
Recommissioning an Ozone DataNode
Recommissioning Kudu masters through Cloudera Manager
Record management
Record order and assignment
Record User Data Paths
Records
Recover data from a snapshot
Recover from a dead Kudu master
Recover from disk failure
Recover from full disks
Redeploying the Oozie ShareLib
Redeploying the Oozie sharelib using Cloudera Manager
Reducing the Size of Data Structures
Refer to a table using dot notation
Reference architecture
Referencing S3 Data in Applications
Referer checking failed
REFRESH AUTHORIZATION statement
REFRESH FUNCTIONS statement
REFRESH statement
Registering a Lily HBase Indexer Configuration with the Lily HBase Indexer Service
Registering and querying a schema for a Kafka topic
Registering Cloudera Navigator Encrypt
Registering the UDF
Reinstall Apache Zeppelin in 7.3.1
Relax WAL durability
Release notes
Reloading, viewing, and filtering functions
Remote Querying
Remote topic discovery
Remove a DataNode
Remove a provider parameter in an existing provider configuration
Remove a RegionServer from RegionServer grouping
Remove Cloudera Manager, User Data, and Databases
Remove custom service parameter from descriptor
Remove Kudu masters through CLI
Remove or add storage directories for NameNode data directories
Remove storage directories using Cloudera Manager
Removing Accumulo 1.1.0
Removing Accumulo 1.10.3
Removing and updating Accumulo parcels
Removing and updating Accumulo parcels
Removing Kudu masters through Cloudera Manager
Removing Ozone DataNodes from the cluster
Removing Query Processor service from cluster
Removing scratch directories
Renaming a service
Renew and Redistribute Certificates
Reordering placement rules
Repairing partitions manually using MSCK repair
Replace a disk on a DataNode host
Replace a ZooKeeper disk
Replace a ZooKeeper role on an unmanaged cluster
Replace a ZooKeeper role with ZooKeeper service downtime
Replace a ZooKeeper role without ZooKeeper service downtime
Replicate container commands
Replicate pre-exist data in an active-active deployment
Replicating Data
Replication
Replication across three or more clusters
Replication caveats
Replication flows and replication policies
Replication requirements
Report Kudu crashes using breakpad
Repository Configuration Files
Repository configuration files
Request a timeline-consistent read
Required Databases
Requirements and recommendations
Requirements for compressing and extracting files using Hue File Browser
Requirements for Oozie High Availability
Resetting Hue user password
Resolving "The user authorized on the connection does not match the session username" error
Resource allocation overview
Resource distribution workflow
Resource Planning for Data at Rest Encryption
Resource scheduling and management
Resource Tuning Example
Resource-based Services and Policies
REST API
REST endpoints supported on Ozone S3 Gateway
Restarting a Cloudera Runtime Service
Restore a snapshot
Restore data from a replica
Restore HDFS metadata from a backup using Cloudera Manager
Restore tables from backups
Restoring a collection
Restoring NameNode metadata
Restricting access to Kafka metadata in Zookeeper
Restricting classifications based on user permission
Restricting supported ciphers for Hue
Restricting user login
Retries
Retrieving log directory replica assignment information
Retrieving the clusterstate.json file
Reuse the subnets created for Cloudera
Reuse the subnets created for Cloudera
Revalidating Dynamic Configurations
Review Changes
REVOKE ROLE statement
REVOKE statement
RHEL
ROLE statements in Impala integrated with Ranger
Roll Over an Existing Key
Rollback table feature
Rolling Encryption Keys
Rolling Restart
Rolling restart checks
Rotate Auto-TLS Certificate Authority and Host Certificates
Rotate the master key/secret
Rotating Ranger KMS access log files
Row-level filtering and column masking in Hive
Row-level filtering in Hive with Ranger policies
Row-level filtering in Impala with Ranger policies
ROW_NUMBER
RPC timeout traces
Rule configurations
Run a Hive command
Run a tablet rebalancing tool in Cloudera Manager
Run a tablet rebalancing tool in command line
Run a tablet rebalancing tool on a rack-aware cluster
Run stored procedure from Hue
Run the Cloudera Manager Server Installer
Run the Cloudera Manager Server Installer
Run the Cloudera Manager Server Installer
Run the Disk Balancer plan
Run the spark-submit job
Run the tablet rebalancing tool
Running a MapReduce Job
Running a query
Running a Spark MLlib example
Running an interactive session with the Livy REST API
Running Apache Spark Applications
Running Commands and SQL Statements in Impala Shell
Running Dockerized Applications on YARN
Running HDFS lineage commands
Running PySpark in a virtual environment
Running sample Spark applications
Running shell commands
Running Spark 3.4 Applications
Running Spark applications on secure clusters
Running Spark applications on YARN
Running Spark Python applications
Running the balancer
Running the HBaseMapReduceIndexerTool
Running the HBCK2 tool
Running time travel queries
Running YARN Services
Running your first Spark application
Runtime environment for UDFs
Runtime error: Could not create thread: Resource temporarily unavailable (error 11)
Runtime Filtering
S3 Performance Checklist
S3 Sink
S3A and Checksums (Advanced Feature)
Safely Writing to S3 Through the S3A Committers
SAML properties
Sample pom.xml file for Spark Streaming with Kafka
SAN Certificates
Saving a YARN service definition
Saving aliases
Saving searches
Saving the password to Hive Metastore
Saving the password to Hive Metastore
Scalability Considerations
Scaling Kafka brokers
Scaling Kudu
Scaling Limits and Guidelines
Scaling recommendations and limitations
Scaling recommendations and limitations
Scheduler performance improvements
Scheduling among queues
Scheduling in Oozie using cron-like syntax
Schema alterations
Schema design limitations
Schema design limitations
Schema entities
Schema evolution feature
Schema inference feature
Schema objects
Schema Registry
Schema Registry
Schema Registry
Schema Registry actions that produce Atlas entities
Schema Registry audit entries
Schema Registry authentication through OAuth2 JWT tokens
Schema Registry authorization through Ranger access policies
Schema Registry component architecture
Schema Registry concepts
Schema Registry metadata collection
Schema Registry Overview
Schema Registry overview
Schema Registry Reference
Schema Registry server configuration
Schema Registry TLS properties
Schema Registry use cases
Schema relationships
Schemaless mode overview and best practices
SchemaRegistryClient properties reference
SCM decommissioning
Script with HBase Shell
SDX
Search and other Cloudera Runtime components
Search Ranger reports
Search Tutorial
Searching applications
Searching by topic name
Searching for entities using Business Metadata attributes
Searching for entities using classifications
Searching Kafka cluster replications by source
Searching metadata tags
Searching using terms
Searching with Metadata
Secondary Sort
Secure access mode introduction
Secure by Design
Secure options to provide Hive password during a Sqoop import
Secure options to provide Hive password during a Sqoop import
Secure Prometheus for Streams Messaging Manager
Secure Your Cluster
Securing access to Hadoop cluster: Apache Knox
Securing an endpoint under AutoTLS
Securing Apache Hive
Securing Apache Impala
Securing Apache Kafka
Securing Atlas
Securing Atlas
Securing Cloudera Search
Securing configs with ZooKeeper ACLs and Ranger
Securing Cruise Control
Securing database connections with TLS/SSL
Securing database connections with TLS/SSL
Securing DataNodes
Securing Hive metastore
Securing HiveServer using LDAP
Securing Hue
Securing Hue from CWE-16
Securing Hue passwords with scripts
Securing Impala
Securing Kafka Connect
Securing KRaft
Securing Schema Registry
Securing sessions
Securing Streams Messaging Manager
Securing Streams Messaging Manager
Securing Streams Replication Manager
Securing the Key Management System (KMS)
Securing the S3A Committers
Security considerations for UDFs
Security examples
Security examples
Security Levels
Security Management Model
Security Model and Operations on S3
Security overview
Security Terms
Security tokens in Ozone
Security Zones Administration
Security Zones Example Use Cases
Select Iceberg data feature
Select Services
SELECT statement
Select the repository strategy
Selecting an Iceberg table
Server management limitations
Server management limitations
Service Dependencies in Cloudera Manager
Services backed by PostgreSQL fail or stop responding
Services that support client RPMs for CDP Private Cloud Base 7.3.1
Set HADOOP_CONF to the destination cluster
Set HDFS quotas
Set properties in Cloudera Manager
Set proxy server authentication for clusters using Kerberos
SET statement
Set Up a Cluster Using the Wizard
Set Up a Cluster Using the Wizard
Set up a PostgreSQL database
Set up a storage policy for HDFS
Set Up a Streaming Cluster
Set Up a Streaming Cluster
Set Up Access to Cloudera EDH (Microsoft Azure Marketplace)
Set up an Oracle database
Set up Luna 10.5 HSM Client for Ranger KMS
Set up Luna 7 HSM for Ranger KMS
Set up MariaDB or MySQL database
Set up MirrorMaker in Cloudera Manager
Set up SQL AI Assistant
Set up SSD storage using Cloudera Manager
Set up WebHDFS on a secure cluster
Setting a default partition expression
Setting a Schema Registry ID range
Setting Application-Master resource-limit for a specific queue
Setting capacity estimations and goals
Setting capacity using mixed resource allocation mode (Technical Preview)
Setting consumer and producer table properties
Setting credentials for Ranger Usersync custom keystore
Setting default Application Master resource limit
Setting default credentials using Cloudera Manager
Setting file system credentials through hadoop properties
Setting global application limits
Setting global maximum application priority
Setting HDFS quotas in Cloudera Manager
Setting Java system properties for Solr
Setting Maximum Application limit for a specific queue
Setting Maximum Parallel Application
Setting maximum parallel application limits
Setting maximum parallel application limits for a specific queue
Setting Oozie permissions
Setting ordering policies within a specific queue
Setting Python path variables for Livy
Setting queue priorities
Setting schema access strategy in NiFi
Setting SELinux Mode
Setting the cache timeout
Setting the Idle Query and Idle Session Timeouts
Setting the Oozie database timezone
Setting the secure storage password as an environment variable
Setting the Solr Critical State Cores Percentage parameter
Setting the Solr Recovering Cores Percentage parameter
Setting the trash interval
Setting Timeout and Retries for Thrift Connections to Backend Client
Setting Timeouts in Impala
Setting up a Cloudera Data Warehouse client
Setting up a Hive client
Setting up a Hue service account with a custom name
Setting up a JDBC URL connection override
Setting up and configuring the ABFS connector
Setting up Atlas High Availability
Setting up Atlas Kafka import tool
Setting up basic authentication with TLS for Prometheus
Setting Up Data at Rest Encryption for HDFS
Setting up Data Cache for Remote Reads
Setting up Data Cache for Remote Reads
Setting Up HDFS Caching
Setting up HWC with build systems
Setting up JdbcStorageHandler for Postgres
Setting up Kafka Connect
Setting up mTLS for Prometheus
Setting up o3fs
Setting up ofs
Setting up Prometheus for Streams Messaging Manager
Setting up secure access mode
Setting Up Sqoop
Setting Up Sqoop
Setting up the backend Hive metastore database
Setting up the certificate in Cloudera Manager
Setting up the cost-based optimizer and statistics
Setting up the development environment
Setting up the metastore database
Setting up TLS for Prometheus
Setting user limits for HBase
Setting user limits for Kafka
Setting user limits within a queue
Settings to avoid data loss
Setup database
Setup for SASL with Kerberos
Setup for TLS/SSL encryption
SFTP Source
Shell action for Spark 3
Shell commands
Shiro Settings: Reference
shiro.ini Example
SHOW CURRENT ROLES statement
SHOW MATERIALIZED VIEWS
SHOW ROLE GRANT GROUP statement
SHOW ROLES statement
SHOW statement
Showing Atlas Server status
Showing materialized views
Showing Role|Grant definitions from Ranger HiveAuthorizer
Shut Down Impala
SHUTDOWN statement
Shutting Down and Starting Up the Cluster
Simple .NET consumer
Simple .NET consumer using Schema Registry
Simple .NET producer
Simple .NET producer using Schema Registry
Simple Java consumer
Simple Java producer
Single Message Transforms
Single tablet write operations
Size the BlockCache
Sizing estimation based on network and disk message throughput
Sizing NameNode heap memory
SLES
Slow name resolution and nscd
SMALLINT data type
SMM
SMM
SMM
Snapshot management
Snapshot support in Ozone
Solr
Solr
Solr
Solr
Solr and HDFS - the block cache
Solr server tuning categories
solrctl Reference
Solution
Solution
Solution
Space quotas
Spark
Spark
Spark
Spark
Spark
Spark 2
Spark 3 compatibility action executor
Spark 3 examples with Python or Java application
Spark 3 Oozie action schema
Spark 3 support in Oozie
Spark actions that produce Atlas entities
Spark application model
Spark audit entries
Spark cluster execution overview
Spark Connector configuration in Apache Atlas
Spark Dynamic Partition overwriting
Spark Dynamic Partition overwriting
Spark entities created in Apache Atlas
Spark entity metadata migration
Spark execution model
Spark indexing using morphlines
Spark integration best practices
Spark integration known issues and limitations
Spark integration limitations
Spark integration with Hive
Spark Job ACLs
Spark jobs failing with memory issues
Spark lineage
Spark metadata collection
Spark on YARN deployment modes
Spark relationships
Spark security
Spark SQL example
Spark Streaming and Dynamic Allocation
Spark Streaming Example
Spark troubleshooting
Spark tuning
spark-submit command options
Specify the JDBC connection string
Specify truststore properties
Specifying domains or pages to which Hue can redirect users
Specifying HTTP request methods
Specifying Impala Credentials to Access S3
Specifying racks for hosts
Specifying trusted users
Speeding up Job Commits by Increasing the Number of Threads
Splitting a shard on HDFS
Spooling Query Results
SQL migration to Impala
SQL statements
SQLContext and HiveContext
Sqoop
Sqoop enhancements to the Hive import process
Sqoop enhancements to the Hive import process
Sqoop Hive import stops when HS2 does not use Kerberos authentication
SRM
SRM
SRM
srm-control
srm-control Options Reference
SSE-C: Server-Side Encryption with Customer-Provided Encryption Keys
SSE-KMS: Amazon S3-KMS Managed Encryption Keys
SSE-S3: Amazon S3-Managed Encryption Keys
Standard stream logs
Start and stop Kudu processes
Start and stop the NFS Gateway services
Start HBase
Start Hive on an insecure cluster
Start Hive using a password
Start Prometheus
Start Queue
Start SQL AI Assistant
Start the NFS Gateway services
Starting a Cloudera Runtime service on all hosts
Starting and Stopping Apache Impala
Starting and stopping HBase using Cloudera Manager
Starting and stopping queues
Starting Apache Hive
Starting compaction manually
Starting the Embedded PostgreSQL Database
Starting the Lily HBase NRT indexer service
Starting the Oozie server
Stateless NiFi Source and Sink
Statistics generation and viewing commands
STDDEV, STDDEV_SAMP, STDDEV_POP functions
Step 1: Install Cloudera Manager and Cloudera
Step 1: Recover files appended during ZDU
Step 1: Worker host configuration
Step 2: Create the Kerberos Principal for Cloudera Manager Server
Step 2: Recover previous files Hsync'ed during ZDU
Step 2: Worker host planning
Step 3: Cluster size
Step 3: Enable Kerberos using the wizard
Step 3: Recover open files in corrupt state
Step 4: Create the HDFS superuser
Step 5: Get or create a Kerberos principal for each user account
Step 6: Prepare the cluster for each user
Step 6: Verify container settings on cluster
Step 6A: Cluster container capacity
Step 6B: Container parameters checking
Step 7: MapReduce configuration
Step 7: Verify that Kerberos security is working
Step 7A: MapReduce settings checking
Step 8: (Optional) Enable authentication for HTTP web consoles for Hadoop roles
Steps 4 and 5: Verify settings
Stop all Services
Stop HBase
Stop Queue
Stop replication in an emergency
Stop the NFS Gateway services
Stopping a Cloudera Runtime Service on All Hosts
Stopping the Embedded PostgreSQL Database
Stopping the Oozie server
Storage
Storage Container Manager operations in High Availability
Storage group classification
Storage group pairing
Storage reduction for Atlas
Storage Space level quota considerations
Storage Systems Supported
Stored procedure examples
Storing Data Using Ozone
Storing medium objects (MOBs)
Streams Messaging
Streams Messaging
Streams Messaging Manager Overview
Streams Messaging Manager property configuration in Cloudera Manager for Prometheus
Streams Replication Manager Architecture
Streams Replication Manager Command Line Tools
Streams Replication Manager Driver
Streams Replication Manager Overview
Streams Replication Manager Reference
Streams Replication Manager requirements
Streams Replication Manager security example
Streams Replication Manager Service
Streams Replication Manager Service data traffic reference
Stretch cluster reference architecture
STRING data type
STRUCT complex type
Submitting a Python app
Submitting a Scala or Java application
Submitting batch applications using the Livy REST API
Submitting Spark applications
Submitting Spark Applications to YARN
Submitting Spark applications using Livy
Subqueries in Impala SELECT statements
Subquery restrictions
Subscribing to a topic
SUM
SUM function
Summary
Summary
Support for On-Demand lineage
Support for packages
Support for validating the AttributeName in parent and child TypeDef
Supported HDFS entities and their hierarchies
Supported non-ASCII and special characters in Hue
Supported operators
Symbolizing stack traces
Synchronize table data using HashTable/SyncTable tool
Synchronizing the contents of JournalNodes
Syslog TCP Source
Syslog UDP Source
System Level Broker Tuning
System metadata migration
System requirements
System Requirements
System Requirements for POC Streams Cluster
System Requirements for POC Streams Cluster
Table and Column Statistics
TABLESAMPLE clause
Tablet history garbage collection and the ancient history mark
Tag-based Services and Policies
Tags and policy evaluation
Take a snapshot using a shell script
Task architecture and load-balancing
TaskController Error Codes (MRv1)
Tenant Commands
Terminating Hive queries
Terms
Terms and concepts
Test driving Iceberg from Hive
Test driving Iceberg from Impala
Test MOB storage and retrieval performance
Testing the Installation
Testing the LDAP configuration
Testing with Hue
Text-editor for Atlas parameters
Tez
The Cloud Storage Connectors
The HDFS mover command
The Hue load balancer not distributing users evenly across various Hue servers
The Kafka Connect UI
The perfect schema
The S3A Committers and Third-Party Object Stores
Third-party filesystems
Thread Tuning for S3A Data Upload
Threads
Thrift Server crashes after receiving invalid data
Throttle quota examples
Throttle quotas
Time travel feature
Timeline consistency
TIMESTAMP compatibility for Parquet files
TIMESTAMP data type
TINYINT data type
TLS certificate requirements and recommendations
TLS encryption for Schema Registry
TLS Mutual Authentication
TLS/SSL client authentication
TLS/SSL client authentication
TLS/SSL Issues
TLS/SSL settings for Streams Messaging Manager
Token configurations
Token-based authentication for Cloudera Data Warehouse integrations
Tombstoned or STOPPED tablet replicas
Top-down process for adding a new metadata source
Topics
Topics and Groups Subcommand
Topology hierarchy
Tracking an Apache Hive query in YARN
Tracking Hive on Tez query execution
Transactional table access
Transactions
Transactions
Transparent Encryption Recommendations for HBase
Transparent Encryption Recommendations for Hive
Transparent Encryption Recommendations for Hue
Transparent Encryption Recommendations for Impala
Transparent Encryption Recommendations for MapReduce and YARN
Transparent Encryption Recommendations for Search
Transparent Encryption Recommendations for Spark
Transparent Encryption Recommendations for Sqoop
Transparent query retries in Impala
Trash behavior with HDFS Transparent Encryption enabled
Trial Installation
Triggering HDFS audit files rollover
Troubleshoot RegionServer grouping
Troubleshooting
Troubleshooting ABFS
Troubleshooting Apache Atlas
Troubleshooting Apache Hadoop YARN
Troubleshooting Apache HBase
Troubleshooting Apache HDFS
Troubleshooting Apache Hive
Troubleshooting Apache Impala
Troubleshooting Apache Kudu
Troubleshooting Apache Spark
Troubleshooting Apache Sqoop
Troubleshooting Cloudera Search
Troubleshooting common issues in Impala
Troubleshooting Docker on YARN
Troubleshooting for mixed resource allocation mode in YARN Queue Manager
Troubleshooting HBase
Troubleshooting Hue
Troubleshooting Installation Problems
Troubleshooting Linux Container Executor
Troubleshooting NTP stability problems
Troubleshooting on YARN
Troubleshooting Operational Database powered by Apache Accumulo
Troubleshooting Prometheus for Streams Messaging Manager
Troubleshooting S3
Troubleshooting SAML authentication
Troubleshooting Schema Registry
Troubleshooting Security Issues
Troubleshooting Security Issues
Troubleshooting the S3A Committers
Truncate table feature
TRUNCATE TABLE statement
Tuning Apache Hadoop YARN
Tuning Apache Impala
Tuning Apache Kafka Performance
Tuning Apache Spark
Tuning Apache Spark Applications
Tuning Cloudera Search
Tuning garbage collection
Tuning Hue
Tuning replication
Tuning Resource Allocation
Tuning S3A Uploads
Tuning Spark Shuffle Operations
Tuning the metastore
Tuning the Number of Partitions
Turning safe mode on HA NameNodes
Tutorial
Tutorial: developing and deploying a JDBC Source dataflow
Ubuntu
UDF concepts
UI Tools
Unable to access Hue after upgrading
Unable to access Hue from Knox Gateway UI
Unable to authenticate users in Hue using SAML
Unable to connect Oracle database to Hue using SCAN
Unable to connect to database with provided credential
Unable to execute queries due to atomic block
Unable to log into Hue with Knox
Unable to read Sqoop metastore created by an older HSQLDB version
Unable to run the freeze command
Unable to terminate Hive queries from Job Browser
Unable to use pip command in Cloudera
Unable to view or create Oozie workflows
Unable to view Snappy-compressed files
Unaffected Components in this release
Understand Query Performance
Understanding --go-live and HDFS ACLs
Understanding co-located and external clusters
Understanding CREATE TABLE behavior
Understanding CREATE TABLE behavior
Understanding erasure coding policies
Understanding HBase garbage collection
Understanding Hue users and groups
Understanding Impala integration with Kudu
Understanding Keystores and Truststores
Understanding Package Management
Understanding Performance using EXPLAIN Plan
Understanding Performance using Query Profile
Understanding Performance using SUMMARY Report
Understanding quota
Understanding Ranger policies with RMS
Understanding Streams Replication Manager properties, their configuration and hierarchy
Understanding the data that flow into Atlas
Understanding the extractHBaseCells Morphline Command
Understanding the extractHBaseCells Morphline Command
Understanding the kafka-run-class Bash Script
Understanding the Phoenix JDBC URL
Understanding YARN architecture
Under-replicated block exceptions or cluster failure occurs on small clusters
Uninstall Cloudera Manager Agent and Managed Software
Uninstall the Cloudera Manager Server
Uninstalling a Cloudera Runtime Component From a Single Host
Uninstalling Cloudera Manager and Managed Software
UNION clause
UNION, INTERSECT, and EXCEPT clauses
Unlocking access to Kafka metadata in Zookeeper
Unlocking locked out user accounts in Hue
Unsupported Apache Spark Features
Unsupported command line tools
Unsupported features and limitations
Unsupported features in Hue
Update data
UPDATE statement
Updating a notifier
Updating an alert policy
Updating an Iceberg partition
Updating data in a table
Updating Iceberg table data
Updating the schema in a collection
Updating YARN Queue Manager Database Password
Upgrade from Spark 2.4.7
Upgrade from Spark 2.4.7 (with CDS 3.2.3 and connectors)
Upgrade from Spark 2.4.7 (with CDS 3.2.3)
Upgrade from Spark 2.4.7 (with connectors)
Upgrade from Spark 2.4.8
Upgrade from Spark 2.4.8
Upgrade from Spark 2.4.8 (with CDS 3.3.2)
Upgrade from Spark 2.4.8 (with CDS 3.3.x and connectors)
Upgrade from Spark 2.4.8 (with CDS 3.3.x)
Upgrade from Spark 2.4.8 (with connectors)
Upgrade from Spark 3.2.3 (CDS)
Upgrade from Spark 3.3.2 (CDS)
Upgrade from Spark 3.3.x (CDS)
Upgrading Accumulo from 1.1.0 to 2.1.2
Upgrading Accumulo from 1.10.3 to 2.1.2
Upgrading Apache Spark
Upgrading existing Kudu tables for Hive Metastore integration
Upgrading from 7.1.7
Upgrading from 7.1.8
Upgrading from 7.1.9 SP1
Upgrading Ozone overview
Upgrading Ozone parcels
Upgrading Spark 2 to Spark 3 for Cloudera on premises 7.3.1
Upgrading this feature
Upload a file
Uploading Oozie ShareLib to Ozone
Upsert a row
Upsert option in Kudu Spark
UPSERT statement
Usability issues
Use a CTE in a query
Use a custom MapReduce job
Use advanced LDAP authentication
Use BulkLoad
Use case 1: Use Cloudera Manager to generate internal CA and corresponding certificates
Use case 2: Enabling Auto-TLS with an intermediate CA signed by an existing Root CA
Use case 3: Enabling Auto-TLS with Existing Certificates
Use case architectures
Use cases
Use cases and sample payloads
Use cases for ACLs on HDFS
Use cases for BulkLoad
Use cases for centralized cache management
Use Cgroups
Use cluster names in the kudu command line tool
Use cluster replication
Use CopyTable
Use CPU scheduling
Use CPU scheduling with distributed shell
Use CREATE TABLE AS SELECT
Use curl to access a URL protected by Kerberos HTTP SPNEGO
Use Digest Authentication Provider
Use FPGA scheduling
Use FPGA with distributed shell
Use GPU scheduling
Use GPU scheduling with distributed shell
Use GZipCodec with a one-time job
Use HashTable and SyncTable Tool
Use Local Storage when Upgrading to HDP-2.6.3+
Use multiple ZooKeeper services
Use rsync to copy files from one broker to another
Use Self-Signed Certificates for TLS
Use snapshots
Use Spark
Use Spark 3 actions with a custom Python executable
Use Spark actions with a custom Python executable
Use Spark with a secure Kudu cluster
Use Sqoop
USE statement
Use strongly consistent indexing
Use the Charts Library with the Kudu service
Use the HBase APIs for Java
Use the HBase command-line utilities
Use the HBase REST server
Use the HBase shell
Use the Hue HBase app
Use the JDBC interpreter to access Hive
Use the Livy interpreter to access Spark
Use the Network Time Protocol (NTP) with HBase
Use the YARN REST APIs to manage applications
Use transactions with tables
Use wildcards with SHOW DATABASES
User Account Requirements
User authentication in Hue
User authorization configuration for Oozie
User management in Hue
Using --go-live with SSL or Kerberos
Using a credential provider to secure S3 credentials
Using a custom Kerberos configuration path
Using a custom Kerberos keytab retrieval script
Using a load balancer
Using a load balancer
Using a subquery
Using ABFS using CLI
Using advanced search
Using Apache HBase Backup and Disaster Recovery
Using Apache HBase Hive integration
Using Apache Hive
Using Apache Iceberg
Using Apache Iceberg with Spark
Using Apache Iceberg with Spark
Using Apache Impala with Apache Kudu
Using Apache Phoenix to Store and Access Data
Using Apache Phoenix-Hive connector
Using Apache Phoenix-Spark connector
Using Apache Zeppelin
Using Atlas-Hive import utility with Ozone entities
Using audit aging
Using auth-to-local rules to isolate cluster users
Using Avro Data Files
Using Basic search
Using Breakpad Minidumps for Crash Reporting
Using CLI commands to create and list ACLs
Using common table expressions
Using Configuration Properties to Authenticate
Using constraints
Using custom audit aging
Using custom audit filters
Using custom JAR files with Cloudera Search
Using custom libraries with Spark
Using default audit aging
Using dfs.datanode.max.transfer.threads with HBase
Using Direct Reader mode
Using DistCp
Using DistCp between HA clusters using Cloudera Manager
Using DistCp to copy files
Using DistCp with Amazon S3
Using DistCp with Highly Available remote clusters
Using DNS with HBase
Using EC2 Instance Metadata to Authenticate
Using Environment Variables to Authenticate
Using erasure coding for existing data
Using erasure coding for new data
Using Free-text Search
Using functions
Using governance-based data discovery
Using HBase blocksize
Using HBase coprocessors
Using HBase Hive integration
Using HBase replication
Using HBase scanner heartbeat
Using HDFS snapshots for data protection
Using HdfsFindTool to find files
Using hedged reads
Using Hive Metastore with Apache Kudu
Using Hive Warehouse Connector with Oozie Spark Action
Using HttpFS to provide access to HDFS
Using Hue
Using Hue scripts
Using HWC for streaming
Using Ignore and Prune patterns
Using Impala to query Kudu tables
Using import utility tools with Atlas
Using JDBC API
Using JDBC read mode
Using JdbcStorageHandler to query RDBMS
Using JdbcStorageHandler to query RDBMS
Using JMX for accessing HBase metrics
Using JMX for accessing HDFS metrics
Using Kafka Connect
Using Livy with Spark
Using Load Balancer with HttpFS
Using MapReduce batch indexing to index sample Tweets
Using MariaDB database with Hue
Using metadata for cluster governance
Using Morphlines to index Avro
Using Morphlines with Syslog
Using MySQL database with Hue
Using non-JDBC drivers
Using Oozie with Ozone
Using optimizations from a subquery
Using Oracle database with Hue
Using ORC Data Files
Using Ozone S3 Gateway to work with storage elements
Using Parquet Data Files
Using partitions when submitting a job
Using Per-Bucket Credentials to Authenticate
Using PostgreSQL database with Hue
Using PySpark
Using quota management
Using rack awareness for read replicas
Using Ranger client libraries
Using Ranger to Provide Authorization in CDP
Using Ranger to Provide Authorization in Cloudera
Using Ranger with Ozone
Using RCFile Data Files
Using RegionServer grouping
Using Relationship Search
Using Schema Registry
Using Search filters
Using secondary indexing
Using secure access mode
Using SequenceFile Data Files
Using session cookies to validate Ranger policies
Using solrctl with an HTTP proxy
Using Spark History Servers with high availability
Using Spark Hive Warehouse and HBase Connector Client .jar files with Livy
Using Spark MLlib
Using Spark SQL
Using Spark Streaming
Using SQL to query HBase from Hue
Using Sqoop actions with Oozie
Using Streams Messaging Manager
Using Streams Replication Manager
Using Sweep out configurations
Using tag attributes and values in Ranger tag-based policy conditions
Using Text Data Files
Using the Apache Thrift Proxy API
Using the AvroConverter
Using the AWS CLI with Ozone S3 Gateway
Using the CldrCopyTable utility to copy data
Using the Cloudera Runtime Maven repository 7.3.1
Using the cursor to return record sets
Using the Directory Committer in MapReduce
Using the HBase-Spark connector
Using the HBCK2 tool to remediate HBase clusters
Using the Impala shell
Using the indexer HTTP interface
Using the Lily HBase NRT indexer service
Using the Livy API to run Spark jobs
Using the manifest committer
Using the manifest committer
Using the NFS Gateway for accessing HDFS
Using the Note Toolbar
Using the Phoenix JDBC Driver
Using the Ranger Admin Web UI
Using the Ranger Key Management Service
Using the Rebalance Wizard in Cruise Control
Using the REST API
Using the REST API
Using the REST proxy API
Using the Spark DataFrame API
Using the Spark shell
Using the YARN CLI to view logs for applications
Using the yarn rmadmin tool to administer ResourceManager high availability
Using to manage HDFS HA
Using transactions
Using Unique Filenames to Avoid File Update Inconsistency
Using YARN Web UI and CLI
Using your schema in MariaDB
Using your schema in MS SQL
Using your schema in Oracle
Using your schema in PostgreSQL
Using Zeppelin Interpreters
UTF-8 Support
Validating Hadoop Key Operations
Validating the Cloudera Search deployment
Validations for parent types
VALUES statement
VARCHAR data type
Varchar type
VARIANCE, VARIANCE_SAMP, VARIANCE_POP, VAR_SAMP, VAR_POP functions
Variations on Put
Vectorization default
Verifying use of a query rewrite
Verify that replication works
Verify the DNS configuration
Verify the DNS configuration
Verify the EC Policies
Verify the network line-of-sight
Verify the network line-of-sight
Verify the Related Query Option
Verify the ZooKeeper authentication
Verify validity of the NFS services
Verify your Accumulo installation
Verify your Operational Database installation
Verify your Operational Database installation
Verifying if a memory limit is sufficient
Verifying That an S3A Committer Was Used
Verifying that Indexing Works
Verifying the Impala dependency on Kudu
Verifying the setup
Version and Download Information
Versions
View Ranger reports
View the API documentation
Viewing all applications
Viewing and modifying log levels for Cloudera Search and related services
Viewing and modifying Solr configuration using Cloudera Manager
Viewing application details
Viewing audit details
Viewing audit metrics
Viewing configurations for a Hive query
Viewing DAG information for a Hive query
Viewing existing collections
Viewing explain plan for a Hive query
Viewing Hive query details
Viewing Hive query history
Viewing Hive query information
Viewing Hive query timeline
Viewing Impala profiles in Hue
Viewing Impala query details
Viewing Impala query history
Viewing Impala query information
Viewing Kafka cluster replication details
Viewing lineage
Viewing nodes and node details
Viewing partitions
Viewing queues and queue details
Viewing racks assigned to cluster hosts
Viewing the Cluster Overview
Viewing the Impala query execution plan
Viewing the Impala query metrics
Views
Virtual Clusters on premises and Cloudera SDX
Virtual column
Virtual machine options for HBase Shell
Virtual memory handling
Volume and bucket management using ofs
Web User Interface for Debugging
What is Cloudera on premises?
What is Cloudera Search
What is Open Data Lakehouse?
What's New
What's new in Platform Support
When Shuffles Do Not Occur
When to Add a Shuffle Transformation
When to use Atlas classifications for access control
Whitelisting Configurations at the Session Level
Why HDFS data becomes unbalanced
Why one scheduler?
Wildcards and variables in resource-based policies
WINDOW
WITH clause
Work preserving recovery for YARN components
Working with Amazon S3
Working with an HSM
Working with Apache Hive Metastore
Working with Atlas classifications and labels
Working with Azure ADLS Gen2 storage
Working with Classifications and Labels
Working with Google Cloud Storage
Working with Google Cloud Storage
Working with Ozone File System (o3fs)
Working with Ozone File System (ofs)
Working with S3 buckets in the same AWS region
Working with the ABFS Connector
Working with the Oozie server
Working with the Recon web user interface
Working with Third-party S3-compatible Object Stores
Working with versioned S3 buckets
Working with Zeppelin Notes
Write a few Events into the Topic
Write a few Events into the Topic
Write-ahead log garbage collection
Writes
Writing data in a Kerberos and TLS/SSL enabled cluster
Writing data in an unsecured cluster
Writing data through HWC
Writing data to HBase
Writing data to Kafka
Writing Kafka data to Ozone with Kafka Connect
Writing to multiple tablets
Writing transformed Hive data to Kafka
Writing UDFs
Writing user-defined aggregate functions (UDAFs)
YARN
YARN ACL rules
YARN ACL syntax
YARN ACL types
YARN and YARN Queue Manager
YARN and YARN Queue Manager
YARN Configuration Properties
YARN Features
YARN log aggregation overview
YARN Queue Manager UI behavior in mixed resource allocation mode
YARN Ranger authorization support
YARN Ranger authorization support compatibility matrix
YARN resource allocation of multiple resource-types
YARN ResourceManager high availability
YARN ResourceManager high availability architecture
YARN services API examples
YARN tuning overview
YARN, MRv1, and Linux OS Security
Zeppelin
Zeppelin
Zeppelin Overview
Zipping unnest on arrays from views
ZooKeeper
ZooKeeper
ZooKeeper
ZooKeeper ACLs Best Practices
ZooKeeper ACLs Best Practices: Atlas
ZooKeeper ACLs Best Practices: Cruise Control
ZooKeeper ACLs Best Practices: HBase
ZooKeeper ACLs Best Practices: HDFS
ZooKeeper ACLs Best Practices: Kafka
ZooKeeper ACLs Best Practices: Oozie
ZooKeeper ACLs Best Practices: Ranger
ZooKeeper ACLs Best Practices: Search
ZooKeeper ACLs Best Practices: YARN
ZooKeeper ACLs Best Practices: ZooKeeper
ZooKeeper Authentication
ZooKeeper Configurations
zookeeper-security-migration
Upgrading Spark 2 to Spark 3 for Cloudera on premises 7.3.1
Upgrading from 7.1.9 SP1
Upgrade from Spark 2.4.8
Pre-application migration tasks
Spark application migration (from Spark 2 to Spark 3)
Post-application migration tasks
In-place cluster upgrade
Spark application migration (from Spark 3.x to Spark 3.4.1)
Final steps
Upgrade from Spark 2.4.8 (with CDS 3.3.2)
Application migration tasks from Spark 2 to Spark 3
Post-application migration tasks
In-place cluster upgrade
Spark application migration (from Spark 3.x to Spark 3.4.1)
Final steps
Upgrade from Spark 3.3.2 (CDS)
In-place cluster upgrade
Spark application migration (from Spark 3.x to Spark 3.4.1)
Final steps
Upgrading from 7.1.8
Upgrade from Spark 2.4.8
Pre-application migration tasks
Application migration tasks (Spark 2 to Spark 3)
Post-application migration tasks
In-place cluster upgrade
Application migration tasks (Spark 3.x to Spark 3.4.1)
Final steps
Upgrade from Spark 2.4.8 (with connectors)
Intermediate in-place cluster upgrade
Upgrade from Spark 2.4.8 (with CDS 3.3.x)
Application migration tasks (from Spark 2 to 3)
Post-application migration tasks
In-place cluster upgrade
Application migration tasks (from Spark 3.x to 3.4.1)
Final steps
Upgrade from Spark 2.4.8 (with CDS 3.3.x and connectors)
Intermediate in-place cluster upgrade
Upgrade from Spark 3.3.x (CDS)
In-place cluster upgrade
Application migration tasks (Spark 3.x to Spark 3.4.1)
Final steps
Upgrading from 7.1.7
Upgrade from Spark 2.4.7
Pre-application migration tasks
Application migration tasks (Spark 2 to 3)
Post-application migration tasks
In-place cluster upgrade
Application migration tasks (Spark 3.x to 3.4.1)
Final steps
Upgrade from Spark 2.4.7 (with connectors)
Intermediate in-place cluster upgrade
Upgrade from Spark 2.4.7 (with CDS 3.2.3)
Application migration tasks (Spark 2 to 3)
Post-application migration tasks
In-place cluster upgrade
Application migration tasks (Spark 3.x to 3.4.1)
Final steps
Upgrade from Spark 2.4.7 (with CDS 3.2.3 and connectors)
Intermediate in-place cluster upgrade
Upgrade from Spark 3.2.3 (CDS)
In-place cluster upgrade
Application migration tasks (Spark 3.x to 3.4.1)
Final steps
Migrating Spark applications
Java versions
Scala versions
Python versions
Spark commands
Spark connectors
Logging
Third-party libraries
Spark behavior changes
Apache Spark Migration guides
Spark 2 to Spark 3 workload refactoring
Unsupported features
Post-migration checklist
Benchmark testing
Troubleshooting
Upgrading Apache Spark
Upgrading from 7.1.7
Upgrading Apache Spark 2.4.7 on 7.1.7 SP3 to Spark 3 on 7.3.1
Upgrading Apache Spark 2.4.7 (with connectors) on 7.1.7 SP3 to Spark 3 on 7.3.1
Upgrading Apache Spark 2.4.7 (with CDS 3.2.3) on 7.1.7 SP3 to Spark 3 on 7.3.1
Upgrading Apache Spark 2.4.7 (with CDS 3.2.3 and connectors) on 7.1.7 SP3 to Spark 3 on 7.3.1
Upgrading Apache Spark 3.2.3 on 7.1.7 SP3 to Spark 3 on 7.3.1
Parent topic:
Upgrading Spark 2 to Spark 3 for Cloudera on premises 7.3.1