Cloudera Runtime 7.2.18 (Public Cloud)
Cloudera Runtime
Cloudera Runtime Release Notes
Overview
Cloudera Runtime Component Versions
Using the Cloudera Runtime Maven repository 7.2.18
Runtime 7.2.18.0-641
What's New
Atlas
Cloud Connectors
Cruise Control
HBase
Hive
Hue
Iceberg
Impala
Kafka
Knox
Kudu
Livy
Oozie
Phoenix
Ranger
Schema Registry
Solr
Spark
Sqoop
Streams Messaging Manager
Streams Replication Manager
YARN and YARN Queue Manager
ZooKeeper
Unaffected Components in this release
Fixed Issues In Cloudera Runtime 7.2.18
Atlas
Avro
Cloud Connectors
Cruise Control
Hadoop
HBase
HDFS
Hive
Hive Warehouse Connector
Hue
Impala
Iceberg
Kafka
Knox
Kudu
Livy
Oozie
Phoenix
Parquet
Ranger
Schema Registry
Solr
Spark
Spark3
Sqoop
Streams Messaging Manager
Streams Replication Manager
Tez
YARN
Zeppelin
ZooKeeper
Known Issues In Cloudera Runtime 7.2.18
Atlas
Avro
Cloud Connectors
Cruise Control
HBase
HDFS
Hive
Hue
Iceberg
Impala
Kafka
Knox
Kudu
Oozie
Phoenix
Ranger
Schema Registry
Solr
Spark
Sqoop
Streams Messaging Manager
Streams Replication Manager
YARN
Zeppelin
ZooKeeper
Fixed Common Vulnerabilities and Exposures 7.2.18
Public Cloud Service Pack Releases
Cloudera Runtime 7.2.18.100
Fixed Issues In Cloudera Runtime 7.2.18.100
Cloudera Runtime 7.2.18.200
Fixed Issues in Cloudera Runtime 7.2.18.200
Cloudera Runtime 7.2.18.300
Fixed Issues in Cloudera Runtime 7.2.18.300
Cloudera Runtime 7.2.18.400
What's New In Cloudera Runtime 7.2.18.400
Fixed Issues in Cloudera Runtime 7.2.18.400
Cloudera Runtime 7.2.18.500
What's New In Cloudera Runtime 7.2.18.500
Fixed Issues in Cloudera Runtime 7.2.18.500
Behavioral Changes In Cloudera Runtime 7.2.18.500
Known Issues in Cloudera Runtime 7.2.18.500
Behavioral Changes In Cloudera Runtime 7.2.18
Hive
Kafka
Ranger
Deprecation Notices In Cloudera Runtime 7.2.18
DAS
Kafka
Oozie
Spark 2
Zeppelin
Cloudera Manager Release Notes
Concepts
Storage
HDFS Overview
Introduction
Overview of HDFS
NameNodes
Moving NameNode roles
Moving highly available NameNode, failover controller, and JournalNode roles using the Migrate Roles wizard
Moving a NameNode to a different host using Cloudera Manager
Sizing NameNode heap memory
Environment variables for sizing NameNode heap memory
Monitoring heap memory usage
Files and directories
Disk space versus namespace
Replication
Examples of estimating NameNode heap memory
Remove or add storage directories for NameNode data directories
DataNodes
How NameNode manages blocks on a failed DataNode
Replace a disk on a DataNode host
Remove a DataNode
Fixing block inconsistencies
Add storage directories using Cloudera Manager
Remove storage directories using Cloudera Manager
Configuring storage balancing for DataNodes
Configure storage balancing for DataNodes using Cloudera Manager
Perform a disk hot swap for DataNodes using Cloudera Manager
JournalNodes
Moving the JournalNode edits directory for a role group using Cloudera Manager
Moving the JournalNode edits directory for a role instance using Cloudera Manager
Synchronizing the contents of JournalNodes
Multiple NameNodes overview
Multiple NameNodes configurations
Known issue and its workaround
Adding multiple NameNodes using the HDFS service
Apache HBase Overview
Apache HBase overview
Apache Kudu Overview
Kudu introduction
Kudu architecture in a CDP public cloud deployment
Kudu network architecture
Kudu-Impala integration
Example use cases
Kudu concepts
Apache Kudu usage limitations
Schema design limitations
Partitioning limitations
Scaling recommendations and limitations
Server management limitations
Cluster management limitations
Impala integration limitations
Spark integration limitations
Kudu security limitations
Other known issues
More Resources
Apache Kudu Background Operations
Maintenance manager
Flushing data to disk
Compacting on-disk data
Write-ahead log garbage collection
Tablet history garbage collection and the ancient history mark
Apache Hadoop YARN Overview
Introduction
YARN Features
Understanding YARN architecture
Data Access
Apache Hive Metastore Overview
Apache Hive storage in public clouds
Apache Hive Overview
Apache Hive features in Data Hub
Spark integration with Hive
Hive on Tez introduction
Hive unsupported interfaces and features in public clouds
Apache Hive 3 in Data Hub architectural overview
Apache Hive content roadmap
Apache Iceberg Overview
Iceberg overview
Apache Phoenix Overview
Introduction to Apache Phoenix
Apache Phoenix and SQL
Using secondary indexing
Apache Impala Overview
Apache Impala Overview
Components of Impala
Hue Overview
Hue overview
About Hue Query Processor
About the Hue SQL AI Assistant
Cloudera Search Overview
What is Cloudera Search
How Cloudera Search works
Cloudera Search and CDP
Search and other Runtime components
Cloudera Search architecture
Local file system support
Cloudera Search tasks and processes
Backing up and restoring data
Operational Database
Operational Database Overview
Introduction to Operational Database
Introduction to Apache HBase
Introduction to Apache Phoenix
Apache Phoenix and SQL
Introduction to HBase Multi-cluster Client
Introduction to HBase Multi-cluster Client
HBase MCC Usage with Kerberos
HBase MCC Usage in Spark with Scala
HBase MCC Usage in Spark with Java
ZooKeeper Configurations
HBase MCC Configurations
HBase MCC Restrictions
Data Engineering
Apache Spark Overview
Apache Spark Overview
Unsupported Apache Spark Features
Apache Zeppelin Overview
Overview
CDP Security Overview
Introduction
Importance of a Secure Cluster
Secure by Design
Pillars of Security
Authentication
Authorization
Encryption
Identity Management
Security Management Model
Security Levels
Choosing the Sufficient Security Level for Your Environment
Logical Architecture
SDX
Security Terms
Governance
Governance Overview
Using metadata for cluster governance
Data Stewardship with Apache Atlas
Apache Atlas dashboard tour
Apache Atlas metadata collection overview
Atlas metadata model overview
Controlling Data Access with Tags
Atlas classifications drive Ranger policies
When to use Atlas classifications for access control
How tag-based access control works
Propagation of tags as deferred actions
Examples of controlling data access using classifications
Extending Atlas to Manage Metadata from Additional Sources
Top-down process for adding a new metadata source
Streams Messaging
Apache Kafka Overview
Kafka Introduction
Kafka Architecture
Brokers
Topics
Records
Partitions
Record order and assignment
Logs and log segments
Kafka brokers and ZooKeeper
Leader positions and in-sync replicas
Kafka stretch clusters
Kafka disaster recovery
Kafka rack awareness
Kafka KRaft [Technical Preview]
Kafka FAQ
Basics
Use cases
Cruise Control Overview
Kafka cluster load balancing using Cruise Control
Schema Registry Overview
Schema Registry overview
Examples of interacting with Schema Registry
Schema Registry use cases
Registering and querying a schema for a Kafka topic
Deserializing and serializing data from and to a Kafka topic
Dataflow management with schema-based routing
Schema Registry component architecture
Schema Registry concepts
Schema entities
Compatibility policies
Importance of logical types in Avro
Streams Messaging Manager Overview
Introduction to Streams Messaging Manager
Streams Replication Manager Overview
Overview
Key Features
Main Use Cases
Use case architectures
Streams Replication Manager Architecture
Streams Replication Manager Driver
Connect workers
Connectors
Task architecture and load-balancing
Driver inter-node coordination
Streams Replication Manager Service
Remote Querying
Monitoring and metrics
REST API
Replication flows and replication policies
Remote topic discovery
Automatic group offset synchronization
Understanding co-located and external clusters
Understanding SRM properties, their configuration and hierarchy
Planning
Deployment Planning for Cloudera Search
Planning overview
Dimensioning guidelines
Schemaless mode overview and best practices
Advantages of defining a schema for production use
Planning for Apache Impala
Guidelines for Schema Design
User Account Requirements
Planning for Apache Kudu
Kudu schema design
The perfect schema
Column design
Decimal type
Varchar type
Column encoding
Column compression
Primary key design
Primary key index
Non-unique primary key index
Considerations for backfill inserts
Partitioning
Range partitioning
Adding and Removing Range Partitions
Hash partitioning
Multilevel partitioning
Partition pruning
Partitioning examples
Range partitioning
Hash partitioning
Hash and range partitioning
Hash and hash partitioning
Schema alterations
Schema design limitations
Partitioning limitations
Kudu transaction semantics
Single tablet write operations
Writing to multiple tablets
Read operations (scans)
Known issues and limitations
Writes
Reads (scans)
Scaling Kudu
Terms
Example workload
Memory
Verifying if a memory limit is sufficient
File descriptors
Threads
Scaling recommendations and limitations
Planning for Streams Replication Manager
Streams Replication Manager requirements
Recommended deployment architecture
Planning for Apache Kafka
Stretch cluster reference architecture
How To
Storage
Managing Data Storage
Optimizing data storage
Balancing data across disks of a DataNode
Plan the data movement across disks
Parameters to configure the Disk Balancer
Run the Disk Balancer plan
Disk Balancer commands
Erasure coding overview
Understanding erasure coding policies
Comparing replication and erasure coding
Best practices for rack and node setup for EC
Prerequisites for enabling erasure coding
Limitations of erasure coding
Using erasure coding for existing data
Using erasure coding for new data
Advanced erasure coding configuration
Erasure coding CLI command
Erasure coding examples
Increasing storage capacity with HDFS compression
Enable GZipCodec as the default compression codec
Use GZipCodec with a one-time job
Set HDFS quotas
Setting HDFS quotas in Cloudera Manager
Configuring heterogeneous storage in HDFS
HDFS storage types
HDFS storage policies
Commands for configuring storage policies
Set up a storage policy for HDFS
Set up SSD storage using Cloudera Manager
Configure archival storage
The HDFS mover command
Balancing data across an HDFS cluster
Why HDFS data becomes unbalanced
Configurations and CLI options for the HDFS Balancer
Properties for configuring the Balancer
Balancer commands
Recommended configurations for the Balancer
Configuring and running the HDFS balancer using Cloudera Manager
Configuring the balancer threshold
Configuring concurrent moves
Recommended configurations for the balancer
Running the balancer
Configuring block size
Cluster balancing algorithm
Storage group classification
Storage group pairing
Block move scheduling
Block move execution
Exit statuses for the HDFS Balancer
Optimizing performance
Improving performance with centralized cache management
Benefits of centralized cache management in HDFS
Use cases for centralized cache management
Centralized cache management architecture
Caching terminology
Properties for configuring centralized caching
Commands for using cache pools and directives
Specifying racks for hosts
Viewing racks assigned to cluster hosts
Editing rack assignments for hosts
Customizing HDFS
Customize the HDFS home directory
Properties to set the size of the NameNode edits directory
Optimizing NameNode disk space with Hadoop archives
Overview of Hadoop archives
Hadoop archive components
Creating a Hadoop archive
List files in Hadoop archives
Format for using Hadoop archives with MapReduce
Detecting slow DataNodes
Enable disk IO statistics
Enable detection of slow DataNodes
Allocating DataNode memory as storage
HDFS storage types
LAZY_PERSIST memory storage policy
Configure DataNode memory as storage
Improving performance with short-circuit local reads
Prerequisites for configuring short-circuit local reads
Properties for configuring short-circuit local reads on HDFS
Configure mountable HDFS
Add HDFS system mount
Optimize mountable HDFS
Configuring Proxy Users to Access HDFS
Using DistCp to copy files
Using DistCp
DistCp syntax and examples
Using DistCp with Highly Available remote clusters
Using DistCp with Amazon S3
Using a credential provider to secure S3 credentials
Examples of DistCp commands using the S3 protocol and hidden credentials
Kerberos setup guidelines for DistCp between secure clusters
DistCp between secure clusters in different Kerberos realms
Configure source and destination realms in krb5.conf
Configure HDFS RPC protection
Specify truststore properties
Set HADOOP_CONF to the destination cluster
Launch DistCp
Copying data between a secure and an insecure cluster using DistCp and WebHDFS
Post-migration verification
Using DistCp between HA clusters using Cloudera Manager
Using the NFS Gateway for accessing HDFS
Install the NFS Gateway
Configure the NFS Gateway
Start and stop the NFS Gateway services
Start the NFS Gateway services
Stop the NFS Gateway services
Verify validity of the NFS services
Access HDFS from the NFS Gateway
How NFS Gateway authenticates and maps users
APIs for accessing HDFS
Set up WebHDFS on a secure cluster
Using HttpFS to provide access to HDFS
Add the HttpFS role
Using Load Balancer with HttpFS
HttpFS authentication
Use curl to access a URL protected by Kerberos HTTP SPNEGO
Data storage metrics
Using JMX for accessing HDFS metrics
Configure the G1GC garbage collector
Recommended settings for G1GC
Switching from CMS to G1GC
HDFS Metrics
Using HdfsFindTool to find files
Downloading HdfsFindTool from the CDH archives
Configuring Data Protection
Data protection
Backing up HDFS metadata
Introduction to HDFS metadata files and directories
Files and directories
NameNodes
JournalNodes
DataNodes
HDFS commands for metadata files and directories
Configuration properties
Back up HDFS metadata
Prepare to back up the HDFS metadata
Backing up NameNode metadata
Back up HDFS metadata using Cloudera Manager
Restoring NameNode metadata
Restore HDFS metadata from a backup using Cloudera Manager
Perform a backup of the HDFS metadata
Configuring HDFS trash
Trash behavior with HDFS Transparent Encryption enabled
Enabling and disabling trash
Setting the trash interval
Using HDFS snapshots for data protection
Considerations for working with HDFS snapshots
Enable snapshot creation on a directory
Create snapshots on a directory
Recover data from a snapshot
Options to determine differences between contents of snapshots
CLI commands to perform snapshot operations
Managing snapshot policies using Cloudera Manager
Create a snapshot policy
Edit or delete a snapshot policy
Enable and disable snapshot creation using Cloudera Manager
Create snapshots using Cloudera Manager
Delete snapshots using Cloudera Manager
Preventing inadvertent deletion of directories
Accessing Cloud Data
Cloud storage connectors overview
The Cloud Storage Connectors
Working with Amazon S3
Limitations of Amazon S3
Configuring Access to S3
Configuring Access to S3 on CDP Public Cloud
Configuring Access to S3 on Cloudera Private Cloud Base
Using Configuration Properties to Authenticate
Using Per-Bucket Credentials to Authenticate
Using Environment Variables to Authenticate
Using EC2 Instance Metadata to Authenticate
Referencing S3 Data in Applications
Configuring Per-Bucket Settings
Customizing Per-Bucket Secrets Held in Credential Files
Configuring Per-Bucket Settings to Access Data Around the World
Encrypting Data on S3
SSE-S3: Amazon S3-Managed Encryption Keys
Enabling SSE-S3
SSE-KMS: Amazon S3-KMS Managed Encryption Keys
Enabling SSE-KMS
IAM Role permissions for working with SSE-KMS
SSE-C: Server-Side Encryption with Customer-Provided Encryption Keys
Enabling SSE-C
CSE-KMS: Amazon S3-KMS managed encryption keys
Enabling CSE-KMS
Configuring Encryption for Specific Buckets
Encrypting an S3 Bucket with Amazon S3 Default Encryption
Performance Impact of Encryption
Safely Writing to S3 Through the S3A Committers
Introducing the S3A Committers
Configuring Directories for Intermediate Data
Using the Directory Committer in MapReduce
Verifying That an S3A Committer Was Used
Cleaning up after failed jobs
Using the S3Guard Command to List and Delete Uploads
Advanced Committer Configuration
Enabling Speculative Execution
Using Unique Filenames to Avoid File Update Inconsistency
Speeding up Job Commits by Increasing the Number of Threads
Securing the S3A Committers
The S3A Committers and Third-Party Object Stores
Limitations of the S3A Committers
Troubleshooting the S3A Committers
Security Model and Operations on S3
S3A and Checksums (Advanced Feature)
A List of S3A Configuration Properties
Working with versioned S3 buckets
Working with Third-party S3-compatible Object Stores
Improving Performance for S3A
Working with S3 buckets in the same AWS region
Configuring and tuning S3A block upload
Tuning S3A Uploads
Thread Tuning for S3A Data Upload
Optimizing S3A read performance for different file types
S3 Performance Checklist
Troubleshooting S3
Working with Google Cloud Storage
Configuring Access to Google Cloud Storage
Create a GCP Service Account
Create a Custom Role
Modify GCS Bucket Permissions
Configure Access to GCS from Your Cluster
Manifest committer for ABFS and GCS
Using the manifest committer
Spark Dynamic Partition overwriting
Job summaries in _SUCCESS files
Job cleanup
Working with Google Cloud Storage
Advanced topics
Additional Configuration Options for GCS
Working with the ABFS Connector
Introduction to Azure Storage and the ABFS Connector
Feature Comparisons
Setting up and configuring the ABFS connector
Configuring the ABFS Connector
Authenticating with ADLS Gen2
Configuring Access to Azure on CDP Public Cloud
Configuring Access to Azure on Cloudera Private Cloud Base
ADLS Proxy Setup
Manifest committer for ABFS and GCS
Using the manifest committer
Spark Dynamic Partition overwriting
Job summaries in _SUCCESS files
Job cleanup
Working with Azure ADLS Gen2 storage
Advanced topics
Performance and Scalability
Hierarchical namespaces vs. non-namespaces
Flush options
Using ABFS from the CLI
Hadoop File System commands
Create a table in Hive
Accessing Azure Storage account container from spark-shell
Copying data with Hadoop DistCp
DistCp and Proxy Settings
ADLS Trash Folder Behavior
Troubleshooting ABFS
Configuring Fault Tolerance
High Availability on HDFS clusters
Configuring HDFS High Availability
NameNode architecture
Preparing the hardware resources for HDFS High Availability
Using Cloudera Manager to manage HDFS HA
Enabling HDFS HA
Prerequisites for enabling HDFS HA using Cloudera Manager
Enabling High Availability and automatic failover
Disabling and redeploying HDFS HA
Configuring other CDP components to use HDFS HA
Configuring HBase to use HDFS HA
Configuring the Hive Metastore to use HDFS HA
Configuring Impala to work with HDFS HA
Configuring Oozie to use HDFS HA
Changing a nameservice name for Highly Available HDFS using Cloudera Manager
Manually failing over to the standby NameNode
Additional HDFS haadmin commands to administer the cluster
Turning on safe mode for HA NameNodes
Converting from an NFS-mounted shared edits directory to Quorum-Based Storage
Administrative commands
Configuring HDFS ACLs
HDFS ACLs
Configuring ACLs on HDFS
Using CLI commands to create and list ACLs
ACL examples
ACLs on HDFS features
Use cases for ACLs on HDFS
Enable authorization for HDFS web UIs
Enable authorization for additional HDFS web UIs
Configuring HSTS for HDFS Web UIs
Configuring Apache Kudu
Configure Kudu processes
Experimental flags
Configuring the Kudu master
Configuring tablet servers
Rack awareness (Location awareness)
Directory configurations
Changing directory configuration
Managing Apache Kudu
Limitations
Server management limitations
Cluster management limitations
Start and stop Kudu processes
Orchestrate a rolling restart with no downtime
Minimize cluster disruption during planned downtime
Kudu web interfaces
Kudu master web interface
Kudu tablet server web interface
Common web interface pages
Best practices when adding new tablet servers
Decommission or remove a tablet server
Use cluster names in the kudu command line tool
Migrate data on the same host
Migrate to a multiple Kudu master configuration
Change master hostnames
Prepare for master hostname changes
Perform master hostname changes
Removing Kudu masters through Cloudera Manager
Recommissioning Kudu masters through Cloudera Manager
Remove Kudu masters through CLI
Prepare for removal
Perform the removal
How range-aware replica placement in Kudu works
Run the tablet rebalancing tool
Run a tablet rebalancing tool on a rack-aware cluster
Run a tablet rebalancing tool in Cloudera Manager
Run a tablet rebalancing tool in command line
Managing Kudu tables with range-specific hash schemas
Range-specific hash schemas example: Using impala-shell
Range-specific hash schemas example: Using Kudu C++ client API
Range-specific hash schemas example: Using Kudu Java client API
Managing Apache Kudu Security
Kudu security considerations
Proxied RPCs in Kudu
Kudu security limitations
Kudu authentication
Kudu authentication with Kerberos
Kudu authentication tokens
Client authentication to secure Kudu clusters
JWT authentication for Kudu
Configuring server side JWT authentication for Kudu
Configuring client side JWT authentication for Kudu
Kudu coarse-grained authorization
Kudu fine-grained authorization
Kudu and Apache Ranger integration
Kudu authorization tokens
Specifying trusted users
Kudu authorization policies
Ranger policies for Kudu
Disabling redaction
Configuring a secure Kudu cluster using Cloudera Manager
Enabling Kerberos authentication and RPC encryption
Configuring custom Kerberos principal for Kudu
Configuring coarse-grained authorization with ACLs
Configuring TLS/SSL encryption for Kudu using Cloudera Manager
Enabling Ranger authorization
Configuring HTTPS encryption
Configuring data at rest encryption
Backing up and Recovering Apache Kudu
Kudu backup
Back up tables
Backup tools
Generate a table list
Backup directory structure
Physical backups of an entire node
Kudu recovery
Restore tables from backups
Recover from disk failure
Recover from full disks
Bring a tablet that has lost a majority of replicas back online
Rebuild a Kudu filesystem layout
Developing Applications with Apache Kudu
View the API documentation
Kudu example applications
Maven artifacts
Kudu Python client
Kudu integration with Spark
Spark integration known issues and limitations
Spark integration best practices
Upsert option in Kudu Spark
Use Spark with a secure Kudu cluster
Spark tuning
Using Hive Metastore with Apache Kudu
Integrating the Hive Metastore with Apache Kudu
Databases and Table Names
Administrative tools for Hive Metastore integration
Upgrading existing Kudu tables for Hive Metastore integration
Enabling the Hive Metastore integration
Using Apache Impala with Apache Kudu
Understanding Impala integration with Kudu
Impala database containment model
Internal and external Impala tables
Verifying the Impala dependency on Kudu
Impala integration limitations
Using Impala to query Kudu tables
Query an existing Kudu table from Impala
Create a new Kudu table from Impala
Use CREATE TABLE AS SELECT
Partitioning tables
Basic partitioning
Advanced partitioning
Non-covering range partitions
Partitioning guidelines
Optimize performance for evaluating SQL predicates
Insert data
INSERT and primary key uniqueness violations
Update data
Upsert a row
Alter a table
Delete data
Failures during INSERT, UPDATE, UPSERT, and DELETE operations
Drop a Kudu table
Monitoring Apache Kudu
Kudu metrics
Listing available metrics
Collecting metrics through HTTP
Diagnostics logging
Monitor cluster health with ksck
Report crashes using breakpad
Enable core dump
Use the Charts Library
Compute
Using YARN Web UI and CLI
Accessing the YARN Web User Interface
Viewing the Cluster Overview
Viewing nodes and node details
Viewing queues and queue details
Viewing all applications
Searching applications
Viewing application details
UI Tools
Using the YARN CLI to view logs for applications
Configuring Apache Hadoop YARN Security
Linux Container Executor
Managing Access Control Lists
YARN ACL rules
YARN ACL syntax
YARN ACL types
Admin ACLs
Queue ACLs
Application ACLs
Application ACL evaluation
MapReduce Job ACLs
Spark Job ACLs
Application logs' ACLs
Configuring TLS/SSL for Core Hadoop Services
Configuring TLS/SSL for HDFS
Configuring TLS/SSL for YARN
Enable HTTPS communication
Configuring Cross-Origin Support for YARN UIs and REST APIs
Configuring YARN Security for Long-Running Applications
YARN Ranger authorization support
YARN Ranger authorization support compatibility matrix
Enabling YARN Ranger authorization support
Disabling YARN Ranger authorization support
Enabling custom Kerberos principal support in YARN
Enabling custom Kerberos principal support in a Queue Manager cluster
Configuring Apache Hadoop YARN High Availability
YARN ResourceManager high availability
YARN ResourceManager high availability architecture
Configuring YARN ResourceManager high availability
Using the yarn rmadmin tool to administer ResourceManager high availability
Migrating ResourceManager to another host
Work preserving recovery for YARN components
Configuring work preserving recovery on ResourceManager
Configuring work preserving recovery on NodeManager
Example: Configuration for work preserving recovery
Managing and Allocating Cluster Resources using Capacity Scheduler
Resource scheduling and management
YARN resource allocation of multiple resource-types
Hierarchical queue characteristics
Scheduling among queues
Application reservations
Resource distribution workflow
Resource allocation overview
Use CPU scheduling
Configure CPU scheduling and isolation
Use CPU scheduling with distributed shell
Use GPU scheduling
Configure GPU scheduling and isolation
Use GPU scheduling with distributed shell
Use FPGA scheduling
Configure FPGA scheduling and isolation
Use FPGA with distributed shell
Limit CPU usage with Cgroups
Use Cgroups
Enable Cgroups
Managing YARN Queue Manager
Configuring YARN Queue Manager dependency
Updating YARN Queue Manager Database Password
Accessing the YARN Queue Manager UI
Providing read-only access to Queue Manager UI
Configuring the embedded Jetty Server in Queue Manager
Managing queues
Adding queues using YARN Queue Manager UI
Configuring cluster capacity with queues
Configuring the resource capacity of root queue
Mixed resource allocation mode (Technical Preview)
Setting capacity using mixed resource allocation mode (Technical Preview)
Changing resource allocation mode
Starting and stopping queues
Deleting queues
Setting queue priorities
Configuring scheduler properties at the global level
Setting global maximum application priority
Configuring preemption
Enabling Intra-Queue preemption
Enabling LazyPreemption
Setting global application limits
Setting default Application Master resource limit
Enabling asynchronous scheduler
Configuring queue mapping to use the user name from the application tag using Cloudera Manager
Configuring NodeManager heartbeat
Configuring data locality
Setting Maximum Parallel Applications
Setting maximum parallel application limits
Configuring per queue properties
Setting user limits within a queue
Setting Maximum Application limit for a specific queue
Setting Application-Master resource-limit for a specific queue
Setting maximum parallel application limits for a specific queue
Controlling access to queues using ACLs
Enabling preemption for a specific queue
Enabling Intra-Queue Preemption for a specific queue
Setting ordering policies within a specific queue
Configure queue ordering policies
Autoscaling clusters
Autoscaling behavior
Configuring autoscaling
Dynamic Queue Scheduling
Creating a new Dynamic Configuration
Managing Dynamic Configurations
How to read the Configurations table
Handling Dynamic Configuration conflicts
Revalidating Dynamic Configurations
Dynamic Configurations execution log
Managing placement rules
Placement rule policies
How to read the Placement Rules table
Creating placement rules
Example - Placement rules creation
Reordering placement rules
Editing placement rules
Deleting placement rules
Enabling override of default queue mappings
Managing dynamic queues
Managed Parent Queues
Converting a queue to a Managed Parent Queue
Enabling dynamic child creation in weight mode
Disabling dynamic child creation in weight mode
Managing dynamic child creation enabled parent queues
Managing dynamically created child queues
Deleting dynamically created child queues
Disabling auto queue deletion globally
Disabling queue auto removal on a queue level
Configuring the queue auto removal expiration time
Deleting dynamically created child queues manually
Partition configuration
Enabling node labels on a cluster to configure partition
Creating partitions
Assigning or unassigning a node to a partition
Viewing partitions
Associating partitions with queues
Disassociating partitions from queues
Deleting partitions
Setting a default partition expression
Using partitions when submitting a job
Managing Apache Hadoop YARN Services
Configuring YARN Services API to manage long-running applications
Configuring YARN Services using Cloudera Manager
Configuring node attribute for application master placement
Migrating database configuration to a new location
Running YARN Services
Deploying and managing services on YARN
Launching a YARN service
Saving a YARN service definition
Creating new YARN services using UI
Creating a standard YARN service
Creating a custom YARN service
Managing the YARN service life cycle through the REST API
YARN services API examples
Configuring Apache Hadoop YARN Log Aggregation
YARN log aggregation overview
Log aggregation file controllers
Configuring log aggregation
Log aggregation properties
Configuring debug delay
Managing Apache ZooKeeper
Add a ZooKeeper service
Use multiple ZooKeeper services
Replace a ZooKeeper disk
Replace a ZooKeeper role with ZooKeeper service downtime
Replace a ZooKeeper role without ZooKeeper service downtime
Replace a ZooKeeper role on an unmanaged cluster
Confirm the election status of a ZooKeeper service
▶︎
Configuring Apache ZooKeeper
Enable the AdminServer
Configure four-letter-word commands in ZooKeeper
▶︎
Managing Apache ZooKeeper Security
▶︎
ZooKeeper Authentication
Configure ZooKeeper server for Kerberos authentication
Configure ZooKeeper client shell for Kerberos authentication
Verify the ZooKeeper authentication
Enable server-server mutual authentication
Use Digest Authentication Provider
Configure ZooKeeper TLS/SSL using Cloudera Manager
▶︎
ZooKeeper ACLs Best Practices
ZooKeeper ACLs Best Practices: Atlas
ZooKeeper ACLs Best Practices: Cruise Control
ZooKeeper ACLs Best Practices: HBase
ZooKeeper ACLs Best Practices: HDFS
ZooKeeper ACLs Best Practices: Kafka
ZooKeeper ACLs Best Practices: Oozie
ZooKeeper ACLs Best Practices: Ranger
ZooKeeper ACLs Best Practices: Search
ZooKeeper ACLs Best Practices: YARN
ZooKeeper ACLs Best Practices: ZooKeeper
▶︎
Data Access
▶︎
Working with Apache Hive Metastore
HMS table storage
Configuring HMS for high availability
Hive Metastore leader election
▶︎
Starting Apache Hive
Start Hive on an insecure cluster
Start Hive using a password
Accessing Hive from an external node
Run a Hive command
Configuring graceful shutdown property for HiveServer
▶︎
Using Apache Hive
▶︎
Apache Hive 3 tables
Hive table locations
Refer to a table using dot notation
Understanding CREATE TABLE behavior
Creating a CRUD transactional table
Creating an insert-only transactional table
Creating an S3-based external table
Dropping an external table along with data
Converting a managed non-transactional table to external
▶︎
Accessing StorageHandler and other external tables
Creating secure external tables
Check for required Ranger features in Data Hub
Enable authorization of StorageHandler-based tables in Data Hub
Examples of creating secure external tables
Using constraints
Determining the table type
Apache Hive 3 ACID transactions
▶︎
Apache Hive query basics
Querying the information_schema database
Inserting data into a table
Updating data in a table
Merging data in tables
Deleting data from a table
▶︎
Using a subquery
Subquery restrictions
Use wildcards with SHOW DATABASES
Aggregating and grouping data
Querying correlated data
▶︎
Using common table expressions
Use a CTE in a query
Comparing tables using ANY/SOME/ALL
Escaping an invalid identifier
CHAR data type support
ORC vs Parquet formats
Creating a default directory for managed tables
Generating surrogate keys
▶︎
Partitions and performance
Creating partitions dynamically
▶︎
Partition refresh and configuration
Automating partition discovery and repair
Managing partition retention time
Repairing partitions manually using MSCK repair
▶︎
Query scheduling
Enabling scheduled queries
Periodically rebuilding a materialized view
Getting scheduled query information and monitoring the query
▶︎
Materialized views
▶︎
Creating and using a materialized view
Creating the tables and view
Verifying use of a query rewrite
Using optimizations from a subquery
Dropping a materialized view
Showing materialized views
Describing a materialized view
Managing query rewrites
Purposely using a stale materialized view
Creating and using a partitioned materialized view
▶︎
HPL/SQL stored procedures
Setting up a Hive client
Creating a function
Using the cursor to return record sets
Stored procedure examples
▶︎
Using functions
Reloading, viewing, and filtering functions
▶︎
Create a user-defined function
Setting up the development environment
Creating the UDF class
Building the project and uploading the JAR
Registering the UDF
Calling the UDF in a query
▶︎
Managing Apache Hive
▶︎
ACID operations in Data Hub
Configuring partitions for transactions
Options to monitor transactions
Options to monitor transaction locks
▶︎
Data compaction
Compaction tasks
Initiating automatic compaction in Cloudera Manager
Starting compaction manually
Options to monitor compactions
Disabling automatic compaction
Configuring compaction using table properties
Configuring compaction in Cloudera Manager
Configuring the compaction check interval
Compactor properties
▶︎
Compaction observability in Cloudera Manager
Configuring compaction health monitoring
Monitoring compaction health in Cloudera Manager
Hive ACID metric properties for compaction observability
▶︎
Query vectorization
Vectorization default
▶︎
Securing Apache Hive
Hive access authorization
Transactional table access
External table access
HWC authorization
▶︎
Integrating Apache Hive with Spark and Kafka
▶︎
Hive Warehouse Connector for accessing Apache Spark data
Set up
HWC limitations
▶︎
Reading data through HWC
Direct Reader mode introduction
Using Direct Reader mode
Direct Reader configuration properties
Direct Reader limitations
Secure access mode introduction
Setting up secure access mode in Data Hub
Using secure access mode
Configuring caching for secure access mode
JDBC read mode introduction
Using JDBC read mode
JDBC mode configuration properties
JDBC mode limitations
Kerberos configurations for HWC
Writing data through HWC
Apache Spark executor task statistics
▶︎
HWC and DataFrame APIs
HWC and DataFrame API limitations
HWC supported types mapping
Catalog operations
Read and write operations
Committing a transaction for Direct Reader
Closing HiveWarehouseSession operations
Using HWC for streaming
HWC API Examples
Hive Warehouse Connector Interfaces
Submitting a Scala or Java application
Examples of writing data in various file formats
▶︎
HWC integration with pyspark, sparklyr, and Zeppelin
Submitting a Python app
Reading and writing Hive tables in R
Livy interpreter configuration
Reading and writing Hive tables in Zeppelin
▶︎
Apache Hive-Kafka integration
Creating a table for a Kafka stream
▶︎
Querying Kafka data
Querying live data from Kafka
Perform ETL by ingesting data from Kafka into Hive
▶︎
Writing data to Kafka
Writing transformed Hive data to Kafka
Setting consumer and producer table properties
Kafka storage handler and table properties
▶︎
Integrating Apache Hive with BI
▶︎
Connecting Hive to BI tools using a JDBC/ODBC driver in Data Hub
Getting the JDBC or ODBC driver
Configuring the BI tool
▶︎
Apache Hive Performance Tuning
Query results cache
Managing high partition workloads
Best practices for performance tuning
▶︎
ORC file format
Advanced ORC properties
Performance improvement using partitions
Bucketed tables in Hive
▶︎
Using Apache Iceberg
▶︎
Apache Iceberg features
Alter table feature
Create table feature
Create table as select feature
Create partitioned table as select feature
Create table … like feature
Delete data feature
Describe table metadata feature
Drop partition feature
Drop table feature
Expire snapshots feature
Insert table data feature
Load data inpath feature
Load or replace partition data feature
Materialized view feature
Materialized view rebuild feature
Merge feature
▶︎
Migrate Hive table to Iceberg feature
Changing the table metadata location
▶︎
Flexible partitioning
Partition evolution feature
Partition transform feature
Query metadata tables feature
Rollback table feature
Select Iceberg data feature
Schema evolution feature
Schema inference feature
Snapshot management
Time travel feature
Truncate table feature
▶︎
Best practices for Iceberg in CDP
Making row-level changes on V2 tables only
▶︎
Performance tuning
Caching manifest files
Configuring manifest caching in Cloudera Manager
Unsupported features and limitations
▶︎
Accessing Iceberg tables
Opening Ranger in Data Hub
Editing a storage handler policy to access Iceberg files on the file system
Creating a SQL policy to query an Iceberg table
Creating an Iceberg table
Creating an Iceberg partitioned table
Expiring snapshots
Inserting data into a table
Migrating a Hive table to Iceberg
Selecting an Iceberg table
Running time travel queries
Updating an Iceberg partition
Test driving Iceberg from Impala
Hive demo data
Test driving Iceberg from Hive
Iceberg data types
Iceberg table properties
▶︎
Migrating Data Using Sqoop
Data migration to Apache Hive
▶︎
Sqoop enhancements to the Hive import process
Configuring custom Beeline arguments
Configuring custom Hive JDBC arguments
Configuring a custom Hive CREATE TABLE statement
Configuring custom Hive table properties
▶︎
Secure options to provide Hive password during a Sqoop import
Providing the Hive password through a prompt
Providing the Hive password through a file
Providing the Hive password through an alias
Providing the Hive password through an alias in a file
Saving the password to Hive Metastore
▶︎
Imports into Hive
Creating a Sqoop import command
Importing RDBMS data into Hive
Import command options
▶︎
Starting and Stopping Apache Impala
Modifying Impala Startup Options
▶︎
Configuring Client Access to Impala
Impala Startup Options for Client Connections
▶︎
Impala Shell Tool
Impala Shell Configuration Options
Impala Shell Configuration File
Connecting to Impala Daemon in Impala Shell
Running Commands and SQL Statements in Impala Shell
Impala Shell Command Reference
Configuring ODBC for Impala
Configuring JDBC for Impala
Configuring Impyla for Impala
Configuring Delegation for Clients
Spooling Query Results
Shut Down Impala
▶︎
Setting Timeouts in Impala
Setting Timeout and Retries for Thrift Connections to Backend Client
Increasing StateStore Timeout
Setting the Idle Query and Idle Session Timeouts
Adjusting Heartbeat TCP Timeout Interval
▶︎
Securing Apache Impala
Securing Impala
Configuring Impala TLS/SSL
▶︎
Impala Authentication
Configuring Kerberos Authentication for Impala
▶︎
Configuring LDAP Authentication
Enabling LDAP in Hue
Enabling LDAP Authentication for impala-shell
▶︎
Configuring JWT Authentication
Enabling JWT Authentication for impala-shell
▶︎
Impala Authorization
Configuring Authorization
Row-level filtering in Impala with Ranger policies
▶︎
Configuring Apache Impala
Configuring Impala
Configuring Load Balancer for Impala
▶︎
Tuning Apache Impala
Setting Up HDFS Caching
Setting up Data Cache for Remote Reads
Configuring Dedicated Coordinators and Executors
▶︎
Managing Apache Impala
▶︎
ACID Operation
Concepts Used in FULL ACID v2 Tables
Key Differences between INSERT-ONLY and FULL ACID Tables
Compaction of Data in FULL ACID Transactional Table
▶︎
Managing Resources in Impala
Estimating memory limits
Admission Control and Query Queuing
Enabling Admission Control
Creating Static Pools
Configuring Dynamic Resource Pool
Dynamic Resource Pool Settings
Admission Control Sample Scenario
Cancelling a Query
Using HLL Datasketch Algorithms in Impala
Using KLL Datasketch Algorithms in Impala
▶︎
Managing Metadata in Impala
On-demand Metadata
Automatic Invalidation of Metadata Cache
Automatic Invalidation/Refresh of Metadata
Synchronization between Impala Clusters
▶︎
Monitoring Apache Impala
▶︎
Impala Logs
Managing Logs
Impala lineage
▶︎
Web User Interface for Debugging
Debug Web UI for Impala Daemon
Debug Web UI for StateStore
Debug Web UI for Catalog Server
Configuring Impala Web UI
Debug Web UI for Query Timeline
▶︎
Using Hue
▶︎
About using Hue
Accessing and using Hue
▶︎
Viewing Hive query details
Viewing Hive query history
Viewing Hive query information
Viewing explain plan for a Hive query
Viewing Hive query timeline
Viewing configurations for a Hive query
Viewing DAG information for a Hive query
▶︎
Viewing Impala query details
Viewing Impala query history
Viewing Impala query information
Viewing the Impala query execution plan
Viewing the Impala query metrics
Viewing Impala profiles in Hue
Terminating Hive queries
Comparing Hive and Impala queries in Hue
▶︎
Start SQL AI Assistant
Generate SQL from NQL
Edit query in natural language
Explain query in natural language
Optimize SQL query
Fixing a query in Hue
Generate comment for a SQL query
Enable stored procedures in Hue
Run stored procedure from Hue
Using SQL to query HBase from Hue
Querying existing HBase tables
Enabling the SQL editor autocompleter
Rerunning a query from the Job Browser page
▶︎
Using governance-based data discovery
Searching metadata tags
▶︎
Using Amazon S3 with Hue
Enabling S3 browser for Hue configured with IDBroker
Enabling S3 browser for Hue configured without IDBroker
Enabling S3 File Browser for Hue with RAZ in Data Hub
▶︎
Using Azure Data Lake Storage Gen2 with Hue
Enabling ABFS file browser for Hue configured with IDBroker
Enabling ABFS file browser for Hue configured without IDBroker
Enabling ABFS File Browser in Hue with RAZ in Data Hub
▶︎
Using Google Cloud Storage with Hue
Prerequisites for enabling GS File Browser
Enabling GS File Browser with RAZ
Disabling the automatic creation of user home directories
Granting permission to access S3, ABFS, GS File Browser in Hue
Creating tables in Hue by importing files
Supported non-ASCII and special characters in Hue
Options to rerun Oozie workflows in Hue
Unsupported features in Hue
Known limitations in Hue
▶︎
Administering Hue
Reference architecture
Hue configuration files
Hue configurations in CDP Runtime
Hue Advanced Configuration Snippet
▶︎
Set up SQL AI Assistant
Configure SQL AI Assistant using Cloudera AI Workbench
Configure SQL AI Assistant using the Cloudera AI Inference service
Configure SQL AI Assistant using the Microsoft Azure OpenAI service
Configure SQL AI Assistant using the Amazon Bedrock Service
Configure SQL AI Assistant using the OpenAI platform
Configure SQL AI Assistant using vLLM
Complete list of model-related configurations for setting up the Hue SQL AI Assistant
▶︎
Hue logs
Standard stream logs
Hue service Django logs
Enabling DEBUG logging for Hue logs
Enabling httpd log rotation for Hue
Hue supported browsers
Enabling cache-control HTTP headers when using Hue
Setting up a Hue service account with a custom name
Options to restart the Hue service
▶︎
Customizing the Hue web interface
Adding a custom banner in Hue
Changing the page logo in Hue
Adding a splash screen in Hue
Setting the cache timeout
Enabling or disabling anonymous usage date collection
Disabling the share option in Hue
Enabling Hue applications with Cloudera Manager
Running shell commands
Downloading and exporting data from Hue
Enabling a multi-threaded environment for Hue
Adding Query Processor service to a cluster
Removing Query Processor service from a cluster
Enabling the Query Processor service in Hue
Adding Query Processor admin users and groups
Cleaning up old queries
Downloading debug bundles
Configuring Hue to handle HS2 failover
Enabling Spark 3 engine in Hue
Using Hue scripts
Configurations for submitting a Hive query to a dedicated queue
Configuring timezone for Hue
▶︎
Securing Hue
▶︎
User management in Hue
Understanding Hue users and groups
Finding the list of Hue superusers
Creating a Hue user
Restricting user login
▶︎
LDAP import and sync options
Import and sync LDAP users and groups
Locking an account after invalid login attempts
Unlocking locked out user accounts in Hue
Creating a group in Hue
Managing Hue permissions
Resetting Hue user password
Assigning superuser status to an LDAP user
Configuring file and directory permissions for Hue
▶︎
User authentication in Hue
Authentication using Kerberos
▶︎
Authentication using LDAP
Configuring authentication with LDAP and Search Bind
Configuring authentication with LDAP and Direct Bind
Multi-server LDAP/AD authentication
Testing the LDAP configuration
Configuring group permissions
Enabling LDAP authentication with HiveServer2 and Impala
LDAP properties
Configuring LDAP on unmanaged clusters
▶︎
Authentication using SAML
Configuring SAML authentication on managed clusters
Manually configuring SAML authentication
Integrating your identity provider's SAML server with Hue
SAML properties
Troubleshooting SAML authentication
Authentication using Knox SSO
Authentication using PAM
Applications and permissions reference
Securing Hue passwords with scripts
Directory permissions when using PAM authentication backend
▶︎
Configuring TLS/SSL for Hue
Creating a truststore file in PEM format
Configuring Hue as a TLS/SSL client
Enabling Hue as a TLS/SSL client
Configuring Hue as a TLS/SSL server
Enabling Hue as a TLS/SSL server using Cloudera Manager
Enabling TLS/SSL for Hue Load Balancer
Enabling TLS/SSL communication with HiveServer2
Enabling TLS/SSL communication with Impala
Securing database connections with TLS/SSL
Disabling CA Certificate validation from Hue
Enforcing TLS version 1.2 for Hue
Securing sessions
Specifying HTTP request methods
Restricting supported ciphers for Hue
Specifying domains or pages to which Hue can redirect users
Securing Hue from CWE-16
Setting Oozie permissions
Configuring secure access between Solr and Hue
▶︎
Tuning Hue
Adding a load balancer
▶︎
Configuring high availability for Hue
Configuring Hive and Impala for high availability with Hue
Configuring for HDFS high availability
Configuring dedicated Impala coordinator
Configuring the Hue Query Processor scan frequency
▶︎
Search Tutorial
Tutorial
▶︎
Validating the Cloudera Search deployment
Create a test collection
Index sample data
Query sample data
▶︎
Indexing sample tweets with Cloudera Search
Create a collection for tweets
Copy sample tweets to HDFS
▶︎
Using MapReduce batch indexing to index sample Tweets
Batch indexing into offline Solr shards
▶︎
Securing Cloudera Search
Cloudera Search security aspects
Enabling ZooKeeper SSL/TLS for Solr and HBase Indexer
Enable LDAP authentication in Solr
Creating a JAAS configuration file
Manage Ranger authorization in Solr
Configuring Ranger authorization
Enable document-level authorization
▶︎
Tuning Cloudera Search
Solr server tuning categories
Setting Java system properties for Solr
Enable multi-threaded faceting
Tuning garbage collection
Enable garbage collector logging
Solr and HDFS - the block cache
▶︎
Tuning replication
Adjust the Solr replication factor for index files stored in HDFS
▶︎
Managing Cloudera Search
Viewing and modifying log levels for Search and related services
▶︎
Viewing and modifying Solr configuration using Cloudera Manager
Setting the Solr Critical State Cores Percentage parameter
Setting the Solr Recovering Cores Percentage parameter
▶︎
Managing collection configuration
Cloudera Search config templates
Generating collection configuration using configs
Securing configs with ZooKeeper ACLs and Ranger
Generating Solr collection configuration using instance directories
Modifying a collection configuration generated using an instance directory
Converting instance directories to configs
Using custom JAR files with Search
Retrieving the clusterstate.json file
▶︎
Managing collections
Creating a Solr collection
Viewing existing collections
Deleting all documents in a collection
Deleting a collection
Updating the schema in a collection
Creating a replica of an existing shard
Migrating Solr replicas
Splitting a shard on HDFS
Backing up a collection from HDFS
Backing up a collection from local file system
Restoring a collection
Defining a backup target in solr.xml
Cloudera Search log files
Cloudera Search configuration files
▶︎
Cloudera Search ETL
ETL with Cloudera Morphlines
Using Morphlines to index Avro
Using Morphlines with Syslog
▶︎
Indexing Data Using Morphlines
Indexing data
▶︎
Lily HBase NRT indexing
Adding the Lily HBase indexer service
Starting the Lily HBase NRT indexer service
▶︎
Using the Lily HBase NRT indexer service
Enable replication on HBase column families
Create a Collection in Cloudera Search
Creating a Lily HBase Indexer Configuration File
Creating a Morphline Configuration File
Understanding the extractHBaseCells Morphline Command
Registering a Lily HBase Indexer Configuration with the Lily HBase Indexer Service
Verifying that Indexing Works
Using the indexer HTTP interface
▶︎
Configuring Lily HBase Indexer Security
Configure Lily HBase Indexer to use TLS/SSL
Configure Lily HBase Indexer Service to use Kerberos authentication
▶︎
Batch indexing using Morphlines
Spark indexing using Morphlines
▶︎
MapReduce indexing
▶︎
MapReduceIndexerTool
MapReduceIndexerTool input splits
MapReduceIndexerTool metadata
MapReduceIndexerTool usage syntax
Indexing data with MapReduceIndexerTool in Solr backup format
▶︎
Lily HBase batch indexing for Cloudera Search
Populating an HBase Table
Create a Collection in Cloudera Search
Creating a Lily HBase Indexer Configuration File
Creating a Morphline Configuration File
Understanding the extractHBaseCells Morphline Command
Running the HBaseMapReduceIndexerTool
HBaseMapReduceIndexerTool command line reference
Using --go-live with SSL or Kerberos
Understanding --go-live and HDFS ACLs
▶︎
Indexing Data Using Spark-Solr Connector
▶︎
Batch indexing to Solr using SparkApp framework
Create indexer Maven project
Run the spark-submit job
▶︎
Operational Database
▶︎
Getting Started with Operational Database
Operational database cluster
Before you create an operational database cluster
▶︎
Creating an operational database cluster
Default operational database cluster definition
Provision an operational database cluster
▶︎
Configuring Apache HBase
Using DNS with HBase
Use the Network Time Protocol (NTP) with HBase
Configure the graceful shutdown timeout property
▶︎
Setting user limits for HBase
Configure ulimit for HBase using Cloudera Manager
Configuring ulimit for HBase
Configure ulimit with Pluggable Authentication Modules using the Command Line
Using dfs.datanode.max.transfer.threads with HBase
Configure encryption in HBase
▶︎
Using hedged reads
Enable hedged reads for HBase
Monitor the performance of hedged reads
▶︎
Understanding HBase garbage collection
Configure HBase garbage collection
Disable the BoundedByteBufferPool
▶︎
Configuring edge node on AWS
Prerequisites
▶︎
Configuring network line-of-sight
Reuse the subnets created for CDP
Verify the network line-of-sight
Configure DNS
Verify the DNS configuration
Configure Kerberos
▶︎
Configuring edge node on Azure
Prerequisites
▶︎
Configuring network line-of-sight
Reuse the subnets created for CDP
Verify the network line-of-sight
Configure DNS
Verify the DNS configuration
Configure Kerberos
Configuring edge node on GCP
Configure the HBase canary
Configuring auto split policy in an HBase table
▶︎
Using HBase blocksize
Configure the blocksize for a column family
▶︎
Configuring HBase BlockCache
Contents of the BlockCache
Size the BlockCache
Decide to use the BucketCache
▶︎
About the Off-heap BucketCache
Off-heap BucketCache
BucketCache IO engine
Configure BucketCache IO engine
Configure the off-heap BucketCache using Cloudera Manager
Configure the off-heap BucketCache using the command line
Cache eviction priorities
Bypass the BlockCache
Monitor the BlockCache
▶︎
HBase persistent BucketCache
Configuring HBase persistent BucketCache
Configuration details
▶︎
Using quota management
Configuring quotas
General Quota Syntax
▶︎
Throttle quotas
Throttle quota examples
Space quotas
Quota enforcement
Quota violation policies
▶︎
Impact of quota violation policy
Live write access
Bulk Write Access
Read access
Metrics and Insight
Examples of overlapping quota policies
Number-of-Tables Quotas
Number-of-Regions Quotas
▶︎
Using HBase scanner heartbeat
Configure the scanner heartbeat using Cloudera Manager
▶︎
Storing medium objects (MOBs)
Prerequisites
Configure columns to store MOBs
Configure the MOB cache using Cloudera Manager
Test MOB storage and retrieval performance
MOB cache properties
▶︎
Limiting the speed of compactions
Configure the compaction speed using Cloudera Manager
Enable HBase indexing
▶︎
Using HBase coprocessors
Add a custom coprocessor
Disable loading of coprocessors
▶︎
Configuring HBase MultiWAL
Configuring MultiWAL support using Cloudera Manager
▶︎
Configuring the storage policy for the Write-Ahead Log (WAL)
Configure the storage policy for WALs using Cloudera Manager
Configure the storage policy for WALs using the Command Line
▶︎
Using RegionServer grouping
Enable RegionServer grouping using Cloudera Manager
Configure RegionServer grouping
Monitor RegionServer grouping
Remove a RegionServer from RegionServer grouping
Enabling ACL for RegionServer grouping
Best practices when using RegionServer grouping
Disable RegionServer grouping
▶︎
HBase load balancer
▶︎
HBase cache-aware load balancer configuration
Overview
Components of cache-aware load balancer
Configuration details
▶︎
HBase stochastic load balancer configuration
Introduction to the HBase stochastic load balancer
Components of stochastic load balancer
Configuration details
▶︎
Optimizing HBase I/O
HBase I/O components
Advanced configuration for write-heavy workloads
Enabling HBase META Replicas
Enabling ZooKeeper-less connection registry for HBase client
▶︎
Managing Apache HBase Security
▶︎
HBase authentication
Configuring HBase servers to authenticate with a secure HDFS cluster
Configuring secure HBase replication
Configure the HBase client TGT renewal period
Disabling Kerberos authentication for HBase clients
HBase authorization
▶︎
Configuring TLS/SSL for HBase
Prerequisites to configure TLS/SSL for HBase
Configuring TLS/SSL for HBase Web UIs
Configuring TLS/SSL for HBase REST Server
Configuring TLS/SSL for HBase Thrift Server
Configuring HSTS for HBase Web UIs
▶︎
Accessing Apache HBase
▶︎
Use the HBase shell
Virtual machine options for HBase Shell
Script with HBase Shell
Use the HBase command-line utilities
Use the HBase APIs for Java
▶︎
Use the HBase REST server
Installing the REST Server using Cloudera Manager
Using the REST API
Using the REST proxy API
▶︎
Using the Apache Thrift Proxy API
Preparing a thrift server and client
List of Thrift API and HBase configurations
Example for using THttpClient API in secure cluster
Example for using THttpClient API in insecure cluster
Example for using TSaslClientTransport API in secure cluster without HTTP
▶︎
Using Apache HBase Hive integration
Configure Hive to use with HBase
Using HBase Hive integration
▶︎
Using the HBase-Spark connector
Configure HBase-Spark connector
Example: Using the HBase-Spark connector
▶︎
Use the Hue HBase app
Configure the HBase thrift server role
▶︎
Managing Apache HBase
▶︎
Starting and stopping HBase using Cloudera Manager
Start HBase
Stop HBase
▶︎
Graceful HBase shutdown
Gracefully shut down an HBase RegionServer
Gracefully shut down the HBase service
▶︎
Importing data into HBase
Choose the right import method
Use snapshots
Use CopyTable
▶︎
Use BulkLoad
Use cases for BulkLoad
Use cluster replication
Use Sqoop
Use Spark
Use a custom MapReduce job
▶︎
Use HashTable and SyncTable Tool
HashTable/SyncTable tool configuration
Synchronize table data using HashTable/SyncTable tool
▶︎
Writing data to HBase
Variations on Put
Versions
Deletion
Examples
▶︎
Reading data from HBase
Perform scans using HBase Shell
▶︎
HBase filtering
Dynamically loading a custom filter
Logical operators, comparison operators and comparators
Compound operators
Filter types
HBase Shell example
Java API example
HBase online merge
Move HBase Master Role to another host
Expose HBase metrics to a Ganglia server
▶︎
HBase metrics
Using JMX for accessing HBase metrics
Accessing HBase metrics in Prometheus format
▶︎
Configuring Apache HBase High Availability
Enable HBase high availability using Cloudera Manager
HBase read replicas
Timeline consistency
Keep replicas current
Read replica properties
Configure read replicas using Cloudera Manager
▶︎
Using rack awareness for read replicas
Create a topology map
Create a topology script
Activate read replicas on a table
Request a timeline-consistent read
▶︎
Using Apache HBase Backup and Disaster Recovery
HBase backup and disaster recovery strategies
▶︎
Configuring HBase snapshots
About HBase snapshots
▶︎
Manage HBase snapshots using COD CLI
Create a snapshot
List snapshots
Restore a snapshot
List restored snapshots
Delete snapshots
▶︎
Manage HBase snapshots using the HBase shell
Shell commands
Take a snapshot using a shell script
Export a snapshot to another cluster
Information and debugging
▶︎
Using HBase replication
Common replication topologies
Notes about replication
Replication requirements
▶︎
Deploy HBase replication
Replication across three or more clusters
Enable replication on a specific table
Configure secure replication
▶︎
Configure bulk load replication
Enable bulk load replication using Cloudera Manager
Create empty table on the destination cluster
Disable replication at the peer level
Stop replication in an emergency
▶︎
Initiate replication when data already exists
Replicate pre-existing data in an active-active deployment
Using the CldrCopyTable utility to copy data
Effects of WAL rolling on replication
Configuring secure HBase replication
Restore data from a replica
Verify that replication works
Replication caveats
▶︎
Configuring Apache HBase for Apache Phoenix
Configure HBase for use with Phoenix
▶︎
Using Apache Phoenix to Store and Access Data
▶︎
Mapping Apache Phoenix schemas to Apache HBase namespaces
Enable namespace mapping
▶︎
Associating tables of a schema to a namespace
Associate table in a customized Kerberos environment
Associate a table in a non-customized environment without Kerberos
▶︎
Using secondary indexing
Use strongly consistent indexing
Migrate to strongly consistent indexing
▶︎
Using transactions
Configure transaction support
Use transactions with tables
▶︎
Using JDBC API
Connecting to PQS using JDBC
Connect to Phoenix Query Server
Connect to Phoenix Query Server through Apache Knox
Launching Apache Phoenix Thin Client
▶︎
Using the Phoenix JDBC Driver
▶︎
Configuring the Phoenix classpath
Adding the Phoenix JDBC driver jar
Adding the HBase or Hadoop configuration files
Understanding the Phoenix JDBC URL
Using non-JDBC drivers
▶︎
Using Apache Phoenix-Spark connector
Configure Phoenix-Spark connector
Phoenix-Spark connector usage examples
▶︎
Using Apache Phoenix-Hive connector
Configure Phoenix-Hive connector
Apache Phoenix-Hive usage examples
Limitations of Phoenix-Hive connector
▶︎
Managing Apache Phoenix Security
Phoenix is FIPS compliant
Managing Apache Phoenix security
Enable Phoenix ACLs
Configure TLS encryption manually for Phoenix Query Server
▶︎
Data Engineering
▶︎
Configuring Apache Spark
▶︎
Configuring dynamic resource allocation
Customize dynamic resource allocation settings
Configure a Spark job for dynamic resource allocation
Dynamic resource allocation properties
▶︎
Spark security
Enabling Spark authentication
Enabling Spark Encryption
Running Spark applications on secure clusters
Configuring HSTS for Spark
Accessing compressed files in Spark
▶︎
Using Spark History Servers with high availability
Limitation for Spark History Server with high availability
Configuring high availability for SHS with an external load balancer
Configuring high availability for SHS with an internal load balancer
Configuring high availability for SHS with multiple Knox Gateways
How to access Spark files on Ozone
▶︎
Developing Apache Spark Applications
Introduction
Spark application model
Spark execution model
Developing and running an Apache Spark WordCount application
Using the Spark DataFrame API
▶︎
Building Spark Applications
Best practices for building Apache Spark applications
Building reusable modules in Apache Spark applications
Packaging different versions of libraries with an Apache Spark application
▶︎
Using Spark SQL
SQLContext and HiveContext
Querying files into a DataFrame
Spark SQL example
Interacting with Hive views
Performance and storage considerations for Spark SQL DROP TABLE PURGE
TIMESTAMP compatibility for Parquet files
Accessing Spark SQL through the Spark shell
Calling Hive user-defined functions (UDFs)
▶︎
Using Spark Streaming
Spark Streaming and Dynamic Allocation
Spark Streaming Example
Enabling fault-tolerant processing in Spark Streaming
Configuring authentication for long-running Spark Streaming jobs
Building and running a Spark Streaming application
Sample pom.xml file for Spark Streaming with Kafka
▶︎
Accessing external storage from Spark
▶︎
Accessing data stored in Amazon S3 through Spark
Examples of accessing Amazon S3 data from Spark
Accessing Hive from Spark
Accessing HDFS Files from Spark
▶︎
Accessing ORC Data in Hive Tables
Accessing ORC files from Spark
Predicate push-down optimization
Loading ORC data into DataFrames using predicate push-down
Optimizing queries using partition pruning
Enabling vectorized query execution
Reading Hive ORC tables
Accessing Avro data files from Spark SQL applications
Accessing Parquet files from Spark SQL applications
▶︎
Using Spark MLlib
Running a Spark MLlib example
Enabling Native Acceleration For MLlib
Using custom libraries with Spark
Using Apache Iceberg with Spark
▶︎
Running Apache Spark Applications
Introduction
Apache Spark 3.4 Requirements
Running Spark 3.4 Applications
Updating Spark 2 apps for Spark 3.4
Running your first Spark application
Running sample Spark applications
▶︎
Configuring Spark Applications
Configuring Spark application properties in spark-defaults.conf
Configuring Spark application logging properties
▶︎
Submitting Spark applications
spark-submit command options
Spark cluster execution overview
Canary test for pyspark command
Fetching Spark Maven dependencies
Accessing the Spark History Server
▶︎
Running Spark applications on YARN
Spark on YARN deployment modes
Submitting Spark Applications to YARN
Monitoring and Debugging Spark Applications
Example: Running SparkPi on YARN
Configuring Spark on YARN Applications
Dynamic allocation
▶︎
Submitting Spark applications using Livy
Using the Livy API to run Spark jobs
▶︎
Running an interactive session with the Livy REST API
Livy objects for interactive sessions
Setting Python path variables for Livy
Livy API reference for interactive sessions
▶︎
Submitting batch applications using the Livy REST API
Livy batch object
Livy API reference for batch jobs
Submitting a Spark job to a Data Hub cluster using Livy
Configuring the Livy Thrift Server
Connecting to the Apache Livy Thrift Server
Using Livy with Spark
Using Livy with interactive notebooks
▶︎
Using PySpark
Running PySpark in a virtual environment
Running Spark Python applications
Automating Spark Jobs with Oozie Spark Action
▶︎
Tuning Apache Spark
Introduction
Check Job Status
Check Job History
Improving Software Performance
▶︎
Tuning Apache Spark Applications
Tuning Spark Shuffle Operations
Choosing Transformations to Minimize Shuffles
When Shuffles Do Not Occur
When to Add a Shuffle Transformation
Secondary Sort
Tuning Resource Allocation
Resource Tuning Example
Tuning the Number of Partitions
Reducing the Size of Data Structures
Choosing Data Formats
▶︎
Configuring Apache Zeppelin
Introduction
Configuring Livy
Livy high availability support
Configure User Impersonation for Access to Hive
Configure User Impersonation for Access to Phoenix
▶︎
Enabling Access Control for Zeppelin Elements
Enable Access Control for Interpreter, Configuration, and Credential Settings
Enable Access Control for Notebooks
Enable Access Control for Data
▶︎
Shiro Settings: Reference
Active Directory Settings
LDAP Settings
General Settings
shiro.ini Example
▶︎
Using Apache Zeppelin
Introduction
Launch Zeppelin
▶︎
Working with Zeppelin Notes
Create and Run a Note
Import a Note
Export a Note
Using the Note Toolbar
Import External Packages
▶︎
Configuring and Using Zeppelin Interpreters
Modify interpreter settings
Using Zeppelin Interpreters
Customize interpreter settings in a note
Use the JDBC interpreter to access Hive
Use the Livy interpreter to access Spark
Using Spark Hive Warehouse and HBase Connector Client .jar files with Livy
▶︎
Security
▶︎
Apache Ranger APIs
▶︎
Ranger API Overview
Ranger Admin Metrics API
Ranger REST API documentation
▶︎
Apache Ranger Auditing
Audit Overview
▶︎
Managing Auditing with Ranger
Viewing audit details
Viewing audit metrics
Creating a read-only Admin user (Auditor)
Configuring Ranger audit properties for Solr
Configuring Ranger audit properties for HDFS
Triggering HDFS audit files rollover
Configuring Ranger audit log storage to a local file
▶︎
Ranger Audit Filters
Default Ranger audit filters
Configuring a Ranger audit filter policy
How to set audit filters in Ranger Admin Web UI
Filter service access logs from Ranger UI
Configuring audit spool alert notifications
Charting spool alert metrics
Excluding audits for specific users, groups, and roles
Changing Ranger audit storage location and migrating data
Configuring Ranger audits to show actual client IP address
▶︎
Apache Ranger Authorization
Using Ranger to Provide Authorization in CDP
▶︎
Ranger plugin overview
Ranger Hive Plugin
Ranger Kafka Plugin
Ranger special entities
Enabling Ranger HDFS plugin manually on a Data Hub
▶︎
Ranger Policies Overview
Ranger tag-based policies
Tags and policy evaluation
Ranger access conditions
▶︎
Using the Ranger Admin Web UI
Accessing the Ranger Admin Web UI
Ranger console navigation
▶︎
Resource-based Services and Policies
▶︎
Configuring resource-based services
Configure a resource-based service: Atlas
Configure a resource-based service: HBase
Configure a resource-based service: HDFS
Configure a resource-based service: HadoopSQL
Configure a resource-based service: Kafka
Configure a resource-based service: Knox
Configure a resource-based service: NiFi
Configure a resource-based service: NiFi Registry
Configure a resource-based service: Solr
Configure a resource-based service: YARN
▶︎
Configuring resource-based policies
Configure a resource-based policy: Atlas
Configure a resource-based policy: HBase
Configure a resource-based policy: HDFS
Configure a resource-based policy: HadoopSQL
Configure a resource-based storage handler policy: HadoopSQL
Configure a resource-based policy: Kafka
Configure a resource-based policy: Knox
Configure a resource-based policy: NiFi
Configure a resource-based policy: NiFi Registry
Configure a resource-based policy: S3
Configure a resource-based policy: Solr
Configure a resource-based policy: YARN
Wildcards and variables in resource-based policies
Adding a policy condition to a resource-based policy
Adding a policy label to a resource-based policy
Preloaded resource-based services and policies
▶︎
Importing and exporting resource-based policies
Import resource-based policies for a specific service
Import resource-based policies for all services
Export resource-based policies for a specific service
Export all resource-based policies for all services
▶︎
Row-level filtering and column masking in Hive
Row-level filtering in Hive with Ranger policies
Dynamic resource-based column masking in Hive with Ranger policies
Dynamic tag-based column masking in Hive with Ranger policies
▶︎
Tag-based Services and Policies
Adding a tag-based service
▶︎
Adding tag-based policies
Using tag attributes and values in Ranger tag-based policy conditions
Adding a policy condition to a tag-based policy
Adding a tag-based PII policy
Default EXPIRES ON tag policy
▶︎
Importing and exporting tag-based policies
Import tag-based policies
Export tag-based policies
Create a time-bound policy
Create a Hive authorizer URL policy
Showing Role|Grant definitions from Ranger HiveAuthorizer
▶︎
Ranger Security Zones
Security Zones Administration
Security Zones Example Use Cases
Adding a Ranger security zone
▶︎
Administering Ranger Reports
View Ranger reports
Search Ranger reports
Export Ranger reports
Using Ranger client libraries
Using session cookies to validate Ranger policies
Configure optimized rename and recursive delete operations in Ranger Ozone plugin
How to optimally configure Ranger RAZ client performance
▶︎
Apache Ranger User Management
▶︎
Administering Ranger Users, Groups, Roles, and Permissions
Adding a user
Editing a user
Deleting a user
Adding a group
Editing a group
Deleting a group
Adding a role through Ranger
Adding a role through Hive
Editing a role
Deleting a role
Adding or editing module permissions
▶︎
Ranger Usersync
Adding default service users and roles for Ranger
Configuring Usersync assignment of Admin users
Configuring nested group hierarchies
Configuring Ranger Usersync for Deleted Users and Groups
Configuring Ranger Usersync for invalid usernames
Setting credentials for Ranger Usersync custom keystore
Enabling Ranger Usersync search to generate internally
Configuring Usersync to sync directly with LDAP/AD (FreeIPA)
Force deletion of external users and groups from the Ranger database
▶︎
Configuring Ranger Authentication with UNIX, LDAP, or AD
▶︎
Configuring Ranger Authentication with UNIX, LDAP, AD, or PAM
Configure Ranger authentication for UNIX
Configure Ranger authentication for AD
Configure Ranger authentication for LDAP
Configure Ranger authentication for PAM
▶︎
Ranger AD Integration
Ranger UI authentication
Ranger UI authorization
▶︎
How to manage log rotation for Ranger Services
Managing logging properties for Ranger services
Enabling selective debugging for Ranger Admin
Enabling selective debugging for RAZ
▶︎
Configuring and Using Ranger RMS Hive-S3 ACL Sync
Ranger RMS - Hive-S3 ACL Sync Overview
Understanding Ranger policies with RMS
How to full sync the Ranger RMS database
Configuring Ranger RMS (Hive-S3 ACL Sync)
Ranger RMS (Hive-S3 ACL Sync) Use Cases
▶︎
Apache Knox Authentication
▶︎
Apache Knox Overview
Securing Access to Hadoop Cluster: Apache Knox
Apache Knox Gateway Overview
Knox Supported Services Matrix
Proxy Cloudera Manager through Apache Knox
▶︎
Installing Apache Knox
Apache Knox Install Role Parameters
▶︎
Management of Knox shared providers in Cloudera Manager
▶︎
Management of existing Apache Knox shared providers
Add a new provider in an existing provider configuration
Modify a provider in an existing provider configuration
Disable a provider in an existing provider configuration
Remove a provider parameter in an existing provider configuration
Saving aliases
Configuring Kerberos authentication in Apache Knox shared providers
▶︎
Management of services for Apache Knox via Cloudera Manager
Enable proxy for a known service in Apache Knox
Disable proxy for a known service in Apache Knox
Add a custom descriptor to Apache Knox
▶︎
Load balancing for Apache Knox
Generate and configure a signing keystore for Knox in HA
▶︎
Knox Gateway token integration
Overview
Token configurations
Generate tokens
Manage Knox Gateway tokens
Knox Token API
Manage Knox metadata
Knox SSO Cookie Invalidation
Concurrent session verification (Tech Preview)
▶︎
Governance
▶︎
Searching with Metadata
▶︎
Using Basic search
Basic search enhancement
Using Relationship search
Using Search filters
▶︎
Ability to download search results from Atlas UI
How to download results using Basic and Advanced search options
Using Free-text Search
Enhancements with search query
▶︎
Ignore or Prune pattern to filter Hive metadata entities
How Ignore and Prune feature works
Using Ignore and Prune patterns
Saving searches
Using advanced search
Atlas index repair configuration
▶︎
Working with Classifications and Labels
▶︎
Working with Atlas classifications and labels
Text-editor for Atlas parameters
▶︎
Creating classifications
Example for finding parent object for assigned classification or term
Creating labels
Adding attributes to classifications
▶︎
Support for validating the AttributeName in parent and child TypeDef
Validations for parent types
Case for implementing backward compatibility
Associating classifications with entities
Propagating classifications through lineage
Searching for entities using classifications
▶︎
Exploring using Lineage
Lineage overview
Viewing lineage
Lineage lifecycle
Support for On-Demand lineage
▶︎
HDFS lineage data extraction in Atlas
Prerequisites for HDFS lineage extraction
▶︎
HDFS lineage commands
Running HDFS lineage commands
Inclusion and exclusion operation for HDFS files
Supported HDFS entities and their hierarchies
▶︎
Leveraging Business Metadata
Business Metadata overview
Creating Business Metadata
Adding attributes to Business Metadata
Associating Business Metadata attributes with entities
Importing Business Metadata associations in bulk
Searching for entities using Business Metadata attributes
▶︎
Managing Business Terms with Atlas Glossaries
Glossaries overview
Creating glossaries
Creating terms
Associating terms with entities
Defining related terms
Creating categories
Assigning terms to categories
Searching using terms
▶︎
Importing Glossary terms in bulk
Enhancements related to bulk glossary terms import
Glossary performance improvements
▶︎
Iceberg for Atlas
▶︎
Iceberg support for Atlas
How Atlas works with Iceberg
Using the Spark shell
Using the Hive shell
Using the Impala shell
▶︎
Setting up Atlas High Availability
About Atlas High Availability
Prerequisites for setting up Atlas HA
Installing Atlas in HA using CDP Private Cloud Base cluster
▶︎
Auditing Atlas Entities
▶︎
Audit Operations
Atlas Type Definitions
▶︎
Atlas Export and Import operations
Exporting data using Connected type
Atlas Server Operations
Audit enhancements
Examples of Audit Operations
▶︎
Storage reduction for Atlas
▶︎
Using audit aging
Enabling audit aging
Using default audit aging
Using Sweep out configurations
Using custom audit aging
Aging patterns
Audit aging reference configurations
Audit aging using REST API
▶︎
Using custom audit filters
Supported operators
Rule configurations
Use cases and sample payloads
▶︎
Securing Atlas
Securing Atlas
Configuring TLS/SSL for Apache Atlas
▶︎
Configuring Atlas Authentication
Configure Kerberos authentication for Apache Atlas
Configure Atlas authentication for AD
Configure Atlas authentication for LDAP
Configure Atlas PAM authentication
Configure Atlas file-based authentication
▶︎
Configuring Atlas Authorization
Restricting classifications based on user permission
Configuring Ranger Authorization for Atlas
Configuring Atlas Authorization using Ranger
Configuring Simple Authorization in Atlas
▶︎
Using Import Utility Tools with Atlas
▶︎
Importing Hive Metadata using Command-Line (CLI) utility
Bulk and migration import of Hive metadata
Using Atlas-Hive import utility with Ozone entities
Setting up Atlas Kafka import tool
▶︎
Configuring Atlas using Cloudera Manager
▶︎
Configuring and Monitoring Atlas
Showing Atlas Server status
Accessing Atlas logs
▶︎
Extracting S3 Metadata using Atlas
▶︎
Amazon S3 metadata collection
Accessing AWS
AWS objects and inferred hierarchy
AWS object lifecycle
▶︎
AWS configuration
To configure an SQS queue suitable for Atlas extraction
To configure an S3 bucket to publish events
▶︎
S3 Extractor configuration
Prerequisites
Configure credentials for Atlas extraction
Extraction Command
Extractor configuration properties
Defining what assets to extract metadata for
Running bulk extraction
Running incremental extraction
Logging Extractor Activity
S3 actions that produce or update Atlas entities
▶︎
S3 entities created in Atlas
AWS S3 Base
AWS S3 Container
AWS S3 Contained
AWS S3 Bucket
AWS S3 Object
▶︎
AWS S3 Directory
▶︎
S3 relationships
Example of Atlas S3 Lineage
S3 entity audit entries
▶︎
Extracting ADLS Metadata using Atlas
Before you start
Introduction to Atlas ADLS Extractor
Terminologies
Extraction Prerequisites
Updating Extractor Configuration with ADLS Authentication
Configuring ADLS Gen2 Storage Queue
▶︎
Setting up Azure managed Identity for Extraction
Creating Managed Identity
Assigning Roles for the Managed Identities
Mapping Atlas Identity to CDP users
Running ADLS Metadata Extractor
Running Bulk Extraction
Running Incremental Extraction
Command-line options to run Extraction
Extraction Configuration
Verifying Atlas for the extracted data
Resources for on-boarding Azure for CDP users
▶︎
Jobs Management
Overview of Oozie
Adding the Oozie service using Cloudera Manager
Considerations for Oozie to work with AWS
▶︎
Adding file system credentials to an Oozie workflow
Credentials for token delegation
File System Credentials
Setting file system credentials through hadoop properties
Setting default credentials using Cloudera Manager
Advanced settings: Overriding default configurations
Modifying the workflow file manually
Hue Limitation
User authorization configuration for Oozie
▶︎
Redeploying the Oozie ShareLib
Redeploying the Oozie sharelib using Cloudera Manager
▶︎
Oozie configurations with CDP services
▶︎
Using Sqoop actions with Oozie
Deploying and configuring Oozie Sqoop1 Action JDBC drivers
Configuring Oozie Sqoop1 Action workflow JDBC drivers
Configuring Oozie to enable MapReduce jobs to read or write from Amazon S3
Configuring Oozie to use HDFS HA
▶︎
Using Oozie with Ozone
Uploading Oozie ShareLib to Ozone
Enabling Oozie workflows that access Ozone storage
Using Hive Warehouse Connector with Oozie Spark Action
Oozie and client configurations
▶︎
Spark 3 support in Oozie
Enable Spark actions
Use Spark actions with a custom Python executable
Spark 3 Oozie action schema
Differences between Spark and Spark 3 actions
Use Spark 3 actions with a custom Python executable
Spark 3 compatibility action executor
Spark 3 examples with Python or Java application
Shell action for Spark 3
Migration of Spark 2 applications
Hue support for Oozie
▶︎
Oozie High Availability
Requirements for Oozie High Availability
▶︎
Configuring Oozie High Availability using Cloudera Manager
Oozie Load Balancer configuration
Enabling Oozie High Availability
Disabling Oozie High Availability
▶︎
Scheduling in Oozie using cron-like syntax
Oozie scheduling examples
▶︎
Configuring an external database for Oozie
Configuring PostgreSQL for Oozie
Configuring MariaDB for Oozie
Configuring MySQL 5 for Oozie
Configuring MySQL 8 for Oozie
Configuring Oracle for Oozie
▶︎
Working with the Oozie server
Starting the Oozie server
Stopping the Oozie server
Accessing the Oozie server with the Oozie Client
Accessing the Oozie server with a browser
Adding schema to Oozie using Cloudera Manager
Enabling the Oozie web console on managed clusters
Enabling Oozie SLA with Cloudera Manager
Disabling Oozie UI using Cloudera Manager
Moving the Oozie service to a different host
▶︎
Oozie database configurations
Configuring Oozie data purge settings using Cloudera Manager
Loading the Oozie database
Dumping the Oozie database
Setting the Oozie database timezone
▶︎
Fine-tuning Oozie's database connection
Assembling a secure JDBC URL for Oozie
Oracle TCPS
Prerequisites for configuring TLS/SSL for Oozie
Configure TLS/SSL for Oozie
Oozie Java-based actions with Java 17
Oozie security enhancements
Additional considerations when configuring TLS/SSL for Oozie HA
Configure Oozie client when TLS/SSL is enabled
Configuring custom Kerberos principal for Oozie
▶︎
Streams Messaging
▶︎
Configuring Apache Kafka
Operating system requirements
Performance considerations
Quotas
▶︎
JBOD
JBOD setup
JBOD Disk migration
Setting user limits for Kafka
Connecting Kafka clients to Data Hub provisioned clusters
▶︎
Rolling restart checks
Configuring rolling restart checks
Configuring the client configuration used for rolling restart checks
▶︎
Cluster discovery with multiple Apache Kafka clusters
▶︎
Cluster discovery using DNS records
A records and round robin DNS
client.dns.lookup property options for client
CNAME records configuration
Connection to the cluster with configured DNS aliases
▶︎
Cluster discovery using load balancers
Setup for SASL with Kerberos
Setup for TLS/SSL encryption
Connecting to the Kafka cluster using load balancer
Configuring Kafka ZooKeeper chroot
Rack awareness
▶︎
Securing Apache Kafka
▶︎
Channel encryption
Configure Kafka brokers
Configure Kafka clients
Configure Kafka MirrorMaker
Configure ZooKeeper TLS/SSL support for Kafka
▶︎
Authentication
▶︎
TLS/SSL client authentication
Configure Kafka brokers
Configure Kafka clients
Principal name mapping
Enable Kerberos authentication
▶︎
Delegation token based authentication
Enable or disable authentication with delegation tokens
Manage individual delegation tokens
Rotate the master key/secret
▶︎
Client authentication using delegation tokens
Configure clients on a producer or consumer level
Configure clients on an application level
▶︎
LDAP authentication
Configure Kafka brokers
Configure Kafka clients
▶︎
PAM authentication
Configure Kafka brokers
Configure Kafka clients
▶︎
OAuth2 authentication
Configuring Kafka brokers
Configuring Kafka clients
▶︎
Authorization
▶︎
Ranger
Enable authorization in Kafka with Ranger
Configure the resource-based Ranger service used for authorization
Kafka ACL APIs support in Ranger
▶︎
Governance
Importing Kafka entities into Atlas
Configuring the Atlas hook in Kafka
Inter-broker security
Configuring multiple listeners
▶︎
Kafka security hardening with ZooKeeper ACLs
Restricting access to Kafka metadata in ZooKeeper
Unlocking access to Kafka metadata in ZooKeeper
▶︎
Tuning Apache Kafka Performance
Handling large messages
▶︎
Cluster sizing
Sizing estimation based on network and disk message throughput
Choosing the number of partitions for a topic
▶︎
Broker Tuning
JVM and garbage collection
Network and I/O threads
ISR management
Log cleaner
▶︎
System Level Broker Tuning
File descriptor limits
Filesystems
Virtual memory handling
Networking parameters
Configure JMX ephemeral ports
Kafka-ZooKeeper performance tuning
▶︎
Managing Apache Kafka
▶︎
Management basics
Broker log management
Record management
Broker garbage log collection and log rotation
Client and broker compatibility across Kafka versions
▶︎
Managing topics across multiple Kafka clusters
Set up MirrorMaker in Cloudera Manager
Settings to avoid data loss
▶︎
Broker migration
Migrate brokers by modifying broker IDs in meta.properties
Use rsync to copy files from one broker to another
▶︎
Disk management
Monitoring
▶︎
Handling disk failures
Disk Replacement
Disk Removal
Reassigning replicas between log directories
Retrieving log directory replica assignment information
▶︎
Metrics
Building Cloudera Manager charts with Kafka metrics
Essential metrics to monitor
▶︎
Command Line Tools
Unsupported command line tools
kafka-topics
kafka-cluster
kafka-configs
kafka-console-producer
kafka-console-consumer
kafka-consumer-groups
kafka-features
kafka-reassign-partitions
kafka-log-dirs
zookeeper-security-migration
kafka-delegation-tokens
kafka-*-perf-test
Configuring log levels for command line tools
Understanding the kafka-run-class Bash Script
▶︎
Developing Apache Kafka Applications
Kafka producers
▶︎
Kafka consumers
Subscribing to a topic
Groups and fetching
Protocol between consumer and broker
Rebalancing partitions
Retries
Kafka clients and ZooKeeper
▶︎
Java client
▶︎
Client examples
Simple Java consumer
Simple Java producer
Security examples
▶︎
.NET client
▶︎
Client examples
Simple .NET consumer
Simple .NET producer
Performant .NET producer
Simple .NET consumer using Schema Registry
Simple .NET producer using Schema Registry
Security examples
Kafka Streams
Kafka public APIs
Recommendations for client development
▶︎
Kafka Connect
Kafka Connect Overview
Setting up Kafka Connect
▶︎
Using Kafka Connect
Configuring the Kafka Connect Role
Managing, Deploying and Monitoring Connectors
▶︎
Writing Kafka data to Ozone with Kafka Connect
Writing data in an unsecured cluster
Writing data in a Kerberos and TLS/SSL enabled cluster
Using the AvroConverter
Configuring EOS for source connectors
▶︎
Securing Kafka Connect
▶︎
Kafka Connect to Kafka broker security
Configuring TLS/SSL encryption
Configuring Kerberos authentication
▶︎
Kafka Connect REST API security
▶︎
Authentication
Configuring TLS/SSL client authentication
Configuring SPNEGO authentication and trusted proxies
▶︎
Authorization
Authorization model
Ranger integration
▶︎
Kafka Connect connector configuration security
▶︎
Kafka Connect Secrets Storage
Terms and concepts
Managing secrets using the REST API
Re-encrypting secrets
Configuring connector JAAS configuration and Kerberos principal overrides
Configuring a Nexus repository allow list
▶︎
Single Message Transforms
Configuring an SMT chain
ConvertFromBytes
ConvertToBytes
▶︎
Connectors
Installing connectors
Debezium Db2 Source
Debezium MySQL Source
Debezium Oracle Source
Debezium PostgreSQL Source
Debezium SQL Server Source
HTTP Source
JDBC Source
JMS Source
MQTT Source
SFTP Source
▶︎
Stateless NiFi Source and Sink
Dataflow development best practices
Kafka Connect worker assignment
Kafka Connect log files
Kafka Connect tasks
Developing a dataflow
Deploying a dataflow
Downloading and viewing predefined dataflows
Configuring flow.snapshot
Tutorial: developing and deploying a JDBC Source dataflow
Syslog TCP Source
Syslog UDP Source
ADLS Sink
▶︎
Amazon S3 Sink
Configuration example
▶︎
HDFS Sink
Configuration example for writing data to HDFS
Configuration example for writing data to Ozone FS
HDFS Stateless Sink
HTTP Sink
InfluxDB Sink
JDBC Sink
Kudu Sink
S3 Sink
▶︎
Kafka KRaft [Technical Preview]
KRaft setup
Extracting KRaft metadata
Securing KRaft
▶︎
Configuring Cruise Control
Setting capacity estimations and goals
Configuring Metrics Reporter in Cruise Control
Adding self-healing goals to Cruise Control in Cloudera Manager
▶︎
Securing Cruise Control
Enable security for Cruise Control
▶︎
Managing Cruise Control
Rebalancing with Cruise Control
Cruise Control REST API endpoints
▶︎
Configuring Streams Messaging Manager
Installing SMM in CDP Public Cloud
▶︎
Setting up Prometheus for SMM
▶︎
Prometheus configuration for SMM
Prerequisites for Prometheus configuration
Prometheus properties configuration
SMM property configuration in Cloudera Manager for Prometheus
Kafka property configuration in Cloudera Manager for Prometheus
Kafka Connect property configuration in Cloudera Manager for Prometheus
Start Prometheus
▶︎
Secure Prometheus for SMM
▶︎
Nginx proxy configuration over Prometheus
Nginx installation
Nginx configuration for Prometheus
▶︎
Setting up TLS for Prometheus
Configuring SMM to recognize Prometheus's TLS certificate
▶︎
Setting up basic authentication with TLS for Prometheus
Configuring Nginx for basic authentication
Configuring SMM for basic authentication
Setting up mTLS for Prometheus
Prometheus for SMM limitations
Troubleshooting Prometheus for SMM
Performance comparison between Cloudera Manager and Prometheus
▶︎
Using Streams Messaging Manager
▶︎
Monitoring Kafka
Monitoring Kafka clusters
Monitoring Kafka producers
Monitoring Kafka topics
Monitoring Kafka brokers
Monitoring Kafka consumers
Monitoring log size information
Monitoring lineage information
▶︎
Managing Kafka topics
Creating a Kafka topic
Modifying a Kafka topic
Deleting a Kafka topic
▶︎
Managing Alert Policies and Notifiers
Creating a notifier
Updating a notifier
Deleting a notifier
Creating an alert policy
Updating an alert policy
Enabling an alert policy
Disabling an alert policy
Deleting an alert policy
Component types and metrics for alert policies
▶︎
Monitoring end-to-end latency
Enabling interceptors
Monitoring end-to-end latency for a Kafka topic
End-to-end latency use case
▶︎
Monitoring Kafka cluster replications (SRM)
▶︎
Viewing Kafka cluster replication details
Searching Kafka cluster replications by source
Monitoring Kafka cluster replications by quick ranges
Monitoring status of the clusters to be replicated
▶︎
Monitoring topics to be replicated
Searching by topic name
Monitoring throughput for cluster replication
Monitoring replication latency for cluster replication
Monitoring checkpoint latency for cluster replication
Monitoring replication throughput and latency by values
▶︎
Managing and monitoring Kafka Connect
The Kafka Connect UI
Deploying and managing connectors
▶︎
Managing and monitoring Cruise Control rebalance
Authorizing users to access Cruise Control in SMM
Cruise Control dashboard in SMM UI
Using the Rebalance Wizard in Cruise Control
▶︎
Securing Streams Messaging Manager
Securing Streams Messaging Manager
Verifying the setup
▶︎
Integrating with Schema Registry
▶︎
Integrating Schema Registry with NiFi
NiFi record-based Processors and Controller Services
Configuring Schema Registry instance in NiFi
Setting schema access strategy in NiFi
Adding and configuring record-enabled Processors
Integrating Schema Registry with Kafka
Integrating Schema Registry with Flink and SSB
Integrating Schema Registry with Atlas
Improving performance in Schema Registry
▶︎
Using Schema Registry
Adding a new schema
Querying a schema
Evolving a schema
Deleting a schema
Importing Confluent Schema Registry schemas into Cloudera Schema Registry
▶︎
Exporting and importing schemas
Exporting schemas using Schema Registry API
Importing schemas using Schema Registry API
▶︎
ID ranges in Schema Registry
Setting a Schema Registry ID range
▶︎
Load balancer in front of Schema Registry instances
Configurations required to use load balancer with Kerberos enabled
Configurations required to use load balancer with SSL enabled
▶︎
Securing Schema Registry
▶︎
Schema Registry authorization through Ranger access policies
Predefined access policies for Schema Registry
Adding the user or group to a predefined access policy
Creating a custom access policy
▶︎
Schema Registry authentication through OAuth2 JWT tokens
JWT algorithms
Public key and secret storage
Authentication using OAuth2 with Kerberos
Schema Registry server configuration
Configuring the Schema Registry client
▶︎
Configuring Streams Replication Manager
Enable high availability
Enabling prefixless replication
▶︎
Defining and adding clusters for replication
Defining external Kafka clusters
Defining co-located Kafka clusters using a service dependency
Defining co-located Kafka clusters using Kafka credentials
Adding clusters to SRM's configuration
Configuring replications
Configuring the driver role target clusters
Configuring the service role target cluster
Configuring properties not exposed in Cloudera Manager
Configuring replication specific REST servers
▶︎
Configuring Remote Querying
Enabling Remote Querying
Configuring the advertised information of the SRM Service role
Configuring SRM Driver retry behavior
Configuring SRM Driver heartbeat emission
Configuring automatic group offset synchronization
Configuring SRM Driver for performance tuning
New topic and consumer group discovery
▶︎
Configuration examples
Bidirectional replication example of two active clusters
Cross data center replication example of multiple clusters
▶︎
Using Streams Replication Manager
▶︎
SRM Command Line Tools
▶︎
srm-control
▶︎
Configuring srm-control
Configuring the SRM client's secure storage
Configuring TLS/SSL properties
Configuring Kerberos properties
Configuring properties for non-Kerberos authentication mechanisms
Setting the secure storage password as an environment variable
Topics and Groups Subcommand
Offsets Subcommand
Monitoring Replication with Streams Messaging Manager
Replicating Data
▶︎
How to Set up Failover and Failback
Configure SRM for Failover and Failback
Migrating Consumer Groups Between Clusters
▶︎
Securing Streams Replication Manager
Security overview
Enabling TLS/SSL for the SRM service
Enabling Kerberos for the SRM service
▶︎
Configuring Basic Authentication for the SRM Service
Enabling Basic Authentication for the SRM Service
Configuring Basic Authentication for Remote Querying
SRM security example
▶︎
Use cases for Streams Replication Manager in CDP Public Cloud
Using SRM in CDP Public Cloud overview
Replicating data from PvC Base to Data Hub with on-prem SRM
Replicating data from PvC Base to Data Hub with cloud SRM
Replicate data between Data Hub clusters with cloud SRM
▶︎
Troubleshooting
▶︎
Troubleshooting Apache Atlas
Atlas index repair configuration
▶︎
Troubleshooting Apache Hive
Unable to alter S3-backed tables
▶︎
Troubleshooting Apache Impala
Troubleshooting Impala
Using Breakpad Minidumps for Crash Reporting
Performance Issues Related to Data Encryption
Troubleshooting Crashes Caused by Memory Resource Limit
▶︎
Troubleshooting Apache Hadoop YARN
Troubleshooting Docker on YARN
Troubleshooting on YARN
Troubleshooting Linux Container Executor
▶︎
Troubleshooting Apache HBase
Troubleshooting HBase
▶︎
Using the HBCK2 tool to remediate HBase clusters
Running the HBCK2 tool
Finding issues
Fixing issues
HBCK2 tool command reference
Thrift Server crashes after receiving invalid data
HBase is using more disk space than expected
Troubleshoot RegionServer grouping
▶︎
Troubleshooting Apache Kudu
▶︎
Issues starting or restarting the master or the tablet server
Errors during hole punching test
Already present: FS layout already exists
Troubleshooting NTP stability problems
Disk space usage issue
▶︎
Performance issues
▶︎
Kudu tracing
Accessing the tracing web interface
RPC timeout traces
Kernel stack watchdog traces
Memory limits
Block cache size
Heap sampling
Slow name resolution and nscd
▶︎
Usability issues
ClassNotFoundException: com.cloudera.kudu.hive.KuduStorageHandler
Runtime error: Could not create thread: Resource temporarily unavailable (error 11)
Tombstoned or STOPPED tablet replicas
Corruption: checksum error on CFile block
Symbolizing stack traces
▶︎
Recover from a dead Kudu master
Prepare for the recovery
Perform the recovery
▶︎
Troubleshooting Cloudera Search
Identifying problems
Troubleshooting
▶︎
Troubleshooting Hue
The Hue load balancer not distributing users evenly across various Hue servers
Unable to authenticate users in Hue using SAML
Cleaning up old data to improve performance
Unable to connect to database with provided credential
Activating Hive query editor on Hue UI
Completed Hue query shows executing on CM
Finding the list of Hue superusers
'Type' error while accessing Hue from Knox Gateway
Unable to access Hue from Knox Gateway UI
Unable to view Snappy-compressed files
Invalid query handle
Load balancing between Hue and Impala
Services backed by PostgreSQL fail or stop responding
Invalid method name: 'GetLog' error
Authorization Exception error
Cannot alter compressed tables in Hue
MySQL: 1040, 'Too many connections' exception
Increasing the maximum number of processes for Oracle database
Fixing authentication issues between HBase and Hue
Lengthy BalancerMember Route length
Enabling access to HBase browser from Hue
Hue load balancer does not start after enabling TLS
Unable to log into Hue with Knox
LDAP search fails with invalid credentials error
Unable to execute queries due to atomic block
Disabling the web metric collection for Hue
Resolving "The user authorized on the connection does not match the session username" error
Requirements for compressing and extracting files using Hue File Browser
Fixing a warning related to accessing non-optimized Hue
Fixing incorrect start time and duration on Hue Job Browser
▶︎
Troubleshooting Apache Sqoop
Unable to read Sqoop metastore created by an older HSQLDB version
Merge process stops during Sqoop incremental imports
Sqoop Hive import stops when HS2 does not use Kerberos authentication
▶︎
Troubleshooting Apache Spark
Spark jobs failing with memory issues
▶︎
Reference
▶︎
Apache Hadoop YARN Reference
▶︎
Tuning Apache Hadoop YARN
YARN tuning overview
Step 1: Worker host configuration
Step 2: Worker host planning
Step 3: Cluster size
Steps 4 and 5: Verify settings
Step 6: Verify container settings on cluster
Step 6A: Cluster container capacity
Step 6B: Checking container parameters
Step 7: MapReduce configuration
Step 7A: Checking MapReduce settings
Set properties in Cloudera Manager
Configure memory settings
YARN Configuration Properties
Use the YARN REST APIs to manage applications
▶︎
Comparison of Fair Scheduler with Capacity Scheduler
Why one scheduler?
Scheduler performance improvements
Feature comparison
Migration from Fair Scheduler to Capacity Scheduler
▶︎
Configuring and using Queue Manager REST API
Limitations
Using the REST API
Prerequisites
Start Queue
Stop Queue
Add Queue
Change Queue Capacities
Change Queue Properties
Delete Queue
▶︎
Data Access
▶︎
Apache Hive Materialized View Commands
ALTER MATERIALIZED VIEW REBUILD
ALTER MATERIALIZED VIEW REWRITE
CREATE MATERIALIZED VIEW
DESCRIBE EXTENDED and DESCRIBE FORMATTED
DROP MATERIALIZED VIEW
SHOW MATERIALIZED VIEWS
▶︎
Apache Hive Reference
▶︎
Apache Impala Reference
▶︎
Performance Considerations
Performance Best Practices
Query Join Performance
▶︎
Table and Column Statistics
Generating Table and Column Statistics
Runtime Filtering
Min/Max Filtering
Bloom Filtering
Late Materialization of Columns
▶︎
Partitioning
Partition Pruning for Queries
HDFS Caching
HDFS Block Skew
Understanding Performance using EXPLAIN Plan
Understanding Performance using SUMMARY Report
Understanding Performance using Query Profile
Planner changes for CPU usage
DDL Bucketed Tables
▶︎
Scalability Considerations
Scaling Limits and Guidelines
Dedicated Coordinator
▶︎
Hadoop File Formats Support
Using Text Data Files
Using Parquet Data Files
Using ORC Data Files
Using Avro Data Files
Using RCFile Data Files
Using SequenceFile Data Files
▶︎
Storage Systems Support
▶︎
Impala with HDFS
Configure Impala Daemon to spill to HDFS
▶︎
Impala with Kudu
Configuring for Kudu Tables
▶︎
Impala DDL for Kudu
Partitioning for Kudu Tables
Creating External Table
Impala DML for Kudu Tables
Impala with HBase
Impala with Azure Data Lake Store (ADLS)
▶︎
Impala with Amazon S3
Specifying Impala Credentials to Access S3
▶︎
Impala with Ozone
Configure Impala Daemon to spill to Ozone
Ports Used by Impala
Migration Guide
Setting up Data Cache for Remote Reads
▶︎
Managing Metadata in Impala
On-demand Metadata
Automatic Invalidation of Metadata Cache
Automatic Invalidation/Refresh of Metadata
Transactions
▶︎
Apache Impala SQL Reference
Apache Impala SQL Overview
▶︎
Schema objects
Impala aliases
Databases
Functions
Identifiers
Impala tables
Views
▶︎
Data types
ARRAY complex type
BIGINT data type
BINARY data type
BOOLEAN data type
CHAR data type
DATE data type
DECIMAL data type
DOUBLE data type
FLOAT data type
INT data type
MAP complex type
REAL data type
SMALLINT data type
STRING data type
STRUCT complex type
▶︎
TIMESTAMP data type
Customizing time zones
TINYINT data type
VARCHAR data type
▶︎
Complex types
Querying arrays
Zipping unnest on arrays from views
Literals
Operators
Comments
▶︎
SQL statements
ROLE statements
DDL statements
DML statements
ALTER DATABASE statement
ALTER TABLE statement
ALTER VIEW statement
COMMENT statement
COMPUTE STATS statement
CREATE DATABASE statement
CREATE FUNCTION statement
CREATE ROLE statement
CREATE TABLE statement
CREATE VIEW statement
DELETE statement
DESCRIBE statement
DROP DATABASE statement
DROP FUNCTION statement
DROP ROLE statement
DROP STATS statement
DROP TABLE statement
DROP VIEW statement
EXPLAIN statement
GRANT statement
GRANT ROLE statement
INSERT statement
INVALIDATE METADATA statement
LOAD DATA statement
REFRESH statement
REFRESH AUTHORIZATION statement
REFRESH FUNCTIONS statement
REVOKE statement
REVOKE ROLE statement
▶︎
SELECT statement
Joins in Impala SELECT statements
ORDER BY clause
GROUP BY clause
HAVING clause
LIMIT clause
OFFSET clause
UNION, INTERSECT, and EXCEPT clauses
Subqueries in Impala SELECT statements
TABLESAMPLE clause
WITH clause
DISTINCT operator
SET statement
SHOW statement
SHOW ROLES statement
SHOW CURRENT ROLES statement
SHOW ROLE GRANT GROUP statement
SHUTDOWN statement
TRUNCATE TABLE statement
UPDATE statement
UPSERT statement
USE statement
VALUES statement
Optimizer hints
Query options
Virtual column
▶︎
Built-in functions
Mathematical functions
Bit functions
Conversion functions
Date and time functions
Conditional functions
Impala string functions
Miscellaneous functions
▶︎
Aggregate functions
APPX_MEDIAN function
AVG function
COUNT function
GROUPING() and GROUPING_ID() functions
GROUP_CONCAT function
MAX function
MIN function
NDV function
STDDEV, STDDEV_SAMP, STDDEV_POP functions
SUM function
VARIANCE, VARIANCE_SAMP, VARIANCE_POP, VAR_SAMP, VAR_POP functions
▶︎
Analytic functions
OVER
WINDOW
AVG
COUNT
CUME_DIST
DENSE_RANK
FIRST_VALUE
LAG
LAST_VALUE
LEAD
MAX
MIN
NTILE
PERCENT_RANK
RANK
ROW_NUMBER
SUM
▶︎
User-defined functions (UDFs)
UDF concepts
Runtime environment for UDFs
Installing the UDF development package
Writing UDFs
Writing user-defined aggregate functions (UDAFs)
Building and deploying UDFs
Performance considerations for UDFs
Examples of creating and using UDFs
Security considerations for UDFs
Limitations and restrictions for Impala UDFs
Transactions
Reserved words
Impala SQL and Hive SQL
SQL migration to Impala
UTF-8 Support
▶︎
Cloudera Search solrctl Reference
solrctl Reference
Using solrctl with an HTTP proxy
▶︎
Cloudera Search Morphlines Reference
Implementing your own Custom Command
Morphline commands overview
kite-morphlines-core-stdio
kite-morphlines-core-stdlib
kite-morphlines-avro
kite-morphlines-json
kite-morphlines-hadoop-core
kite-morphlines-hadoop-parquet-avro
kite-morphlines-hadoop-rcfile
kite-morphlines-hadoop-sequencefile
kite-morphlines-maxmind
kite-morphlines-metrics-servlets
kite-morphlines-protobuf
kite-morphlines-tika-core
kite-morphlines-tika-decompress
kite-morphlines-saxon
kite-morphlines-solr-core
kite-morphlines-solr-cell
kite-morphlines-useragent
▶︎
Operational Database
▶︎
Apache Phoenix Frequently Asked Questions
Frequently asked questions
▶︎
Apache Phoenix Performance Tuning
Performance tuning
▶︎
Apache Phoenix Command Reference
Apache Phoenix SQL command reference
▶︎
Apache Atlas Reference
Apache Atlas Advanced Search language reference
Apache Atlas Statistics reference
Apache Atlas metadata attributes
▶︎
Dynamic handling of failure in updating index
Configurations used for index recovery
Defining Apache Atlas enumerations
▶︎
Purging deleted entities
Auditing purged entities
PUT /admin/purge/ API
POST /admin/audits/ API
▶︎
Apache Atlas technical metadata migration reference
System metadata migration
HDFS entity metadata migration
Hive entity metadata migration
Impala entity metadata migration
Spark entity metadata migration
AWS S3 entity metadata migration
▶︎
NiFi metadata collection
How Lineage strategy works
Understanding the data that flow into Atlas
NiFi lineage
Atlas NiFi relationships
Atlas NiFi audit entries
How the reporting task runs in a NiFi cluster
Analysing events
Limitations of Atlas-NiFi integration
▶︎
HiveServer metadata collection
HiveServer actions that produce Atlas entities
HiveServer entities created in Atlas
HiveServer relationships
HiveServer lineage
HiveServer audit entries
▶︎
HBase metadata collection
HBase actions that produce Atlas entities
HBase entities created in Atlas
HBase lineage
HBase audit entries
▶︎
Schema Registry metadata collection
Configuring Atlas and Schema Registry
Schema Registry actions that produce Atlas entities
Schema Registry relationships
Schema Registry audit entries
Troubleshooting Schema Registry
▶︎
Impala metadata collection
Impala actions that produce Atlas entities
Impala entities created in Atlas
Impala lineage
Impala audit entries
▶︎
Kafka metadata collection
Kafka actions that produce Atlas entities
Kafka relationships
Kafka lineage
Kafka audit entries
▶︎
Spark metadata collection
Spark actions that produce Atlas entities
Spark entities created in Apache Atlas
Spark lineage
Spark relationships
Spark audit entries
Spark connector configuration in Apache Atlas
Spark troubleshooting
▶︎
Streams Messaging
▶︎
Kafka Connect Connector Reference
HTTP Source properties reference
JDBC Source properties reference
JMS Source properties reference
MQTT Source properties reference
SFTP Source properties reference
Stateless NiFi Source properties reference
Syslog TCP Source properties reference
Syslog UDP Source properties reference
ADLS Sink properties reference
Amazon S3 Sink properties reference
HDFS Sink properties reference
HDFS Stateless Sink properties reference
HTTP Sink properties reference
InfluxDB Sink properties reference
JDBC Sink properties reference
Kudu Sink properties reference
S3 Sink properties reference
Stateless NiFi Sink properties reference
▶︎
Schema Registry Reference
SchemaRegistryClient properties reference
KafkaAvroSerializer properties reference
KafkaAvroDeserializer properties reference
▶︎
Streams Replication Manager Reference
srm-control Options Reference
Configuration Properties Reference for Properties not Available in Cloudera Manager
Kafka credentials property reference
SRM Service data traffic reference
Cruise Control REST API Reference
Kafka Connect REST API Reference
Schema Registry REST API Reference
Streams Messaging Manager REST API Reference
Streams Replication Manager REST API Reference
▶︎
Encryption Reference
Auto-TLS Requirements and Limitations
Rotate Auto-TLS Certificate Authority and Host Certificates
Auto-TLS Agent File Locations
▶︎
Apache Oozie Reference
Submit Oozie Jobs in Data Engineering Cluster
'Type' error while accessing Hue from Knox Gateway
.NET client
A List of S3A Configuration Properties
A records and round robin DNS
Ability to download search results from Atlas UI
About Atlas High Availability
About HBase snapshots
About Hue Query Processor
About the Hue SQL AI Assistant
About the Off-heap BucketCache
About using Hue
Access HDFS from the NFS Gateway
Accessing and using Hue
Accessing Apache HBase
Accessing Atlas logs
Accessing Avro data files from Spark SQL applications
Accessing AWS
Accessing Azure Storage account container from spark-shell
Accessing Cloud Data
Accessing compressed files in Spark
Accessing data stored in Amazon S3 through Spark
Accessing external storage from Spark
Accessing HBase metrics in Prometheus format
Accessing HDFS Files from Spark
Accessing Hive from an external node
Accessing Hive from Spark
Accessing Iceberg tables
Accessing ORC Data in Hive Tables
Accessing ORC files from Spark
Accessing Parquet files from Spark SQL applications
Accessing Spark SQL through the Spark shell
Accessing StorageHandler and other external tables
Accessing the Oozie server with a browser
Accessing the Oozie server with the Oozie Client
Accessing the Ranger Admin Web UI
Accessing the Spark History Server
Accessing the tracing web interface
Accessing the YARN Queue Manager UI
Accessing the YARN Web User Interface
ACID Operation
ACID operations in Data Hub
ACL examples
ACLs on HDFS features
Activate read replicas on a table
Activating Hive query editor on Hue UI
Active Directory Settings
Add a custom coprocessor
Add a custom descriptor to Apache Knox
Add a new provider in an existing provider configuration
Add a ZooKeeper service
Add HDFS system mount
Add Queue
Add storage directories using Cloudera Manager
Add the HttpFS role
Adding a custom banner in Hue
Adding a group
Adding a load balancer
Adding a new schema
Adding a policy condition to a resource-based policy
Adding a policy condition to a tag-based policy
Adding a policy label to a resource-based policy
Adding a Ranger security zone
Adding a role through Hive
Adding a role through Ranger
Adding a splash screen in Hue
Adding a tag-based PII policy
Adding a tag-based service
Adding a user
Adding and configuring record-enabled Processors
Adding and Removing Range Partitions
Adding attributes to Business Metadata
Adding attributes to classifications
Adding clusters to SRM's configuration
Adding default service users and roles for Ranger
Adding file system credentials to an Oozie workflow
Adding multiple namenodes using the HDFS service
Adding or editing module permissions
Adding Query Processor admin users and groups
Adding Query Processor service to a cluster
Adding queues using YARN Queue Manager UI
Adding schema to Oozie using Cloudera Manager
Adding self-healing goals to Cruise Control in Cloudera Manager
Adding tag-based policies
Adding the HBase or Hadoop configuration files
Adding the Lily HBase indexer service
Adding the Oozie service using Cloudera Manager
Adding the Phoenix JDBC driver jar
Adding the user or group to a predefined access policy
Additional Configuration Options for GCS
Additional considerations when configuring TLS/SSL for Oozie HA
Additional HDFS haadmin commands to administer the cluster
Adjust the Solr replication factor for index files stored in HDFS
Adjusting Heartbeat TCP Timeout Interval
ADLS Proxy Setup
ADLS Sink
ADLS Sink properties reference
ADLS Trash Folder Behavior
Admin ACLs
Administering Hue
Administering Ranger Reports
Administering Ranger Users, Groups, Roles, and Permissions
Administrative commands
Administrative tools for Hive Metastore integration
Admission Control and Query Queuing
Admission Control Sample Scenario
Advanced Committer Configuration
Advanced configuration for write-heavy workloads
Advanced erasure coding configuration
Advanced ORC properties
Advanced partitioning
Advanced settings: Overriding default configurations
Advanced topics
Advanced topics
Advantages of defining a schema for production use
Aggregate functions
Aggregating and grouping data
Aging patterns
Allocating DataNode memory as storage
Already present: FS layout already exists
Alter a table
ALTER DATABASE statement
ALTER MATERIALIZED VIEW REBUILD
ALTER MATERIALIZED VIEW REWRITE
Alter table feature
ALTER TABLE statement
ALTER VIEW statement
Amazon S3 metadata collection
Amazon S3 Sink
Amazon S3 Sink properties reference
Analysing events
Analytic functions
Apache Atlas Advanced Search language reference
Apache Atlas dashboard tour
Apache Atlas metadata attributes
Apache Atlas metadata collection overview
Apache Atlas Reference
Apache Atlas Statistics reference
Apache Atlas technical metadata migration reference
Apache Hadoop YARN Overview
Apache Hadoop YARN Reference
Apache HBase Overview
Apache HBase overview
Apache Hive 3 ACID transactions
Apache Hive 3 in Data Hub architectural overview
Apache Hive 3 tables
Apache Hive content roadmap
Apache Hive features in Data Hub
Apache Hive Materialized View Commands
Apache Hive Metastore Overview
Apache Hive Overview
Apache Hive Performance Tuning
Apache Hive query basics
Apache Hive Reference
Apache Hive storage in public clouds
Apache Hive-Kafka integration
Apache Iceberg features
Apache Iceberg Overview
Apache Impala Overview
Apache Impala Overview
Apache Impala Reference
Apache Impala SQL Overview
Apache Impala SQL Reference
Apache Kafka Overview
Apache Knox Authentication
Apache Knox Gateway Overview
Apache Knox Install Role Parameters
Apache Knox Overview
Apache Kudu Background Operations
Apache Kudu Overview
Apache Kudu usage limitations
Apache Oozie Reference
Apache Phoenix and SQL
Apache Phoenix and SQL
Apache Phoenix Command Reference
Apache Phoenix Frequently Asked Questions
Apache Phoenix Overview
Apache Phoenix Performance Tuning
Apache Phoenix SQL command reference
Apache Phoenix-Hive usage examples
Apache Ranger APIs
Apache Ranger Auditing
Apache Ranger Authorization
Apache Ranger User Management
Apache Spark 3.4 Requirements
Apache Spark executor task statistics
Apache Spark Overview
Apache Spark Overview
Apache Zeppelin Overview
APIs for accessing HDFS
Application ACL evaluation
Application ACLs
Application logs' ACLs
Application reservations
Applications and permissions reference
APPX_MEDIAN function
ARRAY complex type
Assembling a secure JDBC URL for Oozie
Assigning or unassigning a node to a partition
Assigning Roles for the Managed Identities
Assigning superuser status to an LDAP user
Assigning terms to categories
Associate a table in a non-customized environment without Kerberos
Associate table in a customized Kerberos environment
Associating Business Metadata attributes with entities
Associating classifications with entities
Associating partitions with queues
Associating tables of a schema to a namespace
Associating terms with entities
Atlas
Atlas
Atlas
Atlas classifications drive Ranger policies
Atlas Export and Import operations
Atlas index repair configuration
Atlas index repair configuration
Atlas metadata model overview
Atlas NiFi audit entries
Atlas NiFi relationships
Atlas Server Operations
Atlas Type Definitions
Audit aging reference configurations
Audit aging using REST API
Audit enhancements
Audit Operations
Audit Overview
Auditing Atlas Entities
Auditing purged entities
Authenticating with ADLS Gen2
Authentication
Authentication
Authentication
Authentication using Kerberos
Authentication using Knox SSO
Authentication using LDAP
Authentication using OAuth2 with Kerberos
Authentication using PAM
Authentication using SAML
Authorization
Authorization
Authorization
Authorization Exception error
Authorization model
Authorizing users to access Cruise Control in SMM
Auto-TLS Agent File Locations
Auto-TLS Requirements and Limitations
Automatic group offset synchronization
Automatic Invalidation of Metadata Cache
Automatic Invalidation of Metadata Cache
Automatic Invalidation/Refresh of Metadata
Automatic Invalidation/Refresh of Metadata
Automating partition discovery and repair
Automating Spark Jobs with Oozie Spark Action
Autoscaling behavior
Autoscaling clusters
AVG
AVG function
Avro
Avro
AWS configuration
AWS object lifecycle
AWS objects and inferred hierarchy
AWS S3 Base
AWS S3 Bucket
AWS S3 Contained
AWS S3 Container
AWS S3 Directory
AWS S3 entity metadata migration
AWS S3 Object
Back up HDFS metadata
Back up HDFS metadata using Cloudera Manager
Back up tables
Backing up a collection from HDFS
Backing up a collection from local file system
Backing up and Recovering Apache Kudu
Backing up and restoring data
Backing up HDFS metadata
Backing up NameNode metadata
Backup directory structure
Backup tools
Balancer commands
Balancing data across an HDFS cluster
Balancing data across disks of a DataNode
Basic partitioning
Basic search enhancement
Basics
Batch indexing into offline Solr shards
Batch indexing to Solr using SparkApp framework
Batch indexing using Morphlines
Before you create an operational database cluster
Before you start
Behavioral Changes In Cloudera Runtime 7.2.18
Behavioral Changes In Cloudera Runtime 7.2.18.500
Benefits of centralized cache management in HDFS
Best practices for building Apache Spark applications
Best practices for Iceberg in CDP
Best practices for performance tuning
Best practices for rack and node setup for EC
Best practices when adding new tablet servers
Best practices when using RegionServer grouping
Bidirectional replication example of two active clusters
BIGINT data type
BINARY data type
Bit functions
Block cache size
Block move execution
Block move scheduling
Bloom Filtering
BOOLEAN data type
Bring a tablet that has lost a majority of replicas back online
Broker garbage log collection and log rotation
Broker log management
Broker migration
Broker Tuning
Brokers
BucketCache IO engine
Bucketed tables in Hive
Building and deploying UDFs
Building and running a Spark Streaming application
Building Cloudera Manager charts with Kafka metrics
Building reusable modules in Apache Spark applications
Building Spark Applications
Building the project and uploading the JAR
Built-in functions
Bulk and migration import of Hive metadata
Bulk Write Access
Business Metadata overview
Bypass the BlockCache
Cache eviction priorities
Caching manifest files
Caching terminology
Calling Hive user-defined functions (UDFs)
Calling the UDF in a query
Canary test for pyspark command
Cancelling a Query
Cannot alter compressed tables in Hue
Case for implementing backward compatibility
Catalog operations
CDP Security Overview
Centralized cache management architecture
Change master hostnames
Change Queue Capacities
Change Queue Properties
Changing a nameservice name for Highly Available HDFS using Cloudera Manager
Changing directory configuration
Changing Ranger audit storage location and migrating data
Changing resource allocation mode
Changing the page logo in Hue
Changing the table metadata location
Channel encryption
CHAR data type
CHAR data type support
Charting spool alert metrics
Check for required Ranger features in Data Hub
Check Job History
Check Job Status
Choose the right import method
Choosing Data Formats
Choosing the number of partitions for a topic
Choosing the Sufficient Security Level for Your Environment
Choosing Transformations to Minimize Shuffles
ClassNotFoundException: com.cloudera.kudu.hive.KuduStorageHandler
Cleaning up after failed jobs
Cleaning up old data to improve performance
Cleaning up old queries
CLI commands to perform snapshot operations
Client and broker compatibility across Kafka versions
Client authentication to secure Kudu clusters
Client authentication using delegation tokens
Client examples
Client examples
client.dns.lookup property options for clients
Closing HiveWarehouseSession operations
Cloud Connectors
Cloud Connectors
Cloud Connectors
Cloud storage connectors overview
Cloudera Runtime
Cloudera Runtime 7.2.18.100
Cloudera Runtime 7.2.18.200
Cloudera Runtime 7.2.18.300
Cloudera Runtime 7.2.18.400
Cloudera Runtime 7.2.18.500
Cloudera Runtime Component Versions
Cloudera Runtime Release Notes
Cloudera Search and CDP
Cloudera Search architecture
Cloudera Search config templates
Cloudera Search configuration files
Cloudera Search ETL
Cloudera Search log files
Cloudera Search Morphlines Reference
Cloudera Search Overview
Cloudera Search security aspects
Cloudera Search solrctl Reference
Cloudera Search tasks and processes
Cluster balancing algorithm
Cluster discovery using DNS records
Cluster discovery using load balancers
Cluster discovery with multiple Apache Kafka clusters
Cluster management limitations
Cluster management limitations
Cluster sizing
CNAME records configuration
Collecting metrics through HTTP
Column compression
Column design
Column encoding
Command Line Tools
Command-line options to run Extraction
Commands for configuring storage policies
Commands for using cache pools and directives
COMMENT statement
Comments
Committing a transaction for Direct Reader
Common replication topologies
Common web interface pages
Compacting on-disk data
Compaction observability in Cloudera Manager
Compaction of Data in FULL ACID Transactional Table
Compaction tasks
Compactor properties
Comparing Hive and Impala queries in Hue
Comparing replication and erasure coding
Comparing tables using ANY/SOME/ALL
Comparison of Fair Scheduler with Capacity Scheduler
Compatibility policies
Complete list of model-related configurations for setting up the Hue SQL AI Assistant
Completed Hue query shows executing on CM
Complex types
Component types and metrics for alert policies
Components of cache-aware load balancer
Components of Impala
Components of stochastic load balancer
Compound operators
Compute
COMPUTE STATS statement
Concepts Used in FULL ACID v2 Tables
Concurrent session verification (Tech Preview)
Conditional functions
Configuration details
Configuration details
Configuration details
Configuration example
Configuration example for writing data to HDFS
Configuration example for writing data to Ozone FS
Configuration examples
Configuration properties
Configuration Properties Reference for Properties not Available in Cloudera Manager
Configurations and CLI options for the HDFS Balancer
Configurations for submitting a Hive query to a dedicated queue
Configurations required to use load balancer with Kerberos enabled
Configurations required to use load balancer with SSL enabled
Configurations used for index recovery
Configure a resource-based policy: Atlas
Configure a resource-based policy: HadoopSQL
Configure a resource-based policy: HBase
Configure a resource-based policy: HDFS
Configure a resource-based policy: Kafka
Configure a resource-based policy: Knox
Configure a resource-based policy: NiFi
Configure a resource-based policy: NiFi Registry
Configure a resource-based policy: S3
Configure a resource-based policy: Solr
Configure a resource-based policy: YARN
Configure a resource-based service: Atlas
Configure a resource-based service: HadoopSQL
Configure a resource-based service: HBase
Configure a resource-based service: HDFS
Configure a resource-based service: Kafka
Configure a resource-based service: Knox
Configure a resource-based service: NiFi
Configure a resource-based service: NiFi Registry
Configure a resource-based service: Solr
Configure a resource-based service: YARN
Configure a resource-based storage handler policy: HadoopSQL
Configure a Spark job for dynamic resource allocation
Configure Access to GCS from Your Cluster
Configure archival storage
Configure Atlas authentication for AD
Configure Atlas authentication for LDAP
Configure Atlas file-based authentication
Configure Atlas PAM authentication
Configure BucketCache IO engine
Configure bulk load replication
Configure clients on a producer or consumer level
Configure clients on an application level
Configure columns to store MOBs
Configure CPU scheduling and isolation
Configure credentials for Atlas extraction
Configure DataNode memory as storage
Configure DNS
Configure DNS
Configure encryption in HBase
Configure four-letter-word commands in ZooKeeper
Configure FPGA scheduling and isolation
Configure GPU scheduling and isolation
Configure HBase for use with Phoenix
Configure HBase garbage collection
Configure HBase-Spark connector
Configure HDFS RPC protection
Configure Hive to use with HBase
Configure Impala Daemon to spill to HDFS
Configure Impala Daemon to spill to Ozone
Configure JMX ephemeral ports
Configure Kafka brokers
Configure Kafka brokers
Configure Kafka brokers
Configure Kafka brokers
Configure Kafka clients
Configure Kafka clients
Configure Kafka clients
Configure Kafka clients
Configure Kafka MirrorMaker
Configure Kerberos
Configure Kerberos
Configure Kerberos authentication for Apache Atlas
Configure Kudu processes
Configure Lily HBase Indexer Service to use Kerberos authentication
Configure Lily HBase Indexer to use TLS/SSL
Configure memory settings
Configure mountable HDFS
Configure Oozie client when TLS/SSL is enabled
Configure optimized rename and recursive delete operations in Ranger Ozone plugin
Configure Phoenix-Hive connector
Configure Phoenix-Spark connector
Configure queue ordering policies
Configure Ranger authentication for AD
Configure Ranger authentication for LDAP
Configure Ranger authentication for PAM
Configure Ranger authentication for UNIX
Configure read replicas using Cloudera Manager
Configure RegionServer grouping
Configure secure replication
Configure source and destination realms in krb5.conf
Configure SQL AI Assistant using Cloudera AI Workbench
Configure SQL AI Assistant using the Amazon Bedrock Service
Configure SQL AI Assistant using the Cloudera AI Inference service
Configure SQL AI Assistant using the Microsoft Azure OpenAI service
Configure SQL AI Assistant using the OpenAI platform
Configure SQL AI Assistant using vLLM
Configure SRM for Failover and Failback
Configure storage balancing for DataNodes using Cloudera Manager
Configure the blocksize for a column family
Configure the compaction speed using Cloudera Manager
Configure the G1GC garbage collector
Configure the graceful shutdown timeout property
Configure the HBase canary
Configure the HBase client TGT renewal period
Configure the HBase thrift server role
Configure the MOB cache using Cloudera Manager
Configure the NFS Gateway
Configure the off-heap BucketCache using Cloudera Manager
Configure the off-heap BucketCache using the command line
Configure the resource-based Ranger service used for authorization
Configure the scanner heartbeat using Cloudera Manager
Configure the storage policy for WALs using Cloudera Manager
Configure the storage policy for WALs using the Command Line
Configure TLS encryption manually for Phoenix Query Server
Configure TLS/SSL for Oozie
Configure transaction support
Configure ulimit for HBase using Cloudera Manager
Configure ulimit using Pluggable Authentication Modules using the Command Line
Configure User Impersonation for Access to Hive
Configure User Impersonation for Access to Phoenix
Configure ZooKeeper client shell for Kerberos authentication
Configure ZooKeeper server for Kerberos authentication
Configure ZooKeeper TLS/SSL support for Kafka
Configure ZooKeeper TLS/SSL using Cloudera Manager
Configuring a custom Hive CREATE TABLE statement
Configuring a Nexus repository allow list
Configuring a Ranger audit filter policy
Configuring a secure Kudu cluster using Cloudera Manager
Configuring Access to Azure on CDP Public Cloud
Configuring Access to Azure on Cloudera Private Cloud Base
Configuring Access to Google Cloud Storage
Configuring Access to S3
Configuring Access to S3 on CDP Public Cloud
Configuring Access to S3 on Cloudera Private Cloud Base
Configuring ACLs on HDFS
Configuring ADLS Gen2 Storage Queue
Configuring an external database for Oozie
Configuring an SMT chain
Configuring and Monitoring Atlas
Configuring and running the HDFS balancer using Cloudera Manager
Configuring and tuning S3A block upload
Configuring and using Queue Manager REST API
Configuring and Using Ranger RMS Hive-S3 ACL Sync
Configuring and Using Zeppelin Interpreters
Configuring Apache Hadoop YARN High Availability
Configuring Apache Hadoop YARN Log Aggregation
Configuring Apache Hadoop YARN Security
Configuring Apache HBase
Configuring Apache HBase for Apache Phoenix
Configuring Apache HBase High Availability
Configuring Apache Impala
Configuring Apache Kafka
Configuring Apache Kudu
Configuring Apache Spark
Configuring Apache Zeppelin
Configuring Apache ZooKeeper
Configuring Atlas and Schema Registry
Configuring Atlas Authentication
Configuring Atlas Authorization
Configuring Atlas Authorization using Ranger
Configuring Atlas using Cloudera Manager
Configuring audit spool alert notifications
Configuring authentication for long-running Spark Streaming jobs
Configuring authentication with LDAP and Direct Bind
Configuring authentication with LDAP and Search Bind
Configuring Authorization
Configuring auto split policy in an HBase table
Configuring automatic group offset synchronization
Configuring autoscaling
Configuring Basic Authentication for Remote Querying
Configuring Basic Authentication for the SRM Service
Configuring block size
Configuring caching for secure access mode
Configuring Client Access to Impala
Configuring client side JWT authentication for Kudu
Configuring cluster capacity with queues
Configuring coarse-grained authorization with ACLs
Configuring compaction health monitoring
Configuring compaction in Cloudera Manager
Configuring compaction using table properties
Configuring concurrent moves
Configuring connector JAAS configuration and Kerberos principal overrides
Configuring Cross-Origin Support for YARN UIs and REST APIs
Configuring Cruise Control
Configuring custom Beeline arguments
Configuring custom Hive JDBC arguments
Configuring custom Hive table properties
Configuring custom Kerberos principal for Kudu
Configuring custom Kerberos principal for Oozie
Configuring data at rest encryption
Configuring data locality
Configuring Data Protection
Configuring debug delay
Configuring Dedicated Coordinators and Executors
Configuring dedicated Impala coordinator
Configuring Delegation for Clients
Configuring Directories for Intermediate Data
Configuring dynamic resource allocation
Configuring Dynamic Resource Pool
Configuring edge node on AWS
Configuring edge node on Azure
Configuring edge node on GCP
Configuring Encryption for Specific Buckets
Configuring EOS for source connectors
Configuring Fault Tolerance
Configuring file and directory permissions for Hue
Configuring flow.snapshot
Configuring for HDFS high availability
Configuring for Kudu Tables
Configuring graceful shutdown property for HiveServer
Configuring group permissions
Configuring HBase BlockCache
Configuring HBase MultiWAL
Configuring HBase persistent BucketCache
Configuring HBase servers to authenticate with a secure HDFS cluster
Configuring HBase snapshots
Configuring HBase to use HDFS HA
Configuring HDFS ACLs
Configuring HDFS High Availability
Configuring HDFS trash
Configuring heterogeneous storage in HDFS
Configuring high availability for Hue
Configuring high availability for SHS with an external load balancer
Configuring high availability for SHS with an internal load balancer
Configuring high availability for SHS with multiple Knox Gateways
Configuring Hive and Impala for high availability with Hue
Configuring HMS for high availability
Configuring HSTS for HBase Web UIs
Configuring HSTS for HDFS Web UIs
Configuring HSTS for Spark
Configuring HTTPS encryption
Configuring Hue as a TLS/SSL client
Configuring Hue as a TLS/SSL server
Configuring Hue to handle HS2 failover
Configuring Impala
Configuring Impala TLS/SSL
Configuring Impala to work with HDFS HA
Configuring Impala Web UI
Configuring Impyla for Impala
Configuring JDBC for Impala
Configuring JWT Authentication
Configuring Kafka brokers
Configuring Kafka clients
Configuring Kafka ZooKeeper chroot
Configuring Kerberos authentication
Configuring Kerberos Authentication for Impala
Configuring Kerberos authentication in Apache Knox shared providers
Configuring Kerberos properties
Configuring LDAP Authentication
Configuring LDAP on unmanaged clusters
Configuring Lily HBase Indexer Security
Configuring Livy
Configuring Load Balancer for Impala
Configuring log aggregation
Configuring log levels for command line tools
Configuring manifest caching in Cloudera Manager
Configuring MariaDB for Oozie
Configuring Metrics Reporter in Cruise Control
Configuring multiple listeners
Configuring MultiWAL support using Cloudera Manager
Configuring MySQL 5 for Oozie
Configuring MySQL 8 for Oozie
Configuring nested group hierarchies
Configuring network line-of-sight
Configuring network line-of-sight
Configuring Nginx for basic authentication
Configuring node attribute for application master placement
Configuring NodeManager heartbeat
Configuring ODBC for Impala
Configuring Oozie data purge settings using Cloudera Manager
Configuring Oozie High Availability using Cloudera Manager
Configuring Oozie Sqoop1 Action workflow JDBC drivers
Configuring Oozie to enable MapReduce jobs to read or write from Amazon S3
Configuring Oozie to use HDFS HA
Configuring Oozie to use HDFS HA
Configuring Oracle for Oozie
Configuring other CDP components to use HDFS HA
Configuring partitions for transactions
Configuring per queue properties
Configuring Per-Bucket Settings
Configuring Per-Bucket Settings to Access Data Around the World
Configuring PostgreSQL for Oozie
Configuring preemption
Configuring properties for non-Kerberos authentication mechanisms
Configuring properties not exposed in Cloudera Manager
Configuring Proxy Users to Access HDFS
Configuring queue mapping to use the user name from the application tag using Cloudera Manager
Configuring quotas
Configuring Ranger audit log storage to a local file
Configuring Ranger audit properties for HDFS
Configuring Ranger audit properties for Solr
Configuring Ranger audits to show actual client IP address
Configuring Ranger Authentication with UNIX, LDAP, AD, or PAM
Configuring Ranger Authentication with UNIX, LDAP, or AD
Configuring Ranger authorization
Configuring Ranger Authorization for Atlas
Configuring Ranger RMS (Hive-S3 ACL Sync)
Configuring Ranger Usersync for Deleted Users and Groups
Configuring Ranger Usersync for invalid usernames
Configuring Remote Querying
Configuring replication specific REST servers
Configuring replications
Configuring resource-based policies
Configuring resource-based services
Configuring rolling restart checks
Configuring SAML authentication on managed clusters
Configuring scheduler properties at the global level
Configuring Schema Registry instance in NiFi
Configuring secure access between Solr and Hue
Configuring secure HBase replication
Configuring secure HBase replication
Configuring server side JWT authentication for Kudu
Configuring Simple Authorization in Atlas
Configuring SMM for basic authentication
Configuring SMM to recognize Prometheus's TLS certificate
Configuring Spark application logging properties
Configuring Spark application properties in spark-defaults.conf
Configuring Spark Applications
Configuring Spark on YARN Applications
Configuring SPNEGO authentication and trusted proxies
Configuring SRM Driver for performance tuning
Configuring SRM Driver heartbeat emission
Configuring SRM Driver retry behavior
Configuring srm-control
Configuring storage balancing for DataNodes
Configuring Streams Messaging Manager
Configuring Streams Replication Manager
Configuring tablet servers
Configuring the ABFS Connector
Configuring the advertised information of the SRM Service role
Configuring the Atlas hook in Kafka
Configuring the balancer threshold
Configuring the BI tool
Configuring the client configuration used for rolling restart checks
Configuring the compaction check interval
Configuring the driver role target clusters
Configuring the embedded Jetty Server in Queue Manager
Configuring the Hive Metastore to use HDFS HA
Configuring the Hue Query Processor scan frequency
Configuring the Kafka Connect Role
Configuring the Kudu master
Configuring the Livy Thrift Server
Configuring the Phoenix classpath
Configuring the queue auto removal expiration time
Configuring the resource capacity of root queue
Configuring the Schema Registry client
Configuring the service role target cluster
Configuring the SRM client's secure storage
Configuring the storage policy for the Write-Ahead Log (WAL)
Configuring timezone for Hue
Configuring TLS/SSL client authentication
Configuring TLS/SSL encryption
Configuring TLS/SSL encryption for Kudu using Cloudera Manager
Configuring TLS/SSL for Apache Atlas
Configuring TLS/SSL for Core Hadoop Services
Configuring TLS/SSL for HBase
Configuring TLS/SSL for HBase REST Server
Configuring TLS/SSL for HBase Thrift Server
Configuring TLS/SSL for HBase Web UIs
Configuring TLS/SSL for HDFS
Configuring TLS/SSL for Hue
Configuring TLS/SSL for YARN
Configuring TLS/SSL properties
Configuring ulimit for HBase
Configuring Usersync assignment of Admin users
Configuring Usersync to sync directly with LDAP/AD (FreeIPA)
Configuring work preserving recovery on NodeManager
Configuring work preserving recovery on ResourceManager
Configuring YARN Queue Manager dependency
Configuring YARN ResourceManager high availability
Configuring YARN Security for Long-Running Applications
Configuring YARN Services API to manage long-running applications
Configuring YARN Services using Cloudera Manager
Confirm the election status of a ZooKeeper service
Connect to Phoenix Query Server
Connect to Phoenix Query Server through Apache Knox
Connect workers
Connecting Hive to BI tools using a JDBC/ODBC driver in Data Hub
Connecting Kafka clients to Data Hub provisioned clusters
Connecting to Impala Daemon in Impala Shell
Connecting to PQS using JDBC
Connecting to the Apache Livy Thrift Server
Connecting to the Kafka cluster using load balancer
Connection to the cluster with configured DNS aliases
Connectors
Connectors
Considerations for backfill inserts
Considerations for Oozie to work with AWS
Considerations for working with HDFS snapshots
Contents of the BlockCache
Controlling access to queues using ACLs
Controlling Data Access with Tags
Conversion functions
ConvertFromBytes
Converting a managed non-transactional table to external
Converting a queue to a Managed Parent Queue
Converting from an NFS-mounted shared edits directory to Quorum-Based Storage
Converting instance directories to configs
ConvertToBytes
Copy sample tweets to HDFS
Copying data between a secure and an insecure cluster using DistCp and WebHDFS
Copying data with Hadoop DistCp
Corruption: checksum error on CFile block
COUNT
COUNT function
Create a collection for tweets
Create a Collection in Cloudera Search
Create a Collection in Cloudera Search
Create a Custom Role
Create a GCP Service Account
Create a Hive authorizer URL policy
Create a new Kudu table from Impala
Create a snapshot
Create a snapshot policy
Create a table in Hive
Create a test collection
Create a time-bound policy
Create a topology map
Create a topology script
Create a user-defined function
Create and Run a Note
CREATE DATABASE statement
Create empty table on the destination cluster
CREATE FUNCTION statement
Create indexer Maven project
CREATE MATERIALIZED VIEW
Create partitioned table as select feature
CREATE ROLE statement
Create snapshots on a directory
Create snapshots using Cloudera Manager
Create table as select feature
Create table feature
CREATE TABLE statement
Create table … like feature
CREATE VIEW statement
Creating a CRUD transactional table
Creating a custom access policy
Creating a custom YARN service
Creating a default directory for managed tables
Creating a function
Creating a group in Hue
Creating a Hadoop archive
Creating a Hue user
Creating a JAAS configuration file
Creating a Kafka topic
Creating a Lily HBase Indexer Configuration File
Creating a Lily HBase Indexer Configuration File
Creating a Morphline Configuration File
Creating a Morphline Configuration File
Creating a new Dynamic Configuration
Creating a notifier
Creating a read-only Admin user (Auditor)
Creating a replica of an existing shard
Creating a Solr collection
Creating a SQL policy to query an Iceberg table
Creating a Sqoop import command
Creating a standard YARN service
Creating a table for a Kafka stream
Creating a truststore file in PEM format
Creating an alert policy
Creating an Iceberg partitioned table
Creating an Iceberg table
Creating an insert-only transactional table
Creating an operational database cluster
Creating an S3-based external table
Creating and using a materialized view
Creating and using a partitioned materialized view
Creating Business Metadata
Creating categories
Creating classifications
Creating External Table
Creating glossaries
Creating labels
Creating Managed Identity
Creating new YARN services using UI
Creating partitions
Creating partitions dynamically
Creating placement rules
Creating secure external tables
Creating Static Pools
Creating tables in Hue by importing files
Creating terms
Creating the tables and view
Creating the UDF class
Credentials for token delegation
Cross data center replication example of multiple clusters
Cruise Control
Cruise Control
Cruise Control
Cruise Control dashboard in SMM UI
Cruise Control Overview
Cruise Control REST API endpoints
CSE-KMS: Amazon S3-KMS managed encryption keys
CUME_DIST
Customize dynamic resource allocation settings
Customize interpreter settings in a note
Customize the HDFS home directory
Customizing HDFS
Customizing Per-Bucket Secrets Held in Credential Files
Customizing the Hue web interface
Customizing time zones
DAS
Data Access
Data Access
Data Access
Data compaction
Data Engineering
Data Engineering
Data migration to Apache Hive
Data protection
Data Stewardship with Apache Atlas
Data storage metrics
Data types
Databases
Databases and Table Names
Dataflow development best practices
Dataflow management with schema-based routing
DataNodes
DataNodes
Date and time functions
DATE data type
DDL Bucketed Tables
DDL statements
Debezium Db2 Source
Debezium MySQL Source
Debezium Oracle Source
Debezium PostgreSQL Source
Debezium SQL Server Source
Debug Web UI for Catalog Server
Debug Web UI for Impala Daemon
Debug Web UI for Query Timeline
Debug Web UI for StateStore
Decide to use the BucketCache
DECIMAL data type
Decimal type
Decommission or remove a tablet server
Dedicated Coordinator
Default EXPIRES ON tag policy
Default operational database cluster definition
Default Ranger audit filters
Defining a backup target in solr.xml
Defining and adding clusters for replication
Defining Apache Atlas enumerations
Defining co-located Kafka clusters using a service dependency
Defining co-located Kafka clusters using Kafka credentials
Defining external Kafka clusters
Defining related terms
Defining what assets to extract metadata for
Delegation token based authentication
Delete data
Delete data feature
Delete Queue
Delete snapshots
Delete snapshots using Cloudera Manager
DELETE statement
Deleting a collection
Deleting a group
Deleting a Kafka topic
Deleting a notifier
Deleting a role
Deleting a schema
Deleting a user
Deleting all documents in a collection
Deleting an alert policy
Deleting data from a table
Deleting dynamically created child queues
Deleting dynamically created child queues manually
Deleting partitions
Deleting placement rules
Deleting queues
Deletion
DENSE_RANK
Deploy HBase replication
Deploying a dataflow
Deploying and configuring Oozie Sqoop1 Action JDBC drivers
Deploying and managing connectors
Deploying and managing services on YARN
Deployment Planning for Cloudera Search
Deprecation Notices In Cloudera Runtime 7.2.18
DESCRIBE EXTENDED and DESCRIBE FORMATTED
DESCRIBE statement
Describe table metadata feature
Describing a materialized view
Deserializing and serializing data from and to a Kafka topic
Detecting slow DataNodes
Determining the table type
Developing a dataflow
Developing and running an Apache Spark WordCount application
Developing Apache Kafka Applications
Developing Apache Spark Applications
Developing Applications with Apache Kudu
Diagnostics logging
Differences between Spark and Spark 3 actions
Dimensioning guidelines
Direct Reader configuration properties
Direct Reader limitations
Direct Reader mode introduction
Directory configurations
Directory permissions when using PAM authentication backend
Disable a provider in an existing provider configuration
Disable loading of coprocessors
Disable proxy for a known service in Apache Knox
Disable RegionServer grouping
Disable replication at the peer level
Disable the BoundedByteBufferPool
Disabling an alert policy
Disabling and redeploying HDFS HA
Disabling auto queue deletion globally
Disabling automatic compaction
Disabling CA Certificate validation from Hue
Disabling dynamic child creation in weight mode
Disabling Kerberos authentication for HBase clients
Disabling Oozie High Availability
Disabling Oozie UI using Cloudera Manager
Disabling queue auto removal on a queue level
Disabling redaction
Disabling the automatic creation of user home directories
Disabling the share option in Hue
Disabling the web metric collection for Hue
Disabling YARN Ranger authorization support
Disassociating partitions from queues
Disk Balancer commands
Disk management
Disk Removal
Disk Replacement
Disk space usage issue
Disk space versus namespace
DistCp and Proxy Settings
DistCp between secure clusters in different Kerberos realms
DistCp syntax and examples
DISTINCT operator
DML statements
DOUBLE data type
Downloading and exporting data from Hue
Downloading and viewing predefined dataflows
Downloading debug bundles
Downloading Hdfsfindtool from the CDH archives
Driver inter-node coordination
Drop a Kudu table
DROP DATABASE statement
DROP FUNCTION statement
DROP MATERIALIZED VIEW
Drop partition feature
DROP ROLE statement
DROP STATS statement
Drop table feature
DROP TABLE statement
DROP VIEW statement
Dropping a materialized view
Dropping an external table along with data
Dumping the Oozie database
Dynamic allocation
Dynamic Configurations execution log
Dynamic handling of failure in updating index
Dynamic Queue Scheduling
Dynamic resource allocation properties
Dynamic Resource Pool Settings
Dynamic resource-based column masking in Hive with Ranger policies
Dynamic tag-based column masking in Hive with Ranger policies
Dynamically loading a custom filter
Edit or delete a snapshot policy
Edit query in natural language
Editing a group
Editing a role
Editing a storage handler policy to access Iceberg files on the file system
Editing a user
Editing placement rules
Editing rack assignments for hosts
Effects of WAL rolling on replication
Enable Access Control for Data
Enable Access Control for Interpreter, Configuration, and Credential Settings
Enable Access Control for Notebooks
Enable and disable snapshot creation using Cloudera Manager
Enable authorization for additional HDFS web UIs
Enable authorization for HDFS web UIs
Enable authorization in Kafka with Ranger
Enable authorization of StorageHandler-based tables in Data Hub
Enable bulk load replication using Cloudera Manager
Enable Cgroups
Enable core dump
Enable detection of slow DataNodes
Enable disk IO statistics
Enable document-level authorization
Enable garbage collector logging
Enable GZipCodec as the default compression codec
Enable HBase high availability using Cloudera Manager
Enable HBase indexing
Enable hedged reads for HBase
Enable high availability
Enable HTTPS communication
Enable Kerberos authentication
Enable LDAP authentication in Solr
Enable multi-threaded faceting
Enable namespace mapping
Enable or disable authentication with delegation tokens
Enable Phoenix ACLs
Enable proxy for a known service in Apache Knox
Enable RegionServer grouping using Cloudera Manager
Enable replication on a specific table
Enable replication on HBase column families
Enable security for Cruise Control
Enable server-server mutual authentication
Enable snapshot creation on a directory
Enable Spark actions
Enable stored procedures in Hue
Enable the AdminServer
Enabling a multi-threaded environment for Hue
Enabling ABFS file browser for Hue configured with IDBroker
Enabling ABFS file browser for Hue configured without IDBroker
Enabling ABFS File Browser in Hue with RAZ in DataHub
Enabling Access Control for Zeppelin Elements
Enabling access to HBase browser from Hue
Enabling ACL for RegionServer grouping
Enabling Admission Control
Enabling an alert policy
Enabling and disabling trash
Enabling asynchronous scheduler
Enabling audit aging
Enabling Basic Authentication for the SRM Service
Enabling cache-control HTTP headers when using Hue
Enabling CSE-KMS
Enabling custom Kerberos principal support in a Queue Manager cluster
Enabling custom Kerberos principal support in YARN
Enabling DEBUG logging for Hue logs
Enabling dynamic child creation in weight mode
Enabling fault-tolerant processing in Spark Streaming
Enabling GS File Browser with RAZ
Enabling HBase META Replicas
Enabling HDFS HA
Enabling High Availability and automatic failover
Enabling httpd log rotation for Hue
Enabling Hue applications with Cloudera Manager
Enabling Hue as a TLS/SSL client
Enabling Hue as a TLS/SSL server using Cloudera Manager
Enabling interceptors
Enabling Intra-Queue preemption
Enabling Intra-Queue Preemption for a specific queue
Enabling JWT Authentication for impala-shell
Enabling Kerberos authentication and RPC encryption
Enabling Kerberos for the SRM service
Enabling LazyPreemption
Enabling LDAP Authentication for impala-shell
Enabling LDAP authentication with HiveServer2 and Impala
Enabling LDAP in Hue
Enabling Native Acceleration For MLlib
Enabling node labels on a cluster to configure partition
Enabling Oozie High Availability
Enabling Oozie SLA with Cloudera Manager
Enabling Oozie workflows that access Ozone storage
Enabling or disabling anonymous usage date collection
Enabling override of default queue mappings
Enabling preemption for a specific queue
Enabling prefixless replication
Enabling Ranger authorization
Enabling Ranger HDFS plugin manually on a Data Hub
Enabling Ranger Usersync search to generate internally
Enabling Remote Querying
Enabling S3 browser for Hue configured with IDBroker
Enabling S3 browser for Hue configured without IDBroker
Enabling S3 File Browser for Hue with RAZ in DataHub
Enabling scheduled queries
Enabling selective debugging for Ranger Admin
Enabling selective debugging for RAZ
Enabling Spark 3 engine in Hue
Enabling Spark authentication
Enabling Spark Encryption
Enabling Speculative Execution
Enabling SSE-C
Enabling SSE-KMS
Enabling SSE-S3
Enabling the Hive Metastore integration
Enabling the Oozie web console on managed clusters
Enabling the Query Processor service in Hue
Enabling the SQL editor autocompleter
Enabling TLS/SSL communication with HiveServer2
Enabling TLS/SSL communication with Impala
Enabling TLS/SSL for Hue Load Balancer
Enabling TLS/SSL for the SRM service
Enabling vectorized query execution
Enabling YARN Ranger authorization support
Enabling ZooKeeper SSL/TLS for Solr and HBase Indexer
Enabling ZooKeeper-less connection registry for HBase client
Encrypting an S3 Bucket with Amazon S3 Default Encryption
Encrypting Data on S3
Encryption
Encryption Reference
End to end latency use case
Enforcing TLS version 1.2 for Hue
Enhancements related to bulk glossary terms import
Enhancements with search query
Environment variables for sizing NameNode heap memory
Erasure coding CLI command
Erasure coding examples
Erasure coding overview
Errors during hole punching test
Escaping an invalid identifier
Essential metrics to monitor
Estimating memory limits
ETL with Cloudera Morphlines
Evolving a schema
Example - Placement rules creation
Example for finding parent object for assigned classification or term
Example for using THttpClient API in secure cluster
Example for using THttpClient API in unsecure cluster
Example for using TSaslClientTransport API in secure cluster without HTTP
Example of Atlas S3 Lineage
Example use cases
Example workload
Example: Configuration for work preserving recovery
Example: Running SparkPi on YARN
Example: Using the HBase-Spark connector
Examples
Examples of accessing Amazon S3 data from Spark
Examples of Audit Operations
Examples of controlling data access using classifications
Examples of creating and using UDFs
Examples of creating secure external tables
Examples of DistCp commands using the S3 protocol and hidden credentials
Examples of estimating NameNode heap memory
Examples of interacting with Schema Registry
Examples of overlapping quota policies
Examples of writing data in various file formats
Excluding audits for specific users, groups, and roles
Exit statuses for the HDFS Balancer
Experimental flags
Expire snapshots feature
Expiring snapshots
Explain query in natural language
EXPLAIN statement
Exploring using Lineage
Export a Note
Export a snapshot to another cluster
Export all resource-based policies for all services
Export Ranger reports
Export resource-based policies for a specific service
Export tag-based policies
Exporting and importing schemas
Exporting data using Connected type
Exporting schemas using Schema Registry API
Expose HBase metrics to a Ganglia server
Extending Atlas to Manage Metadata from Additional Sources
External table access
Extracting ADLS Metadata using Atlas
Extracting KRaft metadata
Extracting S3 Metadata using Atlas
Extraction Command
Extraction Configuration
Extraction Prerequisites
Extractor configuration properties
Failures during INSERT, UPDATE, UPSERT, and DELETE operations
Feature comparison
Feature Comparisons
Fetching Spark Maven dependencies
File descriptor limits
File descriptors
File System Credentials
Files and directories
Files and directories
Filesystems
Filter service access logs from Ranger UI
Filter types
Finding issues
Finding the list of Hue superusers
Finding the list of Hue superusers
Fine-tuning Oozie's database connection
FIRST_VALUE
Fixed Common Vulnerabilities and Exposures 7.2.18
Fixed Issues In Cloudera Runtime 7.2.18
Fixed Issues In Cloudera Runtime 7.2.18.100
Fixed Issues in Cloudera Runtime 7.2.18.200
Fixed Issues in Cloudera Runtime 7.2.18.300
Fixed Issues in Cloudera Runtime 7.2.18.400
Fixed Issues in Cloudera Runtime 7.2.18.500
Fixing a query in Hue
Fixing a warning related to accessing non-optimized Hue
Fixing authentication issues between HBase and Hue
Fixing block inconsistencies
Fixing incorrect start time and duration on Hue Job Browser
Fixing issues
Flexible partitioning
FLOAT data type
Flush options
Flushing data to disk
Force deletion of external users and groups from the Ranger database
Format for using Hadoop archives with MapReduce
Frequently asked questions
Functions
General Quota Syntax
General Settings
Generate a table list
Generate and configure a signing keystore for Knox in HA
Generate comment for a SQL query
Generate SQL from NQL
Generate tokens
Generating collection configuration using configs
Generating Solr collection configuration using instance directories
Generating surrogate keys
Generating Table and Column Statistics
Getting scheduled query information and monitor the query
Getting Started with Operational Database
Getting the JDBC or ODBC driver
Glossaries overview
Glossary performance improvements
Governance
Governance
Governance
Governance Overview
Graceful HBase shutdown
Gracefully shut down an HBase RegionServer
Gracefully shut down the HBase service
GRANT ROLE statement
GRANT statement
Granting permission to access S3, ABFS, GS File Browser in Hue
GROUP BY clause
GROUPING() and GROUPING_ID() functions
Groups and fetching
GROUP_CONCAT function
Guidelines for Schema Design
Hadoop
Hadoop archive components
Hadoop File Formats Support
Hadoop File System commands
Handling disk failures
Handling Dynamic Configuration conflicts
Handling large messages
Hash and hash partitioning
Hash and range partitioning
Hash partitioning
Hash partitioning
HashTable/SyncTable tool configuration
HAVING clause
HBase
HBase
HBase
HBase actions that produce Atlas entities
HBase audit entries
HBase authentication
HBase authorization
HBase backup and disaster recovery strategies
HBase cache-aware load balancer configuration
HBase entities created in Atlas
HBase filtering
HBase I/O components
HBase is using more disk space than expected
HBase lineage
HBase load balancer
HBase MCC Configurations
HBase MCC Restrictions
HBase MCC Usage in Spark with Java
HBase MCC Usage in Spark with Scala
HBase MCC Usage with Kerberos
HBase metadata collection
HBase metrics
HBase online merge
HBase persistent BucketCache
HBase read replicas
HBase Shell example
HBase stochastic load balancer configuration
HBaseMapReduceIndexerTool command line reference
HBCK2 tool command reference
HDFS
HDFS
HDFS
HDFS ACLs
HDFS Block Skew
HDFS Caching
HDFS commands for metadata files and directories
HDFS entity metadata migration
HDFS lineage commands
HDFS lineage data extraction in Atlas
HDFS Metrics
HDFS Overview
HDFS Sink
HDFS Sink properties reference
HDFS Stateless Sink
HDFS Stateless Sink properties reference
HDFS storage policies
HDFS storage types
HDFS storage types
Heap sampling
Hierarchical namespaces vs. non-namespaces
Hierarchical queue characteristics
High Availability on HDFS clusters
Hive
Hive
Hive
Hive
Hive access authorization
Hive ACID metric properties for compaction observability
Hive demo data
Hive entity metadata migration
Hive Metastore leader election
Hive on Tez introduction
Hive table locations
Hive unsupported interfaces and features in public clouds
Hive Warehouse Connector
Hive Warehouse Connector for accessing Apache Spark data
Hive Warehouse Connector Interfaces
HiveServer actions that produce Atlas entities
HiveServer audit entries
HiveServer entities created in Atlas
HiveServer lineage
HiveServer metadata collection
HiveServer relationships
HMS table storage
How Atlas works with Iceberg
How Cloudera Search works
How Ignore and Prune feature works
How Lineage strategy works
How NameNode manages blocks on a failed DataNode
How NFS Gateway authenticates and maps users
How Range-aware replica placement in Kudu works
How tag-based access control works
How the reporting task runs in a NiFi cluster
How to access Spark files on Ozone
How to download results using Basic and Advanced search options
How to full sync the Ranger RMS database
How to manage log rotation for Ranger Services
How to optimally configure Ranger RAZ client performance
How to read the Configurations table
How to read the Placement Rules table
How to set audit filters in Ranger Admin Web UI
How to Set up Failover and Failback
HPL/SQL stored procedures
HTTP Sink
HTTP Sink properties reference
HTTP Source
HTTP Source properties reference
HttpFS authentication
Hue
Hue
Hue
Hue Advanced Configuration Snippet
Hue configuration files
Hue configurations in CDP Runtime
Hue Limitation
Hue load balancer does not start after enabling TLS
Hue logs
Hue Overview
Hue overview
Hue service Django logs
Hue support for Oozie
Hue supported browsers
HWC and DataFrame API limitations
HWC and DataFrame APIs
HWC API Examples
HWC authorization
HWC integration pyspark, sparklyr, and Zeppelin
HWC limitations
HWC supported types mapping
IAM Role permissions for working with SSE-KMS
Iceberg
Iceberg
Iceberg
Iceberg data types
Iceberg for Atlas
Iceberg overview
Iceberg support for Atlas
Iceberg table properties
ID ranges in Schema Registry
Identifiers
Identifying problems
Identity Management
Ignore or Prune pattern to filter Hive metadata entities
Impact of quota violation policy
Impala
Impala
Impala
Impala actions that produce Atlas entities
Impala aliases
Impala audit entries
Impala Authentication
Impala Authorization
Impala database containment model
Impala DDL for Kudu
Impala DML for Kudu Tables
Impala entities created in Atlas
Impala entity metadata migration
Impala integration limitations
Impala integration limitations
Impala lineage
Impala lineage
Impala Logs
Impala metadata collection
Impala Shell Command Reference
Impala Shell Configuration File
Impala Shell Configuration Options
Impala Shell Tool
Impala SQL and Hive SQL
Impala Startup Options for Client Connections
Impala string functions
Impala tables
Impala with Amazon S3
Impala with Azure Data Lake Store (ADLS)
Impala with HBase
Impala with HDFS
Impala with Kudu
Impala with Ozone
Implementing your own Custom Command
Import a Note
Import and sync LDAP users and groups
Import command options
Import External Packages
Import resource-based policies for a specific service
Import resource-based policies for all services
Import tag-based policies
Importance of a Secure Cluster
Importance of logical types in Avro
Importing and exporting resource-based policies
Importing and exporting tag-based policies
Importing Business Metadata associations in bulk
Importing Confluent Schema Registry schemas into Cloudera Schema Registry
Importing data into HBase
Importing Glossary terms in bulk
Importing Hive Metadata using Command-Line (CLI) utility
Importing Kafka entities into Atlas
Importing RDBMS data into Hive
Importing schemas using Schema Registry API
Imports into Hive
Improving Performance for S3A
Improving performance in Schema Registry
Improving performance with centralized cache management
Improving performance with short-circuit local reads
Improving Software Performance
Inclusion and exclusion operation for HDFS files
Increasing StateStore Timeout
Increasing storage capacity with HDFS compression
Increasing the maximum number of processes for Oracle database
Index sample data
Indexing data
Indexing Data Using Morphlines
Indexing Data Using Spark-Solr Connector
Indexing data with MapReduceIndexerTool in Solr backup format
Indexing sample tweets with Cloudera Search
InfluxDB Sink
InfluxDB Sink properties reference
Information and debugging
Initiate replication when data already exists
Initiating automatic compaction in Cloudera Manager
INSERT and primary key uniqueness violations
Insert data
INSERT statement
Insert table data feature
Inserting data into a table
Inserting data into a table
Install the NFS Gateway
Installing Apache Knox
Installing Atlas in HA using CDP Private Cloud Base cluster
Installing connectors
Installing SMM in CDP Public Cloud
Installing the REST Server using Cloudera Manager
Installing the UDF development package
INT data type
Integrating Apache Hive with BI
Integrating Apache Hive with Spark and Kafka
Integrating Schema Registry with Atlas
Integrating Schema Registry with Flink and SSB
Integrating Schema Registry with Kafka
Integrating Schema Registry with NiFi
Integrating the Hive Metastore with Apache Kudu
Integrating with Schema Registry
Integrating your identity provider's SAML server with Hue
Inter-broker security
Interacting with Hive views
Internal and external Impala tables
Introducing the S3A Committers
Introduction
Introduction
Introduction
Introduction
Introduction
Introduction
Introduction
Introduction
Introduction to Apache HBase
Introduction to Apache Phoenix
Introduction to Apache Phoenix
Introduction to Atlas ADLS Extractor
Introduction to Azure Storage and the ABFS Connector
Introduction to HBase Multi-cluster Client
Introduction to HBase Multi-cluster Client
Introduction to HDFS metadata files and directories
Introduction to Operational Database
Introduction to Streams Messaging Manager
Introduction to the HBase stochastic load balancer
Invalid method name: 'GetLog' error
Invalid query handle
INVALIDATE METADATA statement
ISR management
Issues starting or restarting the master or the tablet server
Java API example
Java client
JBOD
JBOD Disk migration
JBOD setup
JDBC mode configuration properties
JDBC mode limitations
JDBC read mode introduction
JDBC Sink
JDBC Sink properties reference
JDBC Source
JDBC Source properties reference
JMS Source
JMS Source properties reference
Job cleanup
Job cleanup
Job summaries in _SUCCESS files
Job summaries in _SUCCESS files
Jobs Management
Joins in Impala SELECT statements
JournalNodes
JournalNodes
JVM and garbage collection
JWT algorithms
JWT authentication for Kudu
Kafka
Kafka
Kafka
Kafka
Kafka
Kafka ACL APIs support in Ranger
Kafka actions that produce Atlas entities
Kafka Architecture
Kafka audit entries
Kafka brokers and ZooKeeper
Kafka clients and ZooKeeper
Kafka cluster load balancing using Cruise Control
Kafka Connect
Kafka Connect connector configuration security
Kafka Connect Connector Reference
Kafka Connect log files
Kafka Connect Overview
Kafka Connect property configuration in Cloudera Manager for Prometheus
Kafka Connect REST API security
Kafka Connect Secrets Storage
Kafka Connect tasks
Kafka Connect to Kafka broker security
Kafka Connect worker assignment
Kafka consumers
Kafka credentials property reference
Kafka disaster recovery
Kafka FAQ
Kafka Introduction
Kafka KRaft [Technical Preview]
Kafka KRaft [Technical Preview]
Kafka lineage
Kafka metadata collection
Kafka producers
Kafka property configuration in Cloudera Manager for Prometheus
Kafka public APIs
Kafka rack awareness
Kafka relationships
Kafka security hardening with ZooKeeper ACLs
Kafka storage handler and table properties
Kafka Streams
Kafka stretch clusters
kafka-*-perf-test
kafka-cluster
kafka-configs
kafka-console-consumer
kafka-console-producer
kafka-consumer-groups
kafka-delegation-tokens
kafka-features
kafka-log-dirs
kafka-reassign-partitions
kafka-topics
Kafka-ZooKeeper performance tuning
KafkaAvroDeserializer properties reference
KafkaAvroSerializer properties reference
Keep replicas current
Kerberos configurations for HWC
Kerberos setup guidelines for Distcp between secure clusters
Kernel stack watchdog traces
Key Differences between INSERT-ONLY and FULL ACID Tables
Key Features
kite-morphlines-avro
kite-morphlines-core-stdio
kite-morphlines-core-stdlib
kite-morphlines-hadoop-core
kite-morphlines-hadoop-parquet-avro
kite-morphlines-hadoop-rcfile
kite-morphlines-hadoop-sequencefile
kite-morphlines-json
kite-morphlines-maxmind
kite-morphlines-metrics-servlets
kite-morphlines-protobuf
kite-morphlines-saxon
kite-morphlines-solr-cell
kite-morphlines-solr-core
kite-morphlines-tika-core
kite-morphlines-tika-decompress
kite-morphlines-useragent
Known issue and its workaround
Known issues and limitations
Known Issues In Cloudera Runtime 7.2.18
Known Issues in Cloudera Runtime 7.2.18.500
Known limitations in Hue
Knox
Knox
Knox
Knox Gateway token integration
Knox SSO Cookie Invalidation
Knox Supported Services Matrix
Knox Token API
KRaft setup
Kudu
Kudu
Kudu
Kudu and Apache Ranger integration
Kudu architecture in a CDP public cloud deployment
Kudu authentication
Kudu authentication tokens
Kudu authentication with Kerberos
Kudu authorization policies
Kudu authorization tokens
Kudu backup
Kudu coarse-grained authorization
Kudu concepts
Kudu example applications
Kudu fine-grained authorization
Kudu integration with Spark
Kudu introduction
Kudu master web interface
Kudu metrics
Kudu network architecture
Kudu Python client
Kudu recovery
Kudu schema design
Kudu security considerations
Kudu security limitations
Kudu security limitations
Kudu Sink
Kudu Sink properties reference
Kudu tablet server web interface
Kudu tracing
Kudu transaction semantics
Kudu web interfaces
Kudu-Impala integration
LAG
LAST_VALUE
Late Materialization of Columns
Launch distcp
Launch Zeppelin
Launching a YARN service
Launching Apache Phoenix Thin Client
LAZY_PERSIST memory storage policy
LDAP authentication
LDAP import and sync options
LDAP properties
LDAP search fails with invalid credentials error
LDAP Settings
LEAD
Leader positions and in-sync replicas
Lengthy BalancerMember Route length
Leveraging Business Metadata
Lily HBase batch indexing for Cloudera Search
Lily HBase NRT indexing
LIMIT clause
Limit CPU usage with Cgroups
Limitation for Spark History Server with high availability
Limitations
Limitations
Limitations and restrictions for Impala UDFs
Limitations of Amazon S3
Limitations of Atlas-NiFi integration
Limitations of erasure coding
Limitations of Phoenix-Hive connector
Limitations of the S3A Committers
Limiting the speed of compactions
Lineage lifecycle
Lineage overview
Linux Container Executor
List files in Hadoop archives
List of Thrift API and HBase configurations
List restored snapshots
List snapshots
Listing available metrics
Literals
Live write access
Livy
Livy
Livy API reference for batch jobs
Livy API reference for interactive sessions
Livy batch object
Livy high availability support
Livy interpreter configuration
Livy objects for interactive sessions
Load balancer in front of Schema Registry instances
Load balancing between Hue and Impala
Load balancing for Apache Knox
Load data inpath feature
LOAD DATA statement
Load or replace partition data feature
Loading ORC data into DataFrames using predicate push-down
Loading the Oozie database
Local file system support
Locking an account after invalid login attempts
Log aggregation file controllers
Log aggregation properties
Log cleaner
Logging Extractor Activity
Logical Architecture
Logical operators, comparison operators and comparators
Logs and log segments
Main Use Cases
Maintenance manager
Making row-level changes on V2 tables only
Manage HBase snapshots using COD CLI
Manage HBase snapshots using the HBase shell
Manage individual delegation tokens
Manage Knox Gateway tokens
Manage Knox metadata
Manage Ranger authorization in Solr
Managed Parent Queues
Management basics
Management of existing Apache Knox shared providers
Management of Knox shared providers in Cloudera Manager
Management of services for Apache Knox via Cloudera Manager
Managing Access Control Lists
Managing Alert Policies and Notifiers
Managing and Allocating Cluster Resources using Capacity Scheduler
Managing and monitoring Cruise Control rebalance
Managing and monitoring Kafka Connect
Managing Apache Hadoop YARN Services
Managing Apache HBase
Managing Apache HBase Security
Managing Apache Hive
Managing Apache Impala
Managing Apache Kafka
Managing Apache Kudu
Managing Apache Kudu Security
Managing Apache Phoenix Security
Managing Apache Phoenix security
Managing Apache ZooKeeper
Managing Apache ZooKeeper Security
Managing Auditing with Ranger
Managing Business Terms with Atlas Glossaries
Managing Cloudera Search
Managing collection configuration
Managing collections
Managing Cruise Control
Managing Data Storage
Managing dynamic child creation enabled parent queues
Managing Dynamic Configurations
Managing dynamic queues
Managing dynamically created child queues
Managing high partition workloads
Managing Hue permissions
Managing Kafka topics
Managing Kudu tables with range-specific hash schemas
Managing logging properties for Ranger services
Managing Logs
Managing Metadata in Impala
Managing Metadata in Impala
Managing partition retention time
Managing placement rules
Managing query rewrites
Managing queues
Managing Resources in Impala
Managing secrets using the REST API
Managing snapshot policies using Cloudera Manager
Managing the YARN service life cycle through the REST API
Managing topics across multiple Kafka clusters
Managing YARN Queue Manager
Managing, Deploying and Monitoring Connectors
Manifest committer for ABFS and GCS
Manifest committer for ABFS and GCS
Manually configuring SAML authentication
Manually failing over to the standby NameNode
MAP complex type
Mapping Apache Phoenix schemas to Apache HBase namespaces
Mapping Atlas Identity to CDP users
MapReduce indexing
MapReduce Job ACLs
MapReduceIndexerTool
MapReduceIndexerTool input splits
MapReduceIndexerTool metadata
MapReduceIndexerTool usage syntax
Materialized view feature
Materialized view rebuild feature
Materialized views
Mathematical functions
Maven artifacts
MAX
MAX function
Memory
Memory limits
Merge feature
Merge process stops during Sqoop incremental imports
Merging data in tables
Metrics
Metrics and Insight
Migrate brokers by modifying broker IDs in meta.properties
Migrate data on the same host
Migrate Hive table to Iceberg feature
Migrate to a multiple Kudu master configuration
Migrate to strongly consistent indexing
Migrating a Hive table to Iceberg
Migrating Consumer Groups Between Clusters
Migrating Data Using Sqoop
Migrating database configuration to a new location
Migrating ResourceManager to another host
Migrating Solr replicas
Migration from Fair Scheduler to Capacity Scheduler
Migration Guide
Migration of Spark 2 applications
MIN
MIN function
Min/Max Filtering
Minimize cluster disruption during planned downtime
Miscellaneous functions
Mixed resource allocation mode (Technical Preview)
MOB cache properties
Modify a provider in an existing provider configuration
Modify GCS Bucket Permissions
Modify interpreter settings
Modifying a collection configuration generated using an instance directory
Modifying a Kafka topic
Modifying Impala Startup Options
Modifying the workflow file manually
Monitor cluster health with ksck
Monitor RegionServer grouping
Monitor the BlockCache
Monitor the performance of hedged reads
Monitoring
Monitoring and Debugging Spark Applications
Monitoring and metrics
Monitoring Apache Impala
Monitoring Apache Kudu
Monitoring checkpoint latency for cluster replication
Monitoring compaction health in Cloudera Manager
Monitoring end to end latency for Kafka topic
Monitoring end-to-end latency
Monitoring heap memory usage
Monitoring Kafka
Monitoring Kafka brokers
Monitoring Kafka cluster replications (SRM)
Monitoring Kafka cluster replications by quick ranges
Monitoring Kafka clusters
Monitoring Kafka consumers
Monitoring Kafka producers
Monitoring Kafka topics
Monitoring lineage information
Monitoring log size information
Monitoring replication latency for cluster replication
Monitoring replication throughput and latency by values
Monitoring Replication with Streams Messaging Manager
Monitoring status of the clusters to be replicated
Monitoring throughput for cluster replication
Monitoring topics to be replicated
More Resources
Morphline commands overview
Move HBase Master Role to another host
Moving a NameNode to a different host using Cloudera Manager
Moving highly available NameNode, failover controller, and JournalNode roles using the Migrate Roles wizard
Moving NameNode roles
Moving the JournalNode edits directory for a role group using Cloudera Manager
Moving the JournalNode edits directory for a role instance using Cloudera Manager
Moving the Oozie service to a different host
MQTT Source
MQTT Source properties reference
Multi-server LDAP/AD authentication
Multilevel partitioning
Multiple NameNodes configurations
Multiple NameNodes overview
MySQL: 1040, 'Too many connections' exception
NameNode architecture
NameNodes
NameNodes
NDV function
Network and I/O threads
Networking parameters
New topic and consumer group discovery
Nginx configuration for Prometheus
Nginx installation
Nginx proxy configuration over Prometheus
NiFi lineage
NiFi metadata collection
NiFi record-based Processors and Controller Services
Non-covering range partitions
Non-unique primary key index
Notes about replication
NTILE
Number-of-Regions Quotas
Number-of-Tables Quotas
OAuth2 authentication
Off-heap BucketCache
OFFSET clause
Offsets Subcommand
On-demand Metadata
On-demand Metadata
Oozie
Oozie
Oozie
Oozie
Oozie and client configurations
Oozie configurations with CDP services
Oozie database configurations
Oozie High Availability
Oozie Java-based actions with Java 17
Oozie Load Balancer configuration
Oozie scheduling examples
Oozie security enhancements
Opening Ranger in Data Hub
Operating system requirements
Operational Database
Operational Database
Operational Database
Operational database cluster
Operational Database Overview
Operators
Optimize mountable HDFS
Optimize performance for evaluating SQL predicates
Optimize SQL query
Optimizer hints
Optimizing data storage
Optimizing HBase I/O
Optimizing NameNode disk space with Hadoop archives
Optimizing performance
Optimizing queries using partition pruning
Optimizing S3A read performance for different file types
Options to determine differences between contents of snapshots
Options to monitor compactions
Options to monitor transaction locks
Options to monitor transactions
Options to rerun Oozie workflows in Hue
Options to restart the Hue service
Oracle TCPS
ORC file format
ORC vs Parquet formats
Orchestrate a rolling restart with no downtime
ORDER BY clause
Other known issues
OVER
Overview
Overview
Overview
Overview
Overview
Overview of Hadoop archives
Overview of HDFS
Overview of Oozie
Packaging different versions of libraries with an Apache Spark application
PAM authentication
Parameters to configure the Disk Balancer
Parquet
Partition configuration
Partition evolution feature
Partition pruning
Partition Pruning for Queries
Partition refresh and configuration
Partition transform feature
Partitioning
Partitioning
Partitioning examples
Partitioning for Kudu Tables
Partitioning guidelines
Partitioning limitations
Partitioning limitations
Partitioning tables
Partitions
Partitions and performance
PERCENT_RANK
Perform a backup of the HDFS metadata
Perform a disk hot swap for DataNodes using Cloudera Manager
Perform ETL by ingesting data from Kafka into Hive
Perform master hostname changes
Perform scans using HBase Shell
Perform the recovery
Perform the removal
Performance and Scalability
Performance and storage considerations for Spark SQL DROP TABLE PURGE
Performance Best Practices
Performance comparison between Cloudera Manager and Prometheus
Performance considerations
Performance Considerations
Performance considerations for UDFs
Performance Impact of Encryption
Performance improvement using partitions
Performance issues
Performance Issues Related to Data Encryption
Performance tuning
Performance tuning
Performant .NET producer
Periodically rebuilding a materialized view
Phoenix
Phoenix
Phoenix
Phoenix is FIPS compliant
Phoenix-Spark connector usage examples
Physical backups of an entire node
Pillars of Security
Placement rule policies
Plan the data movement across disks
Planner changes for CPU usage
Planning for Apache Impala
Planning for Apache Kafka
Planning for Apache Kudu
Planning for Streams Replication Manager
Planning overview
Populating an HBase Table
Ports Used by Impala
POST /admin/audits/ API
Post-migration verification
Predefined access policies for Schema Registry
Predicate push-down optimization
Preloaded resource-based services and policies
Prepare for master hostname changes
Prepare for removal
Prepare for the recovery
Prepare to back up the HDFS metadata
Preparing a thrift server and client
Preparing the hardware resources for HDFS High Availability
Prerequisites
Prerequisites
Prerequisites
Prerequisites
Prerequisites
Prerequisites for configuring short-circuit local reads
Prerequisites for configuring TLS/SSL for Oozie
Prerequisites for enabling erasure coding
Prerequisites for enabling GS File Browser
Prerequisites for enabling HDFS HA using Cloudera Manager
Prerequisites for HDFS lineage extraction
Prerequisites for Prometheus configuration
Prerequisites for setting up Atlas HA
Prerequisites to configure TLS/SSL for HBase
Preventing inadvertent deletion of directories
Primary key design
Primary key index
Principal name mapping
Prometheus configuration for SMM
Prometheus for SMM limitations
Prometheus properties configuration
Propagating classifications through lineage
Propagation of tags as deferred actions
Properties for configuring centralized caching
Properties for configuring short-circuit local reads on HDFS
Properties for configuring the Balancer
Properties to set the size of the NameNode edits directory
Protocol between consumer and broker
Providing read-only access to Queue Manager UI
Providing the Hive password through a file
Providing the Hive password through a prompt
Providing the Hive password through an alias
Providing the Hive password through an alias in a file
Provision an operational database cluster
Proxied RPCs in Kudu
Proxy Cloudera Manager through Apache Knox
Public Cloud Service Pack Releases
Public key and secret storage
Purging deleted entities
Purposely using a stale materialized view
PUT /admin/purge/ API
Query an existing Kudu table from Impala
Query Join Performance
Query metadata tables feature
Query options
Query results cache
Query sample data
Query scheduling
Query vectorization
Querying a schema
Querying arrays
Querying correlated data
Querying existing HBase tables
Querying files into a DataFrame
Querying Kafka data
Querying live data from Kafka
Querying the information_schema database
Queue ACLs
Quota enforcement
Quota violation policies
Quotas
Rack awareness
Rack awareness (Location awareness)
Range partitioning
Range partitioning
Range-specific hash schemas example: Using impala-shell
Range-specific hash schemas example: Using Kudu C++ client API
Range-specific hash schemas example: Using Kudu Java client API
Ranger
Ranger
Ranger
Ranger
Ranger
Ranger access conditions
Ranger AD Integration
Ranger Admin Metrics API
Ranger API Overview
Ranger Audit Filters
Ranger console navigation
Ranger Hive Plugin
Ranger integration
Ranger Kafka Plugin
Ranger plugin overview
Ranger policies for Kudu
Ranger Policies Overview
Ranger REST API documentation
Ranger RMS (Hive-S3 ACL-Sync) Use Cases
Ranger RMS - HIVE-S3 ACL Sync Overview
Ranger Security Zones
Ranger special entities
Ranger tag-based policies
Ranger UI authentication
Ranger UI authorization
Ranger Usersync
RANK
Re-encrypting secrets
Read access
Read and write operations
Read operations (scans)
Read replica properties
Reading and writing Hive tables in R
Reading and writing Hive tables in Zeppelin
Reading data from HBase
Reading data through HWC
Reading Hive ORC tables
Reads (scans)
REAL data type
Reassigning replicas between log directories
Rebalancing partitions
Rebalancing with Cruise Control
Rebuild a Kudu filesystem layout
Recommendations for client development
Recommended configurations for the Balancer
Recommended configurations for the balancer
Recommended deployment architecture
Recommended settings for G1GC
Recommissioning Kudu masters through Cloudera Manager
Record management
Record order and assignment
Records
Recover data from a snapshot
Recover from a dead Kudu master
Recover from disk failure
Recover from full disks
Redeploying the Oozie ShareLib
Redeploying the Oozie sharelib using Cloudera Manager
Reducing the Size of Data Structures
Refer to a table using dot notation
Reference architecture
Referencing S3 Data in Applications
REFRESH AUTHORIZATION statement
REFRESH FUNCTIONS statement
REFRESH statement
Registering a Lily HBase Indexer Configuration with the Lily HBase Indexer Service
Registering and querying a schema for a Kafka topic
Registering the UDF
Reloading, viewing, and filtering functions
Remote Querying
Remote topic discovery
Remove a DataNode
Remove a provider parameter in an existing provider configuration
Remove a RegionServer from RegionServer grouping
Remove Kudu masters through CLI
Remove or add storage directories for NameNode data directories
Remove storage directories using Cloudera Manager
Removing Kudu masters through Cloudera Manager
Removing Query Processor service from cluster
Reordering placement rules
Repairing partitions manually using MSCK repair
Replace a disk on a DataNode host
Replace a ZooKeeper disk
Replace a ZooKeeper role on an unmanaged cluster
Replace a ZooKeeper role with ZooKeeper service downtime
Replace a ZooKeeper role without ZooKeeper service downtime
Replicate data between Data Hub clusters with cloud SRM
Replicate pre-existing data in an active-active deployment
Replicating Data
Replicating data from PvC Base to Data Hub with cloud SRM
Replicating data from PvC Base to Data Hub with on-prem SRM
Replication
Replication across three or more clusters
Replication caveats
Replication flows and replication policies
Replication requirements
Report crashes using breakpad
Request a timeline-consistent read
Requirements for compressing and extracting files using Hue File Browser
Requirements for Oozie High Availability
Rerunning a query from the Job Browser page
Reserved words
Resetting Hue user password
Resolving "The user authorized on the connection does not match the session username" error
Resource allocation overview
Resource distribution workflow
Resource scheduling and management
Resource Tuning Example
Resource-based Services and Policies
Resources for on-boarding Azure for CDP users
REST API
Restore a snapshot
Restore data from a replica
Restore HDFS metadata from a backup using Cloudera Manager
Restore tables from backups
Restoring a collection
Restoring NameNode metadata
Restricting access to Kafka metadata in ZooKeeper
Restricting classifications based on user permission
Restricting supported ciphers for Hue
Restricting user login
Retries
Retrieving log directory replica assignment information
Retrieving the clusterstate.json file
Reuse the subnets created for CDP
Reuse the subnets created for CDP
Revalidating Dynamic Configurations
REVOKE ROLE statement
REVOKE statement
ROLE statements
Rollback table feature
Rolling restart checks
Rotate Auto-TLS Certificate Authority and Host Certificates
Rotate the master key/secret
Row-level filtering and column masking in Hive
Row-level filtering in Hive with Ranger policies
Row-level filtering in Impala with Ranger policies
ROW_NUMBER
RPC timeout traces
Rule configurations
Run a Hive command
Run a tablet rebalancing tool in Cloudera Manager
Run a tablet rebalancing tool in command line
Run a tablet rebalancing tool on a rack-aware cluster
Run stored procedure from Hue
Run the Disk Balancer plan
Run the spark-submit job
Run the tablet rebalancing tool
Running a Spark MLlib example
Running ADLS Metadata Extractor
Running an interactive session with the Livy REST API
Running Apache Spark Applications
Running bulk extraction
Running Bulk Extraction
Running Commands and SQL Statements in Impala Shell
Running HDFS lineage commands
Running incremental extraction
Running Incremental Extraction
Running PySpark in a virtual environment
Running sample Spark applications
Running shell commands
Running Spark 3.4 Applications
Running Spark applications on secure clusters
Running Spark applications on YARN
Running Spark Python applications
Running the balancer
Running the HBaseMapReduceIndexerTool
Running the HBCK2 tool
Running time travel queries
Running YARN Services
Running your first Spark application
Runtime 7.2.18.0-641
Runtime environment for UDFs
Runtime error: Could not create thread: Resource temporarily unavailable (error 11)
Runtime Filtering
S3 actions that produce or update Atlas entities
S3 entities created in Atlas
S3 entity audit entries
S3 Extractor configuration
S3 Performance Checklist
S3 relationships
S3 Sink
S3 Sink properties reference
S3A and Checksums (Advanced Feature)
Safely Writing to S3 Through the S3A Committers
SAML properties
Sample pom.xml file for Spark Streaming with Kafka
Saving a YARN service definition
Saving aliases
Saving searches
Saving the password to Hive Metastore
Scalability Considerations
Scaling Kudu
Scaling Limits and Guidelines
Scaling recommendations and limitations
Scaling recommendations and limitations
Scheduler performance improvements
Scheduling among queues
Scheduling in Oozie using cron-like syntax
Schema alterations
Schema design limitations
Schema design limitations
Schema entities
Schema evolution feature
Schema inference feature
Schema objects
Schema Registry
Schema Registry
Schema Registry
Schema Registry actions that produce Atlas entities
Schema Registry audit entries
Schema Registry authentication through OAuth2 JWT tokens
Schema Registry authorization through Ranger access policies
Schema Registry component architecture
Schema Registry concepts
Schema Registry metadata collection
Schema Registry Overview
Schema Registry overview
Schema Registry Reference
Schema Registry server configuration
Schema Registry use cases
Schema relationships
Schemaless mode overview and best practices
SchemaRegistryClient properties reference
Script with HBase Shell
SDX
Search and other Runtime components
Search Ranger reports
Search Tutorial
Searching applications
Searching by topic name
Searching for entities using Business Metadata attributes
Searching for entities using classifications
Searching Kafka cluster replications by source
Searching metadata tags
Searching using terms
Searching with Metadata
Secondary Sort
Secure access mode introduction
Secure by Design
Secure options to provide Hive password during a Sqoop import
Secure Prometheus for SMM
Securing Access to Hadoop Cluster: Apache Knox
Securing Apache Hive
Securing Apache Impala
Securing Apache Kafka
Securing Atlas
Securing Atlas
Securing Cloudera Search
Securing configs with ZooKeeper ACLs and Ranger
Securing Cruise Control
Securing database connections with TLS/SSL
Securing Hue
Securing Hue from CWE-16
Securing Hue passwords with scripts
Securing Impala
Securing Kafka Connect
Securing KRaft
Securing Schema Registry
Securing sessions
Securing Streams Messaging Manager
Securing Streams Messaging Manager
Securing Streams Replication Manager
Securing the S3A Committers
Security
Security considerations for UDFs
Security examples
Security examples
Security Levels
Security Management Model
Security Model and Operations on S3
Security overview
Security Terms
Security Zones Administration
Security Zones Example Use Cases
Select Iceberg data feature
SELECT statement
Selecting an Iceberg table
Server management limitations
Server management limitations
Services backed by PostgreSQL fail or stop responding
Set HADOOP_CONF to the destination cluster
Set HDFS quotas
Set properties in Cloudera Manager
SET statement
Set up
Set up a storage policy for HDFS
Set up MirrorMaker in Cloudera Manager
Set up SQL AI Assistant
Set up SSD storage using Cloudera Manager
Set up WebHDFS on a secure cluster
Setting a default partition expression
Setting a Schema Registry ID range
Setting Application-Master resource-limit for a specific queue
Setting capacity estimations and goals
Setting capacity using mixed resource allocation mode (Technical Preview)
Setting consumer and producer table properties
Setting credentials for Ranger Usersync custom keystore
Setting default Application Master resource limit
Setting default credentials using Cloudera Manager
Setting file system credentials through hadoop properties
Setting global application limits
Setting global maximum application priority
Setting HDFS quotas in Cloudera Manager
Setting Java system properties for Solr
Setting Maximum Application limit for a specific queue
Setting Maximum Parallel Application
Setting maximum parallel application limits
Setting maximum parallel application limits for a specific queue
Setting Oozie permissions
Setting ordering policies within a specific queue
Setting Python path variables for Livy
Setting queue priorities
Setting schema access strategy in NiFi
Setting the cache timeout
Setting the Idle Query and Idle Session Timeouts
Setting the Oozie database timezone
Setting the secure storage password as an environment variable
Setting the Solr Critical State Cores Percentage parameter
Setting the Solr Recovering Cores Percentage parameter
Setting the trash interval
Setting Timeout and Retries for Thrift Connections to Backend Client
Setting Timeouts in Impala
Setting up a Hive client
Setting up a Hue service account with a custom name
Setting up and configuring the ABFS connector
Setting up Atlas High Availability
Setting up Atlas Kafka import tool
Setting up Azure managed Identity for Extraction
Setting up basic authentication with TLS for Prometheus
Setting up Data Cache for Remote Reads
Setting up Data Cache for Remote Reads
Setting Up HDFS Caching
Setting up Kafka Connect
Setting up mTLS for Prometheus
Setting up Prometheus for SMM
Setting up secure access mode in Data Hub
Setting up the development environment
Setting up TLS for Prometheus
Setting user limits for HBase
Setting user limits for Kafka
Setting user limits within a queue
Settings to avoid data loss
Setup for SASL with Kerberos
Setup for TLS/SSL encryption
SFTP Source
SFTP Source properties reference
Shell action for Spark 3
Shell commands
Shiro Settings: Reference
shiro.ini Example
SHOW CURRENT ROLES statement
SHOW MATERIALIZED VIEWS
SHOW ROLE GRANT GROUP statement
SHOW ROLES statement
SHOW statement
Showing Atlas Server status
Showing materialized views
Showing Role|Grant definitions from Ranger HiveAuthorizer
Shut Down Impala
SHUTDOWN statement
Simple .NET consumer
Simple .NET consumer using Schema Registry
Simple .NET producer
Simple .NET producer using Schema Registry
Simple Java consumer
Simple Java producer
Single Message Transforms
Single tablet write operations
Size the BlockCache
Sizing estimation based on network and disk message throughput
Sizing NameNode heap memory
Slow name resolution and nscd
SMALLINT data type
SMM property configuration in Cloudera Manager for Prometheus
Snapshot management
Solr
Solr
Solr
Solr and HDFS - the block cache
Solr server tuning categories
solrctl Reference
Space quotas
Spark
Spark
Spark
Spark 2
Spark 3 compatibility action executor
Spark 3 examples with Python or Java application
Spark 3 Oozie action schema
Spark 3 support in Oozie
Spark actions that produce Atlas entities
Spark application model
Spark audit entries
Spark cluster execution overview
Spark connector configuration in Apache Atlas
Spark Dynamic Partition overwriting
Spark Dynamic Partition overwriting
Spark entities created in Apache Atlas
Spark entity metadata migration
Spark execution model
Spark indexing using morphlines
Spark integration best practices
Spark integration known issues and limitations
Spark integration limitations
Spark integration with Hive
Spark Job ACLs
Spark jobs failing with memory issues
Spark lineage
Spark metadata collection
Spark on YARN deployment modes
Spark relationships
Spark security
Spark SQL example
Spark Streaming and Dynamic Allocation
Spark Streaming Example
Spark troubleshooting
Spark tuning
spark-submit command options
Spark3
Specify truststore properties
Specifying domains or pages to which Hue can redirect users
Specifying HTTP request methods
Specifying Impala Credentials to Access S3
Specifying racks for hosts
Specifying trusted users
Speeding up Job Commits by Increasing the Number of Threads
Splitting a shard on HDFS
Spooling Query Results
SQL migration to Impala
SQL statements
SQLContext and HiveContext
Sqoop
Sqoop
Sqoop
Sqoop enhancements to the Hive import process
Sqoop Hive import stops when HS2 does not use Kerberos authentication
SRM Command Line Tools
SRM security example
SRM Service data traffic reference
srm-control
srm-control Options Reference
SSE-C: Server-Side Encryption with Customer-Provided Encryption Keys
SSE-KMS: Amazon S3-KMS Managed Encryption Keys
SSE-S3: Amazon S3-Managed Encryption Keys
Standard stream logs
Start and stop Kudu processes
Start and stop the NFS Gateway services
Start HBase
Start Hive on an insecure cluster
Start Hive using a password
Start Prometheus
Start Queue
Start SQL AI Assistant
Start the NFS Gateway services
Starting and Stopping Apache Impala
Starting and stopping HBase using Cloudera Manager
Starting and stopping queues
Starting Apache Hive
Starting compaction manually
Starting the Lily HBase NRT indexer service
Starting the Oozie server
Stateless NiFi Sink properties reference
Stateless NiFi Source and Sink
Stateless NiFi Source properties reference
STDDEV, STDDEV_SAMP, STDDEV_POP functions
Step 1: Worker host configuration
Step 2: Worker host planning
Step 3: Cluster size
Step 6: Verify container settings on cluster
Step 6A: Cluster container capacity
Step 6B: Container parameters checking
Step 7: MapReduce configuration
Step 7A: MapReduce settings checking
Steps 4 and 5: Verify settings
Stop HBase
Stop Queue
Stop replication in an emergency
Stop the NFS Gateway services
Stopping the Oozie server
Storage
Storage
Storage group classification
Storage group pairing
Storage reduction for Atlas
Storage Systems Support
Stored procedure examples
Storing medium objects (MOBs)
Streams Messaging
Streams Messaging
Streams Messaging
Streams Messaging Manager
Streams Messaging Manager
Streams Messaging Manager
Streams Messaging Manager Overview
Streams Replication Manager
Streams Replication Manager
Streams Replication Manager
Streams Replication Manager Architecture
Streams Replication Manager Driver
Streams Replication Manager Overview
Streams Replication Manager Reference
Streams Replication Manager requirements
Streams Replication Manager Service
Stretch cluster reference architecture
STRING data type
STRUCT complex type
Submit Oozie Jobs in Data Engineering Cluster
Submitting a Python app
Submitting a Scala or Java application
Submitting a Spark job to a Data Hub cluster using Livy
Submitting batch applications using the Livy REST API
Submitting Spark applications
Submitting Spark Applications to YARN
Submitting Spark applications using Livy
Subqueries in Impala SELECT statements
Subquery restrictions
Subscribing to a topic
SUM
SUM function
Support for On-Demand lineage
Support for validating the AttributeName in parent and child TypeDef
Supported HDFS entities and their hierarchies
Supported non-ASCII and special characters in Hue
Supported operators
Switching from CMS to G1GC
Symbolizing stack traces
Synchronization between Impala Clusters
Synchronize table data using HashTable/SyncTable tool
Synchronizing the contents of JournalNodes
Syslog TCP Source
Syslog TCP Source properties reference
Syslog UDP Source
Syslog UDP Source properties reference
System Level Broker Tuning
System metadata migration
Table and Column Statistics
TABLESAMPLE clause
Tablet history garbage collection and the ancient history mark
Tag-based Services and Policies
Tags and policy evaluation
Take a snapshot using a shell script
Task architecture and load-balancing
Terminating Hive queries
Terminologies
Terms
Terms and concepts
Test driving Iceberg from Hive
Test driving Iceberg from Impala
Test MOB storage and retrieval performance
Testing the LDAP configuration
Text-editor for Atlas parameters
Tez
The Cloud Storage Connectors
The HDFS mover command
The Hue load balancer not distributing users evenly across various Hue servers
The Kafka Connect UI
The perfect schema
The S3A Committers and Third-Party Object Stores
Thread Tuning for S3A Data Upload
Threads
Thrift Server crashes after receiving invalid data
Throttle quota examples
Throttle quotas
Time travel feature
Timeline consistency
TIMESTAMP compatibility for Parquet files
TIMESTAMP data type
TINYINT data type
TLS/SSL client authentication
To configure an S3 bucket to publish events
To configure an SQS queue suitable for Atlas extraction
Token configurations
Tombstoned or STOPPED tablet replicas
Top-down process for adding a new metadata source
Topics
Topics and Groups Subcommand
Transactional table access
Transactions
Transactions
Trash behavior with HDFS Transparent Encryption enabled
Triggering HDFS audit files rollover
Troubleshoot RegionServer grouping
Troubleshooting
Troubleshooting ABFS
Troubleshooting Apache Atlas
Troubleshooting Apache Hadoop YARN
Troubleshooting Apache HBase
Troubleshooting Apache Hive
Troubleshooting Apache Impala
Troubleshooting Apache Kudu
Troubleshooting Apache Spark
Troubleshooting Apache Sqoop
Troubleshooting Cloudera Search
Troubleshooting Crashes Caused by Memory Resource Limit
Troubleshooting Docker on YARN
Troubleshooting HBase
Troubleshooting Hue
Troubleshooting Impala
Troubleshooting Linux Container Executor
Troubleshooting NTP stability problems
Troubleshooting on YARN
Troubleshooting Prometheus for SMM
Troubleshooting S3
Troubleshooting SAML authentication
Troubleshooting Schema Registry
Troubleshooting the S3A Committers
Truncate table feature
TRUNCATE TABLE statement
Tuning Apache Hadoop YARN
Tuning Apache Impala
Tuning Apache Kafka Performance
Tuning Apache Spark
Tuning Apache Spark Applications
Tuning Cloudera Search
Tuning garbage collection
Tuning Hue
Tuning replication
Tuning Resource Allocation
Tuning S3A Uploads
Tuning Spark Shuffle Operations
Tuning the Number of Partitions
Turning safe mode on HA NameNodes
Tutorial
Tutorial: developing and deploying a JDBC Source dataflow
UDF concepts
UI Tools
Unable to access Hue from Knox Gateway UI
Unable to alter S3-backed tables
Unable to authenticate users in Hue using SAML
Unable to connect to database with provided credential
Unable to execute queries due to atomic block
Unable to log into Hue with Knox
Unable to read Sqoop metastore created by an older HSQLDB version
Unable to view Snappy-compressed files
Unaffected Components in this release
Understanding --go-live and HDFS ACLs
Understanding co-located and external clusters
Understanding CREATE TABLE behavior
Understanding erasure coding policies
Understanding HBase garbage collection
Understanding Hue users and groups
Understanding Impala integration with Kudu
Understanding Performance using EXPLAIN Plan
Understanding Performance using Query Profile
Understanding Performance using SUMMARY Report
Understanding Ranger policies with RMS
Understanding SRM properties, their configuration and hierarchy
Understanding the data that flow into Atlas
Understanding the extractHBaseCells Morphline Command
Understanding the extractHBaseCells Morphline Command
Understanding the kafka-run-class Bash Script
Understanding the Phoenix JDBC URL
Understanding YARN architecture
UNION, INTERSECT, and EXCEPT clauses
Unlocking access to Kafka metadata in ZooKeeper
Unlocking locked out user accounts in Hue
Unsupported Apache Spark Features
Unsupported command line tools
Unsupported features and limitations
Unsupported features in Hue
Update data
UPDATE statement
Updating a notifier
Updating an alert policy
Updating an Iceberg partition
Updating data in a table
Updating Extractor Configuration with ADLS Authentication
Updating Spark 2 apps for Spark 3.4
Updating the schema in a collection
Updating YARN Queue Manager Database Password
Upgrading existing Kudu tables for Hive Metastore integration
Uploading Oozie ShareLib to Ozone
Upsert a row
Upsert option in Kudu Spark
UPSERT statement
Usability issues
Use a CTE in a query
Use a custom MapReduce job
Use BulkLoad
Use case architectures
Use cases
Use cases and sample payloads
Use cases for ACLs on HDFS
Use cases for BulkLoad
Use cases for centralized cache management
Use cases for Streams Replication Manager in CDP Public Cloud
Use Cgroups
Use cluster names in the kudu command line tool
Use cluster replication
Use CopyTable
Use CPU scheduling
Use CPU scheduling with distributed shell
Use CREATE TABLE AS SELECT
Use curl to access a URL protected by Kerberos HTTP SPNEGO
Use Digest Authentication Provider
Use FPGA scheduling
Use FPGA with distributed shell
Use GPU scheduling
Use GPU scheduling with distributed shell
Use GZipCodec with a one-time job
Use HashTable and SyncTable Tool
Use multiple ZooKeeper services
Use rsync to copy files from one broker to another
Use snapshots
Use Spark
Use Spark 3 actions with a custom Python executable
Use Spark actions with a custom Python executable
Use Spark with a secure Kudu cluster
Use Sqoop
USE statement
Use strongly consistent indexing
Use the Charts Library
Use the HBase APIs for Java
Use the HBase command-line utilities
Use the HBase REST server
Use the HBase shell
Use the Hue HBase app
Use the JDBC interpreter to access Hive
Use the Livy interpreter to access Spark
Use the Network Time Protocol (NTP) with HBase
Use the YARN REST APIs to manage applications
Use transactions with tables
Use wildcards with SHOW DATABASES
User Account Requirements
User authentication in Hue
User authorization configuration for Oozie
User management in Hue
User-defined functions (UDFs)
Using --go-live with SSL or Kerberos
Using a credential provider to secure S3 credentials
Using a subquery
Using ABFS using CLI
Using advanced search
Using Amazon S3 with Hue
Using Apache HBase Backup and Disaster Recovery
Using Apache HBase Hive integration
Using Apache Hive
Using Apache Iceberg
Using Apache Iceberg with Spark
Using Apache Impala with Apache Kudu
Using Apache Phoenix to Store and Access Data
Using Apache Phoenix-Hive connector
Using Apache Phoenix-Spark connector
Using Apache Zeppelin
Using Atlas-Hive import utility with Ozone entities
Using audit aging
Using Avro Data Files
Using Azure Data Lake Storage Gen2 with Hue
Using Basic search
Using Breakpad Minidumps for Crash Reporting
Using CLI commands to create and list ACLs
Using Cloudera Manager to manage HDFS HA
Using common table expressions
Using Configuration Properties to Authenticate
Using constraints
Using custom audit aging
Using custom audit filters
Using custom JAR files with Search
Using custom libraries with Spark
Using default audit aging
Using dfs.datanode.max.transfer.threads with HBase
Using Direct Reader mode
Using DistCp
Using DistCp between HA clusters using Cloudera Manager
Using DistCp to copy files
Using DistCp with Amazon S3
Using DistCp with Highly Available remote clusters
Using DNS with HBase
Using EC2 Instance Metadata to Authenticate
Using Environment Variables to Authenticate
Using erasure coding for existing data
Using erasure coding for new data
Using Free-text Search
Using functions
Using Google Cloud Storage with Hue
Using governance-based data discovery
Using HBase blocksize
Using HBase coprocessors
Using HBase Hive integration
Using HBase replication
Using HBase scanner heartbeat
Using HDFS snapshots for data protection
Using HdfsFindTool to find files
Using hedged reads
Using Hive Metastore with Apache Kudu
Using Hive Warehouse Connector with Oozie Spark Action
Using HLL Datasketch Algorithms in Impala
Using HttpFS to provide access to HDFS
Using Hue
Using Hue scripts
Using HWC for streaming
Using Ignore and Prune patterns
Using Impala to query Kudu tables
Using Import Utility Tools with Atlas
Using JDBC API
Using JDBC read mode
Using JMX for accessing HBase metrics
Using JMX for accessing HDFS metrics
Using Kafka Connect
Using KLL Datasketch Algorithms in Impala
Using Livy with interactive notebooks
Using Livy with Spark
Using Load Balancer with HttpFS
Using MapReduce batch indexing to index sample Tweets
Using metadata for cluster governance
Using Morphlines to index Avro
Using Morphlines with Syslog
Using non-JDBC drivers
Using Oozie with Ozone
Using optimizations from a subquery
Using ORC Data Files
Using Parquet Data Files
Using partitions when submitting a job
Using Per-Bucket Credentials to Authenticate
Using PySpark
Using quota management
Using rack awareness for read replicas
Using Ranger client libraries
Using Ranger to Provide Authorization in CDP
Using RCFile Data Files
Using RegionServer grouping
Using Relationship search
Using Schema Registry
Using Search filters
Using secondary indexing
Using secondary indexing
Using secure access mode
Using SequenceFile Data Files
Using session cookies to validate Ranger policies
Using solrctl with an HTTP proxy
Using Spark History Servers with high availability
Using Spark Hive Warehouse and HBase Connector Client .jar files with Livy
Using Spark MLlib
Using Spark SQL
Using Spark Streaming
Using SQL to query HBase from Hue
Using Sqoop actions with Oozie
Using SRM in CDP Public Cloud overview
Using Streams Messaging Manager
Using Streams Replication Manager
Using Sweep out configurations
Using tag attributes and values in Ranger tag-based policy conditions
Using Text Data Files
Using the Apache Thrift Proxy API
Using the AvroConverter
Using the CldrCopyTable utility to copy data
Using the Cloudera Runtime Maven repository 7.2.18
Using the cursor to return record sets
Using the Directory Committer in MapReduce
Using the HBase-Spark connector
Using the HBCK2 tool to remediate HBase clusters
Using the Hive shell
Using the Impala shell
Using the indexer HTTP interface
Using the Lily HBase NRT indexer service
Using the Livy API to run Spark jobs
Using the manifest committer
Using the manifest committer
Using the NFS Gateway for accessing HDFS
Using the Note Toolbar
Using the Phoenix JDBC Driver
Using the Ranger Admin Web UI
Using the Rebalance Wizard in Cruise Control
Using the REST API
Using the REST API
Using the REST proxy API
Using the S3Guard Command to List and Delete Uploads
Using the Spark DataFrame API
Using the Spark shell
Using the YARN CLI to view logs for applications
Using the yarn rmadmin tool to administer ResourceManager high availability
Using transactions
Using Unique Filenames to Avoid File Update Inconsistency
Using YARN Web UI and CLI
Using Zeppelin Interpreters
UTF-8 Support
Validating the Cloudera Search deployment
Validations for parent types
VALUES statement
VARCHAR data type
Varchar type
VARIANCE, VARIANCE_SAMP, VARIANCE_POP, VAR_SAMP, VAR_POP functions
Variations on Put
Vectorization default
Verifying use of a query rewrite
Verify that replication works
Verify the DNS configuration
Verify the DNS configuration
Verify the network line-of-sight
Verify the network line-of-sight
Verify the ZooKeeper authentication
Verify validity of the NFS services
Verifying Atlas for the extracted data
Verifying if a memory limit is sufficient
Verifying That an S3A Committer Was Used
Verifying that Indexing Works
Verifying the Impala dependency on Kudu
Verifying the setup
Versions
View Ranger reports
View the API documentation
Viewing all applications
Viewing and modifying log levels for Search and related services
Viewing and modifying Solr configuration using Cloudera Manager
Viewing application details
Viewing audit details
Viewing audit metrics
Viewing configurations for a Hive query
Viewing DAG information for a Hive query
Viewing existing collections
Viewing explain plan for a Hive query
Viewing Hive query details
Viewing Hive query history
Viewing Hive query information
Viewing Hive query timeline
Viewing Impala profiles in Hue
Viewing Impala query details
Viewing Impala query history
Viewing Impala query information
Viewing Kafka cluster replication details
Viewing lineage
Viewing nodes and node details
Viewing partitions
Viewing queues and queue details
Viewing racks assigned to cluster hosts
Viewing the Cluster Overview
Viewing the Impala query execution plan
Viewing the Impala query metrics
Views
Virtual column
Virtual machine options for HBase Shell
Virtual memory handling
Web User Interface for Debugging
What is Cloudera Search
What's New
What's New In Cloudera Runtime 7.2.18.400
What's New In Cloudera Runtime 7.2.18.500
When Shuffles Do Not Occur
When to Add a Shuffle Transformation
When to use Atlas classifications for access control
Why HDFS data becomes unbalanced
Why one scheduler?
Wildcards and variables in resource-based policies
WINDOW
WITH clause
Work preserving recovery for YARN components
Working with Amazon S3
Working with Apache Hive Metastore
Working with Atlas classifications and labels
Working with Azure ADLS Gen2 storage
Working with Classifications and Labels
Working with Google Cloud Storage
Working with Google Cloud Storage
Working with S3 buckets in the same AWS region
Working with the ABFS Connector
Working with the Oozie server
Working with Third-party S3-compatible Object Stores
Working with versioned S3 buckets
Working with Zeppelin Notes
Write-ahead log garbage collection
Writes
Writing data in a Kerberos and TLS/SSL enabled cluster
Writing data in an unsecured cluster
Writing data through HWC
Writing data to HBase
Writing data to Kafka
Writing Kafka data to Ozone with Kafka Connect
Writing to multiple tablets
Writing transformed Hive data to Kafka
Writing UDFs
Writing user-defined aggregate functions (UDAFs)
YARN
YARN
YARN ACL rules
YARN ACL syntax
YARN ACL types
YARN and YARN Queue Manager
YARN Configuration Properties
YARN Features
YARN log aggregation overview
YARN Ranger authorization support
YARN Ranger authorization support compatibility matrix
YARN resource allocation of multiple resource-types
YARN ResourceManager high availability
YARN ResourceManager high availability architecture
YARN services API examples
YARN tuning overview
Zeppelin
Zeppelin
Zeppelin
Zipping unnest on arrays from views
ZooKeeper
ZooKeeper
ZooKeeper
ZooKeeper ACLs Best Practices
ZooKeeper ACLs Best Practices: Atlas
ZooKeeper ACLs Best Practices: Cruise Control
ZooKeeper ACLs Best Practices: HBase
ZooKeeper ACLs Best Practices: HDFS
ZooKeeper ACLs Best Practices: Kafka
ZooKeeper ACLs Best Practices: Oozie
ZooKeeper ACLs Best Practices: Ranger
ZooKeeper ACLs best practices: Search
ZooKeeper ACLs Best Practices: YARN
ZooKeeper ACLs Best Practices: ZooKeeper
ZooKeeper Authentication
ZooKeeper Configurations
zookeeper-security-migration
Schema Registry Overview
Schema Registry use cases
Learn about the different use cases for Schema Registry.
Registering and querying a schema for a Kafka topic
Learn how to use Schema Registry to track metadata for a Kafka topic.
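The core idea behind registering and querying can be sketched with a small in-memory stand-in: a schema name (typically a Kafka topic) maps to a list of schema versions, registration returns a version number, and lookups fetch the latest text. The class and method names below are illustrative only, not the actual Schema Registry client API.

```python
# Minimal sketch of schema registration and lookup, assuming an
# in-memory stand-in for Schema Registry. Names are illustrative,
# not the real client API.

class ToyRegistry:
    """Maps a schema name (e.g. a Kafka topic) to versioned schema texts."""

    def __init__(self):
        # name -> list of schema texts; list index + 1 is the version number
        self._schemas = {}

    def register(self, name, schema_text):
        versions = self._schemas.setdefault(name, [])
        if schema_text in versions:            # idempotent: same text, same version
            return versions.index(schema_text) + 1
        versions.append(schema_text)
        return len(versions)                   # newly assigned version number

    def latest(self, name):
        versions = self._schemas[name]
        return len(versions), versions[-1]


registry = ToyRegistry()
v = registry.register(
    "truck_events",
    '{"type":"record","name":"TruckEvent","fields":[]}',
)
version, text = registry.latest("truck_events")
```

Registering the same schema text twice returns the existing version rather than creating a new one, which is the behavior a producer relies on when it registers its schema on startup.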
Deserializing and serializing data from and to a Kafka topic
Learn how Schema Registry stores the metadata needed to deserialize (read) and serialize (write) data from and to Kafka topics.
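The key benefit of this pattern is that the producer does not ship the full schema with every message; it writes a small schema-version identifier in front of the payload, and the consumer reads that identifier back and fetches the matching schema from the registry. The sketch below uses a 4-byte big-endian header and JSON payloads purely for illustration; the real wire format and Avro encoding differ.

```python
# Sketch of registry-backed serialize/deserialize: a version ID header
# replaces embedding the schema in each message. The 4-byte framing and
# JSON body are illustrative assumptions, not the actual wire format.
import json
import struct

# Stand-in for a registry lookup: version ID -> expected field names.
SCHEMAS = {1: ["name", "speed"]}


def serialize(version_id, record):
    header = struct.pack(">I", version_id)       # 4-byte schema-version header
    body = json.dumps(record).encode("utf-8")    # JSON stands in for Avro here
    return header + body


def deserialize(message):
    (version_id,) = struct.unpack(">I", message[:4])
    fields = SCHEMAS[version_id]                 # "fetch schema from registry"
    record = json.loads(message[4:].decode("utf-8"))
    return {f: record[f] for f in fields}        # project onto the schema


msg = serialize(1, {"name": "truck-17", "speed": 88})
assert deserialize(msg) == {"name": "truck-17", "speed": 88}
```

Because only the version ID travels with each record, messages stay small and every consumer that can reach the registry can decode them, even if it was deployed before the schema existed.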
Dataflow management with schema-based routing
Learn how to use Schema Registry to support NiFi dataflow management.
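Schema-based routing means each record carries the name of the schema it was written with, and the dataflow sends it down a different branch per schema. The record shape and route names below are hypothetical, used only to show the dispatch pattern a NiFi flow would express with processors.

```python
# Sketch of schema-based routing: dispatch each record to a processing
# branch chosen by its schema name. Route names and the record shape
# are illustrative assumptions.

def route(record):
    routes = {
        "TruckGeoEvent": "geo-enrichment",
        "TruckSpeedEvent": "speed-alerting",
    }
    # Records with an unknown schema fall through to a catch-all branch.
    return routes.get(record["schema"], "unmatched")


assert route({"schema": "TruckGeoEvent", "lat": 47.6}) == "geo-enrichment"
assert route({"schema": "LogEvent"}) == "unmatched"
```

Keeping the schema name in the record, rather than hard-coding topic knowledge into each branch, lets new schemas be added to the flow by extending the routing table only.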
Parent topic:
Schema Registry overview