Cloudera Private Cloud Base 7.1.1
Cloudera Runtime
Cloudera Runtime Release Notes
Overview
Cloudera Runtime Component Versions
Using the Cloudera Runtime Maven Repository
Maven Artifacts for Cloudera Runtime 7.1.1.0
What's New
Atlas
Cruise Control
DAS
HBase
HDFS
Hive
Hue
Impala
Kafka
Knox
Kudu
Oozie
Ozone
Phoenix
Schema Registry
Search
Spark
Sqoop
Streams Replication Manager
Streams Messaging Manager
YARN
ZooKeeper
Deprecation Notices In Cloudera Runtime 7.1.1
HBase
Kudu
Search
Kafka
Behavioral Changes In Cloudera Runtime 7.1.1
Search
Hive
YARN
Fixed Issues In Cloudera Runtime 7.1.1
Atlas
DAS
Hadoop
HBase
HDFS
Hive
Hue
Kafka
Impala
Kudu
Oozie
Ozone
Phoenix
Search
Spark
Sqoop
Streams Replication Manager
YARN
Zeppelin
ZooKeeper
Known Issues In Cloudera Runtime 7.1.1
Atlas
Cruise Control
DAS
Hadoop
HBase
HDFS
Hive
Hue
Impala
Kafka
Kerberos
Knox
Kudu
Oozie
Ozone
Ranger
Ranger KMS
Schema Registry
Search
Solr
Spark
Streams Replication Manager
Streams Messaging Manager
Sqoop
YARN
Zeppelin
ZooKeeper
Cloudera Manager Release Notes
Concepts
Storage
Apache Hadoop HDFS Overview
Introduction
Overview of HDFS
NameNodes
Moving NameNode roles
Moving highly available NameNode, failover controller, and JournalNode roles using the Migrate Roles wizard
Moving a NameNode to a different host using Cloudera Manager
Sizing NameNode heap memory
Environment variables for sizing NameNode heap memory
Monitoring heap memory usage
Files and directories
Disk space versus namespace
Replication
Examples of estimating NameNode heap memory
DataNodes
How NameNode manages blocks on a failed DataNode
Replace a disk on a DataNode host
Remove a DataNode
Fixing block inconsistencies
Add storage directories using Cloudera Manager
Remove storage directories using Cloudera Manager
Configuring storage balancing for DataNodes
Configure storage balancing for DataNodes using Cloudera Manager
Perform a disk hot swap for DataNodes using Cloudera Manager
JournalNodes
Moving the JournalNode edits directory for a role group using Cloudera Manager
Moving the JournalNode edits directory for a role instance using Cloudera Manager
Synchronizing the contents of JournalNodes
Apache Kudu Overview
Apache Kudu overview
Kudu architecture in a CDP Private Cloud Base deployment
Apache Kudu architecture in a CDP Data Center deployment
Kudu-Impala integration
Example use cases
Apache Kudu concepts
Columnar datastore
Raft consensus algorithm
Table
Tablet
Tablet server
Master
Catalog table
Logical replication
Apache Kudu usage limitations
Schema design limitations
Partitioning limitations
Scaling recommendations and limitations
Server management limitations
Cluster management limitations
Impala integration limitations
Spark integration limitations
Security limitations
Other known issues
More Resources
Apache Kudu Design
Apache Kudu schema design
The perfect schema
Column design
Decimal type
Varchar type
Column encoding
Column compression
Primary key design
Primary key index
Considerations for backfill inserts
Partitioning
Range partitioning
Adding and Removing Range Partitions
Hash partitioning
Multilevel partitioning
Partition pruning
Partitioning examples
Range partitioning
Hash partitioning
Hash and range partitioning
Hash and hash partitioning
Schema alterations
Schema design limitations
Apache Kudu transaction semantics
Single tablet write operations
Writing to multiple tablets
Read operations (scans)
Known issues and limitations
Writes
Reads (scans)
Scaling Kudu
Terms
Example workload
Memory
Verifying if a memory limit is sufficient
File descriptors
Threads
Apache Hadoop YARN Overview
Introduction
YARN Features
Understanding YARN architecture
Data Access
Data Analytics Studio Overview
Data Analytics Studio overview
DAS architecture
Apache Hive Metastore Overview
Introduction to Hive metastore
Apache Hive Overview
Apache Hive key features
Unsupported Interfaces and Features
Installing Hive on Tez and adding a HiveServer role
Apache Hive 3 architectural overview
Apache Hive content roadmap
Apache Impala Overview
Introduction
Components
Hue Overview
Introduction
Cloudera Search Overview
Cloudera Search Overview
How Cloudera Search Works
Understanding Search and other Runtime components
Cloudera Search Architecture
Cloudera Search Tasks and Processes
Ingestion
Indexing
Querying
Operational Database
Apache HBase Overview
Overview of Apache HBase
Use cases for HBase
HBase on CDP
Apache Phoenix Overview
Overview of Apache Phoenix
Data Engineering
Apache Spark Overview
Apache Spark Overview
Unsupported Apache Spark Features
Apache Zeppelin Overview
Overview
CDP Security Overview
Cloudera Runtime Security and Governance
Governance
Governance Overview
Using metadata for cluster governance
Data Stewardship with Apache Atlas
Apache Atlas dashboard tour
Apache Atlas metadata collection overview
Atlas metadata model overview
Controlling Data Access with Tags
Atlas classifications drive Ranger policies
When to use Atlas classifications for access control
How tag-based access control works
Examples of controlling data access using classifications
Extending Atlas to Manage Metadata from Additional Sources
Top-down process for adding a new metadata source
Streams Messaging
Apache Kafka Overview
Kafka Introduction
Kafka Architecture
Brokers
Topics
Records
Partitions
Record order and assignment
Logs and log segments
Kafka brokers and Zookeeper
Leader positions and in-sync replicas
Kafka FAQ
Basics
Use cases
Cruise Control Overview
How Cruise Control rebalancing works
How Cruise Control retrieves metrics
Streams Messaging Manager Overview
Streams Messaging Manager Overview
Streams Replication Manager Overview
Overview
Key Features
Main Use Cases
Use Case Architectures
Highly Available Kafka Architectures
Active / Stand-by Architecture
Active / Active Architecture
Cross Data Center Replication
Cluster Migration Architectures
On-premise to Cloud and Kafka Version Upgrade
Aggregation for Analytics
Streams Replication Manager Architecture
Streams Replication Manager Driver
Connect workers
Connectors
Task architecture and load-balancing
Streams Replication Manager Service
Understanding Replication Flows
Replication Flows Overview
Remote Topics
Bi-directional Replication Flows
Fan-in and Fan-out Replication Flows
Schema Registry Overview
Schema Registry Overview
Examples of Interacting with Schema Registry
Schema Registry Use Cases
Use Case 1: Registering and Querying a Schema for a Kafka Topic
Use Case 2: Reading/Deserializing and Writing/Serializing Data from and to a Kafka Topic
Use Case 3: Dataflow Management with Schema-based Routing
Schema Registry Component Architecture
Schema Registry Concepts
Schema Entities
Compatibility Policies
Planning
Deployment Planning for Cloudera Search
Deployment Planning for Cloudera Search
Guidelines for Deploying Cloudera Search
Schemaless Mode Overview and Best Practices
Defining a Schema is Recommended for Production Use
Planning for Apache Impala
Guidelines for Schema Design
User Account Requirements
Planning for Streams Replication Manager
Streams Replication Manager requirements
Recommended deployment architecture
How To
Storage
Managing Data Storage
Optimizing data storage
Balancing data across disks of a DataNode
Plan the data movement across disks
Parameters to configure the Disk Balancer
Execute the Disk Balancer plan
Disk Balancer commands
Erasure coding overview
Understanding erasure coding policies
Comparing replication and erasure coding
Best practices for rack and node setup for EC
Prerequisites for enabling erasure coding
Limitations of erasure coding
Using erasure coding for existing data
Using erasure coding for new data
Advanced erasure coding configuration
Erasure coding CLI command
Erasure coding examples
Increasing storage capacity with HDFS compression
Enable GZipCodec as the default compression codec
Use GZipCodec with a one-time job
Setting HDFS quotas
Set quotas using Cloudera Manager
Configuring heterogeneous storage in HDFS
HDFS storage types
HDFS storage policies
Commands for configuring storage policies
Set up a storage policy for HDFS
Set up SSD storage using Cloudera Manager
Configure archival storage
The HDFS mover command
Balancing data across an HDFS cluster
Why HDFS data becomes unbalanced
Configurations and CLI options for the HDFS Balancer
Properties for configuring the Balancer
Balancer commands
Recommended configurations for the Balancer
Configuring and running the HDFS balancer using Cloudera Manager
Configuring the balancer threshold
Configuring concurrent moves
Recommended configurations for the balancer
Running the balancer
Configuring block size
Cluster balancing algorithm
Storage group classification
Storage group pairing
Block move scheduling
Block move execution
Exit statuses for the HDFS Balancer
Optimizing performance
Improving performance with centralized cache management
Benefits of centralized cache management in HDFS
Use cases for centralized cache management
Centralized cache management architecture
Caching terminology
Properties for configuring centralized caching
Commands for using cache pools and directives
Specifying racks for hosts
Viewing racks assigned to cluster hosts
Editing rack assignments for hosts
Customizing HDFS
Customize the HDFS home directory
Properties to set the size of the NameNode edits directory
Optimizing NameNode disk space with Hadoop archives
Overview of Hadoop archives
Hadoop archive components
Create a Hadoop archive
List files in Hadoop archives
Format for using Hadoop archives with MapReduce
Detecting slow DataNodes
Enable disk IO statistics
Enable detection of slow DataNodes
Allocating DataNode memory as storage
HDFS storage types
LAZY_PERSIST memory storage policy
Configure DataNode memory as storage
Improving performance with short-circuit local reads
Prerequisites for configuring short-circuit local reads
Properties for configuring short-circuit local reads on HDFS
Configure mountable HDFS
Add HDFS system mount
Optimize mountable HDFS
Using DistCp to copy files
Using DistCp
DistCp syntax and examples
Using DistCp with Highly Available remote clusters
Using DistCp with Amazon S3
Using a credential provider to secure S3 credentials
Examples of DistCp commands using the S3 protocol and hidden credentials
Kerberos setup guidelines for DistCp between secure clusters
DistCp between secure clusters in different Kerberos realms
Configure source and destination realms in krb5.conf
Configure HDFS RPC protection
Configure acceptable Kerberos principal patterns
Specify truststore properties
Set HADOOP_CONF to the destination cluster
Launch DistCp
Copying data between a secure and an insecure cluster using DistCp and WebHDFS
Post-migration verification
Using DistCp between HA clusters using Cloudera Manager
Using the NFS Gateway for accessing HDFS
Configure the NFS Gateway
Start and stop the NFS Gateway services
Verify validity of the NFS services
Access HDFS from the NFS Gateway
How NFS Gateway authenticates and maps users
Configuring Proxy Users to Access HDFS
Proxy users for Kerberos-enabled clusters
APIs for accessing HDFS
Set up WebHDFS on a secure cluster
Using HttpFS to provide access to HDFS
Add the HttpFS role
Using Load Balancer with HttpFS
HttpFS authentication
Use curl to access a URL protected by Kerberos HTTP SPNEGO
Data storage metrics
Using JMX for accessing HDFS metrics
Configure the G1GC garbage collector
Recommended settings for G1GC
Switching from CMS to G1GC
HDFS Metrics
Using HdfsFindTool to find files
Downloading HdfsFindTool from the CDH archives
Configuring Data Protection
Data protection
Backing up HDFS metadata
Introduction to HDFS metadata files and directories
Files and directories
NameNodes
JournalNodes
DataNodes
HDFS commands for metadata files and directories
Configuration properties
Back up HDFS metadata
Prepare to back up the HDFS metadata
Backing up NameNode metadata
Back up HDFS metadata using Cloudera Manager
Restoring NameNode metadata
Restore HDFS metadata from a backup using Cloudera Manager
Perform a backup of the HDFS metadata
Using HDFS snapshots for data protection
Considerations for working with HDFS snapshots
Enable snapshot creation on a directory
Create snapshots on a directory
Recover data from a snapshot
Options to determine differences between contents of snapshots
CLI commands to perform snapshot operations
Managing snapshot policies using Cloudera Manager
Create a snapshot policy
Edit or delete a snapshot policy
Enable and disable snapshot creation using Cloudera Manager
Create snapshots using Cloudera Manager
Delete snapshots using Cloudera Manager
Configuring HDFS trash
Trash behavior with HDFS Transparent Encryption enabled
Enabling and disabling trash
Setting the trash interval
Preventing inadvertent deletion of directories
Accessing Cloud Data
Cloud storage connectors overview
The Cloud Storage Connectors
Working with Amazon S3
Limitations of Amazon S3
Configuring Access to S3
Configuring Access to S3 on CDP Public Cloud
Configuring Access to S3 on CDP Private Cloud Base
Using Configuration Properties to Authenticate
Using Per-Bucket Credentials to Authenticate
Using Environment Variables to Authenticate
Using EC2 Instance Metadata to Authenticate
Referencing S3 Data in Applications
Configuring Per-Bucket Settings
Customizing Per-Bucket Secrets Held in Credential Files
Configuring Per-Bucket Settings to Access Data Around the World
Encrypting Data on S3
SSE-S3: Amazon S3-Managed Encryption Keys
Enabling SSE-S3
SSE-KMS: Amazon S3-KMS Managed Encryption Keys
Enabling SSE-KMS
IAM Role permissions for working with SSE-KMS
SSE-C: Server-Side Encryption with Customer-Provided Encryption Keys
Enabling SSE-C
Configuring Encryption for Specific Buckets
Encrypting an S3 Bucket with Amazon S3 Default Encryption
Performance Impact of Encryption
Using S3Guard for Consistent S3 Metadata
Introduction to S3Guard
Configuring S3Guard
Preparing the S3 Bucket
Choosing a DynamoDB Table and IO Capacity
Creating DynamoDB Access Policy
Restricting Access to S3Guard Tables
Configuring S3Guard in Cloudera Manager
Create the S3Guard Table in DynamoDB
Monitoring and Maintaining S3Guard
Disabling S3Guard and destroying a table
Pruning Old Data from S3Guard Tables
Importing a Bucket into S3Guard
Verifying that S3Guard is Enabled on a Bucket
Using the S3Guard CLI
S3Guard: Operational Issues
Safely Writing to S3 Through the S3A Committers
Introducing the S3A Committers
Configuring Directories for Intermediate Data
Using the Directory Committer in MapReduce
Verifying That an S3A Committer Was Used
Cleaning up after failed jobs
Using the S3Guard Command to List and Delete Uploads
Advanced Committer Configuration
Enabling Speculative Execution
Using Unique Filenames to Avoid File Update Inconsistency
Speeding up Job Commits by Increasing the Number of Threads
Securing the S3A Committers
The S3A Committers and Third-Party Object Stores
Limitations of the S3A Committers
Troubleshooting the S3A Committers
Security Model and Operations on S3
S3A and Checksums (Advanced Feature)
A List of S3A Configuration Properties
Working with versioned S3 buckets
Working with Third-party S3-compatible Object Stores
Improving Performance for S3A
Working with S3 buckets in the same AWS region
Configuring and tuning S3A block upload
Tuning S3A Uploads
Thread Tuning for S3A Data Upload
Optimizing S3A read performance for different file types
S3 Performance Checklist
Troubleshooting S3 and S3Guard
Working with Google Cloud Storage
Configuring Access to Google Cloud Storage
Create a GCP Service Account
Create a Custom Role
Modify GCS Bucket Permissions
Configure Access to GCS from Your Cluster
Additional Configuration Options for GCS
Configuring HDFS ACLs
HDFS ACLs
Configuring ACLs on HDFS
Using CLI commands to create and list ACLs
ACL examples
ACLs on HDFS features
Use cases for ACLs on HDFS
Enable authorization for HDFS web UIs
Enable authorization for additional HDFS web UIs
Configuring HSTS for HDFS Web UIs
Configuring Fault Tolerance
High Availability on HDFS clusters
Configuring HDFS High Availability
NameNode architecture
Preparing the hardware resources for HDFS High Availability
Using Cloudera Manager to manage HDFS HA
Enabling HDFS HA
Prerequisites for enabling HDFS HA using Cloudera Manager
Enabling High Availability and automatic failover
Disabling and redeploying HDFS HA
Configuring other CDP components to use HDFS HA
Configuring HBase to use HDFS HA
Configuring the Hive Metastore to use HDFS HA
Configuring Impala to work with HDFS HA
Configuring Oozie to use HDFS HA
Changing a nameservice name for Highly Available HDFS using Cloudera Manager
Manually failing over to the standby NameNode
Additional HDFS haadmin commands to administer the cluster
Turning safe mode on HA NameNodes
Converting from an NFS-mounted shared edits directory to Quorum-Based Storage
Administrative commands
Storing Data Using Ozone
Introduction to Ozone
Ozone architecture
How Ozone manages read operations
How Ozone manages write operations
Managing storage elements by using the command-line interface
Commands for managing volumes
Assigning administrator privileges to users
Commands for managing buckets
Commands for managing keys
Using Ozone S3 Gateway to work with storage elements
URL schema for Ozone S3 Gateway
URL to browse Ozone buckets
REST endpoints supported on Ozone S3 Gateway
Mapping for an Ozone volume in Amazon S3 API
Examples of using the Amazon Web Services command-line interface for S3 Gateway
Working with Ozone File System
Setting up OzoneFS
Configuration updates for Spark to work with OzoneFS
Overview of the Ozone Manager in High Availability
Considerations for configuring High Availability on Ozone Manager
Ozone Manager nodes in High Availability
Read and write requests with Ozone Manager in High Availability
Working with the Recon web user interface
Access the Recon web user interface
Elements of the Recon web user interface
Overview page
DataNodes page
Pipelines page
Missing Containers page
Configuring Ozone to work with Prometheus
Configuring Ozone Security
Working with Ozone ACLs
Using Ranger with Ozone
Kerberos configuration for Ozone
Security tokens in Ozone
Kerberos principal and keytab properties for Ozone service daemons
Securing DataNodes
Configure S3 credentials for working with Ozone
Configure Transparent Data Encryption for Ozone
Administering Apache Kudu
Apache Kudu administration
Starting and stopping Kudu processes
Kudu web interfaces
Kudu master web interface
Kudu tablet server web interface
Common web interface pages
Kudu metrics
Listing available metrics
Collecting metrics via HTTP
Diagnostics logging
Rack awareness (Location awareness)
Backup and restore
Backing up tables
Restoring tables from backups
Backup tools
Backup directory structure
Physical backups of an entire node
Common Kudu workflows
Migrating to multiple Kudu masters
Prepare for the migration
Perform the migration
Recovering from a dead Kudu master in a multi-master deployment
Prepare for the recovery
Perform the recovery
Removing Kudu masters from a multi-master deployment
Prepare for removal
Perform the removal
Changing master hostnames
Prepare for hostname changes
Perform hostname changes
Best practices when adding new tablet servers
Monitoring cluster health with ksck
Orchestrating a rolling restart with no downtime
Changing directory configuration
Recovering from disk failure
Recovering from full disks
Bringing a tablet that has lost a majority of replicas back online
Rebuilding a Kudu filesystem layout
Physical backups of an entire node
Scaling storage on Kudu master and tablet servers in the cloud
Migrating Kudu data from one directory to another on the same host
Minimizing cluster disruption during temporary planned downtime of a single tablet server
Running tablet rebalancing tool
Running a tablet rebalancing tool on a rack-aware cluster
Running a tablet rebalancing tool in Cloudera Manager
Decommissioning or permanently removing a tablet server from a cluster
Using cluster names in the kudu command line tool
Managing Kudu with Cloudera Manager
Enabling core dump for the Kudu service
Verifying the Impala dependency on Kudu
Using the Charts Library with the Kudu service
Kudu security
Kudu authentication with Kerberos
Internal private key infrastructure (PKI)
Authentication tokens
Client authentication to secure Kudu clusters
Scalability
Coarse-grained authorization
Fine-grained authorization
Apache Ranger
Authorization tokens
Trusted users
Configuring Kudu's integration with Apache Ranger
Ranger client caching
Encryption
Web UI encryption
Web UI redaction
Log redaction
Configuring a secure Kudu cluster using Cloudera Manager
Enabling Kerberos authentication and RPC encryption
Configuring coarse-grained authorization with ACLs
Enabling Ranger authorization
Configuring HTTPS encryption for the Kudu master and tablet server web UIs
Configuring a secure Kudu cluster using the command line
Apache Kudu background maintenance tasks
Maintenance manager
Flushing data to disk
Compacting on-disk data
Write-ahead log garbage collection
Tablet history garbage collection and the ancient history mark
Developing Applications with Apache Kudu
Developing applications with Apache Kudu
Viewing the API documentation
Kudu example applications
Maven artifacts
Building the Java client
Kudu Python client
Kudu integration with Spark
Upsert option in Kudu Spark
Using Spark with a secure Kudu cluster
Spark integration known issues and limitations
Spark integration best practices
Using Apache Impala with Apache Kudu
Using Apache Impala with Apache Kudu
Impala database containment model
Internal and external Impala tables
Using Impala to query Kudu tables
Querying an existing Kudu table from Impala
Creating a new Kudu table from Impala
CREATE TABLE AS SELECT
Partitioning tables
Basic partitioning
Advanced partitioning
Non-covering range partitions
Partitioning guidelines
Optimizing performance for evaluating SQL predicates
Inserting a row
Inserting in bulk
INSERT and primary key uniqueness violations
Updating a row
Updating in bulk
Upserting a row
Altering a table
Deleting a row
Deleting in bulk
Failures during INSERT, UPDATE, UPSERT, and DELETE operations
Altering table properties
Dropping a Kudu table using Impala
Security considerations
Known issues and limitations
Next steps
Compute
Using YARN Web UI and CLI
Access the YARN Web User Interface
View Cluster Overview
View Nodes and Node Details
View Queues and Queue Details
View All Applications
Search applications
View application details
UI Tools
Use the YARN CLI to View Logs for Applications
Configuring Apache Hadoop YARN Security
Linux Container Executor
Managing Access Control Lists
YARN ACL rules
YARN ACL syntax
YARN ACL types
Admin ACLs
Queue ACLs
Application ACLs
Application ACL evaluation
MapReduce Job ACLs
Spark Job ACLs
Application logs' ACLs
Configure TLS/SSL for Core Hadoop Services
Configure TLS/SSL for HDFS
Configure TLS/SSL for YARN
Configure Cross-Origin Support for YARN UIs and REST APIs
Configure YARN Security for Long-Running Applications
Configuring Apache Hadoop YARN High Availability
YARN ResourceManager High Availability
YARN ResourceManager high availability architecture
Configure YARN ResourceManager high availability
Use the yarn rmadmin tool to administer ResourceManager high availability
Work Preserving Recovery for YARN components
Configure work preserving recovery on ResourceManager
Configure work preserving recovery on NodeManager
Example: Configuration for work preserving recovery
Managing and Allocating Cluster Resources using Capacity Scheduler
Resource Scheduling and Management
YARN resource allocation of multiple resource-types
Hierarchical queue characteristics
Scheduling among queues
Application reservations
Resource distribution workflow
Use CPU scheduling
Configure CPU scheduling and isolation
Use CPU scheduling with distributed shell
Use GPU scheduling
Configure GPU scheduling and isolation
Use GPU scheduling with distributed shell
Use FPGA scheduling
Configure FPGA scheduling and isolation
Use FPGA with distributed shell
Limit CPU usage with Cgroups
Use Cgroups
Enable Cgroups
Partition a cluster using node labels
Configure node labels
Use node labels
Manage Queues
Prerequisite
Add queues using YARN Queue Manager UI
Configuring cluster capacity with Queues
Start and stop queues
Delete queues
Configure Scheduler Properties at the Global Level
Set global maximum application priority
Configure preemption
Enable Intra-Queue preemption
Set global application limits
Set default Application Master resource limit
Enable asynchronous scheduler
Configure placement rules
Dynamic queues
Create placement rules
Reorder placement rules
Edit placement rules
Delete placement rules
Configure queue mapping to use the user name from the application tag using Cloudera Manager
Configure NodeManager heartbeat
Configure data locality
Configure Per Queue Properties
Set user limits within a queue
Set Maximum Application limit for a specific queue
Set Application-Master resource-limit for a specific queue
Control access to queues using ACLs
Enable preemption for a specific queue
Enable Intra-Queue Preemption for a specific queue
Configure dynamic queue properties
Set Ordering policies within a specific queue
Configure queue ordering policies
Associate node labels with queues
Enable override of default queue mappings at individual queue level
Managing Apache Hadoop YARN Services
Configure YARN Services API to Manage Long-running Applications
Configure YARN Services using Cloudera Manager
Running YARN Services
Deploy and manage services on YARN
Launch a YARN service
Save a YARN service definition
Create new YARN services using UI
Create a standard YARN service
Create a custom YARN service
Manage the YARN service life cycle through the REST API
YARN services API examples
Managing YARN Docker Containers
Configuring YARN Docker Containers Support
Prerequisites for installing Docker
Recommendations for managing Docker containers on YARN
Install Docker
Configure Docker
Configure YARN for managing Docker containers
Docker on YARN configuration properties
Running Dockerized Applications on YARN
Docker on YARN example: MapReduce job
Docker on YARN example: DistributedShell
Docker on YARN example: Spark-on-Docker-on-YARN
Configuring Apache Hadoop YARN Log Aggregation
YARN Log Aggregation Overview
Log Aggregation File Controllers
Configure Log Aggregation
Log Aggregation Properties
Configure Debug Delay
Managing Apache ZooKeeper
Add a ZooKeeper service
Use multiple ZooKeeper services
Replace a ZooKeeper disk
Replace a ZooKeeper role with ZooKeeper service downtime
Replace a ZooKeeper role without ZooKeeper service downtime
Replace a ZooKeeper role on an unmanaged cluster
Confirm the election status of a ZooKeeper service
Configuring Apache ZooKeeper
Enable the AdminServer
Configure four-letter-word commands in ZooKeeper
Managing Apache ZooKeeper Security
ZooKeeper Authentication
Configure ZooKeeper server for Kerberos authentication
Configure ZooKeeper client shell for Kerberos authentication
Verify the ZooKeeper authentication
Enable server-server mutual authentication
ZooKeeper ACLs Best Practices
ZooKeeper ACLs Best Practices: Atlas
ZooKeeper ACLs Best Practices: HBase
ZooKeeper ACLs Best Practices: HDFS
ZooKeeper ACLs Best Practices: Kafka
ZooKeeper ACLs Best Practices: Oozie
ZooKeeper ACLs Best Practices: Ranger
ZooKeeper ACLs Best Practices: Search
ZooKeeper ACLs Best Practices: YARN
ZooKeeper ACLs Best Practices: ZooKeeper
Configure ZooKeeper TLS/SSL using Cloudera Manager
Data Access
Using Data Analytics Studio
Compose queries
Manage queries
Searching queries
Refining query search using filters
Saving the search results
Compare queries
View query details
Viewing the query recommendations
Viewing the query details
Viewing the visual explain for a query
Viewing the Hive configurations for a query
Viewing the query timeline
Viewing the task-level DAG information
Viewing the DAG flow
Viewing the DAG counters
Viewing the Tez configurations for a query
Manage databases and tables
Using the Database Explorer
Searching tables
Managing tables
Creating tables
Uploading tables
Editing tables
Deleting tables
Managing columns
Managing partitions
Viewing storage information
Viewing detailed information
Viewing table and column statistics
Previewing tables using Data Preview
Manage reports
Viewing the Read and Write report
Viewing the Join report
DAS administration using Cloudera Manager in CDP
Running a query on a different Hive instance
Modifying the session cookie timeout value
Configuring user authentication
Configuring user authentication using SPNEGO
Configuring user authentication using LDAP
Configuring TLS/SSL encryption manually for DAS using Cloudera Manager
Cleaning up old queries, DAG information, and reports data
DAS administration using Ambari in CDP
Running a query on a different Hive instance
Cleaning up old queries, DAG information, and reports data using Ambari
Creating system tables to run query on Hive and Tez DAG events
Changing the retention period of DAS event logs
Working with Apache Hive Metastore
HMS table storage
Configuring HMS for high availability
Configure HMS properties for authorization
Filter HMS results
Setting up the metastore database
Setting up the backend Hive metastore database
Set up MariaDB or MySQL database
Set up a PostgreSQL database
Set up an Oracle database
Configure metastore database properties
Configuring metastore location
Set up a JDBC URL connection override
Tuning the metastore
Starting Apache Hive
Start Hive on an insecure cluster
Start Hive using a password
Run a Hive command
Converting Hive CLI scripts to Beeline
Using Apache Hive
Apache Hive 3 tables
Locating Hive tables and changing the location
Refer to a table using dot notation
Create a CRUD transactional table
Create an insert-only transactional table
Create, use, and drop an external table
Drop an external table along with data
Convert a managed, non-transactional table to external
Using constraints
Determine the table type
Hive 3 ACID transactions
Using materialized views
Create and use a materialized view
Use materialized view optimizations from a subquery
Drop a materialized view
Show materialized views
Describe a materialized view
Manage query rewrites
Create and use a partitioned materialized view
▶︎
Scheduling queries
Enable scheduled queries
Periodically rebuild a materialized view
Get scheduled query information and monitor the query
▶︎
Apache Hive query basics
Query the information_schema database
Insert data into a table
Update data in a table
Merge data in tables
Delete data from a table
▶︎
Create a temporary table
Configure temporary table storage
▶︎
Use a subquery
Subquery restrictions
Aggregate and group data
Query correlated data
▶︎
Using common table expressions
Use a CTE in a query
Escape an illegal identifier
CHAR data type support
ORC vs Parquet in CDP
Create a default directory for managed tables
Partitions introduction
Create partitions dynamically
▶︎
Manage partitions
Automate partition discovery and repair
Repair partitions manually using MSCK repair
Manage partition retention time
Generate surrogate keys
Using JdbcStorageHandler to query RDBMS
▶︎
Using functions
Reload, view, and filter functions
▶︎
Create a user-defined function
Set up the development environment
Create the UDF class
Build the project and upload the JAR
Register the UDF
Call the UDF in a query
▶︎
Managing Apache Hive
▶︎
ACID operations
Configure partitions for transactions
View transactions
View transaction locks
▶︎
Data compaction
Compaction prerequisites
Enable automatic compaction
Start compaction manually
View compaction progress
Disable automatic compaction
Compactor properties
▶︎
Query vectorization
Configuring query vectorization
Check query execution
Tracking Hive on Tez query execution
Tracking an Apache Hive query in YARN
Application not running message
▶︎
Configuring Apache Hive
Limit concurrent connections
▶︎
Configuring HiveServer high availability using a load balancer
Configuring the Hive Delegation Token Store
Adding a HiveServer role
Configuring the HiveServer load balancer
Configuring HiveServer high availability using ZooKeeper
▶︎
Generating statistics
Set up the cost-based optimizer and statistics
Generate and view Apache Hive statistics
Statistics generation and viewing commands
Removing scratch directories
▶︎
Securing Apache Hive
Authorizing Apache Hive Access
Transactional table access
External table access
Disabling impersonation (doas)
Managing YARN queue users
Configure HiveServer for ETL using YARN queues
Connecting to an Apache Hive endpoint through Apache Knox
Apache Spark access to Apache Hive
▶︎
Hive Authentication
Secure HiveServer using LDAP
Client connections to HiveServer
Pluggable authentication modules in HiveServer
JDBC connection string syntax
▶︎
Encrypting Communication
Enable TLS/SSL for HiveServer
Enable SASL in HiveServer
Secure Hive Metastore
Activating the Hive Web UI
▶︎
Integrating Apache Hive with Apache Spark and BI
▶︎
Hive Warehouse Connector for accessing Apache Spark data
HWC execution modes
Spark Direct Reader mode
JDBC execution mode
Automating mode selection
Configuring Spark Direct Reader mode
Configuring JDBC execution mode
Kerberos configurations for HWC
Configuring external file authorization
Reading managed tables through HWC
Writing managed tables through HWC
▶︎
API operations
HWC supported types mapping
Catalog operations
Read and write operations
Commit transaction in Spark Direct Reader mode
Close HiveWarehouseSession operations
Use HWC for streaming
HWC API Examples
Hive Warehouse Connector Interfaces
HWC configuration planning
Submit a Scala or Java application
Submit a Python app
▶︎
Apache Hive-Kafka integration
Create a table for a Kafka stream
▶︎
Querying Kafka data
Query live data from Kafka
Perform ETL by ingesting data from Kafka into Hive
▶︎
Writing data to Kafka
Write transformed Hive data to Kafka
Set consumer and producer properties as table properties
Kafka storage handler and table properties
▶︎
Connecting Hive to BI tools using a JDBC/ODBC driver
Getting the JDBC driver
Integrating Hive and a BI tool
Specify the JDBC connection string
JDBC connection string syntax
Using JdbcStorageHandler to query RDBMS
Set up JDBCStorageHandler for Postgres
▶︎
Apache Hive Performance Tuning
Low-latency analytical processing
Query results cache
Best practices for performance tuning
▶︎
Maximizing storage resources using ORC
Advanced ORC properties
Improving performance using partitions
Handling bucketed tables
▶︎
Migrating Data Using Sqoop
Data migration to Apache Hive
Set Up Sqoop
▶︎
Moving data from databases to Apache Hive
Create a Sqoop import command
Import RDBMS data into Hive
▶︎
Moving data from HDFS to Apache Hive
Import RDBMS data to HDFS
Convert an HDFS file to ORC
Incrementally update an imported table
Import command options
▶︎
Managing Apache Impala
Configuring Impala
Modifying Impala Startup Options
▶︎
Monitoring Impala
▶︎
Impala Logs
Managing Logs
Impala lineage
▶︎
Web User Interface for Debugging
Debug Web UI for Impala Daemon
Debug Web UI for StateStore
Debug Web UI for Catalog Server
Configuring Impala Web UI
Stopping Impala
▶︎
Securing Impala
Configuring Impala TLS/SSL
▶︎
Impala Authentication
Configuring Kerberos Authentication
▶︎
Configuring LDAP Authentication
Enabling LDAP for Impala in Hue
Enabling LDAP Authentication for impala-shell
▶︎
Impala Authorization
Configuring Authorization
▶︎
Tuning Impala
Setting Up HDFS Caching
Setting up Data Cache for Remote Reads
Configuring Dedicated Coordinators and Executors
▶︎
Managing Resources in Impala
Admission Control and Query Queuing
Enabling Admission Control
Creating Static Pools
Configuring Dynamic Resource Pool
Dynamic Resource Pool Settings
Admission Control Sample Scenario
Cancelling a Query
Managing disk space for Impala data
▶︎
Managing Metadata in Impala
On-demand Metadata
Automatic Invalidation of Metadata Cache
▶︎
Automatic Invalidation/Refresh of Metadata
Configuring Event Based Automatic Metadata Sync
▶︎
Setting Timeouts in Impala
Setting Timeout and Retries for Thrift Connections to Backend Client
Increasing StateStore Timeout
Setting the Idle Query and Idle Session Timeouts
Configuring Load Balancer for Impala
▶︎
Configuring Client Access to Impala
▶︎
Impala Shell Tool
Impala Shell Configuration Options
Impala Shell Configuration File
Connecting to Impala Daemon in Impala Shell
Running Commands and SQL Statements in Impala Shell
Impala Shell Command Reference
Configuring ODBC for Impala
Configuring JDBC for Impala
Configuring Delegation for Clients
Spooling Query Results
▶︎
Using Hue
Using Hue
Enabling the SQL editor autocompleter
▶︎
Using governance-based data discovery
Searching metadata tags
▶︎
Administering Hue
Reference architecture
Hue configuration files
Hue Advanced Configuration Snippet
Hue logs
Hue supported browsers
Adding a Hue service with Cloudera Manager
Adding a Hue role instance with Cloudera Manager
▶︎
Customizing the Hue web UI
Adding a custom banner
Changing the page logo
Setting the cache timeout
Enabling or disabling anonymous usage date collection
Enabling Hue applications with Cloudera Manager
Running shell commands
Downloading and exporting data from Hue
Backing up the Hue database
Connect an external database
Enabling a multi-threaded environment for Hue
▶︎
Moving the Hue service to a different host
Adding and configuring a new Hue service on a new host
Adding new role instances for Hue server, Hue Load Balancer, and Kerberos Ticket Renewer on new hosts
▶︎
Securing Hue
▶︎
User management in Hue
Understanding Hue users and groups
Finding the list of Hue superusers
Creating a Hue user
Creating a group in Hue
Managing Hue permissions
Resetting Hue user password
Assigning superuser status to an LDAP user
▶︎
User authentication in Hue
Authentication using Kerberos
▶︎
Authentication using LDAP
Import and sync LDAP users and groups
Configuring authentication with LDAP and Search Bind
Configuring authentication with LDAP and Direct Bind
Multi-server LDAP/AD authentication
Testing the LDAP configuration
Configuring group permissions
Enabling LDAP authentication with HiveServer2 and Impala
LDAP properties
Configuring LDAP on unmanaged clusters
▶︎
Authentication using SAML
Configuring SAML authentication on managed clusters
Manually configuring SAML authentication
Integrating your identity provider's SAML server with Hue
SAML properties
Troubleshooting SAML authentication
Authentication using Knox SSO
Applications and permissions reference
Securing Hue passwords with scripts
▶︎
Configuring TLS/SSL for Hue
Creating a truststore file in PEM format
Configuring Hue as a TLS/SSL client
Enabling Hue as a TLS/SSL client
Configuring Hue as a TLS/SSL server
Enabling Hue as a TLS/SSL server using Cloudera Manager
Enabling TLS/SSL for the Hue Load Balancer
Enabling TLS/SSL communication with HiveServer2
Enabling TLS/SSL communication with Impala
Securing database connections with TLS/SSL
Enforcing TLS version 1.2 for Hue
Securing sessions
Specifying HTTP request methods
Restricting supported ciphers for Hue
Specifying domains or pages to which Hue can redirect users
Setting Oozie permissions
▶︎
Tuning Hue
Adding a load balancer
▶︎
Configuring high availability
Configuring Hive and Impala for high availability with Hue
Configuring for HDFS high availability
▶︎
Search Tutorial
▶︎
Tutorial
▶︎
Validating the Cloudera Search Deployment
Create a Test Collection
Index Sample Data
Query Sample Data
▶︎
Indexing Sample Tweets with Cloudera Search
▶︎
Preparing to Index Sample Tweets with Cloudera Search
Create a Collection for Tweets
Copy Sample Tweets to HDFS
▶︎
Using MapReduce Batch Indexing to Index Sample Tweets
Batch Indexing into Online Solr Servers Using GoLive
Batch Indexing into Offline Solr Shards
▶︎
Securing Cloudera Search
Cloudera Search Security Overview
Configure TLS/SSL encryption for Solr
▶︎
Cloudera Search Authentication
Configure Kerberos Authentication for Solr
Enable Kerberos Authentication in Solr
Set Proxy Server Authentication for Clusters Using Kerberos
Overview of Proxy Usage and Load Balancing for Search
Enable LDAP Authentication in Solr
Enabling Solr Clients to Authenticate with a Secure Solr
Enable Ranger Authorization in Solr
▶︎
Tuning Cloudera Search
Solr Server Tuning Categories
Setting Java System Properties for Solr
Setting Lucene Version
Enable multi-threaded faceting
Tuning Garbage Collection
Enable Garbage Collector Logging
Solr and HDFS - the Block Cache
▶︎
Tuning Replication
Adjust the Solr replication factor for index files stored in HDFS
▶︎
Managing Cloudera Search
▶︎
Managing
Generating Solr collection configuration using instance directories
Creating Collections
Using Custom JAR Files with Search
Cloudera Search Configuration Files
▶︎
Managing Configuration Using Configs or Instance Directories
Managing Configs
Managing Instance Directories
Securing configs with ZooKeeper ACLs and Ranger
Config Templates
Updating the Schema in a Solr Collection
▶︎
Managing Collections in Cloudera Search
Creating a Solr Collection
Viewing Existing Solr Collections
Deleting All Documents in a Solr Collection
Backing Up and Restoring Solr Collections
Deleting a Solr Collection
▶︎
Example solrctl Usage
Using solrctl with an HTTP proxy
Creating Replicas of Existing Shards
Converting Instance Directories to Configs
Migrating Solr Replicas
▶︎
Backing Up and Restoring Cloudera Search
Backing Up a Solr Collection
Restoring a Solr Collection
Cloudera Search Backup and Restore Command Reference
solrctl Reference
▶︎
Cloudera Search ETL
▶︎
ETL with Cloudera Morphlines
Example Morphline Usage
▶︎
Indexing Data Using Morphlines
▶︎
Indexing Data
▶︎
Near Real Time Indexing
▶︎
Lily HBase Near Real Time Indexing for Cloudera Search
Enabling Cluster-wide HBase Replication
Adding the Lily HBase Indexer Service
Starting the Lily HBase NRT Indexer Service
▶︎
Using the Lily HBase NRT Indexer Service
Enabling Replication on HBase Column Families
Creating a Collection in Cloudera Search
Creating a Lily HBase Indexer Configuration File
Creating a Morphline Configuration File
Understanding the extractHBaseCells Morphline Command
Registering a Lily HBase Indexer Configuration with the Lily HBase Indexer Service
Verifying that Indexing Works
Using the Indexer HTTP Interface
▶︎
Configuring Lily HBase Indexer Security
Configure Lily HBase Indexer to use TLS/SSL
Configure Lily HBase Indexer Service to Use Kerberos Authentication
▶︎
Batch Indexing
Spark Indexing
▶︎
MapReduce Indexing
▶︎
MapReduceIndexerTool
MapReduceIndexerTool Input Splits
MapReduceIndexerTool Metadata
MapReduceIndexerTool Usage Syntax
▶︎
Lily HBase Batch Indexing for Cloudera Search
Populating an HBase Table
Creating a Collection in Cloudera Search
Creating a Lily HBase Indexer Configuration File
Creating a Morphline Configuration File
Understanding the extractHBaseCells Morphline Command
Running HBaseMapReduceIndexerTool
HBaseMapReduceIndexerTool command line reference
Using --go-live with SSL or Kerberos
Understanding --go-live and HDFS ACLs
▶︎
Operational Database
▶︎
Configuring Apache HBase
Using DNS with HBase
Use the Network Time Protocol (NTP) with HBase
Configure the graceful shutdown timeout property
▶︎
Setting user limits for HBase
Configure ulimit for HBase using Cloudera Manager
Configuring ulimit for HBase
Configure ulimit using Pluggable Authentication Modules from the Command Line
Using dfs.datanode.max.transfer.threads with HBase
Configure encryption in HBase
▶︎
Using hedged reads
Enable hedged reads for HBase
Monitor the performance of hedged reads
▶︎
Understanding HBase garbage collection
Configure HBase garbage collection
Disable the BoundedByteBufferPool
Configure the HBase canary
▶︎
Using HBase blocksize
Configure the blocksize for a column family
▶︎
Configuring HBase BlockCache
Contents of the BlockCache
Size the BlockCache
Decide to use the BucketCache
▶︎
About the Off-heap BucketCache
Off-heap BucketCache
BucketCache IO engine
Configure BucketCache IO engine
Configure the off-heap BucketCache using Cloudera Manager
Configure the off-heap BucketCache using the command line
Cache eviction priorities
Bypass the BlockCache
Monitor the BlockCache
▶︎
Using quota management
Configuring quotas
General Quota Syntax
▶︎
Throttle quotas
Throttle quota examples
Space quotas
Quota enforcement
Quota violation policies
▶︎
Impact of quota violation policy
Live write access
Bulk Write Access
Read access
Metrics and Insight
Examples of overlapping quota policies
Number-of-Tables Quotas
Number-of-Regions Quotas
▶︎
Using HBase scanner heartbeat
Configure the scanner heartbeat using Cloudera Manager
▶︎
Storing medium objects (MOBs)
Prerequisites
Configure columns to store MOBs
Configure the MOB cache using Cloudera Manager
Test MOB storage and retrieval performance
MOB cache properties
▶︎
Limiting the speed of compactions
Configure the compaction speed using Cloudera Manager
Enable HBase indexing
▶︎
Using HBase coprocessors
Add a custom coprocessor
Disable loading of coprocessors
▶︎
Configuring HBase MultiWAL
Configuring MultiWAL support using Cloudera Manager
▶︎
Configuring the storage policy for the Write-Ahead Log (WAL)
Configure the storage policy for WALs using Cloudera Manager
Configure the storage policy for WALs using the Command Line
▶︎
Using RegionServer grouping
Enable RegionServer grouping using Cloudera Manager
Configure RegionServer grouping
Monitor RegionServer grouping
Remove a RegionServer from RegionServer grouping
Enabling ACL for RegionServer grouping
Best practices when using RegionServer grouping
Disable RegionServer grouping
▶︎
Optimizing HBase I/O
HBase I/O components
Advanced configuration for write-heavy workloads
▶︎
Managing Apache HBase Security
▶︎
HBase authentication
Configure HBase servers to authenticate with a secure HDFS cluster
Configure secure HBase replication
Configure the HBase client TGT renewal period
HBase authorization
▶︎
Configuring TLS/SSL for HBase
Prerequisites to configure TLS/SSL for HBase
Configure TLS/SSL for HBase Web UIs
Configure TLS/SSL for HBase REST Server
Configure TLS/SSL for HBase Thrift Server
▶︎
Accessing Apache HBase
▶︎
Use the HBase shell
Virtual machine options for HBase Shell
Script with HBase Shell
Use the HBase command-line utilities
Use the HBase APIs for Java
▶︎
Use the HBase REST server
Installing the REST Server using Cloudera Manager
Using the REST API
Using the REST proxy API
Use the Apache Thrift Proxy API
▶︎
Use the Hue HBase app
Configure the HBase thrift server role
▶︎
Managing Apache HBase
▶︎
Starting and stopping HBase using Cloudera Manager
Start HBase
Stop HBase
▶︎
Graceful HBase shutdown
Gracefully shut down an HBase RegionServer
Gracefully shut down the HBase service
▶︎
Importing data into HBase
Choose the right import method
Use snapshots
Use CopyTable
▶︎
Use BulkLoad
Use cases for BulkLoad
Use cluster replication
Use Sqoop
Use Spark
Use a custom MapReduce job
▶︎
Use HashTable and SyncTable Tool
HashTable/SyncTable tool configuration
Synchronize table data using HashTable/SyncTable tool
▶︎
Writing data to HBase
Variations on Put
Versions
Deletion
Examples
▶︎
Reading data from HBase
Perform scans using HBase Shell
▶︎
HBase filtering
Dynamically loading a custom filter
Logical operators, comparison operators and comparators
Compound operators
Filter types
HBase Shell example
Java API example
HBase online merge
Move HBase Master Role to another host
Expose HBase metrics to a Ganglia server
▶︎
Using the HBase-Spark connector
Example: Using the HBase-Spark connector
▶︎
Configuring Apache HBase High Availability
Enable HBase high availability using Cloudera Manager
HBase read replicas
Timeline consistency
Keep replicas current
Read replica properties
Configure read replicas using Cloudera Manager
▶︎
Using rack awareness for read replicas
Create a topology map
Create a topology script
Activate read replicas on a table
Request a timeline-consistent read
▶︎
Using Apache HBase Backup and Disaster Recovery
HBase backup and disaster recovery strategies
▶︎
Configuring HBase snapshots
About HBase snapshots
Configure snapshots
▶︎
Manage HBase snapshots using Cloudera Manager
Browse HBase tables
Take HBase snapshots
▶︎
Store HBase snapshots on Amazon S3
Configure HBase in Cloudera Manager to store snapshots in Amazon S3
Configure the dynamic resource pool used for exporting and importing snapshots in Amazon S3
HBase snapshots on Amazon S3 with Kerberos enabled
Manage HBase snapshots on Amazon S3 in Cloudera Manager
Delete HBase snapshots from Amazon S3
Restore an HBase snapshot from Amazon S3
Restore an HBase snapshot from Amazon S3 with a new name
Manage Policies for HBase snapshots in Amazon S3
▶︎
Manage HBase snapshots using the HBase shell
Shell commands
Take a snapshot using a shell script
Export a snapshot to another cluster
▶︎
Snapshot failures
Information and debugging
▶︎
Using HBase replication
Common replication topologies
Notes about replication
Replication requirements
▶︎
Deploy HBase replication
Replication across three or more clusters
Enable replication on a specific table
Configure secure replication
▶︎
Configure bulk load replication
Enable bulk load replication using Cloudera Manager
Create empty table on the destination cluster
Disable replication at the peer level
Stop replication in an emergency
▶︎
Initiate replication when data already exists
Replicate pre-existing data in an active-active deployment
Effects of WAL rolling on replication
Configure secure HBase replication
Restore data from a replica
Verify that replication works
Replication caveats
▶︎
Configuring Apache HBase for Apache Phoenix
Configure HBase for use with Phoenix
▶︎
Using Apache Phoenix to Store and Access Data
▶︎
Mapping Phoenix schemas to HBase namespaces
Enable namespace mapping
▶︎
Associating tables of a schema to a namespace
Associate a table in a customized Kerberos environment
Associate a table in a non-customized environment without Kerberos
Using JDBC API
▶︎
Connecting to PQS using JDBC
Connect to Phoenix Query Server
Connect to PQS through Apache Knox
Using non-JDBC drivers
▶︎
Understanding Apache Phoenix-Spark connector
Configure Phoenix-Spark connector using Cloudera Manager
Phoenix-Spark connector usage examples
▶︎
Understanding Apache Phoenix-Hive connector
Configure Phoenix-Hive connector using Cloudera Manager
Apache Phoenix-Hive usage examples
Limitations of Phoenix-Hive connector
▶︎
Managing Apache Phoenix Security
Managing Apache Phoenix security
Enable Phoenix ACLs
Configure TLS encryption manually for Phoenix Query Server
▶︎
Data Engineering
▶︎
Configuring Apache Spark
▶︎
Configuring dynamic resource allocation
Customize dynamic resource allocation settings
Configure a Spark job for dynamic resource allocation
Dynamic resource allocation properties
▶︎
Spark security
Enabling Spark authentication
Enabling Spark Encryption
Running Spark applications on secure clusters
Accessing compressed files in Spark
▶︎
Developing Apache Spark Applications
Introduction
Spark application model
Spark execution model
Developing and running an Apache Spark WordCount application
Using the Spark DataFrame API
▶︎
Building Spark Applications
Best practices for building Apache Spark applications
Building reusable modules in Apache Spark applications
Packaging different versions of libraries with an Apache Spark application
▶︎
Using Spark SQL
SQLContext and HiveContext
Querying files into a DataFrame
Spark SQL example
Interacting with Hive views
Performance and storage considerations for Spark SQL DROP TABLE PURGE
TIMESTAMP compatibility for Parquet files
Accessing Spark SQL through the Spark shell
Calling Hive user-defined functions (UDFs)
▶︎
Using Spark Streaming
Spark Streaming and Dynamic Allocation
Spark Streaming Example
Enabling fault-tolerant processing in Spark Streaming
Configuring authentication for long-running Spark Streaming jobs
Building and running a Spark Streaming application
Sample pom.xml file for Spark Streaming with Kafka
▶︎
Accessing external storage from Spark
▶︎
Accessing data stored in Amazon S3 through Spark
Examples of accessing Amazon S3 data from Spark
Accessing Hive from Spark
Accessing HDFS Files from Spark
▶︎
Accessing ORC Data in Hive Tables
Accessing ORC files from Spark
Predicate push-down optimization
Loading ORC data into DataFrames using predicate push-down
Optimizing queries using partition pruning
Enabling vectorized query execution
Reading Hive ORC tables
Accessing Avro data files from Spark SQL applications
Accessing Parquet files from Spark SQL applications
▶︎
Using Spark MLlib
Running a Spark MLlib example
Enabling Native Acceleration For MLlib
Using custom libraries with Spark
▶︎
Running Apache Spark Applications
Introduction
Running your first Spark application
Running sample Spark applications
▶︎
Configuring Spark Applications
Configuring Spark application properties in spark-defaults.conf
Configuring Spark application logging properties
▶︎
Submitting Spark applications
spark-submit command options
Spark cluster execution overview
Canary test for pyspark command
Fetching Spark Maven dependencies
Accessing the Spark History Server
▶︎
Running Spark applications on YARN
Spark on YARN deployment modes
Submitting Spark Applications to YARN
Monitoring and Debugging Spark Applications
Example: Running SparkPi on YARN
Configuring Spark on YARN Applications
Dynamic allocation
▶︎
Submitting Spark applications using Livy
Using Livy with Spark
Using Livy with interactive notebooks
Using the Livy API to run Spark jobs
▶︎
Running an interactive session with the Livy API
Livy objects for interactive sessions
Setting Python path variables for Livy
Livy API reference for interactive sessions
▶︎
Submitting batch applications using the Livy API
Livy batch object
Livy API reference for batch jobs
▶︎
Using PySpark
Running PySpark in a virtual environment
Running Spark Python applications
Automating Spark Jobs with Oozie Spark Action
▶︎
Tuning Apache Spark
Introduction
Check Job Status
Check Job History
Improving Software Performance
▶︎
Tuning Apache Spark Applications
Tuning Spark Shuffle Operations
Choosing Transformations to Minimize Shuffles
When Shuffles Do Not Occur
When to Add a Shuffle Transformation
Secondary Sort
Tuning Resource Allocation
Resource Tuning Example
Tuning the Number of Partitions
Reducing the Size of Data Structures
Choosing Data Formats
▶︎
CDS 3 (Experimental) Powered by Apache Spark
CDS 3 Overview
CDS 3 Requirements
Installing CDS 3
CDS 3 Packaging and Download
Using the CDS 3 Maven Repo
CDS 3.0 Maven Artifacts
▶︎
Configuring Apache Zeppelin
Introduction
Configuring Livy
Configure User Impersonation for Access to Hive
Configure User Impersonation for Access to Phoenix
▶︎
Enabling Access Control for Zeppelin Elements
Enable Access Control for Interpreter, Configuration, and Credential Settings
Enable Access Control for Notebooks
Enable Access Control for Data
▶︎
Shiro Settings: Reference
Active Directory Settings
LDAP Settings
General Settings
shiro.ini Example
▶︎
Using Apache Zeppelin
Introduction
Launch Zeppelin
▶︎
Working with Zeppelin Notes
Create and Run a Note
Import a Note
Export a Note
Using the Note Toolbar
Import External Packages
▶︎
Configuring and Using Zeppelin Interpreters
Modify interpreter settings
Using Zeppelin Interpreters
Customize interpreter settings in a note
Use the JDBC interpreter to access Hive
Use the JDBC interpreter to access Phoenix
Use the Livy interpreter to access Spark
Using Spark Hive Warehouse and HBase Connector Client .jar files with Livy
▶︎
Security
▶︎
Apache Ranger Auditing
Audit Overview
▶︎
Managing Auditing with Ranger
View audit details
Create a read-only Admin user (Auditor)
▶︎
Apache Ranger Authorization
Using Ranger to Provide Authorization in CDP
▶︎
Ranger Policies Overview
Ranger tag-based policies
Tags and policy evaluation
Ranger access conditions
▶︎
Using the Ranger Console
Accessing the Ranger console
Ranger console navigation
▶︎
Resource-based Services and Policies
▶︎
Configuring resource-based services
Configure a resource-based service: Atlas
Configure a resource-based service: HBase
Configure a resource-based service: HDFS
Configure a resource-based service: Hive
Configure a resource-based service: Kafka
Configure a resource-based service: Knox
Configure a resource-based service: NiFi
Configure a resource-based service: NiFi Registry
Configure a resource-based service: Solr
Configure a resource-based service: YARN
▶︎
Configuring resource-based policies
Configure a resource-based policy: Atlas
Configure a resource-based policy: HBase
Configure a resource-based policy: HDFS
Configure a resource-based policy: Hive
Configure a resource-based policy: Kafka
Configure a resource-based policy: Knox
Configure a resource-based policy: NiFi
Configure a resource-based policy: NiFi Registry
Configure a resource-based policy: Solr
Configure a resource-based policy: YARN
Wildcards and variables in resource-based policies
Preloaded resource-based services and policies
▶︎
Importing and exporting resource-based policies
Import resource-based policies for a specific service
Import resource-based policies for all services
Export resource-based policies for a specific service
Export all resource-based policies for all services
▶︎
Row-level filtering and column masking in Hive
Row-level filtering in Hive with Ranger policies
Dynamic resource-based column masking in Hive with Ranger policies
Dynamic tag-based column masking in Hive with Ranger policies
▶︎
Tag-based Services and Policies
Adding a tag-based service
▶︎
Adding tag-based policies
Using tag attributes and values in Ranger tag-based policy conditions
Adding a tag-based PII policy
Default EXPIRES ON tag policy
▶︎
Importing and exporting tag-based policies
Import tag-based policies
Export tag-based policies
Create a time-bound policy
▶︎
Ranger Security Zones
Overview
Adding a Ranger security zone
▶︎
Administering Ranger Users, Groups, Roles, and Permissions
Add a user
Edit a user
Delete a user
Add a group
Edit a group
Delete a group
Add or edit permissions
▶︎
Administering Ranger Reports
View Ranger reports
Search Ranger reports
Export Ranger reports
▶︎
Configuring Ranger Authentication with UNIX, LDAP, or AD
▶︎
Configuring Ranger Authentication with UNIX, LDAP, AD, or PAM
Configure Ranger authentication for UNIX
Configure Ranger authentication for AD
Configure Ranger authentication for LDAP
Configure Ranger authentication for PAM
▶︎
Ranger AD Integration
Ranger UI authentication
Ranger UI authorization
Ranger Usersync
Ranger user management
Known issue: Ranger group mapping
▶︎
Configuring Advanced Security Options for Apache Ranger
Configure Kerberos authentication for Apache Ranger
Configure TLS/SSL for Apache Ranger
▶︎
Configuring Apache Ranger High Availability
Configure Ranger Admin High Availability
Configure Ranger Admin High Availability with a Load Balancer
▶︎
Apache Knox Authentication
▶︎
Apache Knox Overview
Securing Access to Hadoop Cluster: Apache Knox
Apache Knox Gateway Overview
Knox Supported Services Matrix
Knox Topology Management in Cloudera Manager
Using the Apache Knox Gateway UI
Proxy Cloudera Manager through Apache Knox
▶︎
Installing Apache Knox
Apache Knox Install Role Parameters
▶︎
Managing Knox shared providers in Cloudera Manager
Configure Apache Knox authentication for PAM
Configure Apache Knox authentication for AD/LDAP
▶︎
Managing existing Apache Knox shared providers
Add a new shared provider configuration
Add a new provider in an existing provider configuration
Modify a provider in an existing provider configuration
Disable a provider in an existing provider configuration
Saving aliases
Configure Kerberos authentication in Apache Knox shared providers
▶︎
Managing services for Apache Knox via Cloudera Manager
Enable proxy for a known service in Apache Knox
Disable proxy for a known service in Apache Knox
Add a custom service to Apache Knox Proxy
Add a custom topology in the deployed Apache Knox Gateway
▶︎
Managing Service Parameters for Apache Knox via Cloudera Manager
Add a custom service parameter to a known service
Modify a custom service parameter
Remove a custom service parameter
▶︎
Governance
▶︎
Searching with Metadata
Searching overview
Using Basic Search
Using Search filters
Using Free-text Search
Saving searches
Using advanced search
▶︎
Working with Classifications and Labels
Working with Atlas classifications and labels
Creating classifications
Creating labels
Adding attributes to classifications
Associating classifications with entities
Propagating classifications through lineage
Searching for entities using classifications
▶︎
Exploring using Lineage
Lineage overview
Viewing lineage
Lineage lifecycle
▶︎
Leveraging Business Metadata
Business Metadata overview
Creating Business Metadata
Adding attributes to Business Metadata
Associating Business Metadata attributes with entities
Importing Business Metadata associations in bulk
Searching for entities using Business Metadata attributes
▶︎
Managing Business Terms with Atlas Glossaries
Glossaries overview
Creating glossaries
Creating terms
Associating terms with entities
Defining related terms
Creating categories
Assigning terms to categories
Searching using terms
Importing Glossary terms in bulk
▶︎
Securing Atlas
Securing Atlas
Configuring TLS/SSL for Apache Atlas
▶︎
Configuring Atlas Authentication
Configure Kerberos authentication for Apache Atlas
Configure Atlas authentication for AD
Configure Atlas authentication for LDAP
Configure Atlas PAM authentication
Configure Atlas file-based authentication
▶︎
Configuring Atlas Authorization
Configuring Ranger Authorization for Atlas
Configuring Atlas Authorization using Ranger
▶︎
Configuring Atlas using Cloudera Manager
▶︎
Configuring and Monitoring Atlas
Showing Atlas Server status
Accessing Atlas logs
▶︎
Migrating Data from Cloudera Navigator to Atlas
Migrating Navigator content to Atlas
▶︎
Configuring Oozie
Overview of Oozie
Adding the Oozie service using Cloudera Manager
Considerations for Oozie to work with AWS
▶︎
Redeploying the Oozie ShareLib
Redeploying the Oozie sharelib using Cloudera Manager
▶︎
Oozie configurations with CDP services
▶︎
Using Sqoop actions with Oozie
Deploying and configuring Oozie Sqoop1 Action JDBC drivers
Configuring Oozie Sqoop1 Action workflow JDBC drivers
Configuring Oozie to enable MapReduce jobs to read or write from Amazon S3
Configuring Oozie to use HDFS HA
Using Hive Warehouse Connector with Oozie Spark action
▶︎
Oozie High Availability
Requirements for Oozie High Availability
▶︎
Configuring Oozie High Availability using Cloudera Manager
Enabling Oozie High Availability
Disabling Oozie High Availability
▶︎
Scheduling in Oozie using cron-like syntax
Oozie scheduling examples
▶︎
Configuring an external database for Oozie
Configuring PostgreSQL for Oozie
Configuring MariaDB for Oozie
Configuring MySQL for Oozie
Configuring Oracle for Oozie
▶︎
Working with the Oozie server
Starting the Oozie server
Stopping the Oozie server
Accessing the Oozie server with the Oozie Client
Accessing the Oozie server with a browser
Adding schema to Oozie using Cloudera Manager
Enabling the Oozie web console on managed clusters
Enabling Oozie SLA with Cloudera Manager
▶︎
Oozie database configurations
Configuring Oozie data purge settings using Cloudera Manager
Loading the Oozie database
Dumping the Oozie database
Setting the Oozie database timezone
Prerequisites for configuring TLS/SSL for Oozie
Configure TLS/SSL for Oozie
Additional considerations when configuring TLS/SSL for Oozie HA
Configure Oozie client when TLS/SSL is enabled
▶︎
Streams Messaging
▶︎
Configuring Apache Kafka
Operating system requirements
Performance considerations
Quotas
▶︎
JBOD
JBOD setup
JBOD Disk migration
Setting user limits for Kafka
Connecting Kafka clients to Data Hub provisioned clusters
▶︎
Securing Apache Kafka
▶︎
TLS
Step 1: Generate keys and certificates for Kafka brokers
Step 2: Create your own certificate authority
Step 3: Sign the certificate
Step 4: Configure Kafka brokers
Step 5: Configure Kafka clients
Configure ZooKeeper TLS/SSL support for Kafka
▶︎
Authentication
Kerberos authentication
▶︎
Delegation token based authentication
Enable or disable authentication with delegation tokens
Manage individual delegation tokens
Rotate the master key/secret
▶︎
Client authentication using delegation tokens
Configure clients on a producer or consumer level
Configure clients on an application level
▶︎
Kafka security hardening with ZooKeeper ACLs
Restrict access to Kafka metadata in ZooKeeper
Unlock Kafka metadata in ZooKeeper
▶︎
LDAP authentication
Configure Kafka brokers
Configure Kafka clients
▶︎
PAM Authentication
Configure Kafka brokers
Configure Kafka clients
▶︎
Authorization
▶︎
Ranger
Enable authorization in Kafka with Ranger
Configure the resource-based Ranger service used for authorization
Using Kafka's inter-broker security
▶︎
Tuning Apache Kafka Performance
Handling large messages
▶︎
Cluster sizing
Sizing estimation based on network and disk message throughput
Choosing the number of partitions for a topic
▶︎
Broker Tuning
JVM and garbage collection
Network and I/O threads
ISR management
Log cleaner
▶︎
System Level Broker Tuning
File descriptor limits
Filesystems
Virtual memory handling
Networking parameters
Configure JMX ephemeral ports
Kafka-ZooKeeper performance tuning
▶︎
Managing Apache Kafka
▶︎
Management basics
Broker log management
Record management
Broker garbage log collection and log rotation
Client and broker compatibility across Kafka versions
▶︎
Managing topics across multiple Kafka clusters
Set up MirrorMaker in Cloudera Manager
Settings to avoid data loss
▶︎
Broker migration
Migrate brokers by modifying broker IDs in meta.properties
Use rsync to copy files from one broker to another
▶︎
Disk management
Monitoring
▶︎
Handling disk failures
Disk Replacement
Disk Removal
Reassigning replicas between log directories
Retrieving log directory replica assignment information
▶︎
Metrics
Building Cloudera Manager charts with Kafka metrics
Essential metrics to monitor
▶︎
Command Line Tools
Unsupported command line tools
kafka-topics
kafka-configs
kafka-console-producer
kafka-console-consumer
kafka-consumer-groups
▶︎
kafka-reassign-partitions
Tool usage
Reassignment examples
kafka-log-dirs
zookeeper-security-migration
kafka-delegation-tokens
kafka-*-perf-test
Configuring log levels for command line tools
Understanding the kafka-run-class Bash Script
▶︎
Developing Apache Kafka Applications
Kafka producers
▶︎
Kafka consumers
Subscribing to a topic
Groups and fetching
Protocol between consumer and broker
Rebalancing partitions
Retries
Kafka clients and ZooKeeper
▶︎
Simple Client Examples
pom.xml
SimpleConsumer.java
SimpleProducer.java
Recommendations for using the producer and consumer APIs
Kafka public APIs
Kafka Streams
▶︎
Kafka Connect
▶︎
Kafka Connect Setup
Installing the Kafka Connect Role
Configuring Streams Messaging Manager for Kafka Connect
▶︎
Using Kafka Connect
Configuring the Kafka Connect Role
Managing, Deploying and Monitoring Connectors
▶︎
Securing Kafka Connect
Configure TLS/SSL Encryption for the Kafka Connect Role
Configure Kerberos Authentication for the Kafka Connect role
Kafka Connect API Security
▶︎
Connectors
Installing Connectors
▶︎
HDFS Sink Connector
Configuration Example
▶︎
Amazon S3 Sink Connector
Configuration Example
▶︎
Configuring Cruise Control
Add Cruise Control as a service
Configuring capacity estimations and goals
▶︎
Securing Cruise Control
Enable security for Cruise Control
▶︎
Managing Cruise Control
▶︎
Rebalancing with Cruise Control
Cruise Control REST API endpoints
Rebalance after adding Kafka broker
Rebalance after demoting Kafka broker
Rebalance after removing Kafka broker
▶︎
Securing Streams Messaging Manager
Securing Streams Messaging Manager
▶︎
Monitoring Kafka Clusters using Streams Messaging Manager
Monitoring Clusters
Monitoring Producers
Monitoring Topics
Monitoring Brokers
Monitoring Consumers
▶︎
Managing Alert Policies using Streams Messaging Manager
Alert Policies Overview
Component Types and Metrics for Alert Policies
Notifiers
▶︎
Managing Alert Policies and Notifiers
Creating a Notifier
Updating a Notifier
Deleting a Notifier
Creating an Alert Policy
Updating an Alert Policy
Enabling an Alert Policy
Disabling an Alert Policy
Deleting an Alert Policy
▶︎
Managing Kafka Topics using Streams Messaging Manager
Creating a Kafka Topic
Modifying a Kafka Topic
Deleting a Kafka Topic
▶︎
Monitoring End-to-End Latency using Streams Messaging Manager
End-to-End Latency Overview
Granularity of Metrics
Enabling Interceptors
Monitoring End-to-End Latency
End-to-End Latency Use Cases
▶︎
Monitoring Kafka Cluster Replications using Streams Messaging Manager
Monitoring Cluster Replications Overview
Configuring SMM for Monitoring SRM Replications
▶︎
Viewing Replication Details
Searching Cluster Replications by Source
Monitoring Cluster Replications by Quick Ranges
Monitoring Status of the Clusters to be Replicated
▶︎
Monitoring Topics to be Replicated
Searching by Topic Name
Monitoring Throughput for Cluster Replication
Monitoring Replication Latency for Cluster Replication
Monitoring Checkpoint Latency for Cluster Replication
Monitoring Throughput and Latency by Values
▶︎
Monitoring Kafka Connect using Streams Messaging Manager
Kafka Connect Overview
Default view of Kafka Connect in the SMM UI
Creating a Connector
Modifying a Connector
Deleting a Connector
▶︎
Monitoring Connectors
Monitoring Connector Profile
Monitoring Connector Settings
Monitoring Cluster Profile
▶︎
Configuring Streams Replication Manager
Add Streams Replication Manager to an existing cluster
Configuring clusters and replications
Configuring the driver role target clusters
Configuring the service role target cluster
Configuring properties not exposed in Cloudera Manager
New topic and consumer group discovery
▶︎
Configuration examples
Bidirectional replication example of two active clusters
Cross data center replication example of multiple clusters
▶︎
Using Streams Replication Manager
▶︎
SRM Command Line Tools
▶︎
srm-control
▶︎
Configuring srm-control
Configure srm-control for unsecured environments using environment variables
Configure srm-control for unsecured environments using Cloudera Manager
Configure srm-control for secured environments using environment variables
Configure srm-control for secured environments using Cloudera Manager
Topics and Groups Subcommand
Offsets Subcommand
Monitoring Replication with Streams Messaging Manager
Replicating Data
▶︎
How to Set up Failover and Failback
Configure SRM for Failover and Failback
Migrating Consumer Groups Between Clusters
▶︎
Securing Streams Replication Manager
Security overview
SRM security example for a cluster environment managed by a single Cloudera Manager instance
SRM security example for a cluster environment managed by multiple Cloudera Manager instances
▶︎
Integrating with Schema Registry
▶︎
Integrating with NiFi
Understanding NiFi Record Based Processing
Setting up the HortonworksSchemaRegistry Controller Service
Adding and Configuring Record Reader and Writer Controller Services
Using Record-Enabled Processors
▶︎
Integrating with Kafka
Integrating Kafka and Schema Registry Using NiFi Processors
Integrating Kafka and Schema Registry
▶︎
Using Schema Registry
Adding a new schema
Querying a schema
Evolving a schema
Deleting a schema
▶︎
Securing Schema Registry
Schema Registry Authorization through Ranger Access Policies
Pre-defined Access Policies for Schema Registry
Add the user or group to a pre-defined access policy
Create a Custom Access Policy
▶︎
Troubleshooting
▶︎
Troubleshooting Apache Hive
HeapDumpPath (/tmp) in Hive data nodes gets full due to .hprof files
Query fails with "Counters limit exceeded" error message
HiveServer is unresponsive due to large queries running in parallel
▶︎
Troubleshooting Apache Impala
Troubleshooting Impala
Using Breakpad Minidumps for Crash Reporting
▶︎
Troubleshooting Apache Hadoop YARN
Troubleshooting Docker on YARN
Troubleshooting on YARN
Troubleshooting Linux Container Executor
▶︎
Troubleshooting Apache HBase
Troubleshooting HBase
▶︎
Using the HBCK2 tool to remediate HBase clusters
Running the HBCK2 tool
Finding issues
Fixing issues
HBCK2 tool command reference
Thrift Server crashes after receiving invalid data
HBase is using more disk space than expected
Troubleshoot RegionServer grouping
▶︎
Troubleshooting Apache Kudu
▶︎
Troubleshooting Apache Kudu
▶︎
Issues starting or restarting the master or the tablet server
Errors during hole punching test
Already present: FS layout already exists
▶︎
NTP clock synchronization
Installing NTP-related packages
▶︎
Monitoring NTP status
Using chrony for time synchronization
NTP configuration best practices
Troubleshooting NTP stability problems
Disk space usage
Reporting Kudu crashes using breakpad
▶︎
Troubleshooting performance issues
▶︎
Kudu tracing
Accessing the tracing web interface
RPC timeout traces
Kernel stack watchdog traces
Memory limits
Block cache size
Heap sampling
Slow name resolution and nscd
▶︎
Usability issues
ClassNotFoundException: com.cloudera.kudu.hive.KuduStorageHandler
Runtime error: Could not create thread: Resource temporarily unavailable (error 11)
Tombstoned or STOPPED tablet replicas
Corruption: checksum error on CFile block
Generating a table list
Spark tuning
Symbolizing stack traces
▶︎
Troubleshooting Cloudera Search
▶︎
Troubleshooting
▶︎
Cloudera Search Configuration and Log Files
Cloudera Search Configuration Files
View and Modify Cloudera Search Configuration
Cloudera Search Log Files
View and Modify Log Levels for Cloudera Search and Related Services
Identifying problems in a Cloudera Search deployment
▶︎
Troubleshooting Data Analytics Studio
▶︎
Problem area: Queries page
Queries are not appearing on the Queries page
Query column is empty but you can see the DAG ID and Application ID
Cannot see the DAG ID and the Application ID
Cannot view queries of other users
▶︎
Problem area: Compose page
Cannot see databases, or the query editor is missing
Unable to view new databases and tables, or unable to see changes to the existing databases or tables
Troubleshooting replication failure in the DAS Event Processor
Problem area: Reports page
How DAS helps to debug Hive on Tez queries
▶︎
Troubleshooting Hue
Unable to authenticate users in Hue using SAML
Unable to terminate Hive queries from Job Browser
Unable to view or create Oozie workflows
MySQL: 1040, 'Too many connections' exception
Unable to connect Oracle database to Hue using SCAN
Increasing the maximum number of processes for Oracle database
UTF-8 codec error
ASCII codec error
Fixing authentication issues between HBase and Hue
Lengthy BalancerMember Route length
Cannot alter compressed tables in Hue
▶︎
Troubleshooting Apache Sqoop
Merge process stops during Sqoop incremental imports
Sqoop Hive import stops when HS2 does not use Kerberos authentication
▶︎
Reference
▶︎
Apache Hadoop YARN Reference
▶︎
Tuning Apache Hadoop YARN
YARN tuning overview
Step 1: Worker host configuration
Step 2: Worker host planning
Step 3: Cluster size
Steps 4 and 5: Verify settings
Step 6: Verify container settings on cluster
Step 6A: Cluster container capacity
Step 6B: Container sanity checking
Step 7: MapReduce configuration
Step 7A: MapReduce sanity checking
Set properties in Cloudera Manager
Configure memory settings
YARN Configuration Properties
Use the YARN REST APIs to manage applications
▶︎
Comparison of Fair Scheduler with Capacity Scheduler
Why one scheduler?
Scheduler performance improvements
Feature comparison
Migration from Fair Scheduler to Capacity Scheduler
▶︎
Apache Atlas Reference
Apache Atlas Advanced Search language reference
Apache Atlas Statistics reference
Apache Atlas metadata attributes
Defining Apache Atlas enumerations
▶︎
Purging deleted entities
Auditing purged entities
PUT /admin/purge/ API
POST /admin/audit/ API
▶︎
Apache Atlas technical metadata migration reference
System metadata migration
HDFS entity metadata migration
Hive entity metadata migration
Impala entity metadata migration
Spark entity metadata migration
AWS S3 entity metadata migration
▶︎
HiveServer metadata collection
HiveServer actions that produce Atlas entities
HiveServer entities created in Atlas
HiveServer relationships
HiveServer lineage
HiveServer audit entries
▶︎
HBase metadata collection
HBase actions that produce Atlas entities
HBase entities created in Atlas
HBase lineage
HBase audit entries
▶︎
Impala metadata collection
Impala actions that produce Atlas entities
Impala entities created in Atlas
Impala lineage
Impala audit entries
▶︎
ML metadata collection
ML operations entities created in Atlas
▶︎
Spark metadata collection
Spark actions that produce Atlas entities
Spark entities created in Apache Atlas
Spark lineage
Spark relationships
Spark audit entries
Spark troubleshooting
▶︎
Apache Hive Materialized View Commands
ALTER MATERIALIZED VIEW REBUILD
ALTER MATERIALIZED VIEW REWRITE
CREATE MATERIALIZED VIEW
DESCRIBE EXTENDED and DESCRIBE FORMATTED
DROP MATERIALIZED VIEW
SHOW MATERIALIZED VIEWS
▶︎
Apache Impala Reference
▶︎
Performance Considerations
Performance Best Practices
Query Join Performance
▶︎
Table and Column Statistics
Generating Table and Column Statistics
Runtime Filtering
▶︎
Partitioning
Partition Pruning for Queries
HDFS Caching
HDFS Block Skew
Understanding Performance using EXPLAIN Plan
Understanding Performance using SUMMARY Report
Understanding Performance using Query Profile
▶︎
Scalability Considerations
Scaling Limits and Guidelines
Dedicated Coordinator
▶︎
Hadoop File Formats Support
Using Text Data Files
Using Parquet Data Files
Using ORC Data Files
Using Avro Data Files
Using RCFile Data Files
Using SequenceFile Data Files
▶︎
Storage Systems Support
Impala with HDFS
▶︎
Impala with Kudu
Configuring for Kudu Tables
▶︎
Impala DDL for Kudu
Partitioning for Kudu Tables
Impala DML for Kudu Tables
Impala with HBase
Impala with Azure Data Lake Store (ADLS)
▶︎
Impala with Amazon S3
Specifying Impala Credentials to Access S3
Ports Used by Impala
Migration Guide
Impala Authorization
Modifying Impala Startup Options
Setting up Data Cache for Remote Reads
Managing Metadata in Impala
On-demand Metadata
Transactions
▶︎
Apache Impala SQL Reference
▶︎
Impala SQL
▶︎
Schema objects
Aliases
Databases
Functions
Identifiers
Tables
Views
▶︎
Data types
ARRAY complex type
BIGINT data type
BOOLEAN data type
CHAR data type
DATE data type
DECIMAL data type
DOUBLE data type
FLOAT data type
INT data type
MAP complex type
REAL data type
SMALLINT data type
STRING data type
STRUCT complex type
▶︎
TIMESTAMP data type
Customizing time zones
TINYINT data type
VARCHAR data type
Complex types
Literals
Operators
Comments
▶︎
SQL statements
DDL statements
DML statements
ALTER DATABASE statement
ALTER TABLE statement
ALTER VIEW statement
COMMENT statement
COMPUTE STATS statement
CREATE DATABASE statement
CREATE FUNCTION statement
CREATE TABLE statement
CREATE VIEW statement
DELETE statement
DESCRIBE statement
DROP DATABASE statement
DROP FUNCTION statement
DROP STATS statement
DROP TABLE statement
DROP VIEW statement
EXPLAIN statement
GRANT statement
INSERT statement
INVALIDATE METADATA statement
LOAD DATA statement
REFRESH statement
REFRESH AUTHORIZATION statement
REFRESH FUNCTIONS statement
REVOKE statement
▶︎
SELECT statement
Joins in Impala SELECT statements
ORDER BY clause
GROUP BY clause
HAVING clause
LIMIT clause
OFFSET clause
UNION clause
Subqueries in Impala SELECT statements
TABLESAMPLE clause
WITH clause
DISTINCT operator
SET statement
SHOW statement
SHUTDOWN statement
TRUNCATE TABLE statement
UPDATE statement
UPSERT statement
USE statement
VALUES statement
Optimizer hints
Query options
▶︎
Built-in functions
Mathematical functions
Bit functions
Conversion functions
Date and time functions
Conditional functions
String functions
Miscellaneous functions
▶︎
Aggregate functions
APPX_MEDIAN function
AVG function
COUNT function
GROUP_CONCAT function
MAX function
MIN function
NDV function
STDDEV, STDDEV_SAMP, STDDEV_POP functions
SUM function
VARIANCE, VARIANCE_SAMP, VARIANCE_POP, VAR_SAMP, VAR_POP functions
▶︎
Analytic functions
OVER
WINDOW
AVG
COUNT
CUME_DIST
DENSE_RANK
FIRST_VALUE
LAG
LAST_VALUE
LEAD
MAX
MIN
NTILE
PERCENT_RANK
RANK
ROW_NUMBER
SUM
▶︎
User-defined functions (UDFs)
UDF concepts
Runtime environment for UDFs
Installing the UDF development package
Writing UDFs
Writing user-defined aggregate functions (UDAFs)
Building and deploying UDFs
Performance considerations for UDFs
Examples of creating and using UDFs
Security considerations for UDFs
Limitations and restrictions for Impala UDFs
Transactions
Reserved words
Impala SQL and Hive SQL
SQL migration
▶︎
Apache Phoenix Frequently Asked Questions
Frequently asked questions
▶︎
Apache Phoenix Performance Tuning
Performance tuning
▶︎
Cloudera Search Frequently Asked Questions
FAQ
▶︎
Encryption Reference
Auto-TLS Requirements and Limitations
The certmanager utility
Rotate Auto-TLS Certificate Authority and Host Certificates
Auto-TLS Agent File Locations
▶︎
Streams Replication Manager Reference
srm-control Options Reference
Configuration Properties Reference for Properties not Available in Cloudera Manager
▶︎
Kafka Connect Connector Reference
HDFS Sink Connector Properties Reference
Amazon S3 Sink Connector Properties Reference
SMM REST API Reference
SRM REST API Reference
Cruise Control REST API Reference
A List of S3A Configuration Properties
About HBase snapshots
About the Off-heap BucketCache
Access HDFS from the NFS Gateway
Access the Recon web user interface
Access the YARN Web User Interface
Accessing Apache HBase
Accessing Atlas logs
Accessing Avro data files from Spark SQL applications
Accessing Cloud Data
Accessing compressed files in Spark
Accessing data stored in Amazon S3 through Spark
Accessing external storage from Spark
Accessing HDFS Files from Spark
Accessing Hive from Spark
Accessing ORC Data in Hive Tables
Accessing ORC files from Spark
Accessing Parquet files from Spark SQL applications
Accessing Spark SQL through the Spark shell
Accessing the Oozie server with a browser
Accessing the Oozie server with the Oozie Client
Accessing the Ranger console
Accessing the Spark History Server
Accessing the tracing web interface
ACID operations
ACL examples
ACLs on HDFS features
Activate read replicas on a table
Activating the Hive Web UI
Active / Active Architecture
Active / Stand-by Architecture
Active Directory Settings
Add a custom coprocessor
Add a custom service parameter to a known service
Add a custom service to Apache Knox Proxy
Add a custom topology in the deployed Apache Knox Gateway
Add a group
Add a new provider in an existing provider configuration
Add a new shared provider configuration
Add a user
Add a ZooKeeper service
Add Cruise Control as a service
Add HDFS system mount
Add or edit permissions
Add queues using YARN Queue Manager UI
Add storage directories using Cloudera Manager
Add Streams Replication Manager to an existing cluster
Add the HttpFS role
Add the user or group to a pre-defined access policy
Adding a custom banner
Adding a HiveServer role
Adding a Hue role instance with Cloudera Manager
Adding a Hue service with Cloudera Manager
Adding a load balancer
Adding a new schema
Adding a Ranger security zone
Adding a tag-based PII policy
Adding a tag-based service
Adding and configuring a new Hue service on a new host
Adding and Configuring Record Reader and Writer Controller Services
Adding and Removing Range Partitions
Adding attributes to Business Metadata
Adding attributes to classifications
Adding new role instances for Hue server, Hue Load Balancer, and Kerberos Ticket Renewer on new hosts
Adding schema to Oozie using Cloudera Manager
Adding tag-based policies
Adding the Lily HBase Indexer Service
Adding the Oozie service using Cloudera Manager
Additional Configuration Options for GCS
Additional considerations when configuring TLS/SSL for Oozie HA
Additional HDFS haadmin commands to administer the cluster
Adjust the Solr replication factor for index files stored in HDFS
Admin ACLs
Administering Apache Kudu
Administering Hue
Administering Ranger Reports
Administering Ranger Users, Groups, Roles, and Permissions
Administrative commands
Admission Control and Query Queuing
Admission Control Sample Scenario
Advanced Committer Configuration
Advanced configuration for write-heavy workloads
Advanced erasure coding configuration
Advanced ORC properties
Advanced partitioning
Aggregate and group data
Aggregate functions
Aggregation for Analytics
Alert Policies Overview
Aliases
Allocating DataNode memory as storage
Already present: FS layout already exists
ALTER DATABASE statement
ALTER MATERIALIZED VIEW REBUILD
ALTER MATERIALIZED VIEW REWRITE
ALTER TABLE statement
ALTER VIEW statement
Altering a table
Altering table properties
Amazon S3 Sink Connector
Amazon S3 Sink Connector Properties Reference
Analytic functions
Apache Atlas Advanced Search language reference
Apache Atlas dashboard tour
Apache Atlas metadata attributes
Apache Atlas metadata collection overview
Apache Atlas Reference
Apache Atlas Statistics reference
Apache Atlas technical metadata migration reference
Apache Hadoop HDFS Overview
Apache Hadoop YARN Overview
Apache Hadoop YARN Reference
Apache HBase Overview
Apache Hive 3 architectural overview
Apache Hive 3 tables
Apache Hive content roadmap
Apache Hive key features
Apache Hive Materialized View Commands
Apache Hive Metastore Overview
Apache Hive Overview
Apache Hive Performance Tuning
Apache Hive query basics
Apache Hive-Kafka integration
Apache Impala Overview
Apache Impala Reference
Apache Impala SQL Reference
Apache Kafka Overview
Apache Knox Authentication
Apache Knox Gateway Overview
Apache Knox Install Role Parameters
Apache Knox Overview
Apache Kudu administration
Apache Kudu architecture in a CDP Data Center deployment
Apache Kudu background maintenance tasks
Apache Kudu concepts
Apache Kudu Design
Apache Kudu Overview
Apache Kudu overview
Apache Kudu schema design
Apache Kudu transaction semantics
Apache Kudu usage limitations
Apache Phoenix Frequently Asked Questions
Apache Phoenix Overview
Apache Phoenix Performance Tuning
Apache Phoenix-Hive usage examples
Apache Ranger
Apache Ranger Auditing
Apache Ranger Authorization
Apache Spark access to Apache Hive
Apache Spark Overview
Apache Spark Overview
Apache Zeppelin Overview
API operations
APIs for accessing HDFS
Application ACL evaluation
Application ACLs
Application logs' ACLs
Application not running message
Application reservations
Applications and permissions reference
APPX_MEDIAN function
ARRAY complex type
ASCII codec error
Assigning administrator privileges to users
Assigning superuser status to an LDAP user
Assigning terms to categories
Associate a table in a non-customized environment without Kerberos
Associate node labels with queues
Associate table in a customized Kerberos environment
Associating Business Metadata attributes with entities
Associating classifications with entities
Associating tables of a schema to a namespace
Associating terms with entities
Atlas
Atlas
Atlas
Atlas classifications drive Ranger policies
Atlas metadata model overview
Audit Overview
Auditing purged entities
Authentication
Authentication tokens
Authentication using Kerberos
Authentication using Knox SSO
Authentication using LDAP
Authentication using SAML
Authorization
Authorization tokens
Authorizing Apache Hive Access
Auto-TLS Agent File Locations
Auto-TLS Requirements and Limitations
Automate partition discovery and repair
Automatic Invalidation of Metadata Cache
Automatic Invalidation/Refresh of Metadata
Automating mode selection
Automating Spark Jobs with Oozie Spark Action
AVG
AVG function
AWS S3 entity metadata migration
Back up HDFS metadata
Back up HDFS metadata using Cloudera Manager
Backing Up a Solr Collection
Backing Up and Restoring Cloudera Search
Backing Up and Restoring Solr Collections
Backing up HDFS metadata
Backing up NameNode metadata
Backing up tables
Backing up the Hue database
Backup and restore
Backup directory structure
Backup tools
Balancer commands
Balancing data across an HDFS cluster
Balancing data across disks of a DataNode
Basic partitioning
Basics
Batch Indexing
Batch Indexing into Offline Solr Shards
Batch Indexing into Online Solr Servers Using GoLive
Behavioral Changes In Cloudera Runtime 7.1.1
Benefits of centralized cache management in HDFS
Best practices for building Apache Spark applications
Best practices for performance tuning
Best practices for rack and node setup for EC
Best practices when adding new tablet servers
Best practices when using RegionServer grouping
Bi-directional Replication Flows
Bidirectional replication example of two active clusters
BIGINT data type
Bit functions
Block cache size
Block move execution
Block move scheduling
BOOLEAN data type
Bringing a tablet that has lost a majority of replicas back online
Broker garbage log collection and log rotation
Broker log management
Broker migration
Broker Tuning
Brokers
Browse HBase tables
BucketCache IO engine
Build the project and upload the JAR
Building and deploying UDFs
Building and running a Spark Streaming application
Building Cloudera Manager charts with Kafka metrics
Building reusable modules in Apache Spark applications
Building Spark Applications
Building the Java client
Built-in functions
Bulk Write Access
Business Metadata overview
Bypass the BlockCache
Cache eviction priorities
Caching terminology
Call the UDF in a query
Calling Hive user-defined functions (UDFs)
Canary test for pyspark command
Cancelling a Query
Cannot alter compressed tables in Hue
Cannot see databases, or the query editor is missing
Cannot see the DAG ID and the Application ID
Cannot view queries of other users
Catalog operations
Catalog table
CDP Security Overview
CDS 3 (Experimental) Powered by Apache Spark
CDS 3 Overview
CDS 3 Packaging, and Download
CDS 3 Requirements
CDS 3.0 Maven Artifacts
Centralized cache management architecture
Changing a nameservice name for Highly Available HDFS using Cloudera Manager
Changing directory configuration
Changing master hostnames
Changing the page logo
Changing the retention period of DAS event logs
CHAR data type
CHAR data type support
Check Job History
Check Job Status
Check query execution
Choose the right import method
Choosing a DynamoDB Table and IO Capacity
Choosing Data Formats
Choosing the number of partitions for a topic
Choosing Transformations to Minimize Shuffles
ClassNotFoundException: com.cloudera.kudu.hive.KuduStorageHandler
Cleaning up after failed jobs
Cleaning up old queries, DAG information, and reports data
Cleaning up old queries, DAG information, and reports data using Ambari
CLI commands to perform snapshot operations
Client and broker compatibility across Kafka versions
Client authentication to secure Kudu clusters
Client authentication using delegation tokens
Client connections to HiveServer
Close HiveWarehouseSession operations
Cloud storage connectors overview
Cloudera Runtime
Cloudera Runtime Component Versions
Cloudera Runtime Release Notes
Cloudera Runtime Security and Governance
Cloudera Search Architecture
Cloudera Search Authentication
Cloudera Search Backup and Restore Command Reference
Cloudera Search Configuration and Log Files
Cloudera Search Configuration Files
Cloudera Search Configuration Files
Cloudera Search ETL
Cloudera Search Frequently Asked Questions
Cloudera Search Log Files
Cloudera Search Overview
Cloudera Search Overview
Cloudera Search Security Overview
Cloudera Search Tasks and Processes
Cluster balancing algorithm
Cluster management limitations
Cluster Migration Architectures
Cluster sizing
Coarse-grained authorization
Collecting metrics via HTTP
Column compression
Column design
Column encoding
Columnar datastore
Command Line Tools
Commands for configuring storage policies
Commands for managing buckets
Commands for managing keys
Commands for managing volumes
Commands for using cache pools and directives
COMMENT statement
Comments
Commit transaction in Spark Direct Reader mode
Common Kudu workflows
Common replication topologies
Common web interface pages
Compacting on-disk data
Compaction prerequisites
Compactor properties
Compare queries
Comparing replication and erasure coding
Comparison of Fair Scheduler with Capacity Scheduler
Compatibility Policies
Complex types
Component Types and Metrics for Alert Policies
Components
Compose queries
Compound operators
Compute
COMPUTE STATS statement
Conditional functions
Config Templates
Configuration Example
Configuration Example
Configuration examples
Configuration properties
Configuration Properties Reference for Properties not Available in Cloudera Manager
Configuration updates for Spark to work with OzoneFS
Configurations and CLI options for the HDFS Balancer
Configure a resource-based policy: Atlas
Configure a resource-based policy: HBase
Configure a resource-based policy: HDFS
Configure a resource-based policy: Hive
Configure a resource-based policy: Kafka
Configure a resource-based policy: Knox
Configure a resource-based policy: NiFi
Configure a resource-based policy: NiFi Registry
Configure a resource-based policy: Solr
Configure a resource-based policy: YARN
Configure a resource-based service: Atlas
Configure a resource-based service: HBase
Configure a resource-based service: HDFS
Configure a resource-based service: Hive
Configure a resource-based service: Kafka
Configure a resource-based service: Knox
Configure a resource-based service: NiFi
Configure a resource-based service: NiFi Registry
Configure a resource-based service: Solr
Configure a resource-based service: YARN
Configure a Spark job for dynamic resource allocation
Configure acceptable Kerberos principal patterns
Configure Access to GCS from Your Cluster
Configure Apache Knox authentication for AD/LDAP
Configure Apache Knox authentication for PAM
Configure archival storage
Configure Atlas authentication for AD
Configure Atlas authentication for LDAP
Configure Atlas file-based authentication
Configure Atlas PAM authentication
Configure BucketCache IO engine
Configure bulk load replication
Configure clients on a producer or consumer level
Configure clients on an application level
Configure columns to store MOBs
Configure CPU scheduling and isolation
Configure Cross-Origin Support for YARN UIs and REST APIs
Configure data locality
Configure DataNode memory as storage
Configure Debug Delay
Configure Docker
Configure dynamic queue properties
Configure encryption in HBase
Configure four-letter-word commands in ZooKeeper
Configure FPGA scheduling and isolation
Configure GPU scheduling and isolation
Configure HBase for use with Phoenix
Configure HBase garbage collection
Configure HBase in Cloudera Manager to store snapshots in Amazon S3
Configure HBase servers to authenticate with a secure HDFS cluster
Configure HDFS RPC protection
Configure HiveServer for ETL using YARN queues
Configure HMS properties for authorization
Configure JMX ephemeral ports
Configure Kafka brokers
Configure Kafka brokers
Configure Kafka clients
Configure Kafka clients
Configure Kerberos authentication for Apache Atlas
Configure Kerberos authentication for Apache Ranger
Configure Kerberos Authentication for Solr
Configure Kerberos Authentication for the Kafka Connect role
Configure Kerberos authentication in Apache Knox shared providers
Configure Lily HBase Indexer Service to Use Kerberos Authentication
Configure Lily HBase Indexer to use TLS/SSL
Configure Log Aggregation
Configure memory settings
Configure metastore database properties
Configure mountable HDFS
Configure node labels
Configure NodeManager heartbeat
Configure Oozie client when TLS/SSL is enabled
Configure partitions for transactions
Configure Per Queue Properties
Configure Phoenix-Hive connector using Cloudera Manager
Configure Phoenix-Spark connector using Cloudera Manager
Configure placement rules
Configure preemption
Configure queue mapping to use the user name from the application tag using Cloudera Manager
Configure queue ordering policies
Configure Ranger Admin High Availability
Configure Ranger Admin High Availability with a Load Balancer
Configure Ranger authentication for AD
Configure Ranger authentication for LDAP
Configure Ranger authentication for PAM
Configure Ranger authentication for UNIX
Configure read replicas using Cloudera Manager
Configure RegionServer grouping
Configure S3 credentials for working with Ozone
Configure Scheduler Properties at the Global Level
Configure secure HBase replication
Configure secure HBase replication
Configure secure replication
Configure snapshots
Configure source and destination realms in krb5.conf
Configure SRM for Failover and Failback
Configure srm-control for secured environments using Cloudera Manager
Configure srm-control for secured environments using environment variables
Configure srm-control for unsecured environments using Cloudera Manager
Configure srm-control for unsecured environments using environment variables
Configure storage balancing for DataNodes using Cloudera Manager
Configure temporary table storage
Configure the blocksize for a column family
Configure the compaction speed using Cloudera Manager
Configure the dynamic resource pool used for exporting and importing snapshots in Amazon S3
Configure the G1GC garbage collector
Configure the graceful shutdown timeout property
Configure the HBase canary
Configure the HBase client TGT renewal period
Configure the HBase thrift server role
Configure the MOB cache using Cloudera Manager
Configure the NFS Gateway
Configure the off-heap BucketCache using Cloudera Manager
Configure the off-heap BucketCache using the command line
Configure the resource-based Ranger service used for authorization
Configure the scanner heartbeat using Cloudera Manager
Configure the storage policy for WALs using Cloudera Manager
Configure the storage policy for WALs using the Command Line
Configure TLS encryption manually for Phoenix Query Server
Configure TLS/SSL encryption for Solr
Configure TLS/SSL Encryption for the Kafka Connect Role
Configure TLS/SSL for Apache Ranger
Configure TLS/SSL for Core Hadoop Services
Configure TLS/SSL for HBase REST Server
Configure TLS/SSL for HBase Thrift Server
Configure TLS/SSL for HBase Web UIs
Configure TLS/SSL for HDFS
Configure TLS/SSL for Oozie
Configure TLS/SSL for YARN
Configure Transparent Data Encryption for Ozone
Configure ulimit for HBase using Cloudera Manager
Configure ulimit using Pluggable Authentication Modules using the Command Line
Configure User Impersonation for Access to Hive
Configure User Impersonation for Access to Phoenix
Configure work preserving recovery on NodeManager
Configure work preserving recovery on ResourceManager
Configure YARN for managing Docker containers
Configure YARN ResourceManager high availability
Configure YARN Security for Long-Running Applications
Configure YARN Services API to Manage Long-running Applications
Configure YARN Services using Cloudera Manager
Configure ZooKeeper client shell for Kerberos authentication
Configure ZooKeeper server for Kerberos authentication
Configure ZooKeeper TLS/SSL support for Kafka
Configure ZooKeeper TLS/SSL using Cloudera Manager
Configuring a secure Kudu cluster using Cloudera Manager
Configuring a secure Kudu cluster using the command line
Configuring Access to Google Cloud Storage
Configuring Access to S3
Configuring Access to S3 on CDP Private Cloud Base
Configuring Access to S3 on CDP Public Cloud
Configuring ACLs on HDFS
Configuring Advanced Security Options for Apache Ranger
Configuring an external database for Oozie
Configuring and Monitoring Atlas
Configuring and running the HDFS balancer using Cloudera Manager
Configuring and tuning S3A block upload
Configuring and Using Zeppelin Interpreters
Configuring Apache Hadoop YARN High Availability
Configuring Apache Hadoop YARN Log Aggregation
Configuring Apache Hadoop YARN Security
Configuring Apache HBase
Configuring Apache HBase for Apache Phoenix
Configuring Apache HBase High Availability
Configuring Apache Hive
Configuring Apache Kafka
Configuring Apache Ranger High Availability
Configuring Apache Spark
Configuring Apache Zeppelin
Configuring Apache ZooKeeper
Configuring Atlas Authentication
Configuring Atlas Authorization
Configuring Atlas Authorization using Ranger
Configuring Atlas using Cloudera Manager
Configuring authentication for long-running Spark Streaming jobs
Configuring authentication with LDAP and Direct Bind
Configuring authentication with LDAP and Search Bind
Configuring Authorization
Configuring block size
Configuring capacity estimations and goals
Configuring Client Access to Impala
Configuring cluster capacity with Queues
Configuring clusters and replications
Configuring coarse-grained authorization with ACLs
Configuring concurrent moves
Configuring Cruise Control
Configuring Data Protection
Configuring Dedicated Coordinators and Executors
Configuring Delegation for Clients
Configuring Directories for Intermediate Data
Configuring dynamic resource allocation
Configuring Dynamic Resource Pool
Configuring Encryption for Specific Buckets
Configuring Event Based Automatic Metadata Sync
Configuring external file authorization
Configuring Fault Tolerance
Configuring for HDFS high availability
Configuring for Kudu Tables
Configuring group permissions
Configuring HBase BlockCache
Configuring HBase MultiWAL
Configuring HBase snapshots
Configuring HBase to use HDFS HA
Configuring HDFS ACLs
Configuring HDFS High Availability
Configuring HDFS trash
Configuring heterogeneous storage in HDFS
Configuring high availability
Configuring Hive and Impala for high availability with Hue
Configuring HiveServer high availability using a load balancer
Configuring HiveServer high availability using ZooKeeper
Configuring HMS for high availability
Configuring HSTS for HDFS Web UIs
Configuring HTTPS encryption for the Kudu master and tablet server web UIs
Configuring Hue as a TLS/SSL client
Configuring Hue as a TLS/SSL server
Configuring Impala
Configuring Impala TLS/SSL
Configuring Impala to work with HDFS HA
Configuring Impala Web UI
Configuring JDBC execution mode
Configuring JDBC for Impala
Configuring Kerberos Authentication
Configuring Kudu's integration with Apache Ranger
Configuring LDAP Authentication
Configuring LDAP on unmanaged clusters
Configuring Lily HBase Indexer Security
Configuring Livy
Configuring Load Balancer for Impala
Configuring log levels for command line tools
Configuring MariaDB for Oozie
Configuring metastore location
Configuring MultiWAL support using Cloudera Manager
Configuring MySQL for Oozie
Configuring ODBC for Impala
Configuring Oozie
Configuring Oozie data purge settings using Cloudera Manager
Configuring Oozie High Availability using Cloudera Manager
Configuring Oozie Sqoop1 Action workflow JDBC drivers
Configuring Oozie to enable MapReduce jobs to read or write from Amazon S3
Configuring Oozie to use HDFS HA
Configuring Oozie to use HDFS HA
Configuring Oracle for Oozie
Configuring other CDP components to use HDFS HA
Configuring Ozone Security
Configuring Ozone to work with Prometheus
Configuring Per-Bucket Settings
Configuring Per-Bucket Settings to Access Data Around the World
Configuring PostgreSQL for Oozie
Configuring properties not exposed in Cloudera Manager
Configuring Proxy Users to Access HDFS
Configuring query vectorization
Configuring quotas
Configuring Ranger Authentication with UNIX, LDAP, AD, or PAM
Configuring Ranger Authentication with UNIX, LDAP, or AD
Configuring Ranger Authorization for Atlas
Configuring resource-based policies
Configuring resource-based services
Configuring S3Guard
Configuring S3Guard in Cloudera Manager
Configuring SAML authentication on managed clusters
Configuring SMM for Monitoring SRM Replications
Configuring Spark application logging properties
Configuring Spark application properties in spark-defaults.conf
Configuring Spark Applications
Configuring Spark Direct Reader mode
Configuring Spark on YARN Applications
Configuring srm-control
Configuring storage balancing for DataNodes
Configuring Streams Messaging Manager for Kafka Connect
Configuring Streams Replication Manager
Configuring the balancer threshold
Configuring the driver role target clusters
Configuring the Hive Delegation Token Store
Configuring the Hive Metastore to use HDFS HA
Configuring the HiveServer load balancer
Configuring the Kafka Connect Role
Configuring the service role target cluster
Configuring the storage policy for the Write-Ahead Log (WAL)
Configuring TLS/SSL encryption manually for DAS using Cloudera Manager
Configuring TLS/SSL for Apache Atlas
Configuring TLS/SSL for HBase
Configuring TLS/SSL for Hue
Configuring ulimit for HBase
Configuring user authentication
Configuring user authentication using LDAP
Configuring user authentication using SPNEGO
Configuring YARN Docker Containers Support
Confirm the election status of a ZooKeeper service
Connect an external database
Connect to Phoenix Query Server
Connect to PQS through Apache Knox
Connect workers
Connecting Hive to BI tools using a JDBC/ODBC driver
Connecting Kafka clients to Data Hub provisioned clusters
Connecting to an Apache Hive endpoint through Apache Knox
Connecting to Impala Daemon in Impala Shell
Connecting to PQS using JDBC
Connectors
Connectors
Considerations for backfill inserts
Considerations for configuring High Availability on Ozone Manager
Considerations for Oozie to work with AWS
Considerations for working with HDFS snapshots
Contents of the BlockCache
Control access to queues using ACLs
Controlling Data Access with Tags
Conversion functions
Convert a managed, non-transactional table to external
Convert an HDFS file to ORC
Converting from an NFS-mounted shared edits directory to Quorum-Based Storage
Converting Hive CLI scripts to Beeline
Converting Instance Directories to Configs
Copy Sample Tweets to HDFS
Copying data between a secure and an insecure cluster using DistCp and WebHDFS
Corruption: checksum error on CFile block
COUNT
COUNT function
Create a Collection for Tweets
Create a CRUD transactional table
Create a Custom Access Policy
Create a Custom Role
Create a custom YARN service
Create a default directory for managed tables
Create a GCP Service Account
Create a Hadoop archive
Create a read-only Admin user (Auditor)
Create a snapshot policy
Create a Sqoop import command
Create a standard YARN service
Create a table for a Kafka stream
Create a temporary table
Create a Test Collection
Create a time-bound policy
Create a topology map
Create a topology script
Create a user-defined function
Create an insert-only transactional table
Create and Run a Note
Create and use a materialized view
Create and use a partitioned materialized view
CREATE DATABASE statement
Create empty table on the destination cluster
CREATE FUNCTION statement
CREATE MATERIALIZED VIEW
Create new YARN services using UI
Create partitions dynamically
Create placement rules
Create snapshots on a directory
Create snapshots using Cloudera Manager
CREATE TABLE AS SELECT
CREATE TABLE statement
Create the S3Guard Table in DynamoDB
Create the UDF class
CREATE VIEW statement
Create, use, and drop an external table
Creating a Collection in Cloudera Search
Creating a Collection in Cloudera Search
Creating a Connector
Creating a group in Hue
Creating a Hue user
Creating a Kafka Topic
Creating a Lily HBase Indexer Configuration File
Creating a Lily HBase Indexer Configuration File
Creating a Morphline Configuration File
Creating a Morphline Configuration File
Creating a new Kudu table from Impala
Creating a Notifier
Creating a Solr Collection
Creating a truststore file in PEM format
Creating an Alert Policy
Creating Business Metadata
Creating categories
Creating classifications
Creating Collections
Creating DynamoDB Access Policy
Creating glossaries
Creating labels
Creating Replicas of Existing Shards
Creating Static Pools
Creating system tables to run query on Hive and Tez DAG events
Creating tables
Creating terms
Cross Data Center Replication
Cross data center replication example of multiple clusters
Cruise Control
Cruise Control
Cruise Control Overview
Cruise Control REST API endpoints
CUME_DIST
Customize dynamic resource allocation settings
Customize interpreter settings in a note
Customize the HDFS home directory
Customizing HDFS
Customizing Per-Bucket Secrets Held in Credential Files
Customizing the Hue web UI
Customizing time zones
DAS
DAS
DAS
DAS administration using Ambari in CDP
DAS administration using Cloudera Manager in CDP
DAS architecture
DAS architecture
Data Access
Data Access
Data Analytics Studio Overview
Data Analytics Studio overview
Data compaction
Data Engineering
Data Engineering
Data migration to Apache Hive
Data protection
Data Stewardship with Apache Atlas
Data storage metrics
Data types
Databases
DataNodes
DataNodes
DataNodes page
Date and time functions
DATE data type
DDL statements
Debug Web UI for Catalog Server
Debug Web UI for Impala Daemon
Debug Web UI for StateStore
Decide to use the BucketCache
DECIMAL data type
Decimal type
Decommissioning or permanently removing a tablet server from a cluster
Dedicated Coordinator
Default EXPIRES ON tag policy
Default view of Kafka Connect in the SMM UI
Defining a Schema is Recommended for Production Use
Defining Apache Atlas enumerations
Defining related terms
Delegation token based authentication
Delete a group
Delete a user
Delete data from a table
Delete HBase snapshots from Amazon S3
Delete placement rules
Delete queues
Delete snapshots using Cloudera Manager
DELETE statement
Deleting a Connector
Deleting a Kafka Topic
Deleting a Notifier
Deleting a row
Deleting a schema
Deleting a Solr Collection
Deleting All Documents in a Solr Collection
Deleting an Alert Policy
Deleting in bulk
Deleting tables
Deletion
DENSE_RANK
Deploy and manage services on YARN
Deploy HBase replication
Deploying and configuring Oozie Sqoop1 Action JDBC drivers
Deployment Planning for Cloudera Search
Deployment Planning for Cloudera Search
Deprecation Notices In Cloudera Runtime 7.1.1
Describe a materialized view
DESCRIBE EXTENDED and DESCRIBE FORMATTED
DESCRIBE statement
Detecting slow DataNodes
Determine the table type
Developing and running an Apache Spark WordCount application
Developing Apache Kafka Applications
Developing Apache Spark Applications
Developing Applications with Apache Kudu
Developing applications with Apache Kudu
Diagnostics logging
Disable a provider in an existing provider configuration
Disable automatic compaction
Disable loading of coprocessors
Disable proxy for a known service in Apache Knox
Disable RegionServer grouping
Disable replication at the peer level
Disable the BoundedByteBufferPool
Disabling an Alert Policy
Disabling and redeploying HDFS HA
Disabling impersonation (doas)
Disabling Oozie High Availability
Disabling S3Guard and destroying a table
Disk Balancer commands
Disk management
Disk Removal
Disk Replacement
Disk space usage
Disk space versus namespace
Distcp between secure clusters in different Kerberos realms
Distcp syntax and examples
DISTINCT operator
DML statements
Docker on YARN configuration properties
Docker on YARN example: DistributedShell
Docker on YARN example: MapReduce job
Docker on YARN example: Spark-on-Docker-on-YARN
DOUBLE data type
Downloading and exporting data from Hue
Downloading Hdfsfindtool from the CDH archives
Drop a materialized view
Drop an external table along with data
DROP DATABASE statement
DROP FUNCTION statement
DROP MATERIALIZED VIEW
DROP STATS statement
DROP TABLE statement
DROP VIEW statement
Dropping a Kudu table using Impala
Dumping the Oozie database
Dynamic allocation
Dynamic queues
Dynamic resource allocation properties
Dynamic Resource Pool Settings
Dynamic resource-based column masking in Hive with Ranger policies
Dynamic tag-based column masking in Hive with Ranger policies
Dynamically loading a custom filter
Edit a group
Edit a user
Edit or delete a snapshot policy
Edit placement rules
Editing rack assignments for hosts
Editing tables
Effects of WAL rolling on replication
Elements of the Recon web user interface
Enable Access Control for Data
Enable Access Control for Interpreter, Configuration, and Credential Settings
Enable Access Control for Notebooks
Enable and disable snapshot creation using Cloudera Manager
Enable asynchronous scheduler
Enable authorization for additional HDFS web UIs
Enable authorization for HDFS web UIs
Enable authorization in Kafka with Ranger
Enable automatic compaction
Enable bulk load replication using Cloudera Manager
Enable Cgroups
Enable detection of slow DataNodes
Enable disk IO statistics
Enable Garbage Collector Logging
Enable GZipCodec as the default compression codec
Enable HBase high availability using Cloudera Manager
Enable HBase indexing
Enable hedged reads for HBase
Enable Intra-Queue preemption
Enable Intra-Queue Preemption for a specific queue
Enable Kerberos Authentication in Solr
Enable LDAP Authentication in Solr
Enable multi-threaded faceting
Enable namespace mapping
Enable or disable authentication with delegation tokens
Enable override of default queue mappings at individual queue level
Enable Phoenix ACLs
Enable preemption for a specific queue
Enable proxy for a known service in Apache Knox
Enable Ranger Authorization in Solr
Enable RegionServer grouping using Cloudera Manager
Enable replication on a specific table
Enable SASL in HiveServer
Enable scheduled queries
Enable security for Cruise Control
Enable server-server mutual authentication
Enable snapshot creation on a directory
Enable the AdminServer
Enable TLS/SSL for HiveServer
Enabling a multi-threaded environment for Hue
Enabling Access Control for Zeppelin Elements
Enabling ACL for RegionServer grouping
Enabling Admission Control
Enabling an Alert Policy
Enabling and disabling trash
Enabling Cluster-wide HBase Replication
Enabling core dump for the Kudu service
Enabling fault-tolerant processing in Spark Streaming
Enabling HDFS HA
Enabling High Availability and automatic failover
Enabling Hue applications with Cloudera Manager
Enabling Hue as a TLS/SSL client
Enabling Hue as a TLS/SSL server using Cloudera Manager
Enabling Interceptors
Enabling Kerberos authentication and RPC encryption
Enabling LDAP Authentication for impala-shell
Enabling LDAP authentication with HiveServer2 and Impala
Enabling LDAP in Hue
Enabling Native Acceleration For MLlib
Enabling Oozie High Availability
Enabling Oozie SLA with Cloudera Manager
Enabling or disabling anonymous usage data collection
Enabling Ranger authorization
Enabling Replication on HBase Column Families
Enabling Solr Clients to Authenticate with a Secure Solr
Enabling Spark authentication
Enabling Spark Encryption
Enabling Speculative Execution
Enabling SSE-C
Enabling SSE-KMS
Enabling SSE-S3
Enabling the Oozie web console on managed clusters
Enabling the SQL editor autocompleter
Enabling TLS/SSL communication with HiveServer2
Enabling TLS/SSL communication with Impala
Enabling TLS/SSL for the Hue Load Balancer
Enabling vectorized query execution
Encrypting an S3 Bucket with Amazon S3 Default Encryption
Encrypting Communication
Encrypting Data on S3
Encryption
Encryption Reference
End to End Latency Overview
End to End Latency Use Cases
Enforcing TLS version 1.2 for Hue
Environment variables for sizing NameNode heap memory
Erasure coding CLI command
Erasure coding examples
Erasure coding overview
Errors during hole punching test
Escape an illegal identifier
Essential metrics to monitor
ETL with Cloudera Morphlines
Evolving a schema
Example Morphline Usage
Example solrctl Usage
Example use cases
Example workload
Example: Configuration for work preserving recovery
Example: Running SparkPi on YARN
Example: Using the HBase-Spark connector
Examples
Examples of accessing Amazon S3 data from Spark
Examples of controlling data access using classifications
Examples of creating and using UDFs
Examples of DistCp commands using the S3 protocol and hidden credentials
Examples of estimating NameNode heap memory
Examples of Interacting with Schema Registry
Examples of overlapping quota policies
Examples of using the Amazon Web Services command-line interface for S3 Gateway
Execute the Disk Balancer plan
Exit statuses for the HDFS Balancer
EXPLAIN statement
Exploring using Lineage
Export a Note
Export a snapshot to another cluster
Export all resource-based policies for all services
Export Ranger reports
Export resource-based policies for a specific service
Export tag-based policies
Expose HBase metrics to a Ganglia server
Extending Atlas to Manage Metadata from Additional Sources
External table access
Failures during INSERT, UPDATE, UPSERT, and DELETE operations
Fan-in and Fan-out Replication Flows
FAQ
Feature comparison
Fetching Spark Maven dependencies
File descriptor limits
File descriptors
Files and directories
Files and directories
Filesystems
Filter HMS results
Filter types
Finding issues
Finding the list of Hue superusers
Fine-grained authorization
FIRST_VALUE
Fixed Issues In Cloudera Runtime 7.1.1
Fixing authentication issues between HBase and Hue
Fixing block inconsistencies
Fixing issues
FLOAT data type
Flushing data to disk
Format for using Hadoop archives with MapReduce
Frequently asked questions
Functions
General Quota Syntax
General Settings
Generate and view Apache Hive statistics
Generate surrogate keys
Generating a table list
Generating Solr collection configuration using instance directories
Generating statistics
Generating Table and Column Statistics
Get scheduled query information and monitor the query
Getting the JDBC driver
Glossaries overview
Governance
Governance
Governance Overview
Graceful HBase shutdown
Gracefully shut down an HBase RegionServer
Gracefully shut down the HBase service
GRANT statement
Granularity of Metrics
GROUP BY clause
Groups and fetching
GROUP_CONCAT function
Guidelines for Deploying Cloudera Search
Guidelines for Schema Design
Hadoop
Hadoop
Hadoop archive components
Hadoop File Formats Support
Handling bucketed tables
Handling disk failures
Handling large messages
Hash and hash partitioning
Hash and range partitioning
Hash partitioning
Hash partitioning
HashTable/SyncTable tool configuration
HAVING clause
HBase
HBase
HBase
HBase
HBase actions that produce Atlas entities
HBase audit entries
HBase authentication
HBase authorization
HBase backup and disaster recovery strategies
HBase entities created in Atlas
HBase filtering
HBase I/O components
HBase is using more disk space than expected
HBase lineage
HBase metadata collection
HBase on CDP
HBase online merge
HBase read replicas
HBase Shell example
HBase snapshots on Amazon S3 with Kerberos enabled
HBaseMapReduceIndexerTool command line reference
HBCK2 tool command reference
HDFS
HDFS
HDFS
HDFS ACLs
HDFS Block Skew
HDFS Caching
HDFS commands for metadata files and directories
HDFS entity metadata migration
HDFS Metrics
HDFS Sink Connector
HDFS Sink Connector Properties Reference
HDFS storage policies
HDFS storage types
HDFS storage types
Heap sampling
HeapDumpPath (/tmp) in Hive data nodes gets full due to .hprof files
Hierarchical queue characteristics
High Availability on HDFS clusters
Highly Available Kafka Architectures
Hive
Hive
Hive
Hive
Hive 3 ACID transactions
Hive Authentication
Hive entity metadata migration
Hive Warehouse Connector for accessing Apache Spark data
Hive Warehouse Connector Interfaces
HiveServer actions that produce Atlas entities
HiveServer audit entries
HiveServer entities created in Atlas
HiveServer is unresponsive due to large queries running in parallel
HiveServer lineage
HiveServer metadata collection
HiveServer relationships
HMS table storage
How Cloudera Search Works
How Cruise Control rebalancing works
How Cruise Control retrieves metrics
How DAS helps to debug Hive on Tez queries
How NameNode manages blocks on a failed DataNode
How NFS Gateway authenticates and maps users
How Ozone manages read operations
How Ozone manages write operations
How tag-based access control works
How to Set up Failover and Failback
HttpFS authentication
Hue
Hue
Hue
Hue Advanced Configuration Snippet
Hue configuration files
Hue logs
Hue Overview
Hue supported browsers
HWC API Examples
HWC configuration planning
HWC execution modes
HWC supported types mapping
IAM Role permissions for working with SSE-KMS
Identifiers
Identifying problems in a Cloudera Search deployment
Impact of quota violation policy
Impala
Impala
Impala
Impala actions that produce Atlas entities
Impala audit entries
Impala Authentication
Impala Authorization
Impala Authorization
Impala database containment model
Impala DDL for Kudu
Impala DML for Kudu Tables
Impala entities created in Atlas
Impala entity metadata migration
Impala integration limitations
Impala lineage
Impala lineage
Impala Logs
Impala metadata collection
Impala Shell Command Reference
Impala Shell Configuration File
Impala Shell Configuration Options
Impala Shell Tool
Impala SQL
Impala SQL and Hive SQL
Impala with Amazon S3
Impala with Azure Data Lake Store (ADLS)
Impala with HBase
Impala with HDFS
Impala with Kudu
Import a Note
Import and sync LDAP users and groups
Import command options
Import External Packages
Import RDBMS data into Hive
Import RDBMS data to HDFS
Import resource-based policies for a specific service
Import resource-based policies for all services
Import tag-based policies
Importing a Bucket into S3Guard
Importing and exporting resource-based policies
Importing and exporting tag-based policies
Importing Business Metadata associations in bulk
Importing data into HBase
Importing Glossary terms in bulk
Improving Performance for S3A
Improving performance using partitions
Improving performance with centralized cache management
Improving performance with short-circuit local reads
Improving Software Performance
Increasing StateStore Timeout
Increasing storage capacity with HDFS compression
Increasing the maximum number of processes for Oracle database
Incrementally update an imported table
Index Sample Data
Indexing
Indexing Data
Indexing Data Using Morphlines
Indexing Sample Tweets with Cloudera Search
Information and debugging
Ingestion
Initiate replication when data already exist
INSERT and primary key uniqueness violations
Insert data into a table
INSERT statement
Inserting a row
Inserting in bulk
Install Docker
Installing Apache Knox
Installing CDS 3
Installing Connectors
Installing Hive on Tez and adding a HiveServer role
Installing NTP-related packages
Installing the Kafka Connect Role
Installing the REST Server using Cloudera Manager
Installing the UDF development package
INT data type
Integrating Apache Hive with Apache Spark and BI
Integrating Hive and a BI tool
Integrating Kafka and Schema Registry
Integrating Kafka and Schema Registry Using NiFi Processors
Integrating with Kafka
Integrating with NiFi
Integrating with Schema Registry
Integrating your identity provider's SAML server with Hue
Interacting with Hive views
Internal and external Impala tables
Internal private key infrastructure (PKI)
Introducing the S3A Committers
Introduction
Introduction
Introduction
Introduction
Introduction
Introduction
Introduction
Introduction
Introduction
Introduction to HDFS metadata files and directories
Introduction to Hive metastore
Introduction to Ozone
Introduction to S3Guard
INVALIDATE METADATA statement
ISR management
Issues starting or restarting the master or the tablet server
Java API example
JBOD
JBOD Disk migration
JBOD setup
JDBC connection string syntax
JDBC connection string syntax
JDBC execution mode
Joins in Impala SELECT statements
JournalNodes
JournalNodes
JVM and garbage collection
Kafka
Kafka
Kafka
Kafka
Kafka Architecture
Kafka brokers and ZooKeeper
Kafka clients and ZooKeeper
Kafka Connect
Kafka Connect API Security
Kafka Connect Connector Reference
Kafka Connect Overview
Kafka Connect Setup
Kafka consumers
Kafka FAQ
Kafka Introduction
Kafka producers
Kafka public APIs
Kafka security hardening with ZooKeeper ACLs
Kafka storage handler and table properties
Kafka Streams
kafka-*-perf-test
kafka-configs
kafka-console-consumer
kafka-console-producer
kafka-consumer-groups
kafka-delegation-tokens
kafka-log-dirs
kafka-reassign-partitions
kafka-topics
Kafka-ZooKeeper performance tuning
Keep replicas current
Kerberos
Kerberos authentication
Kerberos configuration for Ozone
Kerberos configurations for HWC
Kerberos principal and keytab properties for Ozone service daemons
Kerberos setup guidelines for Distcp between secure clusters
Kernel stack watchdog traces
Key Features
Known issue: Ranger group mapping
Known issues and limitations
Known issues and limitations
Known Issues In Cloudera Runtime 7.1.1
Knox
Knox
Knox Supported Services Matrix
Knox Topology Management in Cloudera Manager
Kudu
Kudu
Kudu
Kudu
Kudu architecture in a CDP Private Cloud Base deployment
Kudu authentication with Kerberos
Kudu example applications
Kudu integration with Spark
Kudu master web interface
Kudu metrics
Kudu Python client
Kudu security
Kudu tablet server web interface
Kudu tracing
Kudu web interfaces
Kudu-Impala integration
LAG
LAST_VALUE
Launch a YARN service
Launch distcp
Launch Zeppelin
LAZY_PERSIST memory storage policy
LDAP authentication
LDAP properties
LDAP Settings
LEAD
Leader positions and in-sync replicas
Lengthy BalancerMember Route length
Leveraging Business Metadata
Lily HBase Batch Indexing for Cloudera Search
Lily HBase Near Real Time Indexing for Cloudera Search
LIMIT clause
Limit concurrent connections
Limit CPU usage with Cgroups
Limitations and restrictions for Impala UDFs
Limitations of Amazon S3
Limitations of erasure coding
Limitations of Phoenix-Hive connector
Limitations of the S3A Committers
Limiting the speed of compactions
Lineage lifecycle
Lineage overview
Linux Container Executor
List files in Hadoop archives
Listing available metrics
Literals
Live write access
Livy API reference for batch jobs
Livy API reference for interactive sessions
Livy batch object
Livy objects for interactive sessions
LOAD DATA statement
Loading ORC data into DataFrames using predicate push-down
Loading the Oozie database
Locating Hive tables and changing the location
Log Aggregation File Controllers
Log Aggregation Properties
Log cleaner
Log redaction
Logical operators, comparison operators and comparators
Logical replication
Logs and log segments
Low-latency analytical processing
Main Use Cases
Maintenance manager
Manage databases and tables
Manage HBase snapshots on Amazon S3 in Cloudera Manager
Manage HBase snapshots using Cloudera Manager
Manage HBase snapshots using the HBase shell
Manage individual delegation tokens
Manage partition retention time
Manage partitions
Manage Policies for HBase snapshots in Amazon S3
Manage queries
Manage query rewrites
Manage Queues
Manage reports
Manage the YARN service life cycle through the REST API
Management basics
Managing
Managing Access Control Lists
Managing Alert Policies and Notifiers
Managing Alert Policies using Streams Messaging Manager
Managing and Allocating Cluster Resources using Capacity Scheduler
Managing Apache Hadoop YARN Services
Managing Apache HBase
Managing Apache HBase Security
Managing Apache Hive
Managing Apache Impala
Managing Apache Kafka
Managing Apache Phoenix Security
Managing Apache Phoenix security
Managing Apache ZooKeeper
Managing Apache ZooKeeper Security
Managing Auditing with Ranger
Managing Business Terms with Atlas Glossaries
Managing Cloudera Search
Managing Collections in Cloudera Search
Managing columns
Managing Configs
Managing Configuration Using Configs or Instance Directories
Managing Cruise Control
Managing Data Storage
Managing disk space for Impala data
Managing existing Apache Knox shared providers
Managing Hue permissions
Managing Instance Directories
Managing Kafka Topics using Streams Messaging Manager
Managing Knox shared providers in Cloudera Manager
Managing Kudu with Cloudera Manager
Managing Logs
Managing Metadata in Impala
Managing Metadata in Impala
Managing partitions
Managing Resources in Impala
Managing Service Parameters for Apache Knox via Cloudera Manager
Managing services for Apache Knox via Cloudera Manager
Managing snapshot policies using Cloudera Manager
Managing storage elements by using the command-line interface
Managing tables
Managing topics across multiple Kafka clusters
Managing YARN Docker Containers
Managing YARN queue users
Managing, Deploying and Monitoring Connectors
Manually configuring SAML authentication
Manually failing over to the standby NameNode
MAP complex type
Mapping for an Ozone volume in Amazon S3 API
Mapping Phoenix schemas to HBase namespaces
MapReduce Indexing
MapReduce Job ACLs
MapReduceIndexerTool
MapReduceIndexerTool Input Splits
MapReduceIndexerTool Metadata
MapReduceIndexerTool Usage Syntax
Master
Mathematical functions
Maven artifacts
Maven Artifacts for Cloudera Runtime 7.1.1.0
MAX
MAX function
Maximizing storage resources using ORC
Memory
Memory limits
Merge data in tables
Merge process stops during Sqoop incremental imports
Metrics
Metrics and Insight
Migrate brokers by modifying broker IDs in meta.properties
Migrating Consumer Groups Between Clusters
Migrating Data from Cloudera Navigator to Atlas
Migrating Data Using Sqoop
Migrating Kudu data from one directory to another on the same host
Migrating Navigator content to Atlas
Migrating Solr Replicas
Migrating to multiple Kudu masters
Migration from Fair Scheduler to Capacity Scheduler
Migration Guide
MIN
MIN function
Minimizing cluster disruption during temporary planned downtime of a single tablet server
Miscellaneous functions
Missing Containers page
ML metadata collection
ML operations entities created in Atlas
MOB cache properties
Modify a custom service parameter
Modify a Kafka Topic
Modify a provider in an existing provider configuration
Modify GCS Bucket Permissions
Modify interpreter settings
Modifying a Connector
Modifying Impala Startup Options
Modifying Impala Startup Options
Modifying the session cookie timeout value
Monitor RegionServer grouping
Monitor the BlockCache
Monitor the performance of hedged reads
Monitoring
Monitoring and Debugging Spark Applications
Monitoring and Maintaining S3Guard
Monitoring Brokers
Monitoring Checkpoint Latency for Cluster Replication
Monitoring cluster health with ksck
Monitoring Cluster Profile
Monitoring Cluster Replications by Quick Ranges
Monitoring Cluster Replications Overview
Monitoring Clusters
Monitoring Connector Profile
Monitoring Connector Settings
Monitoring Connectors
Monitoring Consumers
Monitoring End-to-end Latency
Monitoring End-to-End Latency using Streams Messaging Manager
Monitoring heap memory usage
Monitoring Impala
Monitoring Kafka Cluster Replications using Streams Messaging Manager
Monitoring Kafka Clusters using Streams Messaging Manager
Monitoring Kafka Connect using Streams Messaging Manager
Monitoring NTP status
Monitoring Producers
Monitoring Replication Latency for Cluster Replication
Monitoring Replication with Streams Messaging Manager
Monitoring Status of the Clusters to be Replicated
Monitoring Throughput and Latency by Values
Monitoring Throughput for Cluster Replication
Monitoring Topics
Monitoring Topics to be Replicated
More Resources
Move HBase Master Role to another host
Moving a NameNode to a different host using Cloudera Manager
Moving data from databases to Apache Hive
Moving data from HDFS to Apache Hive
Moving highly available NameNode, failover controller, and JournalNode roles using the Migrate Roles wizard
Moving NameNode roles
Moving the Hue service to a different host
Moving the JournalNode edits directory for a role group using Cloudera Manager
Moving the JournalNode edits directory for a role instance using Cloudera Manager
Multi-server LDAP/AD authentication
Multilevel partitioning
MySQL: 1040, 'Too many connections' exception
NameNode architecture
NameNodes
NameNodes
NDV function
Near Real Time Indexing
Network and I/O threads
Networking parameters
New topic and consumer group discovery
Next steps
Non-covering range partitions
Notes about replication
Notifiers
NTILE
NTP clock synchronization
NTP configuration best practices
Number-of-Regions Quotas
Number-of-Tables Quotas
Off-heap BucketCache
OFFSET clause
Offsets Subcommand
On-demand Metadata
On-demand Metadata
On-premises to Cloud and Kafka Version Upgrade
Oozie
Oozie
Oozie
Oozie configurations with CDP services
Oozie database configurations
Oozie High Availability
Oozie scheduling examples
Operating system requirements
Operational Database
Operational Database
Operators
Optimize mountable HDFS
Optimizer hints
Optimizing data storage
Optimizing HBase I/O
Optimizing NameNode disk space with Hadoop archives
Optimizing performance
Optimizing performance for evaluating SQL predicates
Optimizing queries using partition pruning
Optimizing S3A read performance for different file types
Options to determine differences between contents of snapshots
ORC vs Parquet in CDP
Orchestrating a rolling restart with no downtime
ORDER BY clause
Other known issues
OVER
Overview
Overview
Overview
Overview
Overview of Apache HBase
Overview of Apache Phoenix
Overview of Hadoop archives
Overview of HDFS
Overview of Oozie
Overview of Proxy Usage and Load Balancing for Search
Overview of the Ozone Manager in High Availability
Overview page
Ozone
Ozone
Ozone
Ozone architecture
Ozone Manager nodes in High Availability
Packaging different versions of libraries with an Apache Spark application
PAM Authentication
Parameters to configure the Disk Balancer
Partition a cluster using node labels
Partition pruning
Partition Pruning for Queries
Partitioning
Partitioning
Partitioning examples
Partitioning for Kudu Tables
Partitioning guidelines
Partitioning limitations
Partitioning tables
Partitions
Partitions introduction
PERCENT_RANK
Perform a backup of the HDFS metadata
Perform a disk hot swap for DataNodes using Cloudera Manager
Perform ETL by ingesting data from Kafka into Hive
Perform hostname changes
Perform scans using HBase Shell
Perform the migration
Perform the recovery
Perform the removal
Performance and storage considerations for Spark SQL DROP TABLE PURGE
Performance Best Practices
Performance considerations
Performance Considerations
Performance considerations for UDFs
Performance Impact of Encryption
Performance tuning
Periodically rebuild a materialized view
Phoenix
Phoenix
Phoenix-Spark connector usage examples
Physical backups of an entire node
Physical backups of an entire node
Pipelines page
Plan the data movement across disks
Planning for Apache Impala
Planning for Streams Replication Manager
Pluggable authentication modules in HiveServer
pom.xml
Populating an HBase Table
Ports Used by Impala
POST /admin/audit/ API
Post-migration verification
Pre-defined Access Policies for Schema Registry
Predicate push-down optimization
Preloaded resource-based services and policies
Prepare for hostname changes
Prepare for removal
Prepare for the migration
Prepare for the recovery
Prepare to back up the HDFS metadata
Preparing the hardware resources for HDFS High Availability
Preparing the S3 Bucket
Preparing to Index Sample Tweets with Cloudera Search
Prerequisite
Prerequisites
Prerequisites for configuring short-circuit local reads
Prerequisites for configuring TLS/SSL for Oozie
Prerequisites for enabling erasure coding
Prerequisites for enabling HDFS HA using Cloudera Manager
Prerequisites for installing Docker
Prerequisites to configure TLS/SSL for HBase
Preventing inadvertent deletion of directories
Previewing tables using Data Preview
Primary key design
Primary key index
Problem area: Compose page
Problem area: Queries page
Problem area: Reports page
Propagating classifications through lineage
Properties for configuring centralized caching
Properties for configuring short-circuit local reads on HDFS
Properties for configuring the Balancer
Properties to set the size of the NameNode edits directory
Protocol between consumer and broker
Proxy Cloudera Manager through Apache Knox
Proxy users for Kerberos-enabled clusters
Pruning Old Data from S3Guard Tables
Purging deleted entities
PUT /admin/purge/ API
Queries are not appearing on the Queries page
Query column is empty but you can see the DAG ID and Application ID
Query correlated data
Query fails with "Counters limit exceeded" error message
Query Join Performance
Query live data from Kafka
Query options
Query results cache
Query Sample Data
Query the information_schema database
Query vectorization
Querying
Querying a schema
Querying an existing Kudu table from Impala
Querying files into a DataFrame
Querying Kafka data
Queue ACLs
Quota enforcement
Quota violation policies
Quotas
Rack awareness (Location awareness)
Raft consensus algorithm
Range partitioning
Range partitioning
Ranger
Ranger
Ranger access conditions
Ranger AD Integration
Ranger client caching
Ranger console navigation
Ranger KMS
Ranger Policies Overview
Ranger Security Zones
Ranger tag-based policies
Ranger UI authentication
Ranger UI authorization
Ranger user management
Ranger Usersync
RANK
Read access
Read and write operations
Read and write requests with Ozone Manager in High Availability
Read operations (scans)
Read replica properties
Reading data from HBase
Reading Hive ORC tables
Reading managed tables through HWC
Reads (scans)
REAL data type
Reassigning replicas between log directories
Reassignment examples
Rebalance after adding Kafka broker
Rebalance after demoting Kafka broker
Rebalance after removing Kafka broker
Rebalancing partitions
Rebalancing with Cruise Control
Rebuilding a Kudu filesystem layout
Recommendations for managing Docker containers on YARN
Recommendations for using the producer and consumer APIs
Recommended configurations for the Balancer
Recommended configurations for the balancer
Recommended deployment architecture
Recommended settings for G1GC
Record management
Record order and assignment
Records
Recover data from a snapshot
Recovering from a dead Kudu master in a multi-master deployment
Recovering from disk failure
Recovering from full disks
Redeploying the Oozie ShareLib
Redeploying the Oozie sharelib using Cloudera Manager
Reducing the Size of Data Structures
Refer to a table using dot notation
Reference architecture
Referencing S3 Data in Applications
Refining query search using filters
REFRESH AUTHORIZATION statement
REFRESH FUNCTIONS statement
REFRESH statement
Register the UDF
Registering a Lily HBase Indexer Configuration with the Lily HBase Indexer Service
Reload, view, and filter functions
Remote Topics
Remove a custom service parameter
Remove a DataNode
Remove a RegionServer from RegionServer grouping
Remove storage directories using Cloudera Manager
Removing Kudu masters from a multi-master deployment
Removing scratch directories
Reorder placement rules
Repair partitions manually using MSCK repair
Replace a disk on a DataNode host
Replace a ZooKeeper disk
Replace a ZooKeeper role on an unmanaged cluster
Replace a ZooKeeper role with ZooKeeper service downtime
Replace a ZooKeeper role without ZooKeeper service downtime
Replicate pre-existing data in an active-active deployment
Replicating Data
Replication
Replication across three or more clusters
Replication caveats
Replication Flows Overview
Replication requirements
Reporting Kudu crashes using breakpad
Request a timeline-consistent read
Requirements for Oozie High Availability
Reserved words
Resetting Hue user password
Resource distribution workflow
Resource Scheduling and Management
Resource Tuning Example
Resource-based Services and Policies
REST endpoints supported on Ozone S3 Gateway
Restore an HBase snapshot from Amazon S3
Restore an HBase snapshot from Amazon S3 with a new name
Restore data from a replica
Restore HDFS metadata from a backup using Cloudera Manager
Restoring a Solr Collection
Restoring NameNode metadata
Restoring tables from backups
Restrict access to Kafka metadata in ZooKeeper
Restricting Access to S3Guard Tables
Restricting supported ciphers for Hue
Retries
Retrieving log directory replica assignment information
REVOKE statement
Rotate Auto-TLS Certificate Authority and Host Certificates
Rotate the master key/secret
Row-level filtering and column masking in Hive
Row-level filtering in Hive with Ranger policies
ROW_NUMBER
RPC timeout traces
Run a Hive command
Running a query on a different Hive instance
Running a query on a different Hive instance
Running a Spark MLlib example
Running a tablet rebalancing tool in Cloudera Manager
Running a tablet rebalancing tool on a rack-aware cluster
Running an interactive session with the Livy API
Running Apache Spark Applications
Running Commands and SQL Statements in Impala Shell
Running Dockerized Applications on YARN
Running HBaseMapReduceIndexerTool
Running PySpark in a virtual environment
Running sample Spark applications
Running shell commands
Running Spark applications on secure clusters
Running Spark applications on YARN
Running Spark Python applications
Running tablet rebalancing tool
Running the balancer
Running the HBCK2 tool
Running YARN Services
Running your first Spark application
Runtime environment for UDFs
Runtime error: Could not create thread: Resource temporarily unavailable (error 11)
Runtime Filtering
S3 Performance Checklist
S3A and Checksums (Advanced Feature)
S3Guard: Operational Issues
Safely Writing to S3 Through the S3A Committers
SAML properties
Sample pom.xml file for Spark Streaming with Kafka
Save a YARN service definition
Saving aliases
Saving searches
Saving the search results
Scalability
Scalability Considerations
Scaling Kudu
Scaling Limits and Guidelines
Scaling recommendations and limitations
Scaling storage on Kudu master and tablet servers in the cloud
Scheduler performance improvements
Scheduling among queues
Scheduling in Oozie using cron-like syntax
Scheduling queries
Schema alterations
Schema design limitations
Schema design limitations
Schema Entities
Schema objects
Schema Registry
Schema Registry
Schema Registry Authorization through Ranger Access Policies
Schema Registry Component Architecture
Schema Registry Concepts
Schema Registry Overview
Schema Registry Overview
Schema Registry Use Cases
Schemaless Mode Overview and Best Practices
Script with HBase Shell
Search
Search
Search
Search
Search
Search and other Runtime components
Search applications
Search Ranger reports
Search Tutorial
Searching by Topic Name
Searching Cluster Replications by Source
Searching for entities using Business Metadata attributes
Searching for entities using classifications
Searching metadata tags
Searching overview
Searching queries
Searching tables
Searching using terms
Searching with Metadata
Secondary Sort
Secure Hive Metastore
Secure HiveServer using LDAP
Securing Access to Hadoop Cluster: Apache Knox
Securing Apache Hive
Securing Apache Kafka
Securing Atlas
Securing Atlas
Securing Cloudera Search
Securing configs with ZooKeeper ACLs and Ranger
Securing Cruise Control
Securing database connections with TLS/SSL
Securing DataNodes
Securing Hue
Securing Hue passwords with scripts
Securing Impala
Securing Kafka Connect
Securing Schema Registry
Securing sessions
Securing Streams Messaging Manager
Securing Streams Messaging Manager
Securing Streams Replication Manager
Securing the S3A Committers
Security
Security considerations
Security considerations for UDFs
Security limitations
Security Model and Operations on S3
Security overview
Security tokens in Ozone
SELECT statement
Server management limitations
Set Application-Master resource-limit for a specific queue
Set consumer and producer properties as table properties
Set default Application Master resource limit
Set global application limits
Set global maximum application priority
Set HADOOP_CONF to the destination cluster
Set Maximum Application limit for a specific queue
Set Ordering policies within a specific queue
Set properties in Cloudera Manager
Set Proxy Server Authentication for Clusters Using Kerberos
Set quotas using Cloudera Manager
SET statement
Set up a JDBC URL connection override
Set up a PostgreSQL database
Set up a storage policy for HDFS
Set up an Oracle database
Set up JDBCStorageHandler for Postgres
Set up MariaDB or MySQL database
Set up MirrorMaker in Cloudera Manager
Set Up Sqoop
Set up SSD storage using Cloudera Manager
Set up the cost-based optimizer and statistics
Set up the development environment
Set up WebHDFS on a secure cluster
Set user limits within a queue
Setting HDFS quotas
Setting Java System Properties for Solr
Setting Lucene Version
Setting Oozie permissions
Setting Python path variables for Livy
Setting the cache timeout
Setting the Idle Query and Idle Session Timeouts
Setting the Oozie database timezone
Setting the trash interval
Setting Timeout and Retries for Thrift Connections to Backend Client
Setting Timeouts in Impala
Setting up Data Cache for Remote Reads
Setting up Data Cache for Remote Reads
Setting Up HDFS Caching
Setting up OzoneFS
Setting up the backend Hive metastore database
Setting up the HortonworksSchemaRegistry Controller Service
Setting up the metastore database
Setting user limits for HBase
Setting user limits for Kafka
Settings to avoid data loss
Shell commands
Shiro Settings: Reference
shiro.ini Example
Show materialized views
SHOW MATERIALIZED VIEWS
SHOW statement
Showing Atlas Server status
SHUTDOWN statement
Simple Client Examples
SimpleConsumer.java
SimpleProducer.java
Single tablet write operations
Size the BlockCache
Sizing estimation based on network and disk message throughput
Sizing NameNode heap memory
Slow name resolution and nscd
SMALLINT data type
Snapshot failures
Solr
Solr and HDFS - the Block Cache
Solr Server Tuning Categories
solrctl Reference
Space quotas
Spark
Spark
Spark
Spark actions that produce Atlas entities
Spark application model
Spark audit entries
Spark cluster execution overview
Spark Direct Reader mode
Spark entities created in Apache Atlas
Spark entity metadata migration
Spark execution model
Spark Indexing
Spark integration best practices
Spark integration known issues and limitations
Spark integration limitations
Spark Job ACLs
Spark lineage
Spark metadata collection
Spark on YARN deployment modes
Spark relationships
Spark security
Spark SQL example
Spark Streaming and Dynamic Allocation
Spark Streaming Example
Spark troubleshooting
Spark tuning
spark-submit command options
Specify the JDBC connection string
Specify truststore properties
Specifying domains or pages to which Hue can redirect users
Specifying HTTP request methods
Specifying Impala Credentials to Access S3
Specifying racks for hosts
Speeding up Job Commits by Increasing the Number of Threads
Spooling Query Results
SQL migration
SQL statements
SQLContext and HiveContext
Sqoop
Sqoop
Sqoop
Sqoop Hive import stops when HS2 does not use Kerberos authentication
SRM Command Line Tools
SRM security example for a cluster environment managed by a single Cloudera Manager instance
SRM security example for a cluster environment managed by multiple Cloudera Manager instances
srm-control
srm-control Options Reference
SSE-C: Server-Side Encryption with Customer-Provided Encryption Keys
SSE-KMS: Amazon S3-KMS Managed Encryption Keys
SSE-S3: Amazon S3-Managed Encryption Keys
Start and stop queues
Start and stop the NFS Gateway services
Start compaction manually
Start HBase
Start Hive on an insecure cluster
Start Hive using a password
Starting and stopping HBase using Cloudera Manager
Starting and stopping Kudu processes
Starting Apache Hive
Starting the Lily HBase NRT Indexer Service
Starting the Oozie server
Statistics generation and viewing commands
STDDEV, STDDEV_SAMP, STDDEV_POP functions
Step 1: Generate keys and certificates for Kafka brokers
Step 1: Worker host configuration
Step 2: Create your own certificate authority
Step 2: Worker host planning
Step 3: Cluster size
Step 3: Sign the certificate
Step 4: Configure Kafka brokers
Step 5: Configure Kafka clients
Step 6: Verify container settings on cluster
Step 6A: Cluster container capacity
Step 6B: Container sanity checking
Step 7: MapReduce configuration
Step 7A: MapReduce sanity checking
Steps 4 and 5: Verify settings
Stop HBase
Stop replication in an emergency
Stopping Impala
Stopping the Oozie server
Storage
Storage
Storage group classification
Storage group pairing
Storage Systems Supports
Store HBase snapshots on Amazon S3
Storing Data Using Ozone
Storing medium objects (MOBs)
Streams Messaging
Streams Messaging
Streams Messaging Manager
Streams Messaging Manager
Streams Messaging Manager Overview
Streams Messaging Manager Overview
Streams Replication Manager
Streams Replication Manager
Streams Replication Manager
Streams Replication Manager
Streams Replication Manager Architecture
Streams Replication Manager Driver
Streams Replication Manager Overview
Streams Replication Manager Reference
Streams Replication Manager requirements
Streams Replication Manager Service
STRING data type
String functions
STRUCT complex type
Submit a Python app
Submit a Scala or Java application
Submitting batch applications using the Livy API
Submitting Spark applications
Submitting Spark Applications to YARN
Submitting Spark applications using Livy
Subqueries in Impala SELECT statements
Subquery restrictions
Subscribing to a topic
SUM
SUM function
Switching from CMS to G1GC
Symbolizing stack traces
Synchronize table data using HashTable/SyncTable tool
Synchronizing the contents of JournalNodes
System Level Broker Tuning
System metadata migration
Table
Table and Column Statistics
Tables
TABLESAMPLE clause
Tablet
Tablet history garbage collection and the ancient history mark
Tablet server
Tag-based Services and Policies
Tags and policy evaluation
Take a snapshot using a shell script
Take HBase snapshots
Task architecture and load-balancing
Terms
Test MOB storage and retrieval performance
Testing the LDAP configuration
The certmanager utility
The Cloud Storage Connectors
The HDFS mover command
The perfect schema
The S3A Committers and Third-Party Object Stores
Thread Tuning for S3A Data Upload
Threads
Thrift Server crashes after receiving invalid data
Throttle quota examples
Throttle quotas
Timeline consistency
TIMESTAMP compatibility for Parquet files
TIMESTAMP data type
TINYINT data type
TLS
Tombstoned or STOPPED tablet replicas
Tool usage
Top-down process for adding a new metadata source
Topics
Topics and Groups Subcommand
Tracking an Apache Hive query in YARN
Tracking Hive on Tez query execution
Transactional table access
Transactions
Transactions
Trash behavior with HDFS Transparent Encryption enabled
Troubleshoot RegionServer grouping
Troubleshooting
Troubleshooting Apache Hadoop YARN
Troubleshooting Apache HBase
Troubleshooting Apache Hive
Troubleshooting Apache Impala
Troubleshooting Apache Kudu
Troubleshooting Apache Kudu
Troubleshooting Apache Sqoop
Troubleshooting Cloudera Search
Troubleshooting Data Analytics Studio
Troubleshooting Docker on YARN
Troubleshooting HBase
Troubleshooting Hue
Troubleshooting Impala
Troubleshooting Linux Container Executor
Troubleshooting NTP stability problems
Troubleshooting on YARN
Troubleshooting performance issues
Troubleshooting replication failure in the DAS Event Processor
Troubleshooting S3 and S3Guard
Troubleshooting SAML authentication
Troubleshooting the S3A Committers
TRUNCATE TABLE statement
Trusted users
Tuning Apache Hadoop YARN
Tuning Apache Kafka Performance
Tuning Apache Spark
Tuning Apache Spark Applications
Tuning Cloudera Search
Tuning Garbage Collection
Tuning Hue
Tuning Impala
Tuning Replication
Tuning Resource Allocation
Tuning S3A Uploads
Tuning Spark Shuffle Operations
Tuning the metastore
Tuning the Number of Partitions
Turning safe mode on HA NameNodes
Tutorial
UDF concepts
UI Tools
Unable to authenticate users in Hue using SAML
Unable to connect Oracle database to Hue using SCAN
Unable to terminate Hive queries from Job Browser
Unable to view new databases and tables, or unable to see changes to the existing databases or tables
Unable to view or create Oozie workflows
Understanding
Understanding --go-live and HDFS ACLs
Understanding Apache Phoenix-Hive connector
Understanding Apache Phoenix-Spark connector
Understanding erasure coding policies
Understanding HBase garbage collection
Understanding Hue users and groups
Understanding NiFi Record Based Processing
Understanding Performance using EXPLAIN Plan
Understanding Performance using Query Profile
Understanding Performance using SUMMARY Report
Understanding Replication Flows
Understanding the extractHBaseCells Morphline Command
Understanding the extractHBaseCells Morphline Command
Understanding the kafka-run-class Bash Script
Understanding YARN architecture
UNION clause
Unlock Kafka metadata in ZooKeeper
Unsupported Apache Spark Features
Unsupported command line tools
Unsupported Interfaces and Features
Update data in a table
UPDATE statement
Updating a Notifier
Updating a row
Updating an Alert Policy
Updating in bulk
Updating the Schema in a Solr Collection
Uploading tables
Upsert option in Kudu Spark
UPSERT statement
Upserting a row
URL schema for Ozone S3 Gateway
URL to browse Ozone buckets
Usability issues
Use a CTE in a query
Use a custom MapReduce job
Use a subquery
Use BulkLoad
Use Case 1: Registering and Querying a Schema for a Kafka Topic
Use Case 2: Reading/Deserializing and Writing/Serializing Data from and to a Kafka Topic
Use Case 3: Dataflow Management with Schema-based Routing
Use Case Architectures
Use cases
Use cases for ACLs on HDFS
Use cases for BulkLoad
Use cases for centralized cache management
Use cases for HBase
Use Cgroups
Use cluster replication
Use CopyTable
Use CPU scheduling
Use CPU scheduling with distributed shell
Use curl to access a URL protected by Kerberos HTTP SPNEGO
Use FPGA scheduling
Use FPGA with distributed shell
Use GPU scheduling
Use GPU scheduling with distributed shell
Use GZipCodec with a one-time job
Use HashTable and SyncTable Tool
Use HWC for streaming
Use materialized view optimizations from a subquery
Use multiple ZooKeeper services
Use node labels
Use rsync to copy files from one broker to another
Use snapshots
Use Spark
Use Sqoop
USE statement
Use the Apache Thrift Proxy API
Use the HBase APIs for Java
Use the HBase command-line utilities
Use the HBase REST server
Use the HBase shell
Use the Hue HBase app
Use the JDBC interpreter to access Hive
Use the JDBC interpreter to access Phoenix
Use the Livy interpreter to access Spark
Use the Network Time Protocol (NTP) with HBase
Use the YARN CLI to View Logs for Applications
Use the YARN REST APIs to manage applications
Use the yarn rmadmin tool to administer ResourceManager high availability
User Account Requirements
User authentication in Hue
User management in Hue
User-defined functions (UDFs)
Using --go-live with SSL or Kerberos
Using a credential provider to secure S3 credentials
Using advanced search
Using Apache HBase Backup and Disaster Recovery
Using Apache Hive
Using Apache Impala with Apache Kudu
Using Apache Impala with Apache Kudu
Using Apache Phoenix to Store and Access Data
Using Apache Zeppelin
Using Avro Data Files
Using Basic Search
Using Breakpad Minidumps for Crash Reporting
Using chrony for time synchronization
Using CLI commands to create and list ACLs
Using Cloudera Manager to manage HDFS HA
Using cluster names in the kudu command line tool
Using common table expressions
Using Configuration Properties to Authenticate
Using constraints
Using Custom JAR Files with Search
Using custom libraries with Spark
Using Data Analytics Studio
Using dfs.datanode.max.transfer.threads with HBase
Using DistCp
Using DistCp between HA clusters using Cloudera Manager
Using DistCp to copy files
Using DistCp with Amazon S3
Using DistCp with Highly Available remote clusters
Using DNS with HBase
Using EC2 Instance Metadata to Authenticate
Using Environment Variables to Authenticate
Using erasure coding for existing data
Using erasure coding for new data
Using Free-text Search
Using functions
Using governance-based data discovery
Using HBase blocksize
Using HBase coprocessors
Using HBase replication
Using HBase scanner heartbeat
Using HDFS snapshots for data protection
Using HdfsFindTool to find files
Using hedged reads
Using Hive Warehouse Connector with Oozie Spark action
Using HttpFS to provide access to HDFS
Using Hue
Using Hue
Using Impala to query Kudu tables
Using JDBC API
Using JdbcStorageHandler to query RDBMS
Using JdbcStorageHandler to query RDBMS
Using JMX for accessing HDFS metrics
Using Kafka Connect
Using Kafka's inter-broker security
Using Livy with interactive notebooks
Using Livy with Spark
Using Load Balancer with HttpFS
Using MapReduce Batch Indexing to Index Sample Tweets
Using materialized views
Using metadata for cluster governance
Using non-JDBC drivers
Using ORC Data Files
Using Ozone S3 Gateway to work with storage elements
Using Parquet Data Files
Using Per-Bucket Credentials to Authenticate
Using PySpark
Using quota management
Using rack awareness for read replicas
Using Ranger to Provide Authorization in CDP
Using Ranger with Ozone
Using RCFile Data Files
Using Record-Enabled Processors
Using RegionServer grouping
Using S3Guard for Consistent S3 Metadata
Using Schema Registry
Using Search filters
Using SequenceFile Data Files
Using solrctl with an HTTP proxy
Using Spark Hive Warehouse and HBase Connector Client .jar files with Livy
Using Spark MLlib
Using Spark SQL
Using Spark Streaming
Using Spark with a secure Kudu cluster
Using Sqoop actions with Oozie
Using Streams Replication Manager
Using tag attributes and values in Ranger tag-based policy conditions
Using Text Data Files
Using the Apache Knox Gateway UI
Using the CDS 3 Maven Repo
Using the Charts Library with the Kudu service
Using the Cloudera Runtime Maven Repository
Using the Database Explorer
Using the Directory Committer in MapReduce
Using the HBase-Spark connector
Using the HBCK2 tool to remediate HBase clusters
Using the Indexer HTTP Interface
Using the Lily HBase NRT Indexer Service
Using the Livy API to run Spark jobs
Using the NFS Gateway for accessing HDFS
Using the Note Toolbar
Using the Ranger Console
Using the REST API
Using the REST proxy API
Using the S3Guard CLI
Using the S3Guard Command to List and Delete Uploads
Using the Spark DataFrame API
Using Unique Filenames to Avoid File Update Inconsistency
Using YARN Web UI and CLI
Using Zeppelin Interpreters
UTF-8 codec error
Validating the Cloudera Search Deployment
VALUES statement
VARCHAR data type
Varchar type
VARIANCE, VARIANCE_SAMP, VARIANCE_POP, VAR_SAMP, VAR_POP functions
Variations on Put
Verify that replication works
Verify the ZooKeeper authentication
Verify validity of the NFS services
Verifying if a memory limit is sufficient
Verifying That an S3A Committer Was Used
Verifying that Indexing Works
Verifying that S3Guard is Enabled on a Bucket
Verifying the Impala dependency on Kudu
Versions
View All Applications
View and Modify Cloudera Search Configuration
View and Modify Log Levels for Cloudera Search and Related Services
View application details
View audit details
View Cluster Overview
View compaction progress
View Nodes and Node Details
View query details
View Queues and Queue Details
View Ranger reports
View transaction locks
View transactions
Viewing detailed information
Viewing Existing Solr Collections
Viewing lineage
Viewing racks assigned to cluster hosts
Viewing Replication Details
Viewing storage information
Viewing table and column statistics
Viewing the API documentation
Viewing the DAG counters
Viewing the DAG flow
Viewing the Hive configurations for a query
Viewing the Join report
Viewing the query details
Viewing the query recommendations
Viewing the query timeline
Viewing the Read and Write report
Viewing the task-level DAG information
Viewing the Tez configurations for a query
Viewing the visual explain for a query
Views
Virtual machine options for HBase Shell
Virtual memory handling
Web UI encryption
Web UI redaction
Web User Interface for Debugging
What's New
When Shuffles Do Not Occur
When to Add a Shuffle Transformation
When to use Atlas classifications for access control
Why HDFS data becomes unbalanced
Why one scheduler?
Wildcards and variables in resource-based policies
WINDOW
WITH clause
Work Preserving Recovery for YARN components
Working with Amazon S3
Working with Apache Hive Metastore
Working with Atlas classifications and labels
Working with Classifications and Labels
Working with Google Cloud Storage
Working with Ozone ACLs
Working with Ozone File System
Working with S3 buckets in the same AWS region
Working with the Oozie server
Working with the Recon web user interface
Working with Third-party S3-compatible Object Stores
Working with versioned S3 buckets
Working with Zeppelin Notes
Write transformed Hive data to Kafka
Write-ahead log garbage collection
Writes
Writing data to HBase
Writing data to Kafka
Writing managed tables through HWC
Writing to multiple tablets
Writing UDFs
Writing user-defined aggregate functions (UDAFs)
YARN
YARN
YARN
YARN
YARN ACL rules
YARN ACL syntax
YARN ACL types
YARN Configuration Properties
YARN Features
YARN Log Aggregation Overview
YARN resource allocation of multiple resource-types
YARN ResourceManager High Availability
YARN ResourceManager high availability architecture
YARN services API examples
YARN tuning overview
Zeppelin
Zeppelin
ZooKeeper
ZooKeeper
ZooKeeper
ZooKeeper ACLs Best Practices
ZooKeeper ACLs Best Practices: Atlas
ZooKeeper ACLs Best Practices: HBase
ZooKeeper ACLs Best Practices: HDFS
ZooKeeper ACLs Best Practices: Kafka
ZooKeeper ACLs Best Practices: Oozie
ZooKeeper ACLs Best Practices: Ranger
ZooKeeper ACLs Best Practices: Search
ZooKeeper ACLs Best Practices: YARN
ZooKeeper ACLs Best Practices: ZooKeeper
ZooKeeper Authentication
zookeeper-security-migration
Configuring Data Protection
Edit or delete a snapshot policy
You can use Cloudera Manager to edit or delete existing snapshot policies.
Select Backup > Snapshot Policies in the left navigation bar. Existing snapshot policies are shown in a table on the Snapshot Policies page.
Click the actions menu next to a policy and select Edit Configuration or Delete.
If you are editing the policy, make the required changes and click Save Policy.
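Snapshot policies created in Cloudera Manager ultimately drive HDFS snapshot operations. Where you need to script or verify snapshots directly, the underlying HDFS CLI can be used; the following is a sketch assuming a snapshottable directory `/data` and illustrative snapshot names:

```shell
# Allow snapshot creation on a directory (requires HDFS superuser privileges)
hdfs dfsadmin -allowSnapshot /data

# Create a named snapshot of the directory
hdfs dfs -createSnapshot /data snap-before-upgrade

# List all snapshottable directories visible to the current user
hdfs lsSnapshottableDir

# Show the differences between two snapshots of the same directory
hdfs snapshotDiff /data snap-before-upgrade snap-after-upgrade

# Delete a snapshot that is no longer needed
hdfs dfs -deleteSnapshot /data snap-before-upgrade
```

Snapshots created manually this way are visible under the read-only `.snapshot` subdirectory of `/data` and are not managed by the Cloudera Manager policy schedule.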
Parent topic: Managing snapshot policies using Cloudera Manager