Using Apache Hive
Apache Hive 3 tables
    Hive table locations
    Refer to a table using dot notation
    Create a CRUD transactional table
    Create an insert-only transactional table
    Drop an external table along with data
    Convert a managed non-transactional table to external
    Create an S3-based external table
    Using constraints
    Determine the table type
Hive 3 ACID transactions
Apache Hive query basics
    Query the information_schema database
    Inserting data into a table
    Updating data in a table
    Merging data in tables
    Deleting data from a table
    Use a subquery
        Subquery restrictions
    Use wildcards with SHOW DATABASES
    Aggregate and group data
    Query correlated data
    Using common table expressions
        Use a CTE in a query
    Compare tables using ANY/SOME/ALL
    Escape an invalid identifier
    CHAR data type support
    ORC vs Parquet formats
    Create a default directory for managed tables
    Generate surrogate keys
Partitions introduction
Create partitions dynamically
Manage partitions
    Automate partition discovery and repair
    Repair partitions manually using MSCK repair
    Manage partition retention time
Scheduling queries
    Enable scheduled queries
    Periodically rebuilding a materialized view
    Get scheduled query information and monitor the query
Speeding up queries using materialized views
    Creating and using a materialized view
        Create the tables and view
        Verify use of a query rewrite
    Using optimizations from a subquery
    Dropping a materialized view
    Showing materialized views
    Describe a materialized view
    Managing query rewrites
    Purposely using a stale materialized view
    Creating and using a partitioned materialized view
CDW stored procedures
    Setting up a CDW client
    Creating a function
    Using the cursor to return record sets
    HPL/SQL examples
Using functions
    Reload, view, and filter functions
    Create a user-defined function
        Set up the development environment
        Create the UDF class
        Build the project and upload the JAR
        Register the UDF
        Call the UDF in a query