Using Apache Hive
Apache Hive 3 tables
Hive table locations
Refer to a table using dot notation
Create a CRUD transactional table
Create an insert-only transactional table
Drop an external table along with data
Convert a managed non-transactional table to external
Create an S3-based external table
Using constraints
Determine the table type
Hive 3 ACID transactions
Apache Hive query basics
Query the information_schema database
Insert data into a table
Update data in a table
Merge data in tables
Delete data from a table
Use a subquery
Subquery restrictions
Use wildcards with SHOW DATABASES
Aggregate and group data
Query correlated data
Using common table expressions
Use a CTE in a query
Compare tables using ANY/SOME/ALL
Escape an invalid identifier
CHAR data type support
ORC vs Parquet in CDP
Create a default directory for managed tables
Generate surrogate keys
Partitions introduction
Create partitions dynamically
Manage partitions
Automate partition discovery and repair
Repair partitions manually using MSCK repair
Manage partition retention time
Scheduling queries
Enable scheduled queries
Periodically rebuild a materialized view
Get scheduled query information and monitor the query
Using materialized views
Create and use a materialized view
Create the tables and view
Verify use of a query rewrite
Use materialized view optimizations from a subquery
Drop a materialized view
Show materialized views
Describe a materialized view
Manage query rewrites
Create and use a partitioned materialized view
CDW stored procedures
Setting up a CDW client
Creating a function
Using the cursor to return record sets
HPL/SQL examples
Using functions
Reload, view, and filter functions
Create a user-defined function
Set up the development environment
Create the UDF class
Build the project and upload the JAR
Register the UDF
Call the UDF in a query