Apache Hive features
Major changes between Apache Hive 2.x and Apache Hive 3.x improve transaction processing and security. Knowing the major differences between these versions is critical for SQL users, including those who use Apache Spark and Apache Impala.
Hive is a data warehouse system for summarizing, querying, and analyzing huge, disparate data sets.
ACID transaction processing
Hive 3 tables are ACID (Atomicity, Consistency, Isolation, and Durability)-compliant. Hive 3 write and read operations improve the performance of transactional tables. Atomic operations include simple writes and inserts, writes to multiple partitions, and multiple insert clauses in a single statement. A read operation is not affected by changes that occur during the operation. You can insert or delete data, and it remains consistent through software and hardware crashes. Creation and maintenance of Hive tables are simplified because there is no longer any need to bucket tables.
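The following is a minimal sketch of how a transactional table behaves in Hive 3; the table and column names are illustrative. A plain CREATE TABLE produces a managed, ACID table by default, and row-level changes are atomic:

```sql
-- Hive 3: a plain CREATE TABLE creates a managed, full-ACID table by default.
CREATE TABLE customers (id INT, name STRING, state STRING);

INSERT INTO customers VALUES (1, 'Ana', 'CA'), (2, 'Raj', 'NY');

-- Row-level changes are atomic; concurrent readers see a consistent snapshot.
UPDATE customers SET state = 'WA' WHERE id = 2;
DELETE FROM customers WHERE id = 1;
```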
Materialized views
Because multiple queries frequently need the same intermediate roll-up or joined table, you can avoid costly, repetitious recomputation of those query portions by precomputing and caching the intermediate results as materialized views.
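A minimal sketch of this workflow follows; the table, column, and view names are illustrative. The materialized view precomputes a join and aggregation that many reporting queries share, and it can be refreshed when the base tables change:

```sql
-- Precompute an aggregation over a join that many reports need.
CREATE MATERIALIZED VIEW store_sales_mv AS
SELECT s.store_id, SUM(f.amount) AS total_amount
FROM fact_sales f JOIN dim_store s ON f.store_id = s.store_id
GROUP BY s.store_id;

-- Refresh the view after the base tables change.
ALTER MATERIALIZED VIEW store_sales_mv REBUILD;
```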
Query results cache
Hive filters and caches similar or identical queries, and does not recompute data that has not changed. Caching repetitive queries can substantially reduce the load when hundreds or thousands of users of BI tools and web services query Hive.
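The results cache is controlled by Hive configuration properties; the following sketch shows the main switch as documented upstream (verify the property name and default against your release):

```sql
-- Enabled by default in Hive 3; set to false to bypass the query results cache for a session.
SET hive.query.results.cache.enabled=true;
```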
Scheduled Queries
Using SQL statements, you can schedule Hive queries to run on a recurring basis, monitor query progress, temporarily ignore a query schedule, and limit the number of scheduled queries running in parallel. You can use scheduled queries, for example, to start compaction and to periodically rebuild materialized views.
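The following sketch schedules a periodic materialized view rebuild and then pauses it; the query name, view name, and interval are illustrative, and the exact syntax should be verified against your Hive release:

```sql
-- Rebuild a materialized view every 10 minutes.
CREATE SCHEDULED QUERY rebuild_store_sales_mv
EVERY 10 MINUTES AS
ALTER MATERIALIZED VIEW store_sales_mv REBUILD;

-- Temporarily pause the schedule without dropping it.
ALTER SCHEDULED QUERY rebuild_store_sales_mv DISABLED;
```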
Security improvements
Apache Ranger secures Hive data by default. To meet demands for concurrency improvements, ACID support, security, and other features, Hive tightly controls the location of the warehouse on a file system or object store, as well as memory resources.
With Apache Ranger and Apache Hive ACID support, your organization will be ready to support and implement GDPR (General Data Protection Regulation).
Connection Pooling
Hive supports HikariCP JDBC connection pooling.
Unsupported features
CDP does not support the following features that were available in HDP and CDH platforms:
- Multiple insert overwrite queries that read data from a source table.
- CREATE TABLE that specifies a managed table location
Do not use the LOCATION clause to create a managed table. Hive assigns a default location in the warehouse to managed tables (see the example after this list).
- CREATE INDEX
Hive automatically builds and stores indexes in ORC or Parquet within the main table, instead of in a separate table. Set hive.optimize.index.filter to enable their use (not recommended; use materialized views instead). Existing indexes are preserved in Parquet or ORC and migrated to CDP during the upgrade.
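To illustrate the managed-table restriction above, the following sketch contrasts a managed table, which omits the LOCATION clause, with an external table, where specifying a path is still supported; the table names and path are illustrative:

```sql
-- Managed table: omit LOCATION; Hive places it in the managed warehouse directory.
CREATE TABLE orders (id INT, amount DECIMAL(10,2));

-- To control the storage path yourself, use an external table instead.
CREATE EXTERNAL TABLE orders_ext (id INT, amount DECIMAL(10,2))
LOCATION '/data/orders_ext';
```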