Hive unsupported interfaces and features
You need to know which interfaces available in HDP or CDH platforms are not supported in CDP.
The following interfaces are not supported in CDP Private Cloud Base:
- Druid
- HCat CLI (however, HCatalog itself is supported)
- Hive CLI (replaced by Beeline)
- Hive View UI feature in Ambari (you can use Hue in lieu of Hive View)
- Apache Hive Standalone driver
- Renaming Hive databases
- Multiple insert overwrite queries that read data from a source table.
- LLAP
- MapReduce execution engine (replaced by Tez)
- Pig
- S3 for storing tables
- Spark execution engine (replaced by Tez)
- Spark thrift server
Spark and Hive tables interoperate using the Hive Warehouse Connector.
- SQL Standard Authorization
- Tez View UI feature in Ambari
- WebHCat
Storage Based Authorization
Storage Based Authorization (SBA) is no longer supported in CDP. Ranger integration with the Hive metastore provides consistency with the Ranger authorization enabled in HiveServer (HS2). SBA does not provide authorization support for metadata that lacks a file/directory association. Ranger-based authorization has no such limitation.
Hive-Kudu integration
CDP does not support the integration of HiveServer (HS2) with Kudu tables. You cannot run queries against Kudu tables from HS2.
Partially unsupported interfaces
Apache Hadoop Distributed Copy (DistCp) is not supported for copying Hive ACID tables. See link below.
Unsupported Features
CDP does not support the following features that were available in HDP and CDH platforms:
- Replicate Hive ACID tables between CDP Private Cloud Base clusters using REPL commands
You cannot use the REPL commands (REPL DUMP and REPL LOAD) to replicate Hive ACID table data between two CDP Private Cloud Base clusters.
- CREATE TABLE that specifies a managed table location
Do not use the LOCATION clause to create a managed table. Hive assigns a default location in the warehouse to managed tables. That default location is configured in Hive using the hive.metastore.warehouse.dir configuration property, but can be overridden per database with the MANAGEDLOCATION clause of CREATE DATABASE (see the first sketch following this list).
- CREATE INDEX
Hive automatically builds and stores indexes in ORC or Parquet within the main table, rather than in a separate table. Set the hive.optimize.index.filter property to enable their use (not recommended; use materialized views instead, as shown in the second sketch following this list). Existing indexes are preserved and migrated in Parquet or ORC to CDP during the upgrade.
- Hive metastore (HMS) high availability (HA) load balancing in CDH
You need to set up HMS HA as described in the documentation.
- Local or Embedded Hive metastore server
CDP does not support the use of a local or embedded Hive metastore setup.
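The following is a minimal sketch of the managed table location behavior described above; the database name, table name, and paths are hypothetical. The managed location is set per database at creation time, and the managed table is created without a LOCATION clause so that Hive places it under that location:

```sql
-- Hypothetical database and paths: override the managed-table location per database
CREATE DATABASE sales_db
  MANAGEDLOCATION '/warehouse/tablespace/managed/hive/sales_db'
  LOCATION '/warehouse/tablespace/external/hive/sales_db';

-- No LOCATION clause: Hive places this managed (ACID) table under the database's managed location
CREATE TABLE sales_db.orders (
  order_id   BIGINT,
  order_date DATE,
  amount     DECIMAL(10,2)
);
```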
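In place of CREATE INDEX, the following is a minimal materialized view sketch; the table, column, and view names are hypothetical and continue the example above. Hive can rewrite eligible queries to read from the precomputed view instead of the base table:

```sql
-- Hypothetical view over a transactional (managed) source table
CREATE MATERIALIZED VIEW sales_db.mv_orders_by_day AS
SELECT order_date, SUM(amount) AS total_amount
FROM sales_db.orders
GROUP BY order_date;

-- Refresh the materialized view after the source table changes
ALTER MATERIALIZED VIEW sales_db.mv_orders_by_day REBUILD;
```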
Unsupported Connector Use
CDP does not support Sqoop exports using the Hadoop jar command (the Java API) that Teradata documents. For more information, see Migrating data using Sqoop.