Hive unsupported interfaces and features in public clouds
You need to understand the interfaces that are not supported in CDP.
The following interfaces are not supported:
- HCat CLI (however, HCatalog is supported)
- Hive CLI (replaced by Beeline)
- Hive View UI feature in Ambari
- Renaming Hive databases
- Multiple insert overwrite queries that read data from a source table
- MapReduce execution engine (replaced by LLAP)
- Pig
- Spark execution engine
- Spark thrift server
Spark and Hive tables interoperate using the Hive Warehouse Connector.
- SQL Standard Authorization
- Tez View UI feature in Ambari
- WebHCat
You can use Hue in lieu of Hive View.
Hive-Kudu integration
CDP does not support the integration of HiveServer (HS2) with Kudu tables. You cannot run queries against Kudu tables from HS2.
Unsupported Features
CDP does not support the following features that were available in HDP and CDH platforms:
- CREATE TABLE that specifies a managed table location
Do not use the LOCATION clause to create a managed table. Hive assigns a default location in the warehouse to managed tables. That default location is configured in Hive using the hive.metastore.warehouse.dir configuration property, but can be overridden for a database by setting the MANAGEDLOCATION clause of the CREATE DATABASE statement.
- CREATE INDEX and related index commands were removed in Hive 3, and consequently are not supported in CDP.
In CDP, you use the Hive 3 default ORC columnar file format to achieve the performance benefits of indexing. Materialized views with automatic query rewriting also improve performance. Indexes migrated to CDP are preserved but render any Hive tables with an undroppable index. To drop the index, see the Known Issue for CDPD-23041.
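The two DDL points above can be sketched as follows; the database name, managed location path, and table definition are hypothetical examples, not values from your cluster:

```sql
-- Set the managed-table location at the database level instead of
-- using a LOCATION clause in CREATE TABLE (path and names are
-- hypothetical).
CREATE DATABASE sales_db
MANAGEDLOCATION '/warehouse/tablespace/managed/sales_db';

-- Create a managed table with no LOCATION clause; Hive places it
-- under the database's managed location. ORC, the Hive 3 default
-- file format, provides indexing-style performance through built-in
-- min/max statistics and optional bloom filters.
CREATE TABLE sales_db.orders (
  order_id BIGINT,
  customer_id BIGINT,
  amount DECIMAL(10,2)
)
STORED AS ORC
TBLPROPERTIES ('orc.bloom.filter.columns' = 'customer_id');
```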
- Hive metastore (HMS) high availability (HA) load balancing in CDH
You need to set up HMS HA as described in the documentation.
- Local or Embedded Hive metastore server
CDP does not support the use of a local or embedded Hive metastore setup.
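In contrast to an embedded metastore, a remote metastore setup has Hive clients connect to a standalone HMS service over Thrift. A minimal hive-site.xml sketch of that setup, with a hypothetical host name:

```xml
<!-- Point Hive clients at a remote, standalone metastore service.
     The host name below is a placeholder for your HMS host. -->
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://metastore-host.example.com:9083</value>
</property>
```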
Unsupported Connector Use
CDP does not support Sqoop exports using the hadoop jar command (the Java API) that Teradata documents. For more information, see Migrating data using Sqoop.