Apache Hive Known Issues
— UDF in_file() does not accept arguments of type CHAR or VARCHAR
— UDF translate() does not accept arguments of type CHAR or VARCHAR
— UDF printf() does not accept arguments of type CHAR or VARCHAR
— Select * fails on Parquet tables with the map data type
— Hive's Timestamp type cannot be stored in Parquet
Tables containing TIMESTAMP columns cannot use Parquet as their file format.
Bug: HIVE-6394
Severity: Low
Workaround: Use a different file format.
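For example, a table with a TIMESTAMP column could be stored as a text file instead (the table and column names below are illustrative only):
CREATE TABLE events (
  event_time TIMESTAMP,
  message STRING
)
STORED AS TEXTFILE;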
— Hive's Decimal type cannot be stored in Parquet and Avro
— Hive creates an invalid table if you specify more than one partition with alter table
Hive (in all known versions from 0.7 on) lets you add multiple partitions with a single ALTER TABLE statement, but the partition configuration it creates is invalid for both Hive and Impala.
Bug: None
Severity: Medium
Resolution: Use workaround.
Workaround: Add each partition with its own ALTER TABLE statement. For example, replace:
ALTER TABLE page_view ADD PARTITION (dt='2008-08-08', country='us') location '/path/to/us/part080808' PARTITION (dt='2008-08-09', country='us') location '/path/to/us/part080809';
with:
ALTER TABLE page_view ADD PARTITION (dt='2008-08-08', country='us') location '/path/to/us/part080808';
ALTER TABLE page_view ADD PARTITION (dt='2008-08-09', country='us') location '/path/to/us/part080809';
— PostgreSQL 9.0+ requires additional configuration
Hive metastore operations that query table names can fail with an error such as:
Caused by: javax.jdo.JDODataStoreException: Error executing JDOQL query "SELECT "THIS"."TBL_NAME" AS NUCORDER0 FROM "TBLS" "THIS" LEFT OUTER JOIN "DBS" "THIS_DATABASE_NAME" ON "THIS"."DB_ID" = "THIS_DATABASE_NAME"."DB_ID" WHERE "THIS_DATABASE_NAME"."NAME" = ? AND (LOWER("THIS"."TBL_NAME") LIKE ? ESCAPE '\\' ) ORDER BY NUCORDER0 " : ERROR: invalid escape string
  Hint: Escape string must be empty or one character.
NestedThrowables:
org.postgresql.util.PSQLException: ERROR: invalid escape string
  Hint: Escape string must be empty or one character.
    at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:313)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:252)
    at org.apache.hadoop.hive.metastore.ObjectStore.getTables(ObjectStore.java:759)
    ... 28 more
Caused by: org.postgresql.util.PSQLException: ERROR: invalid escape string
  Hint: Escape string must be empty or one character.
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2096)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1829)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:257)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:510)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:386)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:271)
    at org.apache.commons.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:96)
    at org.apache.commons.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:96)
    at org.datanucleus.store.rdbms.SQLController.executeStatementQuery(SQLController.java:457)
    at org.datanucleus.store.rdbms.query.legacy.SQLEvaluator.evaluate(SQLEvaluator.java:123)
    at org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.performExecute(JDOQLQuery.java:288)
    at org.datanucleus.store.query.Query.executeQuery(Query.java:1657)
    at org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.executeQuery(JDOQLQuery.java:245)
    at org.datanucleus.store.query.Query.executeWithArray(Query.java:1499)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:243)
    ... 29 more
The problem is caused by a backward-incompatible change in the default value of the standard_conforming_strings property: versions of PostgreSQL before 9.0 defaulted to off, but starting with version 9.0 the default is on.
Bug: None
Severity: Low
Resolution: Use workaround.
Workaround: Turn standard_conforming_strings off for the Hive metastore database by running the following command as a database administrator:
ALTER DATABASE <hive_db_name> SET standard_conforming_strings = off;
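The setting applies to new connections to that database. As an illustrative check (not part of the documented workaround), reconnect with psql and run:
SHOW standard_conforming_strings;
The value reported should be off.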
— Queries spawned from MapReduce jobs in MRv1 fail if mapreduce.framework.name is set to yarn
Queries spawned from MapReduce jobs fail under MRv1 if mapred-site.xml contains:
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
Bug: None
Severity: High
Resolution: Use workaround.
Workaround: Remove the mapreduce.framework.name property from mapred-site.xml.
— Commands run against an Oracle-backed metastore may fail
Commands may fail with an error such as:
javax.jdo.JDODataStoreException Incompatible data type for column TBLS.VIEW_EXPANDED_TEXT : was CLOB (datastore), but type expected was LONGVARCHAR (metadata). Please check that the type in the datastore and the type specified in the MetaData are consistent.
This error can occur if the metastore runs on an Oracle database with the configuration property datanucleus.validateColumns set to true.
Bug: None
Severity: Low
Workaround: Set datanucleus.validateColumns=false in the hive-site.xml configuration file.
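For example, the setting can be added to hive-site.xml as follows (a minimal sketch of the property named above):
<property>
  <name>datanucleus.validateColumns</name>
  <value>false</value>
</property>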
— Hive, Pig, and Sqoop 1 fail in MRv1 tarball installation because /usr/bin/hbase sets HADOOP_MAPRED_HOME to MR2
This problem affects tarball installations only.
Bug: None
Severity: High
Resolution: Use workaround.
Workaround: Point HADOOP_MAPRED_HOME at the MRv1 directory instead of the MRv2 directory; that is, change:
export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
to:
export HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce
In addition, /usr/lib/hadoop-mapreduce must not exist in HADOOP_CLASSPATH.
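A quick, illustrative way to verify both conditions from a shell (not part of the documented workaround):
# HADOOP_MAPRED_HOME should point at the MRv1 directory,
# and the grep below should print nothing.
echo "$HADOOP_MAPRED_HOME"
echo "$HADOOP_CLASSPATH" | tr ':' '\n' | grep '/usr/lib/hadoop-mapreduce'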
— Hive Web Interface not supported
Cloudera no longer supports the Hive Web Interface because of inconsistent upstream maintenance of this project.
Bug: DISTRO-77
Severity: Low
Resolution: Use workaround.
Workaround: Use Hue and Beeswax instead of the Hive Web Interface.
— Hive may need additional configuration to work in a federated HDFS cluster
In a federated HDFS cluster, Hive queries may fail with:
Failed with exception Renames across Mount points not supported
This can happen because Hive moves (renames) intermediate results from its scratch directory to the table's directory, and ViewFS does not support renames across mount points.
Bug: None
Severity: Low
Resolution: No software fix planned; use the workaround.
Workaround: Set hive.exec.scratchdir in hive-site.xml, for example:
<property>
  <name>hive.exec.scratchdir</name>
  <value>/user/${user.name}/tmp</value>
</property>
— Cannot create archive partitions with external HAR (Hadoop Archive) tables
ALTER TABLE ... ARCHIVE PARTITION is not supported on external tables.
Bug: None
Severity: Low
Workaround: None
— Setting hive.optimize.skewjoin to true causes long-running queries to fail
Bug: None
Severity: Low
Workaround: None
— JDBC: executeUpdate does not return the number of rows modified
Contrary to the JDBC documentation, the executeUpdate method always returns zero.
Severity: Low
Workaround: None
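The sketch below illustrates the behavior through the Hive JDBC driver (which must be on the classpath); the connection URL, table names, and query are assumptions for illustration only:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ExecuteUpdateExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical HiveServer2 URL and tables; adjust for your cluster.
        try (Connection conn = DriverManager.getConnection("jdbc:hive2://host:10000/default");
             Statement stmt = conn.createStatement()) {
            int rows = stmt.executeUpdate("INSERT INTO TABLE target_tab SELECT * FROM source_tab");
            // Because of this issue, 'rows' is always 0 regardless of how many
            // rows were actually inserted, so it cannot be used for verification.
            System.out.println("executeUpdate returned: " + rows);
        }
    }
}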
HCatalog Known Issues
— Hive's DECIMAL data type cannot be mapped to Pig via HCatalog
HCatalog does not recognize the DECIMAL data type.
Bug: None
Severity: Low
Workaround: None
— Job submission using WebHCatalog might not work correctly
Bug: None
Severity: Low
Resolution: Use workaround.
Workaround: Cloudera recommends using the Oozie REST interface to submit jobs, as it's a more mature and capable tool.
— WebHCatalog does not work in a Kerberos-secured Federated cluster
Bug: CDH-12416
Severity: Low
Resolution: None planned.
Workaround: None
— Hive authorization (GRANT/REVOKE/SHOW GRANT) statements do not support fully qualified table names (for example, default.tab1)
Bug: None
Severity: Low
Workaround: Switch to the database before granting privileges on the table.
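For example, instead of granting on default.tab1 directly, switch to the database first (the role name here is illustrative):
USE default;
GRANT SELECT ON TABLE tab1 TO ROLE analyst_role;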
— Object types Server and URI are not supported in "SHOW GRANT ROLE roleName ON OBJECT objectName"
Bug: None
Severity: Low
Workaround: Use SHOW GRANT ROLE roleName to list all privileges granted to the role.
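For example (the role name is illustrative):
SHOW GRANT ROLE analyst_role;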