Known Issues in Apache Phoenix

Learn about the known issues in Phoenix, their impact on functionality, and the available workarounds.

CDPD-21865: If a table uses a local secondary index, and that table appears multiple times in a query, the following error may occur: org.apache.phoenix.schema.AmbiguousTableException: ERROR 501 (42000): Table name exists in more than one table schema and is used without being qualified. This error occurs only with self JOIN queries, and it occurs even when the correct table alias is used in the query.

When you want to use a self JOIN, do not use local indexes on those tables.
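As a sketch of this workaround (table, column, and index names are hypothetical, and zk-host:2181 stands in for your ZooKeeper quorum), you can replace the local index with a global one on any table that participates in a self JOIN:

```shell
# Hypothetical names throughout; adjust to your schema before running.
cat > rebuild_index.sql <<'SQL'
-- Drop the local index that triggers AmbiguousTableException in self JOINs
DROP INDEX my_local_idx ON my_table;
-- Recreate it as a global index (the default for CREATE INDEX)
CREATE INDEX my_global_idx ON my_table (my_col);
SQL
phoenix-sqlline zk-host:2181 rebuild_index.sql
```

Note that a global index is a separate table and carries different write-path costs than a local index, so validate performance after the change.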

CDPD-23173: After migrating from HDP 3.x to Cloudera Runtime 7.1.6, when you connect to your migrated Phoenix Query Server (PQS) and SSL/TLS is enabled for Apache HBase, you see the following error: unable to find valid certification path to requested target at ....
When connecting to PQS, provide the truststore and the truststore password parameters along with the PQS endpoint URL. For example, when using phoenix-sqlline:
phoenix-sqlline-thin https://[***PQS endpoint URL***]:8765 -t [***PATH TO YOUR JKS FILE***]/[***TRUSTSTORE.jks***] -tp [***TRUSTSTORE PASSWORD***]
Use the truststore (phoenix.queryserver.tls.truststore) and truststore password (phoenix.queryserver.tls.truststore.password) that you set when configuring TLS for Phoenix Query Server.
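For example, an invocation with the placeholders filled in might look like the following (the hostname, truststore path, and password are illustrative only; substitute your own values):

```shell
# Illustrative values only; use your own PQS host, truststore, and password.
phoenix-sqlline-thin https://pqs.example.com:8765 \
  -t /etc/phoenix/conf/truststore.jks \
  -tp changeit
```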
CDPD-23539: When a query on a table with local indexes refers to both covered and uncovered columns in the WHERE clause, the query returns incorrect results.
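A sketch of a query pattern that hits this issue (table, column, and index names are hypothetical, and zk-host:2181 stands in for your ZooKeeper quorum): here v1 is covered by the local index while v2 is not, and mixing them in the WHERE clause can produce incorrect results.

```shell
# Hypothetical schema; adjust names before running.
cat > local_index_issue.sql <<'SQL'
-- v1 is covered by the local index; v2 is not
CREATE LOCAL INDEX idx_v1 ON my_table (v1);
-- Referring to both a covered (v1) and an uncovered (v2) column
-- in the WHERE clause can return incorrect results (CDPD-23539)
SELECT * FROM my_table WHERE v1 = 'a' AND v2 = 'b';
SQL
phoenix-sqlline zk-host:2181 local_index_issue.sql
```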


CDPD-23465: When using the Phoenix-Spark connector, you may see errors caused by an incompatibility between the Phoenix Spark JAR file and the HBase shaded mapreduce JAR file present in your Spark classpath.

In Cloudera Manager, go to Spark_on_YARN > Spark Client Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-defaults.conf and add spark.driver.userClassPathFirst=true and spark.executor.userClassPathFirst=true. Note that these settings apply only to cluster mode. Then run your Spark applications that use the Phoenix Spark integration in cluster mode.


Alternatively, run Spark applications that use the Phoenix Spark integration in cluster mode with --conf spark.driver.userClassPathFirst=true --conf spark.executor.userClassPathFirst=true.
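For example, a cluster-mode submission might look like the following (the application JAR and class name are placeholders; only the two --conf flags are the required part):

```shell
# Placeholder application; substitute your own JAR and main class.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.driver.userClassPathFirst=true \
  --conf spark.executor.userClassPathFirst=true \
  --class com.example.PhoenixSparkApp \
  phoenix-spark-app.jar
```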

If you use the spark-shell with the Phoenix-Spark connector, do the following:
  1. Run cp -r /etc/spark/conf /some/other/spark-conf.
  2. Edit [***YOUR PATH to SOME OTHER SPARK-CONF***]/spark-conf/classpath.txt, and remove the lines that reference the HBase shaded mapreduce JAR file.
  3. Export SPARK_CONF_DIR to point to the customized configuration directory, for example: export SPARK_CONF_DIR=/some/other/spark-conf.
  4. Run spark-shell.
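
The steps above can be sketched as a shell session. The sed pattern assumes the conflicting classpath entries contain the string hbase-shaded-mapreduce; inspect your classpath.txt and confirm the matching lines before removing anything.

```shell
# 1. Copy the Spark client configuration to a writable location
cp -r /etc/spark/conf /some/other/spark-conf
# 2. Remove the HBase shaded mapreduce JAR entries from classpath.txt
#    (the pattern below is an assumption; verify against your file first)
sed -i '/hbase-shaded-mapreduce/d' /some/other/spark-conf/classpath.txt
# 3. Point Spark at the customized configuration directory
export SPARK_CONF_DIR=/some/other/spark-conf
# 4. Start the shell
spark-shell
```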