Known Issues in Apache Phoenix

Learn about the known issues in Phoenix, their impact on functionality, and the available workarounds.

CDPD-21865: If a table uses local secondary indexing, and this table appears more than once in a query, the following error may occur: org.apache.phoenix.schema.AmbiguousTableException: ERROR 501 (42000): Table name exists in more than one table schema and is used without being qualified. This error occurs only with self JOIN queries, and it occurs even when the correct table alias is used in the query.

To work around this issue, do not use local indexes on tables that you query with a self JOIN.
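A hypothetical illustration of the failing pattern and the workaround (table, index, and column names are assumptions, not from the original issue report):

```sql
-- Hypothetical table with a local index on a foreign-key-like column
CREATE TABLE employees (id BIGINT PRIMARY KEY, manager_id BIGINT, name VARCHAR);
CREATE LOCAL INDEX emp_mgr_idx ON employees (manager_id);

-- A self JOIN like this can fail with AmbiguousTableException
-- even though both table references are correctly aliased:
SELECT e.name, m.name
FROM employees e
JOIN employees m ON e.manager_id = m.id;

-- Workaround sketch: replace the local index with a global index
DROP INDEX emp_mgr_idx ON employees;
CREATE INDEX emp_mgr_idx ON employees (manager_id);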

CDPD-23173: After migrating from HDP 3.x to Cloudera Runtime 7.1.6, when you connect to your migrated Phoenix Query Server (PQS) and SSL/TLS is enabled for Apache HBase, you see the following error: unable to find valid certification path to requested target at ....
When connecting to PQS, provide the truststore and the truststore password parameters along with the PQS endpoint URL. For example, when using phoenix-sqlline:
phoenix-sqlline-thin https://[***PQS endpoint URL***]:8765 -t [***PATH TO YOUR JKS FILE***]/[***TRUSTSTORE.jks***] -tp [***TRUSTSTORE PASSWORD***]
Use the truststore (phoenix.queryserver.tls.truststore) and truststore password (phoenix.queryserver.tls.truststore.password) that you set when configuring TLS for Phoenix Query Server.

For more information, see Launching Apache Phoenix Thin Client.

CDPD-23539: When a query on a table with local indexes references both covered and uncovered columns in its WHERE clause, the query returns incorrect results.

None.
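A hypothetical example of a query affected by this issue (table, index, and column names are assumptions):

```sql
CREATE TABLE events (id BIGINT PRIMARY KEY, kind VARCHAR, status VARCHAR, payload VARCHAR);

-- 'kind' is indexed and 'status' is covered via INCLUDE;
-- 'payload' is not part of the index, so it is uncovered.
CREATE LOCAL INDEX events_kind_idx ON events (kind) INCLUDE (status);

-- A WHERE clause that mixes a covered column (kind) with an
-- uncovered column (payload) can return incorrect results:
SELECT id FROM events WHERE kind = 'click' AND payload LIKE '%error%';
```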

CDPD-23465: When using the Phoenix-Spark connector, you may see errors because of an incompatibility between the Phoenix Spark JAR file and the HBase shaded mapreduce JAR file present in your Spark classpath.

In Cloudera Manager, go to Spark_on_YARN > Spark Client Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-defaults.conf, and add spark.driver.userClassPathFirst=true and spark.executor.userClassPathFirst=true. Note that these settings apply only to cluster mode. Then run your Spark applications that use the Phoenix Spark integration in cluster mode.

or

Run Spark applications that use the Phoenix Spark integration in cluster mode with "--conf spark.driver.userClassPathFirst=true --conf spark.executor.userClassPathFirst=true".
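For example, a cluster-mode spark-submit invocation might look like the following sketch (the application class and JAR path are placeholders):

```shell
spark-submit \
  --deploy-mode cluster \
  --conf spark.driver.userClassPathFirst=true \
  --conf spark.executor.userClassPathFirst=true \
  --class com.example.PhoenixSparkApp \
  /path/to/your-app.jar
```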

If you use the spark-shell with the Phoenix-Spark connector, do the following:
  1. Run cp -r /etc/spark/conf /some/other/spark-conf.
  2. Edit [***YOUR PATH to SOME OTHER SPARK-CONF***]/spark-conf/classpath.txt, and remove the following lines:
    /opt/cloudera/parcels/CDH-7.1.6.../lib/hbase/bin/../lib/client-facing-thirdparty/audience-annotations-0.5.0.jar
    /opt/cloudera/parcels/CDH-7.1.6.../lib/hbase/bin/../lib/client-facing-thirdparty/commons-logging-1.2.jar
    /opt/cloudera/parcels/CDH-7.1.6.../lib/hbase/bin/../lib/client-facing-thirdparty/findbugs-annotations-1.3.9-1.jar
    /opt/cloudera/parcels/CDH-7.1.6.../lib/hbase/bin/../lib/client-facing-thirdparty/htrace-core4-4.2.0-incubating.jar
    /opt/cloudera/parcels/CDH-7.1.6.../lib/hbase/bin/../lib/shaded-clients/hbase-shaded-mapreduce-2.2.3.7.1.6.0-....jar
  3. Export SPARK_CONF_DIR to the customized configuration directory:
    export SPARK_CONF_DIR=/[***YOUR PATH to SOME OTHER SPARK-CONF***]/
  4. Run spark-shell.
DRILL-6866: On a CDP 7.1.6 PvC cluster, connecting to Phoenix through sqlline fails with the following exception: “java.lang.IllegalArgumentException: Bad history file syntax! The history file `/home/e154579/.sqlline/history` may be an older history: please remove it or use a different history file.”

To resolve this issue, remove or move the history file mentioned in the exception, and then connect to sqlline again.
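For example, backing the file up instead of deleting it (the path comes from the exception message in your environment):

```shell
# Move the problematic sqlline history file out of the way
mv ~/.sqlline/history ~/.sqlline/history.bak
```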

Technical Service Bulletins

TSB 2022-568: HBase normalizer must be disabled for Salted Phoenix tables
When Apache Phoenix (“Phoenix”) creates a salted table, it pre-splits the table according to the number of salt regions. These regions must always be kept separate, otherwise Phoenix does not work correctly.
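For illustration, salting is requested with the SALT_BUCKETS table option, which pre-splits the table into that many regions, one per salt byte (the table name and schema below are hypothetical):

```sql
-- Creates a salted table pre-split into 8 regions
CREATE TABLE metrics (
    host VARCHAR NOT NULL,
    ts   TIMESTAMP NOT NULL,
    val  DOUBLE,
    CONSTRAINT pk PRIMARY KEY (host, ts)
) SALT_BUCKETS = 8;
```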

The HBase normalizer is not aware of this requirement, and in some cases it merges the pre-split regions automatically. This causes failures in Phoenix.

The same requirement applies when merging regions of salted tables manually: regions containing different salt keys (the first byte of the rowkey) must never be merged.

Note that splitting the regions of a salted table, whether automatic or manual, does not cause a problem. The problem occurs only when adjacent regions containing different salt keys are merged.
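Normalization can be turned off per table from the HBase shell using the standard NORMALIZATION_ENABLED table attribute (a sketch; the table name is a placeholder):

```shell
# Disable the normalizer for the salted table (table name is a placeholder)
echo "alter 'MY_SALTED_TABLE', {NORMALIZATION_ENABLED => 'false'}" | hbase shell -n
```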

Upstream JIRA
PHOENIX-4906
Knowledge article
For the latest update on this issue, see the corresponding Knowledge article: TSB 2022-568: HBase normalizer must be disabled for Salted Phoenix tables