Apache Phoenix Frequently Asked Questions

Frequently asked questions

Find answers to frequently asked questions around Apache Phoenix and its deployment.

Yes. Apache Phoenix is used for OLTP (Online Transactional Processing) use cases, not OLAP (Online Analytical Processing) use cases. You can also use Phoenix for real-time data ingestion as a primary use case.

A typical Phoenix deployment has the following:

  • Application
  • Phoenix Client/JDBC driver
  • HBase client

A Phoenix client/JDBC driver is essentially a Java library that you include in your Java application. Phoenix uses HBase as storage, similar to how HBase uses HDFS as storage. However, the abstraction for Phoenix is not yet complete; for example, to implement access controls, you need to set ACLs on the underlying HBase tables that contain the Phoenix data.
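As a minimal sketch of how the pieces fit together (the ZooKeeper host and table name below are placeholders, and the phoenix-client JAR and a running cluster are required), an application talks to Phoenix through standard JDBC:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PhoenixExample {
    public static void main(String[] args) throws Exception {
        // The JDBC URL points at the ZooKeeper quorum used by HBase.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181")) {
            conn.createStatement().execute(
                "CREATE TABLE IF NOT EXISTS items (id BIGINT PRIMARY KEY, name VARCHAR)");
            // Phoenix uses UPSERT rather than INSERT.
            try (PreparedStatement ps =
                     conn.prepareStatement("UPSERT INTO items VALUES (?, ?)")) {
                ps.setLong(1, 1L);
                ps.setString(2, "widget");
                ps.executeUpdate();
            }
            conn.commit(); // Phoenix connections do not auto-commit by default.
            try (ResultSet rs =
                     conn.createStatement().executeQuery("SELECT id, name FROM items")) {
                while (rs.next()) {
                    System.out.println(rs.getLong(1) + " " + rs.getString(2));
                }
            }
        }
    }
}
```

Because Phoenix is just a JDBC driver, the application layer needs no HBase-specific code; the HBase client underneath handles region lookup and RPC.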

For Phoenix applications, follow the same sizing guidelines that you follow for HBase. For more information about Phoenix performance tuning, see https://phoenix.apache.org/tuning_guide.html.

Yes, you can use Kerberos for authentication. You can configure authorization using HBase authorization.
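As an illustration of a Kerberized connection (the hosts, principal, and keytab path below are placeholders), the Phoenix JDBC URL can carry the principal and keytab so the driver logs in automatically:

```
jdbc:phoenix:zk-host:2181:/hbase-secure:app-user@EXAMPLE.COM:/etc/security/keytabs/app-user.keytab
```

The URL segments after the ZooKeeper quorum and port are the ZooKeeper root node, the Kerberos principal, and the keytab file.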

You can map HBase’s native row timestamp to a Phoenix column. By doing this, you can take advantage of the various optimizations that HBase provides for time ranges on the store files as well as various query optimization capabilities built within Phoenix.

For more information, see https://phoenix.apache.org/rowtimestamp.html.
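A sketch of the mapping (the table and column names are illustrative): declare one DATE, TIME, TIMESTAMP, or BIGINT primary-key column with the ROW_TIMESTAMP keyword so Phoenix stores it as the native HBase cell timestamp:

```sql
-- The created_date PK column maps to the native HBase row timestamp.
CREATE TABLE events (
    created_date DATE NOT NULL,
    event_id     BIGINT NOT NULL
    CONSTRAINT pk PRIMARY KEY (created_date ROW_TIMESTAMP, event_id)
);

-- Range queries on created_date can then prune entire store files.
SELECT * FROM events WHERE created_date > CURRENT_DATE() - 7;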

Phoenix performs local indexing to prevent deadlocks during global index maintenance. Phoenix also atomically performs a partial index rebuild when an index update fails (PHOENIX-1112).

Sequences are a standard SQL feature that allows for generating monotonically increasing numbers typically used to form an ID.

For more information, see https://phoenix.apache.org/sequences.html.
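A short sketch of sequence usage (the sequence and table names are illustrative): create the sequence once, then draw values from it with NEXT VALUE FOR:

```sql
-- CACHE controls how many values each client reserves per round trip.
CREATE SEQUENCE order_id_seq START WITH 1 INCREMENT BY 1 CACHE 100;

UPSERT INTO orders (order_id, item)
VALUES (NEXT VALUE FOR order_id_seq, 'widget');
```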

Writes are durable, and durability is defined by a write that is committed to disk in the Write-Ahead Log (WAL). So in case of a RegionServer failure, the write is recoverable by replaying the WAL. A “complete” write is one that has been flushed from the WAL to an HFile. Any failures are surfaced as exceptions.

Yes, you can do bulk inserts in Phoenix. For more information, see https://phoenix.apache.org/bulk_dataload.html.
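Two common approaches, sketched with placeholder table names and file paths: the single-threaded psql.py client loader for small CSV files, and the MapReduce-based CsvBulkLoadTool for large data sets (the exact JAR name varies by release):

```shell
# Single-threaded CSV load through the Phoenix client (small data sets).
psql.py -t EXAMPLE zk-host /data/example.csv

# MapReduce-based bulk load for larger data sets; --input is an HDFS path.
hadoop jar phoenix-client.jar \
    org.apache.phoenix.mapreduce.CsvBulkLoadTool \
    --table EXAMPLE \
    --input /data/example.csv
```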

Yes, but it is not recommended or supported. Data is encoded by Phoenix, so you have to decode the data for reading. Writing to the HBase tables directly can corrupt the data as seen by Phoenix.

Yes, as long as Phoenix data types are used. You have to use asynchronous indexes and manually update them, since Phoenix is not aware of updates made directly through HBase.
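A sketch of the asynchronous approach (the index, table, and column names are illustrative): declaring the index ASYNC defers population to a separate MapReduce job:

```sql
-- The index is created but not built until the IndexTool job runs.
CREATE INDEX idx_example ON example_table (col1) ASYNC;
```

The index is then populated by running the IndexTool MapReduce job, along the lines of `hbase org.apache.phoenix.mapreduce.index.IndexTool --data-table EXAMPLE_TABLE --index-table IDX_EXAMPLE --output-path /tmp/idx_example` (the exact invocation may vary by release).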

For information about guideposts, see https://phoenix.apache.org/update_statistics.html.