Accessing data using Apache Druid

Configure Apache Druid for high availability

To make Apache Druid (incubating) highly available, you need to configure a multinode cluster in which critical Druid processes run redundantly on separate nodes.

A Druid cluster consists of several node types, two of which, the Overlord and the Coordinator, run in an active/standby arrangement. Within each Overlord and Coordinator domain, ZooKeeper determines which node is the Active node; the remaining nodes stay in Standby state until the Active node stops running and failover occurs. Running multiple Historical and Realtime nodes likewise provides a failover mechanism. Broker and Realtime processes, however, have no designated Active and Standby nodes; instead, multiple instances of the Druid Broker process are required for HA.

Recommendations: Use an external virtual IP address or load balancer to direct user queries across the Druid Broker instances. A Druid Router can also serve as a mechanism to route queries to multiple Broker nodes.
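As one way to realize the load-balancer recommendation above, the fragment below sketches an HAProxy configuration that spreads queries across two Broker instances. The host names and the Broker port (8082, the Druid default) are placeholder assumptions; substitute your own nodes.

```
# Hypothetical HAProxy fragment: distribute user queries across Broker nodes.
frontend druid_queries
    bind *:8082
    mode http
    default_backend druid_brokers

backend druid_brokers
    mode http
    balance roundrobin
    # Example hosts; replace with your actual Broker nodes.
    server broker1 broker1.example.com:8082 check
    server broker2 broker2.example.com:8082 check
```

A Druid Router process can fill the same role without an external load balancer, at the cost of running one more Druid service.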

Before you begin
  • You ensured that no local storage is used; segment data must reside in distributed deep storage, such as HDFS, so that it survives the loss of any single node.
  • You installed MySQL or PostgreSQL as the metadata storage layer. You cannot use the default Derby database, because it does not support a multinode cluster with HA.
  • You configured your MySQL or Postgres metadata storage for HA mode to avoid outages that impact cluster operations. See your database documentation for more information.
  • You planned to dedicate at least three ZooKeeper nodes to HA mode.
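Taken together, these prerequisites correspond to settings in the Druid common runtime properties. The fragment below is a sketch assuming a MySQL metadata store, a three-node ZooKeeper quorum, and HDFS deep storage; the host names, database name, directory, and credentials are placeholders.

```
# Metadata storage on MySQL instead of the embedded Derby database.
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://db.example.com:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=<password>

# Three dedicated ZooKeeper nodes for HA coordination.
druid.zk.service.host=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181

# Deep storage on HDFS rather than local disk.
druid.storage.type=hdfs
druid.storage.storageDirectory=/apps/druid/warehouse
```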
  1. In Ambari, enable NameNode high availability using the Ambari wizard. See the Ambari documentation for more information.
  2. Install the Druid Overlord, Coordinator, Broker, Realtime, and Historical processes on multiple nodes that are distributed among different hardware servers.
  3. Ensure that the replication factor for each data source is greater than 1 in the Coordinator process rules. If you did not change data source rule configurations, no action is required because the default replication factor is 2.
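The replication requirement in step 3 is expressed through the Coordinator's load rules. The JSON below is a sketch of a forever-load rule that keeps two replicas of every segment in the default tier (`_default_tier` is Druid's standard tier name); you can apply such a rule per data source or as the cluster default through the Coordinator console.

```
[
  {
    "type": "loadForever",
    "tieredReplicants": {
      "_default_tier": 2
    }
  }
]
```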