Data Governance Guide

Chapter 6. Configuring High Availability (Falcon Server)

Currently, configuring high availability for the Falcon server is a manual process. When the primary Falcon server goes down, the system administrator must manually start the back-up Falcon server, which then picks up where the primary server stopped.

Configuring Properties and Setting Up Directory Structure for High Availability

Required Properties for Falcon Server High Availability:

The Falcon server reads its start-up configuration from a properties file located in the <falcon_home>/conf directory. Configure the following start-up properties for high availability:

  • *: This location should be a directory on HDFS.

  • *.retry.recorder.path: This location should be an NFS-mounted directory that is owned by Falcon, with permissions set to 755.

  • *: This location should also be an NFS-mounted directory that is owned by Falcon, with permissions set to 755.

  • Falcon conf directory: The default location of this directory is <falcon_home>/conf, which is symbolically linked to /etc/falcon/conf. This directory must point to an NFS-mounted directory to ensure that changes made on the primary Falcon server are propagated to the back-up server.
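
As an illustration, an NFS-backed location might be set as follows in the Falcon start-up properties file. The /hadoop/falcon/data path is an assumed example, and only *.retry.recorder.path is shown because it is the property named above; the other NFS-backed location should point under the same mount:

```properties
# Assumed example -- NFS-backed locations point under the shared mount
*.retry.recorder.path=/hadoop/falcon/data/retry
```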

To set up an NFS-mounted directory:

The following instructions refer to three hosts: the server that provides the NFS mount, the primary Falcon server, and the back-up Falcon server.

  1. Logged in as root on the server that hosts the NFS mount directory:

    1. Install and start NFS with the following command:

      yum install nfs-utils nfs-utils-lib
      chkconfig nfs on
      service rpcbind start
      service nfs start
    2. Create a directory that holds the Falcon data:

      mkdir -p /hadoop/falcon/data
    3. Add the following lines to the file /etc/exports to share the data directories:

    4. Export the shared data directories:

      exportfs -a
  2. Logged in as root, install the nfs-utils package and its library on each of the Falcon servers.

    yum install nfs-utils nfs-utils-lib
  3. After installing the NFS utilities packages, still logged in as root, create the NFS mount directory, and then mount the directories with the following commands:

    mkdir -p /hadoop/falcon/data
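
Taken together, the NFS setup above can be sketched as follows. The hostnames primary-falcon, backup-falcon, and nfs-server, and the export options, are placeholder assumptions for your environment:

```shell
# On the NFS server (as root): share the data directory with both
# Falcon hosts, then re-export. Hostnames and options are placeholders.
cat >> /etc/exports <<'EOF'
/hadoop/falcon/data primary-falcon(rw,sync,no_root_squash)
/hadoop/falcon/data backup-falcon(rw,sync,no_root_squash)
EOF
exportfs -a

# On each Falcon server (as root): create the mount point and mount the share
mkdir -p /hadoop/falcon/data
mount nfs-server:/hadoop/falcon/data /hadoop/falcon/data
```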

Preparing the Falcon Servers

To prepare the Falcon servers for high availability:

  1. Logged in as root on each of the Falcon servers, make sure that the properties *.retry.recorder.path and * point to a directory under the NFS mount, for example the /hadoop/falcon/data directory created above.

  2. Logged in as the falcon user, start the primary Falcon server. Do not start the back-up Falcon server.
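
A quick way to check the permission requirement from step 1 (falcon-owned, mode 755) is stat. The sketch below uses a scratch directory in place of the real NFS mount, since preparing the real one requires root:

```shell
# Scratch directory standing in for the NFS-mounted Falcon data directory
mkdir -p /tmp/falcon-ha-demo/data
chmod 755 /tmp/falcon-ha-demo/data
# Print the octal mode; on the real mount, also confirm the owner is falcon
stat -c '%a' /tmp/falcon-ha-demo/data
# -> 755
```

On the real mount, `stat -c '%U %a' /hadoop/falcon/data` should report `falcon 755`.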


Manually Failing Over the Falcon Servers

When the primary Falcon server fails, the failover to the back-up server is a manual process:

  1. Logged in as the falcon user, make sure that the Falcon process is not running on the back-up server:

  2. Logged in as root, update the Falcon client configuration files on all of the Falcon client nodes. Set the falcon.url property to the fully qualified domain name of the back-up server.

    If Transport Layer Security (TLS) is disabled, use port 15000:

    falcon.url=http://<back-up-server>:15000/ ### if TLS is disabled

    If TLS is enabled, use port 15443:

    falcon.url=https://<back-up-server>:15443/ ### if TLS is enabled
  3. Logged in as the falcon user, start the back-up Falcon server:
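
The client-side update in step 2 can be sketched with sed. The file path /tmp/example-client.properties and the hostnames are placeholders, not the actual Falcon client file name:

```shell
# Stand-in client configuration file (placeholder path and hostnames)
printf 'falcon.url=http://primary-falcon:15000/\n' > /tmp/example-client.properties
# Repoint falcon.url at the back-up server; TLS enabled, so port 15443
sed -i 's|^falcon.url=.*|falcon.url=https://backup-falcon:15443/|' /tmp/example-client.properties
cat /tmp/example-client.properties
# -> falcon.url=https://backup-falcon:15443/
```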