8.13. Known Issues for Falcon

  • BUG-16608: Oozie table import jobs fail when user "hive" attempts to write to a table directory owned by the table owner.

    Problem: The Falcon-generated hive-action does not pass a hive-site.xml with the correct configuration parameters. One manifestation is a table import job failure in which user "hive" attempts to write to a directory owned by the table owner. This occurs because the hive.metastore.execute.setugi parameter is not passed as part of the hive action.

    Workaround: Add a Hive default configuration to Oozie.

    Stop the Oozie service.

    Warning

    This change allows you to work with Hive tables and Oozie workflows, but will impact all Hive actions, including non-Falcon Oozie workflows.

    In the Oozie configuration directory (typically /etc/oozie/conf), there is a subdirectory called action-conf. In that directory, create or modify the file hive-site.xml and add the following property:

     <property>
       <name>hive.metastore.execute.setugi</name>
       <value>true</value>
     </property>

    After making this change, restart the Oozie service. If Oozie is configured for HA, perform this configuration change on all Oozie server nodes.
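    As a sketch, the file edit above can be scripted as follows. The staging approach and the /etc/oozie/conf path are assumptions; adapt them to your installation, and merge the property into any existing hive-site.xml rather than overwriting it:

```shell
# Stage the hive-site.xml fragment, then install it under the Oozie
# action-conf directory (typically /etc/oozie/conf/action-conf).
STAGE=$(mktemp -d)

cat > "$STAGE/hive-site.xml" <<'EOF'
<configuration>
  <property>
    <name>hive.metastore.execute.setugi</name>
    <value>true</value>
  </property>
</configuration>
EOF

# Install step (run on the Oozie server while the service is stopped);
# the target path is an assumption -- adjust for your layout:
#   mkdir -p /etc/oozie/conf/action-conf
#   cp "$STAGE/hive-site.xml" /etc/oozie/conf/action-conf/hive-site.xml
```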

  • BUG-16290: Strange delegation token issues in secure clusters

    Problem: Inconsistencies in rules for hadoop.security.auth_to_local can lead to issues with delegation token renewals in secure clusters.

    Workaround: Verify that hadoop.security.auth_to_local in core-site.xml is consistent across all clusters.
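    One way to verify this is to extract the property from a local copy of each cluster's core-site.xml and diff the results. A minimal sketch, assuming the files have been fetched to the local machine and that python3 is available (the helper name and file names are hypothetical):

```shell
# Print the hadoop.security.auth_to_local value from a core-site.xml,
# using Python's stdlib XML parser to avoid fragile grep-based parsing.
extract_auth_rules() {
  python3 - "$1" <<'PY'
import sys, xml.etree.ElementTree as ET
root = ET.parse(sys.argv[1]).getroot()
for prop in root.iter('property'):
    if prop.findtext('name') == 'hadoop.security.auth_to_local':
        print((prop.findtext('value') or '').strip())
PY
}

# Usage (hypothetical file names, one copy per cluster):
#   extract_auth_rules cluster1-core-site.xml > a.txt
#   extract_auth_rules cluster2-core-site.xml > b.txt
#   diff a.txt b.txt && echo "auth_to_local rules are consistent"
```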

  • BUG-16290, FALCON-389: Oozie config changes needed to support HCat replication in Falcon

    Problem: Oozie config changes are needed before Falcon can handle HCat replication.

    Workaround: Modify Oozie on all clusters managed by Falcon:

    1. Stop the Oozie service on all Falcon clusters.

    2. Copy each cluster's hadoop conf directory to a different location. For example, if you have two clusters, copy one to /etc/hadoop/conf-1 and the other to /etc/hadoop/conf-2.

    3. In each cluster's oozie-site.xml file, modify the oozie.service.HadoopAccessorService.hadoop.configurations property, specifying each cluster's NameNode and ResourceManager authorities (host and RPC port) and the corresponding copied configuration directory.

      For example, if Falcon connects to 3 clusters, specify:

      <property>
          <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
          <value>*=/etc/hadoop/conf,$NameNode1:$rpcPortNN=$hadoopConfDir1,$ResourceManager1:$rpcPortRM=$hadoopConfDir1,$NameNode2:$rpcPortNN=$hadoopConfDir2,$ResourceManager2:$rpcPortRM=$hadoopConfDir2,$NameNode3:$rpcPortNN=$hadoopConfDir3,$ResourceManager3:$rpcPortRM=$hadoopConfDir3</value>
          <description>
              Comma-separated AUTHORITY=HADOOP_CONF_DIR pairs, where AUTHORITY is the
              HOST:PORT of the Hadoop service (JobTracker, HDFS). The wildcard '*'
              configuration is used when there is no exact match for an authority.
              The HADOOP_CONF_DIR contains the relevant Hadoop *-site.xml files. If
              the path is relative, it is looked up within the Oozie configuration
              directory; the path can also be absolute (i.e., pointing to Hadoop
              client conf/ directories in the local filesystem).
          </description>
      </property>
    4. Restart the Oozie service on all clusters.
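    As a concrete illustration of step 3, the following sketch composes the property value for two clusters. All host names, RPC ports, and directory paths are assumptions chosen for illustration:

```shell
# Hypothetical NameNode/ResourceManager authorities (HOST:RPC_PORT) and the
# per-cluster Hadoop conf directories copied in step 2.
NN1="nn1.example.com:8020"; RM1="rm1.example.com:8050"
NN2="nn2.example.com:8020"; RM2="rm2.example.com:8050"

# Wildcard entry first, then one AUTHORITY=HADOOP_CONF_DIR pair per service.
VALUE="*=/etc/hadoop/conf"
VALUE="$VALUE,$NN1=/etc/hadoop/conf-1,$RM1=/etc/hadoop/conf-1"
VALUE="$VALUE,$NN2=/etc/hadoop/conf-2,$RM2=/etc/hadoop/conf-2"

# Paste the result into the <value> element of
# oozie.service.HadoopAccessorService.hadoop.configurations in oozie-site.xml.
echo "$VALUE"
```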