Data Movement and Integration

Chapter 15. Troubleshooting

The following information can help you troubleshoot issues with your Falcon server installation.

Falcon logs

The Falcon server logs are available in the logs directory under $FALCON_HOME.

To get logs for an instance of a feed or process:

$FALCON_HOME/bin/falcon instance -type $feed/process -name $name -logs -start "yyyy-MM-dd'T'HH:mm'Z'" [-end "yyyy-MM-dd'T'HH:mm'Z'"] [-runid $runid]
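For example, assuming a feed named rawEmailFeed (the feed name and dates are illustrative only), the command might look like the following:

$FALCON_HOME/bin/falcon instance -type feed -name rawEmailFeed -logs -start "2016-08-01T00:00Z" -end "2016-08-02T00:00Z"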

Falcon Server Failure

The Falcon server is stateless, so a server failure does not affect currently scheduled feeds and processes. To recover, simply restart the Falcon server.
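For example, on a standalone installation managed from the command line (an Ambari-managed cluster should be restarted through Ambari instead), the server can typically be restarted with the scripts shipped under $FALCON_HOME/bin:

$FALCON_HOME/bin/falcon-stop
$FALCON_HOME/bin/falcon-start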

Delegation Token Renewal Issues

Inconsistencies in rules for hadoop.security.auth_to_local can lead to issues with delegation token renewals.

If you are using secure clusters, verify that hadoop.security.auth_to_local in core-site.xml is consistent across all clusters.
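One quick way to compare the effective value is to run the following on a node of each cluster and confirm the rules match:

hdfs getconf -confKey hadoop.security.auth_to_local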

Invalid Entity Schema

Invalid values in the cluster, feed (dataset), or process entity definitions can cause schema validation failures.

Review Falcon entity specifications.
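For example, an entity definition can be checked against the schema before it is submitted (the file path is illustrative):

$FALCON_HOME/bin/falcon entity -type feed -file /path/to/feed.xml -validate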

Incorrect Entity

Failure to specify the correct entity type to Falcon for any action results in a validation error.

For example, if you specify -type feed when submitting a process entity, you will see the following error:

[org.xml.sax.SAXParseException; lineNumber: 5; columnNumber: 68; cvc-elt.1.a: Cannot find the declaration of element 'process'.]
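Make sure the -type value matches the root element of the entity definition file. For example, a definition whose root element is <process> would be submitted as follows (the file name is illustrative):

$FALCON_HOME/bin/falcon entity -type process -file /path/to/myProcess.xml -submit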

Bad Config Store Error

The configuration store directory must be owned by your "falcon" user.
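The location of the configuration store is set by the config.store.uri property in startup.properties. Assuming a local filesystem store (the path below is illustrative), ownership can be repaired with:

chown -R falcon:falcon /hadoop/falcon/store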

Unable to set DataSet Entity

Ensure that validity times make sense:

  • They must align between clusters, processes, and feeds.

  • Dates in a given pipeline must be in ISO 8601 format (see the example after this list):

    yyyy-MM-dd'T'HH:mm'Z'
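The following is a minimal sketch of the cluster validity element in a feed definition, assuming a source cluster named primaryCluster (the cluster name and dates are illustrative):

<clusters>
    <cluster name="primaryCluster" type="source">
        <validity start="2016-01-01T00:00Z" end="2017-01-01T00:00Z"/>
        <retention limit="days(30)" action="delete"/>
    </cluster>
</clusters>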

Oozie Jobs

When troubleshooting, always start with the Oozie bundle job; Falcon creates one bundle job per feed and one per process. Each feed has one coordinator job that enforces the retention policy and one coordinator job for the replication policy.
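For example, assuming the Oozie server runs on localhost (the URL is illustrative), you can list the bundle jobs and then drill into a specific one:

oozie jobs -oozie http://localhost:11000/oozie -jobtype bundle
oozie job -oozie http://localhost:11000/oozie -info <bundle-id>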