1.1. Downgrade Flume

As with the upgrade process, when performing a rolling downgrade of Flume, we recommend starting with the agents in the tier closest to the final data destination in the dataflow path. When you finish downgrading those agents, downgrade the agents that are farther from the destination.

  1. Stop all Flume agents on the node.
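
     For example, assuming each agent runs as a flume-ng JVM process (agent names and installation paths vary), you can locate the process and send it SIGTERM; Flume's shutdown hook then stops the sources, channels, and sinks cleanly:

    # Find the PIDs of the Flume agent JVMs on this node
    ps -ef | grep '[f]lume-ng'
    # Stop one agent; plain kill sends SIGTERM, letting the shutdown hook run
    kill <flume-agent-pid>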

  2. If the agents use the File channel or the Spillable Memory channel, note that the checkpoint and data directories written by the newer version of Flume might not be readable by the older version. If errors at startup indicate that the downgraded agent cannot read them, you have one of two options:

    1. Restore the checkpoint and data directories from backup. If the newer version of Flume already processed some or all of the events, this can result in double delivery, because the downgraded Flume agent will redeliver those events. This approach ensures, however, that no data is lost.
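
       A minimal sketch of the restore, assuming the channel's checkpointDir and dataDirs point at the hypothetical paths /var/flume/checkpoint and /var/flume/data, with the pre-upgrade backup under /var/flume/backup:

      # Discard the directories written by the newer Flume version
      rm -rf /var/flume/checkpoint /var/flume/data
      # Restore the copies taken before the rolling upgrade
      cp -r /var/flume/backup/checkpoint /var/flume/checkpoint
      cp -r /var/flume/backup/data /var/flume/data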

    2. Not recommended if it is important not to lose data: start with empty checkpoint and data directories. Any events that were backed up but not yet delivered by the newer Flume agent will be lost.
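
       If you accept that loss, clearing the state is straightforward (same hypothetical paths as above); Flume recreates the channel files on startup:

      # Remove the channel state written by the newer Flume version;
      # any undelivered events in these directories are lost
      rm -rf /var/flume/checkpoint /var/flume/data
      mkdir -p /var/flume/checkpoint /var/flume/data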

  3. Switch back to the previous version of Flume on the node:

    hdp-select set flume-server 2.2.2.0-2041
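
     To confirm the switch, hdp-select can report which version the flume-server package currently points to:

    # Verify that flume-server now points at the downgraded version
    hdp-select status flume-server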

  4. Start the Flume agents, one by one.
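
     For example, assuming a configuration directory of /etc/flume/conf and an agent named a1 defined in flume.conf (both hypothetical; substitute your own paths and agent names), each agent can be started with the flume-ng wrapper:

    # Start a single agent; repeat per agent, validating each before
    # moving on to the next
    /usr/hdp/current/flume-server/bin/flume-ng agent \
        --conf /etc/flume/conf \
        --conf-file /etc/flume/conf/flume.conf \
        --name a1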

  5. Validate by making sure new data is reaching the destination.
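
     How you check depends on the terminal sink. As one illustration, if the flow ends in an HDFS sink writing to a hypothetical path /flume/events, you can watch for newly rolled files:

    # List the most recently written files under the sink's target path
    hdfs dfs -ls -R /flume/events | tail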

Validation

To validate the downgrade process for Hadoop, HBase, Phoenix, and Mahout clients, submit the smoke tests for MapReduce, HDFS, Phoenix, and Mahout (see Appendix A).
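
The smoke tests themselves are described in Appendix A. As one illustration, a typical MapReduce/HDFS smoke check submits the bundled pi example (the jar path below assumes a standard HDP layout) and confirms the job completes:

    # A small example job; success shows MapReduce and HDFS work
    # with the downgraded clients
    yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar pi 2 10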

To validate Flume, make sure new data reaches the destination.

To validate Sqoop:

  • Run the Sqoop smoke test (see Appendix A).

  • Invoke a Sqoop command and make sure that it responds correctly.

  • The Sqoop version command should return the correct version.
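
For example, assuming sqoop is on the PATH, the following quick checks exercise the client and report the active version:

    # Any simple client invocation should respond without errors
    sqoop help
    # The reported version should match the downgraded Sqoop release
    sqoop version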

Continue with the next downgrade step.

