Upgrading HDP Manually

Verify HDFS Filesystem Health

Confirm that the filesystem is healthy.

[Note]

The su commands in this section use keywords to represent the Service user. For example, "hdfs" represents the HDFS Service user. If you are using another name for your Service users, substitute that name in each of the su commands.

[Important]

If you are running a secure cluster, you need Kerberos credentials for hdfs user access.
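For example, on a Kerberos-enabled cluster you can obtain a ticket as the hdfs user before running the commands below. The keytab path and principal shown here are illustrative only; substitute the values for your cluster:

    su - hdfs -c "kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs@EXAMPLE.COM"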

  1. Run the fsck command on the NameNode as $HDFS_USER:

    su - hdfs -c "hdfs fsck / -files -blocks -locations > dfs-new-fsck-1.log"

    Open dfs-new-fsck-1.log and confirm that the filesystem under path / is HEALTHY.
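    Because the full fsck log can be long, you can also check the summary line directly. This assumes the standard fsck output, which includes a status line such as "The filesystem under path '/' is HEALTHY":

    grep "is HEALTHY" dfs-new-fsck-1.log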

  2. List the HDFS namespace and create an admin report.

    1. List directories.

      su - hdfs -c "hdfs dfs -ls -R / > dfs-new-lsr-1.log"

    2. Open dfs-new-lsr-1.log and confirm that you can see the file and directory listing in the namespace.

    3. Run the report command to create a list of DataNodes in the cluster.

      su - hdfs -c "hdfs dfsadmin -report > dfs-new-report-1.log"

    4. Open dfs-new-report-1.log and validate the admin report.
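      As a quick sanity check, you can search the report for dead nodes. The exact wording of the summary lines varies by HDFS version, so treat this pattern as a starting point rather than a definitive test:

      grep -i "dead" dfs-new-report-1.log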

  3. Compare the namespace reports from before and after the upgrade, and verify that user files still exist after the upgrade.

    The file names are listed below:

    dfs-old-fsck-1.log <--> dfs-new-fsck-1.log

    dfs-old-lsr-1.log <--> dfs-new-lsr-1.log

    [Note]

    You must do this comparison manually to catch all errors.
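    A diff can help surface the lines that need attention, but it does not replace the manual review: fields such as block locations and timestamps can legitimately change across an upgrade, so inspect each difference yourself:

    diff dfs-old-fsck-1.log dfs-new-fsck-1.log

    diff dfs-old-lsr-1.log dfs-new-lsr-1.log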

  4. From the NameNode web UI, verify that all DataNodes are up and running:

    http://<namenode>:<namenodeport>

  5. If you are on a highly available HDFS cluster, go to the StandbyNameNode web UI to see if all DataNodes are up and running:

    http://<standbynamenode>:<namenodeport>

  6. If you are not on a highly available HDFS cluster, go to the SecondaryNameNode web UI to verify that the secondary node is up and running:

    http://<secondarynamenode>:<secondarynamenodeport>
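    As a complement to the web UI checks in steps 4 through 6, you can also confirm DataNode liveness from the command line. The summary labels in the output vary by HDFS version:

    su - hdfs -c "hdfs dfsadmin -report | grep -i datanodes"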

  7. Verify that you can read from and write to HDFS.

    hdfs dfs -put [local input file] [hdfs output file]

    hdfs dfs -cat [hdfs output file]
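    For example, the following round trip writes a scratch file, reads it back, and cleans up. The paths are placeholders; any location the hdfs user can write to works:

    su - hdfs -c "echo 'upgrade smoke test' > /tmp/upgrade-test.txt"

    su - hdfs -c "hdfs dfs -put /tmp/upgrade-test.txt /tmp/upgrade-test.txt"

    su - hdfs -c "hdfs dfs -cat /tmp/upgrade-test.txt"

    su - hdfs -c "hdfs dfs -rm -skipTrash /tmp/upgrade-test.txt"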