Verify HDFS Filesystem Health
Analyze whether the filesystem is healthy.
Important: If you have a secure server, you will need Kerberos credentials for hdfs user access.
Run the fsck command on the NameNode host as the $HDFS_USER:
su - hdfs -c "hdfs fsck / -files -blocks -locations > dfs-new-fsck-1.log"
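As a quick sanity check, a small script can scan the saved log for the success line. A minimal sketch, assuming the log file name used above and that your fsck output contains the standard "The filesystem under path '/' is HEALTHY" line; the helper name is hypothetical:

```shell
#!/bin/sh
# Hypothetical helper: returns 0 if an fsck log reports a healthy filesystem.
# hdfs fsck prints "The filesystem under path '/' is HEALTHY" on success.
fsck_is_healthy() {
  grep -q "is HEALTHY" "$1"
}

# Example usage (on the NameNode, after running fsck):
# fsck_is_healthy dfs-new-fsck-1.log && echo "HDFS reports healthy"
```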
You should see feedback that the filesystem under path / is HEALTHY.
Run the hdfs namespace and report.
List directories:
su - hdfs -c "hdfs dfs -ls -R / > dfs-new-lsr-1.log"
Open the dfs-new-lsr-1.log and confirm that you can see the file and directory listing in the namespace.
Run the report command to create a list of DataNodes in the cluster:
su - hdfs -c "hdfs dfsadmin -report > dfs-new-report-1.log"
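The dfsadmin report includes a summary line of the form "Live datanodes (N):" (the exact format can vary across Hadoop versions). A hedged sketch that pulls the count out of the saved report; the helper name is hypothetical:

```shell
#!/bin/sh
# Hypothetical helper: prints the live DataNode count from a dfsadmin report log.
# Assumes the report contains a line like "Live datanodes (3):".
live_datanodes() {
  sed -n 's/^Live datanodes (\([0-9][0-9]*\)).*/\1/p' "$1"
}

# Example usage (after generating the report):
# live_datanodes dfs-new-report-1.log
```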
Open the dfs-new-report-1.log file and validate the admin report.
Compare the namespace report from before the upgrade with the one from after the upgrade, and verify that user files still exist after the upgrade.
The file names are listed below:
dfs-old-fsck-1.log <--> dfs-new-fsck-1.log
dfs-old-report-1.log <--> dfs-new-report-1.log
Note: You must do this comparison manually to catch all errors.
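A diff can help surface the obvious differences before the manual pass; it is not a substitute for manual review, since fields such as timestamps and block locations change legitimately across an upgrade. A sketch using the file names above:

```shell
#!/bin/sh
# Compare the pre- and post-upgrade logs; "|| true" keeps the script going,
# because diff exits non-zero whenever the files differ.
diff -u dfs-old-fsck-1.log dfs-new-fsck-1.log > fsck-diff.log || true
diff -u dfs-old-report-1.log dfs-new-report-1.log > report-diff.log || true
```

Review fsck-diff.log and report-diff.log, then still read the full logs as the note above requires.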
From the NameNode web UI, determine if all DataNodes are up and running:
http://<namenode>:<namenodeport>
If you are on a highly available HDFS cluster, go to the StandbyNameNode web UI to see if all DataNodes are up and running:
http://<standbynamenode>:<namenodeport>
If you are not on a highly available HDFS cluster, go to the SecondaryNameNode web UI to see if the secondary node is up and running:
http://<secondarynamenode>:<secondarynamenodeport>
Verify that reads and writes to HDFS work successfully:
hdfs dfs -put [input file] [output file]
hdfs dfs -cat [output file]
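The two commands above can be wired into a small round-trip check. A sketch with illustrative file names (not from this document); the `command -v` guard lets it exit cleanly on hosts without the hdfs CLI:

```shell
#!/bin/sh
# Illustrative smoke test: write a local file into HDFS and read it back.
# Path names here are examples only.
if command -v hdfs >/dev/null 2>&1; then
  echo "upgrade smoke test" > /tmp/hdfs-smoke.txt
  hdfs dfs -put /tmp/hdfs-smoke.txt /tmp/hdfs-smoke.txt
  hdfs dfs -cat /tmp/hdfs-smoke.txt
  hdfs dfs -rm /tmp/hdfs-smoke.txt   # remove the test file from HDFS
else
  echo "hdfs CLI not found; run this on a cluster node" >&2
fi
```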