JournalNode Health Tests
JournalNode Edits Directory Free Space
This JournalNode health test checks that the filesystem containing the edits directory of this JournalNode has sufficient free space. This test can be configured using the Edits Directory Free Space Monitoring Absolute Thresholds and Edits Directory Free Space Monitoring Percentage Thresholds JournalNode monitoring settings.
Short Name: Edits Directory Free Space
Property Name | Description | Template Name | Default Value | Unit |
---|---|---|---|---|
Edits Directory Free Space Monitoring Absolute Thresholds | The health check thresholds for monitoring of free space on the filesystem that contains the JournalNode's edits directory. | journalnode_edits_directory_free_space_absolute_thresholds | critical:5.36870912E9, warning:1.073741824E10 | BYTES |
Edits Directory Free Space Monitoring Percentage Thresholds | The health check thresholds for monitoring of free space on the filesystem that contains the JournalNode's edits directory. Specified as a percentage of the capacity on that filesystem. This setting is not used if an Edits Directory Free Space Monitoring Absolute Thresholds setting is configured. | journalnode_edits_directory_free_space_percentage_thresholds | critical:never, warning:never | PERCENT |
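As a rough illustration of how these two settings interact, the following Python sketch evaluates free space on a filesystem against absolute thresholds first and falls back to percentage thresholds only when no absolute thresholds are configured. It is illustrative only and is not Cloudera Manager's implementation; the function name and the example directory path are hypothetical. The same logic applies to the Log Directory Free Space test described later on this page.

```python
import os

# Defaults from the table above: absolute thresholds of 10 GiB (warning) and
# 5 GiB (critical); the percentage thresholds default to "never" (None here).
def free_space_health(directory,
                      abs_warning=10_737_418_240, abs_critical=5_368_709_120,
                      pct_warning=None, pct_critical=None):
    """Illustrative free-space check; not Cloudera Manager's implementation."""
    st = os.statvfs(directory)
    free_bytes = st.f_bavail * st.f_frsize
    capacity_bytes = st.f_blocks * st.f_frsize

    # Per the descriptions above, absolute thresholds take precedence:
    # percentage thresholds are ignored when absolute ones are configured.
    if abs_warning is not None or abs_critical is not None:
        if abs_critical is not None and free_bytes < abs_critical:
            return "Bad"
        if abs_warning is not None and free_bytes < abs_warning:
            return "Concerning"
        return "Good"

    free_pct = 100.0 * free_bytes / capacity_bytes
    if pct_critical is not None and free_pct < pct_critical:
        return "Bad"
    if pct_warning is not None and free_pct < pct_warning:
        return "Concerning"
    return "Good"

# Example (hypothetical edits directory path):
# print(free_space_health("/var/lib/hadoop-hdfs/journal"))
```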
JournalNode File Descriptors
This JournalNode health test checks that the number of file descriptors used does not rise above some percentage of the JournalNode file descriptor limit. A failure of this health test may indicate a bug in either Hadoop or Cloudera Manager. Contact Cloudera support. This test can be configured using the File Descriptor Monitoring Thresholds JournalNode monitoring setting.
Short Name: File Descriptors
Property Name | Description | Template Name | Default Value | Unit |
---|---|---|---|---|
File Descriptor Monitoring Thresholds | The health test thresholds of the number of file descriptors used. Specified as a percentage of file descriptor limit. | journalnode_fd_thresholds | critical:70.0, warning:50.0 | PERCENT |
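The following Python sketch shows one way such a percentage could be computed for a process on a Linux host, by counting entries under /proc/&lt;pid&gt;/fd and comparing the result against the thresholds above. It is illustrative only and is not how Cloudera Manager gathers this metric.

```python
import os

def fd_usage_percent(pid, fd_limit):
    """Percentage of the file descriptor limit in use by a process.
    Illustrative only; counts entries under /proc/<pid>/fd, so it assumes
    a Linux host and permission to read that directory."""
    used = len(os.listdir(f"/proc/{pid}/fd"))
    return 100.0 * used / fd_limit

def fd_health(pid, fd_limit, warning=50.0, critical=70.0):
    # Defaults mirror journalnode_fd_thresholds above: warning at 50%, critical at 70%.
    pct = fd_usage_percent(pid, fd_limit)
    if pct >= critical:
        return "Bad"
    if pct >= warning:
        return "Concerning"
    return "Good"
```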
JournalNode GC Duration
This JournalNode health test checks that the JournalNode is not spending too much time performing Java garbage collection. It checks that no more than some percentage of recent time is spent performing Java garbage collection. A failure of this health test may indicate a capacity planning problem or misconfiguration of the JournalNode. This test can be configured using the Garbage Collection Duration Thresholds and Garbage Collection Duration Monitoring Period JournalNode monitoring settings.
Short Name: GC Duration
Property Name | Description | Template Name | Default Value | Unit |
---|---|---|---|---|
Garbage Collection Duration Monitoring Period | The period to review when computing the moving average of garbage collection time. | journalnode_gc_duration_window | 5 | MINUTES |
Garbage Collection Duration Thresholds | The health test thresholds for the weighted average time spent in Java garbage collection. Specified as a percentage of elapsed wall clock time. | journalnode_gc_duration_thresholds | critical:60.0, warning:30.0 | no unit |
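A minimal sketch of the windowed calculation: periodic samples of cumulative GC time (for example, totals exposed over JMX) are retained for the monitoring period, and the percentage of wall clock time spent in garbage collection is derived from the oldest and newest samples. The class and sampling scheme below are assumptions for illustration, not Cloudera Manager's implementation.

```python
from collections import deque
import time

class GcDurationWindow:
    """Illustrative moving window of (timestamp, cumulative_gc_millis) samples."""

    def __init__(self, window_seconds=300):  # 5-minute monitoring period, per the table above
        self.window_seconds = window_seconds
        self.samples = deque()

    def record(self, cumulative_gc_millis, now=None):
        now = now if now is not None else time.time()
        self.samples.append((now, cumulative_gc_millis))
        # Drop samples that have fallen out of the review period.
        while self.samples and now - self.samples[0][0] > self.window_seconds:
            self.samples.popleft()

    def gc_percent(self):
        """Percentage of elapsed wall clock time spent in GC over the window."""
        if len(self.samples) < 2:
            return 0.0
        (t0, gc0), (t1, gc1) = self.samples[0], self.samples[-1]
        elapsed_millis = (t1 - t0) * 1000.0
        return 100.0 * (gc1 - gc0) / elapsed_millis if elapsed_millis else 0.0

# Thresholds from the table above: warning at 30% of wall clock time, critical at 60%.
```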
JournalNode Host Health
This JournalNode health test factors in the health of the host upon which the JournalNode is running. A failure of this test means that the host running the JournalNode is experiencing some problem. See that host's status page for more details. This test can be enabled or disabled using the JournalNode Host Health Test JournalNode monitoring setting.
Short Name: Host Health
Property Name | Description | Template Name | Default Value | Unit |
---|---|---|---|---|
JournalNode Host Health Test | When computing the overall JournalNode health, consider the host's health. | journalnode_host_health_enabled | true | no unit |
JournalNode Log Directory Free Space
This JournalNode health test checks that the filesystem containing the log directory of this JournalNode has sufficient free space. This test can be configured using the Log Directory Free Space Monitoring Absolute Thresholds and Log Directory Free Space Monitoring Percentage Thresholds JournalNode monitoring settings.
Short Name: Log Directory Free Space
Property Name | Description | Template Name | Default Value | Unit |
---|---|---|---|---|
Log Directory Free Space Monitoring Absolute Thresholds | The health test thresholds for monitoring of free space on the filesystem that contains this role's log directory. | log_directory_free_space_absolute_thresholds | critical:5.36870912E9, warning:1.073741824E10 | BYTES |
Log Directory Free Space Monitoring Percentage Thresholds | The health test thresholds for monitoring of free space on the filesystem that contains this role's log directory. Specified as a percentage of the capacity on that filesystem. This setting is not used if a Log Directory Free Space Monitoring Absolute Thresholds setting is configured. | log_directory_free_space_percentage_thresholds | critical:never, warning:never | PERCENT |
JournalNode Process Status
This JournalNode health test checks that the Cloudera Manager Agent on the JournalNode host is heart beating correctly and that the process associated with the JournalNode role is in the state expected by Cloudera Manager. A failure of this health test may indicate a problem with the JournalNode process, a lack of connectivity to the Cloudera Manager Agent on the JournalNode host, or a problem with the Cloudera Manager Agent. This test can fail either because the JournalNode has crashed or because the JournalNode will not start or stop in a timely fashion. Check the JournalNode logs for more details. If the test fails because of problems communicating with the Cloudera Manager Agent on the JournalNode host, check the status of the Cloudera Manager Agent by running /etc/init.d/cloudera-scm-agent status on the JournalNode host, or look in the Cloudera Manager Agent logs on the JournalNode host for more details. This test can be enabled or disabled using the JournalNode Process Health Test JournalNode monitoring setting.
Short Name: Process Status
Property Name | Description | Template Name | Default Value | Unit |
---|---|---|---|---|
JournalNode Process Health Test | Enables the health test that the JournalNode's process state is consistent with the role configuration. | journalnode_scm_health_enabled | true | no unit |
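If you want to script the agent check mentioned above, a minimal Python sketch might look like the following. It assumes a standard init-script installation where the status command mentioned above exits with code 0 when the agent is running; it is illustrative only.

```python
import subprocess

def agent_running():
    """Run the Cloudera Manager Agent status command from the text above.
    Exit code 0 conventionally means the agent is running."""
    result = subprocess.run(["/etc/init.d/cloudera-scm-agent", "status"],
                            capture_output=True, text=True)
    print(result.stdout.strip())
    return result.returncode == 0
```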
JournalNode Sync Status
This JournalNode health test checks that the active NameNode is in sync with this JournalNode. The test returns "Bad" health if the active NameNode is out of sync with the JournalNode, and is disabled when there is no active NameNode. This test can be configured using the Active NameNode Sync Status Health Check and Active NameNode Sync Status Startup Tolerance JournalNode monitoring settings.
Short Name: Sync Status
Property Name | Description | Template Name | Default Value | Unit |
---|---|---|---|---|
Active NameNode Sync Status Health Check | Enables the health check that verifies the active NameNode's sync status to the JournalNode. | journalnode_sync_status_enabled | true | no unit |
Active NameNode Sync Status Startup Tolerance | The amount of time at JournalNode startup allowed for the active NameNode to get in sync with the JournalNode. | journalnode_sync_status_startup_tolerance | 180 | SECONDS |
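The following Python sketch illustrates how the startup tolerance interacts with the sync check: within the tolerance window an out-of-sync active NameNode is not reported as bad, and the test is disabled when there is no active NameNode. Function and parameter names are assumptions for illustration, not Cloudera Manager's implementation.

```python
import time

STARTUP_TOLERANCE_SECONDS = 180  # journalnode_sync_status_startup_tolerance default

def sync_status_health(journalnode_start_time, namenode_in_sync, active_namenode_present):
    """Illustrative evaluation of the sync status check described above."""
    if not active_namenode_present:
        return "Disabled"                      # test is disabled with no active NameNode
    if namenode_in_sync:
        return "Good"
    uptime = time.time() - journalnode_start_time
    if uptime <= STARTUP_TOLERANCE_SECONDS:
        return "Good"                          # still within the startup tolerance window
    return "Bad"
```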
JournalNode Unexpected Exits
This JournalNode health test checks that the JournalNode has not recently exited unexpectedly. The test returns "Bad" health if the number of unexpected exits goes above a critical threshold. For example, if this test is configured with a critical threshold of 1, it returns "Good" health if there have been no unexpected exits recently and "Bad" health if there have been 1 or more. This test can be configured using the Unexpected Exits Thresholds and Unexpected Exits Monitoring Period JournalNode monitoring settings.
Short Name: Unexpected Exits
Property Name | Description | Template Name | Default Value | Unit |
---|---|---|---|---|
Unexpected Exits Monitoring Period | The period to review when computing unexpected exits. | unexpected_exits_window | 5 | MINUTES |
Unexpected Exits Thresholds | The health test thresholds for unexpected exits encountered within a recent period specified by the unexpected_exits_window configuration for the role. | unexpected_exits_thresholds | critical:any, warning:never | no unit |
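The threshold values above use the keywords "any" and "never". The Python sketch below shows one plausible reading of those keywords applied to the count of unexpected exits within the monitoring period; it is illustrative only, not Cloudera Manager's evaluator.

```python
def crosses(threshold, count):
    """'never' never triggers, 'any' triggers on one or more events,
    and a numeric threshold triggers once the count reaches it."""
    if threshold == "never":
        return False
    if threshold == "any":
        return count >= 1
    return count >= float(threshold)

def unexpected_exits_health(exit_count_in_window, critical="any", warning="never"):
    # Defaults from the table above: critical on any exit in the 5-minute window.
    if crosses(critical, exit_count_in_window):
        return "Bad"
    if crosses(warning, exit_count_in_window):
        return "Concerning"
    return "Good"
```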
JournalNode Web Server Status
This JournalNode health test checks that the web server of the JournalNode is responding quickly to requests by the Cloudera Manager Agent, and that the Cloudera Manager Agent can collect metrics from the web server. A failure of this health test may indicate a problem with the web server of the JournalNode, a misconfiguration of the JournalNode or a problem with the Cloudera Manager Agent. Consult the Cloudera Manager Agent logs and the logs of the JournalNode for more detail. If the test's failure message indicates a communication problem, this means that the Cloudera Manager Agent's HTTP requests to the JournalNode's web server are failing or timing out. These requests are completely local to the JournalNode's host, and so should never fail under normal conditions. If the test's failure message indicates an unexpected response, then the JournalNode's web server responded to the Cloudera Manager Agent's request, but the Cloudera Manager Agent could not interpret the response for some reason. This test can be configured using the Web Metric Collection JournalNode monitoring setting.
Short Name: Web Server Status
Property Name | Description | Template Name | Default Value | Unit |
---|---|---|---|---|
Web Metric Collection | Enables the health test that the Cloudera Manager Agent can successfully contact and gather metrics from the web server. | journalnode_web_metric_collection_enabled | true | no unit |
Web Metric Collection Duration | The health test thresholds on the duration of the metrics request to the web server. | journalnode_web_metric_collection_thresholds | critical:never, warning:10000.0 | MILLISECONDS |
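As an illustration of how the duration threshold might be applied, the Python sketch below times a metrics request to the JournalNode web server and compares the elapsed milliseconds against the warning and critical values. The URL (the default JournalNode HTTP port 8480 and the /jmx endpoint) is an assumption for the example; the code is not Cloudera Manager's implementation.

```python
import time
import urllib.request

def timed_metrics_fetch(url="http://localhost:8480/jmx",   # assumed JournalNode web UI port and path
                        warning_ms=10000.0, critical_ms=None):
    """Time a metrics request and compare it against the Web Metric Collection
    Duration thresholds (critical:never, warning:10000.0 by default)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=30) as resp:
            resp.read()
    except OSError:
        return "Bad"                                        # communication problem
    elapsed_ms = (time.monotonic() - start) * 1000.0
    if critical_ms is not None and elapsed_ms >= critical_ms:
        return "Bad"
    if warning_ms is not None and elapsed_ms >= warning_ms:
        return "Concerning"
    return "Good"
```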