When set, Cloudera Manager will send alerts when this entity's configuration changes.
Related Name
Default Value
false
API Name
enable_config_alerts
Required
false
Other🔗
Alternatives Priority🔗
Description
The priority level that the client configuration will have in the Alternatives system on the hosts. Higher priority levels will cause Alternatives to prefer this configuration over any others.
Related Name
Default Value
51
API Name
client_config_priority
Required
true
Spark Data Serializer🔗
Description
Name of class implementing org.apache.spark.serializer.Serializer to use in Spark applications.
Related Name
spark.serializer
Default Value
org.apache.spark.serializer.KryoSerializer
API Name
spark_data_serializer
Required
true
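As a sketch, the entry above maps directly to the spark.serializer property (its Related Name) in the generated spark-conf/spark-defaults.conf:

```properties
# Generated client configuration (sketch; value is the default shown above)
spark.serializer=org.apache.spark.serializer.KryoSerializer
```

A user can still override this per application, for example with `--conf spark.serializer=org.apache.spark.serializer.JavaSerializer` at submit time.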
Default Application Deploy Mode🔗
Description
Which deploy mode to use by default. Can be overridden by users when launching applications.
Related Name
spark_deploy_mode
Default Value
client
API Name
spark_deploy_mode
Required
false
Caching Executor Idle Timeout🔗
Description
When dynamic allocation is enabled, time after which idle executors with cached RDD blocks will be stopped. By default, they are never stopped.
Related Name
spark.dynamicAllocation.cachedExecutorIdleTimeout
Default Value
API Name
spark_dynamic_allocation_cached_idle_timeout
Required
false
Enable Dynamic Allocation🔗
Description
Enable dynamic allocation of executors in Spark applications.
Related Name
spark.dynamicAllocation.enabled
Default Value
true
API Name
spark_dynamic_allocation_enabled
Required
false
Executor Idle Timeout🔗
Description
When dynamic allocation is enabled, time after which idle executors will be stopped.
Related Name
spark.dynamicAllocation.executorIdleTimeout
Default Value
1 minute(s)
API Name
spark_dynamic_allocation_idle_timeout
Required
false
Initial Executor Count🔗
Description
When dynamic allocation is enabled, number of executors to allocate when the application starts. By default, this is the same value as the minimum number of executors.
Related Name
spark.dynamicAllocation.initialExecutors
Default Value
API Name
spark_dynamic_allocation_initial_executors
Required
false
Maximum Executor Count🔗
Description
When dynamic allocation is enabled, maximum number of executors to allocate. By default, Spark relies on YARN to control the maximum number of executors for the application.
Related Name
spark.dynamicAllocation.maxExecutors
Default Value
API Name
spark_dynamic_allocation_max_executors
Required
false
Minimum Executor Count🔗
Description
When dynamic allocation is enabled, minimum number of executors to keep alive while the application is running.
Related Name
spark.dynamicAllocation.minExecutors
Default Value
0
API Name
spark_dynamic_allocation_min_executors
Required
false
Scheduler Backlog Timeout🔗
Description
When dynamic allocation is enabled, timeout before requesting new executors when there are backlogged tasks.
Sustained Scheduler Backlog Timeout🔗
Description
When dynamic allocation is enabled, timeout before requesting new executors after the initial backlog timeout has already expired. By default, this is the same value as the initial backlog timeout.
Enable Kill From UI🔗
Description
Whether to allow users to kill running stages from the Spark Web UI.
Related Name
spark.ui.killEnabled
Default Value
true
API Name
spark_gateway_ui_kill_enabled
Required
true
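Taken together, the dynamic allocation entries above correspond to the following spark-defaults.conf properties. The values below are illustrative examples, not recommendations; unset properties fall back to the defaults described in each entry:

```properties
spark.dynamicAllocation.enabled=true
spark.dynamicAllocation.minExecutors=0
spark.dynamicAllocation.maxExecutors=20         # illustrative; by default YARN caps this
spark.dynamicAllocation.initialExecutors=2      # defaults to minExecutors when unset
spark.dynamicAllocation.executorIdleTimeout=60s
spark.dynamicAllocation.cachedExecutorIdleTimeout=600s  # illustrative; never by default
# Dynamic allocation requires the external shuffle service:
spark.shuffle.service.enabled=true
```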
Enable History🔗
Description
Write Spark application history logs to HDFS.
Related Name
spark.eventLog.enabled
Default Value
true
API Name
spark_history_enabled
Required
false
Enable I/O Encryption🔗
Description
Whether to encrypt temporary shuffle and cache files stored by Spark on the local disks.
Related Name
spark.io.encryption.enabled
Default Value
false
API Name
spark_io_encryption_enabled
Required
false
Enable Spark Lineage🔗
Description
Whether to enable spark lineage support. If enabled, spark lineage is sent to Atlas.
Related Name
spark.lineage.enabled
Default Value
true
API Name
spark_lineage_enabled
Required
false
Enable Network Encryption🔗
Description
Whether to encrypt communication between Spark processes belonging to the same application. Requires authentication (spark.authenticate) to be enabled.
Related Name
spark.network.crypto.enabled
Default Value
false
API Name
spark_network_encryption_enabled
Required
false
Enable Optimized S3 Committers🔗
Description
Whether to use optimized committers when writing data to S3.
Related Name
spark.cloudera.s3_committers.enabled
Default Value
true
API Name
spark_optimized_s3_committers_enabled
Required
false
Extra Python Path🔗
Description
Python library paths to add to PySpark applications.
Related Name
spark_python_path
Default Value
API Name
spark_python_path
Required
false
Enable Shuffle Service🔗
Description
Enables the external shuffle service. The external shuffle service preserves shuffle files written by executors so that the executors can be deallocated without losing work. Must be enabled if Enable Dynamic Allocation is enabled. Recommended and enabled by default.
Related Name
spark.shuffle.service.enabled
Default Value
true
API Name
spark_shuffle_service_enabled
Required
true
Enable Spark Web UI🔗
Description
Whether to enable the Spark Web UI on individual applications. It's recommended that the UI be disabled in secure clusters.
Related Name
spark.ui.enabled
Default Value
true
API Name
spark_ui_enabled
Required
false
A comma-separated list of secure Hadoop filesystems🔗
Description
A comma-separated list of secure Hadoop filesystems your Spark application is going to access. For example, hdfs://nn1.com:8032,hdfs://nn2.com:8032. The Spark application must have access to the filesystems listed and Kerberos must be properly configured to be able to access them (either in the same realm or in a trusted realm). Spark acquires security tokens for each of the filesystems so that the Spark application can access those remote Hadoop filesystems.
Related Name
spark.yarn.access.hadoopFileSystems
Default Value
API Name
spark_yarn_access_hadoopfilesystems
Required
false
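Using the example hosts from the description above (placeholders, not real endpoints), the generated property would look like:

```properties
# Delegation tokens are acquired for each listed secure filesystem
spark.yarn.access.hadoopFileSystems=hdfs://nn1.com:8032,hdfs://nn2.com:8032
```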
Suppressions🔗
Suppress Configuration Validator: CDH Version Validator🔗
Description
Whether to suppress configuration warnings produced by the CDH Version Validator configuration validator.
Related Name
Default Value
false
API Name
role_config_suppression_cdh_version_validator
Required
true
Suppress Parameter Validation: Deploy Directory🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the Deploy Directory parameter.
Suppress Parameter Validation: Gateway Logging Advanced Configuration Snippet (Safety Valve)🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the Gateway Logging Advanced Configuration Snippet (Safety Valve) parameter.
Suppress Parameter Validation: Spark Client Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-defaults.conf🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the Spark Client Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-defaults.conf parameter.
Suppress Parameter Validation: Spark Client Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the Spark Client Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh parameter.
Suppress Parameter Validation: Spark Data Serializer🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the Spark Data Serializer parameter.
Related Name
Default Value
false
API Name
role_config_suppression_spark_data_serializer
Required
true
Suppress Parameter Validation: Extra Python Path🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the Extra Python Path parameter.
Related Name
Default Value
false
API Name
role_config_suppression_spark_python_path
Required
true
Suppress Parameter Validation: A comma-separated list of secure Hadoop filesystems🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the A comma-separated list of secure Hadoop filesystems parameter.
History Server Logging Advanced Configuration Snippet (Safety Valve)🔗
Description
For advanced use only, a string to be inserted into log4j.properties for this role only.
Related Name
Default Value
API Name
log4j_safety_valve
Required
false
Enable auto refresh for metric configurations🔗
Description
When true, changes to the Enable Metric Collection and Metric Filter parameters are applied automatically. Otherwise, a manual refresh is required.
Related Name
Default Value
false
API Name
metric_config_auto_refresh
Required
false
Heap Dump Directory🔗
Description
Path to directory where heap dumps are generated when java.lang.OutOfMemoryError error is thrown. This directory is automatically created if it does not exist. If this directory already exists, it will be owned by the current role user with 1777 permissions. Sharing the same directory among multiple roles will cause an ownership race. The heap dump files are created with 600 permissions and are owned by the role user. The amount of free space in this directory should be greater than the maximum Java Process heap size configured for this role.
Related Name
oom_heap_dump_dir
Default Value
/tmp
API Name
oom_heap_dump_dir
Required
false
Dump Heap When Out of Memory🔗
Description
When set, generates a heap dump file when an out-of-memory error occurs.
Related Name
Default Value
false
API Name
oom_heap_dump_enabled
Required
true
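A sketch of what these two settings typically translate to at the JVM level; the exact options Cloudera Manager emits, and the environment variable used to carry them, may differ:

```shell
# Sketch: standard HotSpot flags for dumping the heap on OOM
# (path is the value of oom_heap_dump_dir)
SPARK_HISTORY_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp"
```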
Kill When Out of Memory🔗
Description
When set, a SIGKILL signal is sent to the role process when java.lang.OutOfMemoryError is thrown.
Related Name
Default Value
true
API Name
oom_sigkill_enabled
Required
true
Automatically Restart Process🔗
Description
When set, this role's process is automatically (and transparently) restarted in the event of an unexpected failure. This configuration applies in the time after the Start Wait Timeout period.
Related Name
Default Value
false
API Name
process_auto_restart
Required
true
Enable Metric Collection🔗
Description
The Cloudera Manager agent monitors each service and each of its roles by publishing metrics to the Cloudera Manager Service Monitor. Setting this to false stops the Cloudera Manager agent from publishing any metrics for the corresponding service or roles. This is usually helpful for services that generate a large number of metrics that the Service Monitor is unable to process.
Related Name
Default Value
true
API Name
process_should_monitor
Required
true
Process Start Retry Attempts🔗
Description
Number of times to try starting a role's process when the process exits before the Start Wait Timeout period. After a process is running beyond the Start Wait Timeout, the retry count is reset. Setting this configuration to zero will prevent restart of the process during the Start Wait Timeout period.
Related Name
Default Value
3
API Name
process_start_retries
Required
false
Process Start Wait Timeout🔗
Description
The time in seconds to wait for a role's process to start successfully on a host. Processes which exit/crash before this time will be restarted until reaching the limit specified by the Start Retry Attempts count parameter. Setting this configuration to zero will turn off this feature.
Related Name
Default Value
20
API Name
process_start_secs
Required
false
History Server Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh🔗
Description
For advanced use only. A string to be inserted into spark-conf/spark-env.sh for this role only.
Related Name
Default Value
API Name
spark-conf/spark-env.sh_role_safety_valve
Required
false
History Server Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-history-server.conf🔗
Description
For advanced use only. A string to be inserted into spark-conf/spark-history-server.conf for this role only.
History Server Environment Advanced Configuration Snippet (Safety Valve)🔗
Description
For advanced use only, key-value pairs (one on each line) to be inserted into a role's environment. Applies to configurations of this role except client configuration.
Related Name
Default Value
API Name
SPARK_YARN_HISTORY_SERVER_role_env_safety_valve
Required
false
Logs🔗
History Server Log Directory🔗
Description
The log directory for log files of the role History Server.
Related Name
log_dir
Default Value
/var/log/spark
API Name
log_dir
Required
false
History Server Logging Threshold🔗
Description
The minimum log level for History Server logs
Related Name
Default Value
INFO
API Name
log_threshold
Required
false
History Server Maximum Log File Backups🔗
Description
The maximum number of rolled log files to keep for History Server logs. Typically used by log4j or logback.
Related Name
Default Value
10
API Name
max_log_backup_index
Required
false
History Server Max Log Size🔗
Description
The maximum size, in megabytes, per log file for History Server logs. Typically used by log4j or logback.
Related Name
Default Value
200 MiB
API Name
max_log_size
Required
false
Monitoring🔗
Enable Health Alerts for this Role🔗
Description
When set, Cloudera Manager will send alerts when the health of this role reaches the threshold specified by the EventServer setting eventserver_health_events_alert_threshold.
Related Name
Default Value
true
API Name
enable_alerts
Required
false
Enable Configuration Change Alerts🔗
Description
When set, Cloudera Manager will send alerts when this entity's configuration changes.
Related Name
Default Value
false
API Name
enable_config_alerts
Required
false
Enable JMX Exporter (beta)🔗
Description
JMX Exporter support is a beta feature. If enabled, CM configures the role to run JMX Exporter in agent mode with the provided port and YAML configuration. This exporter then can be used with the OpenTelemetry Collector feature. See the JMX Exporter documentation.
Log Directory Free Space Monitoring Absolute Thresholds🔗
Description
The health test thresholds for monitoring of free space on the filesystem that contains this role's log directory.
Related Name
Default Value
Warning: 10 GiB, Critical: 5 GiB
API Name
log_directory_free_space_absolute_thresholds
Required
false
Log Directory Free Space Monitoring Percentage Thresholds🔗
Description
The health test thresholds for monitoring of free space on the filesystem that contains this role's log directory. Specified as a percentage of the capacity on that filesystem. This setting is not used if a Log Directory Free Space Monitoring Absolute Thresholds setting is configured.
Related Name
Default Value
Warning: Never, Critical: Never
API Name
log_directory_free_space_percentage_thresholds
Required
false
Metric Filter🔗
Description
Defines a Metric Filter for this role. Cloudera Manager Agents will not send filtered metrics to the Service Monitor. Define the following fields:
Health Test Metric Set - Select this parameter to collect only metrics required for health tests.
Default Dashboard Metric Set - Select this parameter to collect only metrics required for the default dashboards. For user-defined charts, you must add the metrics you require for the chart using the Custom Metrics parameter.
Include/Exclude Custom Metrics - Select Include to specify metrics that should be collected. Select Exclude to specify metrics that should not be collected. Enter the metric names to be included or excluded using the Metric Name parameter.
Metric Name - The name of a metric that will be included or excluded during metric collection.
If you do not select Health Test Metric Set or Default Dashboard Metric Set, or specify metrics by name, metric filtering will be turned off (this is the default behavior). For example, the following configuration enables the collection of metrics required for Health Tests and the jvm_heap_used_mb metric:
Include only Health Test Metric Set: Selected.
Include/Exclude Custom Metrics: Set to Include.
Metric Name: jvm_heap_used_mb
You can also view the JSON representation for this parameter by clicking View as JSON. In this example, the JSON looks like this:
{
  "includeHealthTestMetricSet": true,
  "filterType": "whitelist",
  "metrics": ["jvm_heap_used_mb"]
}
Related Name
Default Value
API Name
monitoring_metric_filter
Required
false
OpenTelemetry Collector Exporters Section🔗
Description
Define the exporters settings as a yaml snippet according to the OpenTelemetry Collector standards. Variable substitution available, see the receivers' help.
OpenTelemetry Collector Extensions Section🔗
Description
Define the extensions settings as a yaml snippet according to the OpenTelemetry Collector standards. Variable substitution available, see the receivers' help.
OpenTelemetry Collector Helper Port🔗
Description
This port can be used for JMX Exporter to implement a Prometheus exporter, or for other OpenTelemetry Collector related purposes.
Related Name
Default Value
API Name
otelcol_helper_port
Required
false
OpenTelemetry Collector Processors Section🔗
Description
Define the processors settings as a yaml snippet according to the OpenTelemetry Collector standards. Variable substitution available, see the receivers' help.
Related Name
Default Value
API Name
otelcol_processors
Required
false
OpenTelemetry Collector Receivers Section🔗
Description
Define the receivers settings as a yaml snippet according to the OpenTelemetry Collector standards. A number of variables help to use the same config everywhere. The following strings or expressions will be substituted: $HOST_NAME, $CLUSTER_NAME, $CLUSTER_ID, $SERVICE_TYPE, $SERVICE_NAME, $ROLE_NAME, $ROLE_TYPE; $ROLE_PARAM(my_parameter_name), e.g. a port parameter for the role's metrics; $DECODE_B64(...) and $DECODE_URL(...) to decode encoded parameters; $ENV_PARAM(name) to fetch parameters from the process's environment; and $SYS_PARAM(name) to fetch Java system properties.
Related Name
Default Value
API Name
otelcol_receivers
Required
false
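A hypothetical sketch of a receivers snippet using the substitution variables listed above; the prometheus receiver is a standard OpenTelemetry Collector component, but the job name and target here are illustrative assumptions:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: "$SERVICE_NAME-$ROLE_TYPE"   # substituted by Cloudera Manager
          static_configs:
            - targets: ["$HOST_NAME:$ROLE_PARAM(otelcol_helper_port)"]
```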
OpenTelemetry Collector Remote Write Password🔗
Description
Remote write password for the OpenTelemetry Collector. This param is for convenience and intended to be used at the extensions section of Otelcol settings using the $ROLE_PARAM(otelcol_remote_write_password) expression. Specify $INFRA(cdp_request_signer_password) when forwarding to Cloudera Observability central monitoring. (This is the default.)
Related Name
Default Value
******
API Name
otelcol_remote_write_password
Required
false
OpenTelemetry Collector Remote Write URL🔗
Description
Remote write URL for the OpenTelemetry Collector. This param is for convenience and intended to be used at the exporters section of Otelcol settings using the $ROLE_PARAM(otelcol_remote_write_url) expression. Specify $INFRA(cdp_request_signer_url) when forwarding to Cloudera Observability central monitoring.
Related Name
Default Value
$INFRA(cdp_request_signer_url)
API Name
otelcol_remote_write_url
Required
false
OpenTelemetry Collector Remote Write Username🔗
Description
Remote write username for the OpenTelemetry Collector. This param is for convenience and intended to be used at the extensions section of Otelcol settings using the $ROLE_PARAM(otelcol_remote_write_user) expression. Specify $INFRA(cdp_request_signer_username) when forwarding to Cloudera Observability central monitoring.
Related Name
Default Value
$INFRA(cdp_request_signer_username)
API Name
otelcol_remote_write_user
Required
false
Real-Time Monitoring for Jobs / Queries with OpenTelemetry - Exporters Section🔗
Description
Define the exporters settings as a yaml snippet according to the OpenTelemetry Collector standards. Variable substitution available, see the receivers' help.
Related Name
Default Value
API Name
otelcol_rtm_logs_exporters
Required
false
Real-Time Monitoring for Jobs / Queries with OpenTelemetry - Extensions Section🔗
Description
Define the extensions settings as a yaml snippet according to the OpenTelemetry Collector standards. Variable substitution available, see the receivers' help.
Related Name
Default Value
API Name
otelcol_rtm_logs_extensions
Required
false
Real-Time Monitoring for Jobs / Queries with OpenTelemetry - Processors Section🔗
Description
Define the processors settings as a yaml snippet according to the OpenTelemetry Collector standards. Variable substitution available, see the receivers' help.
Related Name
Default Value
API Name
otelcol_rtm_logs_processors
Required
false
Real-Time Monitoring for Jobs / Queries with OpenTelemetry - Receivers Section🔗
Description
Define the receivers settings as a yaml snippet according to the OpenTelemetry Collector standards. A number of variables help to use the same config everywhere. The following strings or expressions will be substituted: $HOST_NAME, $CLUSTER_NAME, $CLUSTER_ID, $SERVICE_TYPE, $SERVICE_NAME, $ROLE_NAME, $ROLE_TYPE; $ROLE_PARAM(my_parameter_name), e.g. a port parameter for the role's metrics; $DECODE_B64(...) and $DECODE_URL(...) to decode encoded parameters; $ENV_PARAM(name) to fetch parameters from the process's environment; and $SYS_PARAM(name) to fetch Java system properties.
Related Name
Default Value
API Name
otelcol_rtm_logs_receivers
Required
false
Real-Time Monitoring for Jobs / Queries with OpenTelemetry - Service Section🔗
Description
Define the service settings as a yaml snippet according to the OpenTelemetry Collector standards. Variable substitution available, see the receivers' help.
Related Name
Default Value
API Name
otelcol_rtm_logs_service
Required
false
OpenTelemetry Collector Service Section🔗
Description
Define the service settings as a yaml snippet according to the OpenTelemetry Collector standards. Variable substitution available, see the receivers' help.
Related Name
Default Value
API Name
otelcol_service
Required
false
Enable OpenTelemetry Collector (beta)🔗
Description
OpenTelemetry Collector support is a new beta feature (subject to change without notice) that runs the OpenTelemetry Collector as an agent alongside the CM Agent to forward metrics to a Prometheus-like storage.
Related Name
Default Value
false
API Name
otelcol_should_collect
Required
true
Enable Real-Time Monitoring for Jobs / Queries with OpenTelemetry🔗
Description
This enables the OpenTelemetry Collector as an agent alongside the CM Agent to forward real-time monitoring data about jobs / queries to the Observability backend. The configuration of the subsections should not be modified manually.
Related Name
Default Value
false
API Name
otelcol_should_collect_rtm_logs
Required
true
Swap Memory Usage Rate Thresholds🔗
Description
The health test thresholds on the swap memory usage rate of the process. Specified as the change of the used swap memory during the predefined period.
Related Name
Default Value
Warning: Never, Critical: Never
API Name
process_swap_memory_rate_thresholds
Required
false
Swap Memory Usage Rate Window🔗
Description
The period to review when computing unexpected swap memory usage change of the process.
Related Name
common.process.swap_memory_rate_window
Default Value
5 minute(s)
API Name
process_swap_memory_rate_window
Required
false
Process Swap Memory Thresholds🔗
Description
The health test thresholds on the swap memory usage of the process. This takes precedence over the host level threshold.
Related Name
Default Value
Warning: 200 B, Critical: Never
API Name
process_swap_memory_thresholds
Required
false
Role Triggers🔗
Description
The configured triggers for this role. This is a JSON-formatted list of triggers. These triggers are evaluated as part of the health system. Every trigger expression is parsed, and if the trigger condition is met, the list of actions provided in the trigger expression is executed. Each trigger has the following fields:
triggerName (mandatory) - The name of the trigger. This value must be unique for the specific role.
triggerExpression (mandatory) - A tsquery expression representing the trigger.
streamThreshold (optional) - The maximum number of streams that can satisfy a condition of a trigger before the condition fires. By default set to 0, and any stream returned causes the condition to fire.
enabled (optional) - By default set to 'true'. If set to 'false', the trigger is not evaluated.
expressionEditorConfig (optional) - Metadata for the trigger editor. If present, the trigger should only be edited from the Edit Trigger page; editing the trigger here can lead to inconsistencies.
For example, the following JSON-formatted trigger configured for a DataNode fires if the DataNode has more than 1500 file descriptors opened:
[{"triggerName": "sample-trigger",
  "triggerExpression": "IF (SELECT fd_open WHERE roleName=$ROLENAME and last(fd_open) > 1500) DO health:bad",
  "streamThreshold": 0, "enabled": "true"}]
See the trigger rules documentation for more details on how to write triggers using tsquery. The JSON format is evolving and may change; as a result, backward compatibility is not guaranteed between releases.
Related Name
Default Value
[]
API Name
role_triggers
Required
true
File Descriptor Monitoring Thresholds🔗
Description
The health test thresholds of the number of file descriptors used. Specified as a percentage of file descriptor limit.
Related Name
Default Value
Warning: 50.0 %, Critical: 70.0 %
API Name
spark_yarn_history_server_fd_thresholds
Required
false
History Server Host Health Test🔗
Description
When computing the overall History Server health, consider the host's health.
Related Name
Default Value
true
API Name
spark_yarn_history_server_host_health_enabled
Required
false
History Server Process Health Test🔗
Description
Enables the health test that verifies the History Server's process state is consistent with the role configuration.
Related Name
Default Value
true
API Name
spark_yarn_history_server_scm_health_enabled
Required
false
Unexpected Exits Thresholds🔗
Description
The health test thresholds for unexpected exits encountered within a recent period specified by the unexpected_exits_window configuration for the role.
Related Name
Default Value
Warning: Never, Critical: Any
API Name
unexpected_exits_thresholds
Required
false
Unexpected Exits Monitoring Period🔗
Description
The period to review when computing unexpected exits.
Related Name
Default Value
5 minute(s)
API Name
unexpected_exits_window
Required
false
Other🔗
Use Local Storage🔗
Description
Whether to use local storage for caching application history data, which reduces memory usage and makes service restarts faster.
Related Name
enable_local_storage
Default Value
false
API Name
enable_local_storage
Required
false
Enable Event Log Cleaner🔗
Description
Specifies whether the History Server should periodically clean up event logs from storage.
Related Name
spark.history.fs.cleaner.enabled
Default Value
true
API Name
event_log_cleaner_enabled
Required
false
Event Log Cleaner Interval🔗
Description
How often the History Server will clean up event log files.
Related Name
spark.history.fs.cleaner.interval
Default Value
1 day(s)
API Name
event_log_cleaner_interval
Required
false
Maximum Event Log Age🔗
Description
Specifies the maximum age of the event logs.
Related Name
spark.history.fs.cleaner.maxAge
Default Value
7 day(s)
API Name
event_log_cleaner_max_age
Required
false
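The three event log cleaner entries above correspond to the following History Server properties (names from the Related Name fields; values are the defaults shown):

```properties
# Periodic cleanup of event logs from storage
spark.history.fs.cleaner.enabled=true
spark.history.fs.cleaner.interval=1d
spark.history.fs.cleaner.maxAge=7d
```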
Admin Users🔗
Description
Comma-separated list of users who can view all applications when authentication is enabled.
Related Name
spark.history.ui.admin.acls
Default Value
knox
API Name
history_server_admin_users
Required
false
HDFS Polling Interval🔗
Description
How often to poll HDFS for new applications.
Related Name
spark.history.fs.update.interval.seconds
Default Value
10 second(s)
API Name
history_server_fs_poll_interval
Required
false
Java Heap Size of History Server in Bytes🔗
Description
Maximum size for the Java process heap memory. Passed to Java -Xmx. Measured in bytes.
Related Name
history_server_max_heapsize
Default Value
512 MiB
API Name
history_server_max_heapsize
Required
true
Retained App Count🔗
Description
Max number of application UIs to keep in the History Server's memory. All applications will still be available, but may take longer to load if they're not in memory.
Related Name
spark.history.retainedApplications
Default Value
50
API Name
history_server_retained_apps
Required
false
Enable User Authentication🔗
Description
Enables user authentication using SPNEGO (requires Kerberos), and enables access control to application history data.
Related Name
history_server_spnego_enabled
Default Value
false
API Name
history_server_spnego_enabled
Required
false
Local Storage Directory🔗
Description
Directory in which to keep local caches of application history data.
Related Name
spark.history.store.path
Default Value
/var/lib/spark/history
API Name
local_storage_dir
Required
false
Max Local Storage Size🔗
Description
Approximate maximum amount of data to use in local storage for caching application history data.
Related Name
spark.history.store.maxDiskUsage
Default Value
10 GiB
API Name
local_storage_max_usage
Required
false
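When Use Local Storage is enabled, the two entries above map to the following properties (names from the Related Name fields; values are the defaults shown):

```properties
# Local cache of application history data
spark.history.store.path=/var/lib/spark/history
spark.history.store.maxDiskUsage=10g
```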
Performance🔗
Maximum Process File Descriptors🔗
Description
If configured, overrides the process soft and hard rlimits (also called ulimits) for file descriptors to the configured value.
Related Name
Default Value
API Name
rlimit_fds
Required
false
Ports and Addresses🔗
History Server WebUI Port🔗
Description
The port of the History Server Web UI.
Related Name
spark.history.ui.port
Default Value
18088
API Name
history_server_web_port
Required
true
TLS/SSL Port Number🔗
Description
The port on which to listen for TLS/SSL connections. HTTP connections will be redirected to this port when TLS/SSL is enabled.
Related Name
spark.ssl.historyServer.port
Default Value
18488
API Name
ssl_server_port
Required
false
Resource Management🔗
Cgroup V1 BLKIO Weight🔗
Description
Weight for the read I/O requests issued by this role, enforced by the Linux kernel under cgroup v1. The greater the weight, the higher the priority of the requests when the host experiences I/O contention. Must be between 100 and 1000. Defaults to 1000 for processes not managed by Cloudera Manager.
Related Name
blkio.weight
Default Value
500
API Name
rm_blkio_weight
Required
true
Cgroup V1 CPU Shares🔗
Description
Number of CPU shares to assign to this role, enforced by the Linux kernel under cgroup v1. The greater the number of shares, the larger the share of the host's CPUs that will be given to this role when the host experiences CPU contention. Must be between 2 and 262144. Defaults to 1024 for processes not managed by Cloudera Manager.
Related Name
cpu.shares
Default Value
1024
API Name
rm_cpu_shares
Required
true
Cgroup V2 CPU Weight🔗
Description
Weight of CPU resources to assign to this role, enforced by the Linux kernel under cgroup v2. The greater the weight, the larger the share of the host's CPUs that will be given to this role when the host experiences CPU contention. Must be between 1 and 10000. Defaults to 100.
Related Name
cpu.weight
Default Value
100
API Name
rm_cpu_weight
Required
true
Custom Control Group Resources (overrides Cgroup settings)🔗
Description
Custom control group resources to assign to this role, which will be enforced by the Linux kernel. These resources should exist on the target hosts, otherwise an error will occur when the process starts. Use the same format as used for arguments to the cgexec command: resource1,resource2:path1 or resource3:path2. For example: 'cpu,memory:my/path blkio:my2/path2'. ***These settings override other cgroup settings.***
Related Name
custom.cgroups
Default Value
API Name
rm_custom_resources
Required
false
Cgroup V2 I/O Weight🔗
Description
Weight for the I/O requests issued by this role, enforced by the Linux kernel under cgroup v2. The greater the weight, the higher the priority of the requests when the host experiences I/O contention. Must be between 1 and 10000. Defaults to 100.
Related Name
io.weight
Default Value
100
API Name
rm_io_weight
Required
true
Cgroup V1 Memory Hard Limit🔗
Description
Hard memory limit to assign to this role, enforced by the Linux kernel under cgroup v1. When the limit is reached, the kernel will reclaim pages charged to the process. If reclaiming fails, the kernel may kill the process. Both anonymous and page cache pages contribute to the limit. Use a value of -1 to specify no limit. By default, processes not managed by Cloudera Manager have no limit. If the value is -1, Cloudera Manager will not monitor Cgroup memory usage, and some charts will show 'No Data'.
Related Name
memory.limit_in_bytes
Default Value
-1 MiB
API Name
rm_memory_hard_limit_v1
Required
true
Cgroup V2 Memory Hard Limit🔗
Description
Hard memory limit to assign to this role, enforced by the Linux kernel under cgroup v2. When the limit is reached, the kernel will reclaim pages charged to the process. If reclaiming fails, the kernel may kill the process. Both anonymous and page cache pages contribute to the limit. Use a value of 'max' to specify no limit. By default, processes not managed by Cloudera Manager will have no limit. If the value is 'max', Cloudera Manager will not monitor Cgroup memory usage, and some charts will show 'No Data'.
Related Name
memory.max
Default Value
-1 MiB
API Name
rm_memory_hard_limit_v2
Required
true
Cgroup V1 Memory Soft Limit🔗
Description
Soft memory limit to assign to this role, enforced by the Linux kernel under cgroup v1. When the limit is reached, the kernel will reclaim pages charged to the process only if the host is facing memory pressure. If reclaiming fails, the kernel may kill the process. Both anonymous and page cache pages contribute to the limit. Use a value of -1 to specify no limit. By default, processes not managed by Cloudera Manager will have no limit. If the value is -1, Cloudera Manager will not monitor Cgroup memory usage, and some charts will show 'No Data'.
Related Name
memory.soft_limit_in_bytes
Default Value
-1 MiB
API Name
rm_memory_soft_limit_v1
Required
true
Cgroup V2 Memory Soft Limit🔗
Description
Soft memory limit to assign to this role, enforced by the Linux kernel under cgroup v2. When the limit is reached, the kernel will reclaim pages charged to the process only if the host is facing memory pressure. If reclaiming fails, the kernel may kill the process. Both anonymous and page cache pages contribute to the limit. Use a value of 'max' to specify no limit. By default, processes not managed by Cloudera Manager will have no limit. If the value is 'max', Cloudera Manager will not monitor Cgroup memory usage, and some charts will show 'No Data'.
Related Name
memory.high
Default Value
-1 MiB
API Name
rm_memory_soft_limit_v2
Required
true
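The "no limit" sentinel differs between cgroup versions: v1 files such as memory.limit_in_bytes take -1, while v2 files such as memory.max and memory.high take the literal string 'max'. A hedged helper illustrating the translation (the MiB-to-bytes conversion is an assumption, not taken from this reference):

```python
def limit_value(mib, v2=False):
    """Render a memory limit given in MiB as the string written to the
    cgroup interface file; a negative value means 'no limit'."""
    if mib < 0:
        return "max" if v2 else "-1"   # v2 uses 'max', v1 uses -1
    return str(mib * 1024 * 1024)      # assumed conversion to bytes

limit_value(-1, v2=True)   # 'max'
limit_value(512)           # '536870912'
```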
Security🔗
Enable TLS/SSL for History Server🔗
Description
Encrypt communication between clients and History Server using Transport Layer Security (TLS) (formerly known as Secure Socket Layer (SSL)).
Related Name
spark.ssl.historyServer.enabled
Default Value
false
API Name
ssl_enabled
Required
false
History Server TLS/SSL Server Keystore File Location🔗
Description
The path to the TLS/SSL keystore file containing the server certificate and private key used for TLS/SSL. Used when History Server is acting as a TLS/SSL server. The keystore must be in the format specified in Administration > Settings > Java Keystore Type.
Related Name
spark.ssl.historyServer.keyStore
Default Value
API Name
ssl_server_keystore_location
Required
false
History Server TLS/SSL Server Keystore File Password🔗
Description
The password for the History Server keystore file.
Related Name
Default Value
API Name
ssl_server_keystore_password
Required
false
Supported SSL/TLS versions🔗
Description
The SSL/TLS protocol versions to accept HTTPS connections from. Note that the available cipher suites also affect which protocol versions can be negotiated, and some cipher suites are only available in higher versions.
Related Name
spark.ssl.historyServer.protocol
Default Value
TLSv1.2
API Name
supported_tls_versions
Required
false
SSL/TLS Cipher Suite🔗
Description
The SSL/TLS cipher suites to use. "Modern 2018" is a modern set of cipher suites as of 2018, according to the Mozilla server-side TLS recommendations. These cipher suites use strong cryptography and are preferred unless interaction with older clients is required. These modern cipher suites are compatible with Firefox 27, Chrome 22, Internet Explorer 11, Opera 14, Safari 7, Android 4.4, and Java 8. "Intermediate 2018" is an intermediate set of cipher suites as of 2018, according to the Mozilla server-side TLS recommendations. Select the Intermediate 2018 cipher suites if you require compatibility with a wider range of clients, legacy browsers, or older Linux tools.
Related Name
spark.ssl.historyServer.enabledAlgorithms
Default Value
modern2018
API Name
tls_ciphers
Required
false
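The protocol-version and cipher-suite settings interact as the descriptions note: the available cipher suites constrain which protocol versions can be negotiated. A minimal Python ssl sketch of the same idea (the cipher string is an illustrative OpenSSL-style list, not the exact "Modern 2018" set):

```python
import ssl

# Server-side context accepting TLSv1.2 and newer only.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Illustrative strong-cipher list; the actual Modern 2018 set is longer.
ctx.set_ciphers("ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256")
```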
Stacks Collection🔗
Stacks Collection Data Retention🔗
Description
The amount of stacks data that is retained. After the retention limit is reached, the oldest data is deleted.
Related Name
stacks_collection_data_retention
Default Value
100 MiB
API Name
stacks_collection_data_retention
Required
false
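Retention of this kind is typically enforced by deleting the oldest files once the directory grows past the limit. A sketch of that policy (an illustration, not Cloudera Manager's actual implementation):

```python
import os

def enforce_retention(directory, limit_bytes):
    """Delete the oldest files in `directory` until the total size
    is at or below `limit_bytes`."""
    files = [os.path.join(directory, f) for f in os.listdir(directory)]
    files = [f for f in files if os.path.isfile(f)]
    files.sort(key=os.path.getmtime)  # oldest first
    total = sum(os.path.getsize(f) for f in files)
    for f in files:
        if total <= limit_bytes:
            break
        total -= os.path.getsize(f)
        os.remove(f)
```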
Stacks Collection Directory🔗
Description
The directory in which stacks logs are placed. If not set, stacks are logged into a stacks subdirectory of the role's log directory. If this directory already exists, it will be owned by the current role user with 755 permissions. Sharing the same directory among multiple roles will cause an ownership race.
Related Name
stacks_collection_directory
Default Value
API Name
stacks_collection_directory
Required
false
Stacks Collection Enabled🔗
Description
Whether or not periodic stacks collection is enabled.
Related Name
stacks_collection_enabled
Default Value
false
API Name
stacks_collection_enabled
Required
true
Stacks Collection Frequency🔗
Description
The frequency with which stacks are collected.
Related Name
stacks_collection_frequency
Default Value
5.0 second(s)
API Name
stacks_collection_frequency
Required
false
Stacks Collection Method🔗
Description
The method used to collect stacks. The jstack option involves periodically running the jstack command against the role's daemon process. The servlet method is available for those roles that have an HTTP server endpoint exposing the current stack traces of all threads. When the servlet method is selected, that HTTP endpoint is periodically scraped.
Related Name
stacks_collection_method
Default Value
jstack
API Name
stacks_collection_method
Required
false
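The servlet method amounts to periodically fetching an HTTP endpoint that dumps the stack traces of all threads. A hedged sketch (the /stacks path, host, and port are illustrative assumptions, not values from this reference):

```python
import urllib.request

def stacks_url(host, port, path="/stacks"):
    """URL of the role's stacks servlet (the path is an assumption)."""
    return f"http://{host}:{port}{path}"

def scrape_stacks(host, port, timeout=5):
    """Fetch the current stack traces of all threads from the endpoint."""
    with urllib.request.urlopen(stacks_url(host, port), timeout=timeout) as resp:
        return resp.read().decode("utf-8", errors="replace")
```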
Suppressions🔗
Suppress Configuration Validator: CDH Version Validator🔗
Description
Whether to suppress configuration warnings produced by the CDH Version Validator configuration validator.
Related Name
Default Value
false
API Name
role_config_suppression_cdh_version_validator
Required
true
Suppress Parameter Validation: Admin Users🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the Admin Users parameter.
Suppress Parameter Validation: JMX Exporter configuration YAML🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the JMX Exporter configuration YAML parameter.
Related Name
Default Value
false
API Name
role_config_suppression_jmx_exporter_yaml
Required
true
Suppress Parameter Validation: Local Storage Directory🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the Local Storage Directory parameter.
Related Name
Default Value
false
API Name
role_config_suppression_local_storage_dir
Required
true
Suppress Parameter Validation: History Server Logging Advanced Configuration Snippet (Safety Valve)🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the History Server Logging Advanced Configuration Snippet (Safety Valve) parameter.
Related Name
Default Value
false
API Name
role_config_suppression_log4j_safety_valve
Required
true
Suppress Parameter Validation: History Server Log Directory🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the History Server Log Directory parameter.
Suppress Parameter Validation: OpenTelemetry Collector Remote Write Password🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the OpenTelemetry Collector Remote Write Password parameter.
Suppress Parameter Validation: OpenTelemetry Collector Remote Write Username🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the OpenTelemetry Collector Remote Write Username parameter.
Related Name
Default Value
false
API Name
role_config_suppression_otelcol_remote_write_user
Required
true
Suppress Parameter Validation: Real-Time Monitoring for Jobs / Queries with OpenTelemetry - Exporters Section🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the Real-Time Monitoring for Jobs / Queries with OpenTelemetry - Exporters Section parameter.
Suppress Parameter Validation: Real-Time Monitoring for Jobs / Queries with OpenTelemetry - Extensions Section🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the Real-Time Monitoring for Jobs / Queries with OpenTelemetry - Extensions Section parameter.
Suppress Parameter Validation: Real-Time Monitoring for Jobs / Queries with OpenTelemetry - Processors Section🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the Real-Time Monitoring for Jobs / Queries with OpenTelemetry - Processors Section parameter.
Suppress Parameter Validation: Real-Time Monitoring for Jobs / Queries with OpenTelemetry - Receivers Section🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the Real-Time Monitoring for Jobs / Queries with OpenTelemetry - Receivers Section parameter.
Suppress Parameter Validation: Real-Time Monitoring for Jobs / Queries with OpenTelemetry - Service Section🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the Real-Time Monitoring for Jobs / Queries with OpenTelemetry - Service Section parameter.
Related Name
Default Value
false
API Name
role_config_suppression_otelcol_rtm_logs_service
Required
true
Suppress Parameter Validation: OpenTelemetry Collector Service Section🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the OpenTelemetry Collector Service Section parameter.
Related Name
Default Value
false
API Name
role_config_suppression_otelcol_service
Required
true
Suppress Parameter Validation: Custom Control Group Resources (overrides Cgroup settings)🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the Custom Control Group Resources (overrides Cgroup settings) parameter.
Related Name
Default Value
false
API Name
role_config_suppression_rm_custom_resources
Required
true
Suppress Parameter Validation: Role Triggers🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the Role Triggers parameter.
Related Name
Default Value
false
API Name
role_config_suppression_role_triggers
Required
true
Suppress Parameter Validation: History Server Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the History Server Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh parameter.
Suppress Parameter Validation: History Server Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-history-server.conf🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the History Server Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-history-server.conf parameter.
Suppress Parameter Validation: History Server Environment Advanced Configuration Snippet (Safety Valve)🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the History Server Environment Advanced Configuration Snippet (Safety Valve) parameter.
Suppress Parameter Validation: History Server TLS/SSL Server Keystore File Location🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the History Server TLS/SSL Server Keystore File Location parameter.
Suppress Parameter Validation: History Server TLS/SSL Server Keystore File Password🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the History Server TLS/SSL Server Keystore File Password parameter.
Suppress Health Test: Audit Pipeline Test🔗
Description
Whether to suppress the results of the Audit Pipeline Test health test. The results of suppressed health tests are ignored when computing the overall health of the associated host, role or service, so suppressed health tests will not generate alerts.
Suppress Health Test: File Descriptors🔗
Description
Whether to suppress the results of the File Descriptors health test. The results of suppressed health tests are ignored when computing the overall health of the associated host, role or service, so suppressed health tests will not generate alerts.
Suppress Health Test: Host Health🔗
Description
Whether to suppress the results of the Host Health health test. The results of suppressed health tests are ignored when computing the overall health of the associated host, role or service, so suppressed health tests will not generate alerts.
Suppress Health Test: Log Directory Free Space🔗
Description
Whether to suppress the results of the Log Directory Free Space health test. The results of suppressed health tests are ignored when computing the overall health of the associated host, role or service, so suppressed health tests will not generate alerts.
Suppress Health Test: Otelcol Health🔗
Description
Whether to suppress the results of the Otelcol Health health test. The results of suppressed health tests are ignored when computing the overall health of the associated host, role or service, so suppressed health tests will not generate alerts.
Suppress Health Test: Process Status🔗
Description
Whether to suppress the results of the Process Status health test. The results of suppressed health tests are ignored when computing the overall health of the associated host, role or service, so suppressed health tests will not generate alerts.
Suppress Health Test: Swap Memory Usage🔗
Description
Whether to suppress the results of the Swap Memory Usage health test. The results of suppressed health tests are ignored when computing the overall health of the associated host, role or service, so suppressed health tests will not generate alerts.
Suppress Health Test: Swap Memory Usage Rate Beta🔗
Description
Whether to suppress the results of the Swap Memory Usage Rate Beta health test. The results of suppressed health tests are ignored when computing the overall health of the associated host, role or service, so suppressed health tests will not generate alerts.
Suppress Health Test: Unexpected Exits🔗
Description
Whether to suppress the results of the Unexpected Exits health test. The results of suppressed health tests are ignored when computing the overall health of the associated host, role or service, so suppressed health tests will not generate alerts.
System Group🔗
Description
The group that this service's processes should run as.
Related Name
Default Value
spark
API Name
process_groupname
Required
true
System User🔗
Description
The user that this service's processes should run as.
Related Name
Default Value
spark
API Name
process_username
Required
true
Spark Service Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh🔗
Description
For advanced use only, a string to be inserted into spark-conf/spark-env.sh. Applies to configurations of all roles in this service except client configuration.
Related Name
Default Value
API Name
spark-conf/spark-env.sh_service_safety_valve
Required
false
Spark Service Environment Advanced Configuration Snippet (Safety Valve)🔗
Description
For advanced use only, key-value pairs (one on each line) to be inserted into a role's environment. Applies to configurations of all roles in this service except client configuration.
Related Name
Default Value
API Name
SPARK_ON_YARN_service_env_safety_valve
Required
false
Monitoring🔗
Enable Service Level Health Alerts🔗
Description
When set, Cloudera Manager will send alerts when the health of this service reaches the threshold specified by the EventServer setting eventserver_health_events_alert_threshold.
Related Name
Default Value
true
API Name
enable_alerts
Required
false
Enable Configuration Change Alerts🔗
Description
When set, Cloudera Manager will send alerts when this entity's configuration changes.
Related Name
Default Value
false
API Name
enable_config_alerts
Required
false
Service Triggers🔗
Description
The configured triggers for this service. This is a JSON-formatted list of triggers. These triggers are evaluated as part of the health system. Every trigger expression is parsed, and if the trigger condition is met, the list of actions provided in the trigger expression is executed. Each trigger has the following fields:
triggerName(mandatory) - The name of the trigger. This value must be unique for the specific service.
triggerExpression(mandatory) - A tsquery expression representing the trigger.
streamThreshold(optional) - The maximum number of streams that can satisfy a condition of a trigger before the condition fires. By default set to 0, and any stream returned causes the condition to fire.
enabled (optional) - By default set to 'true'. If set to 'false', the trigger is not evaluated.
expressionEditorConfig (optional) - Metadata for the trigger editor. If present, the trigger should only be edited from the Edit Trigger page; editing the trigger here can lead to inconsistencies.
For example, the following JSON-formatted trigger fires if there are more than 10 DataNodes with more than 500 file descriptors opened:
[{"triggerName": "sample-trigger",
"triggerExpression": "IF (SELECT fd_open WHERE roleType = DataNode and last(fd_open) > 500) DO health:bad",
"streamThreshold": 10, "enabled": "true"}]
See the trigger rules documentation for more details on how to write triggers using tsquery. The JSON format is evolving and may change; as a result, backward compatibility is not guaranteed between releases.
Related Name
Default Value
[]
API Name
service_triggers
Required
true
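Because the value is a JSON-formatted list, a trigger can be checked mechanically before it is saved. A minimal sketch validating the mandatory fields described above:

```python
import json

TRIGGER = """
[{"triggerName": "sample-trigger",
  "triggerExpression": "IF (SELECT fd_open WHERE roleType = DataNode and last(fd_open) > 500) DO health:bad",
  "streamThreshold": 10, "enabled": "true"}]
"""

def validate_triggers(text):
    """Parse a service_triggers value and check the mandatory fields."""
    triggers = json.loads(text)
    names = set()
    for t in triggers:
        for field in ("triggerName", "triggerExpression"):
            if field not in t:
                raise ValueError(f"missing mandatory field: {field}")
        if t["triggerName"] in names:
            raise ValueError("triggerName must be unique")
        names.add(t["triggerName"])
    return triggers

validate_triggers(TRIGGER)  # returns the parsed list of one trigger
```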
Service Monitor Derived Configs Advanced Configuration Snippet (Safety Valve)🔗
Description
For advanced use only, a list of derived configuration properties that will be used by the Service Monitor instead of the default ones.
Related Name
Default Value
API Name
smon_derived_configs_safety_valve
Required
false
Healthy History Server Monitoring Thresholds🔗
Description
The health test thresholds of the overall History Server health. The check returns "Concerning" health if the percentage of "Healthy" History Servers falls below the warning threshold. The check is unhealthy if the total percentage of "Healthy" and "Concerning" History Servers falls below the critical threshold.
Atlas Service🔗
Description
Name of the Atlas service that this Spark service instance depends on
Related Name
Default Value
API Name
atlas_service
Required
false
HBase Service🔗
Description
Name of the HBase service that this Spark service instance depends on
Related Name
Default Value
API Name
hbase_service
Required
false
History Server Load Balancer Address🔗
Description
The URI of the History Server's load balancer used by clients. Example: https://lb.example.com:port
Related Name
spark.history.lb.uri
Default Value
API Name
history_server_load_balancer_url
Required
false
Knox Service🔗
Description
Name of the Knox service that this Spark service instance depends on
Related Name
Default Value
API Name
knox_service
Required
false
Spark Authentication🔗
Description
Whether the Spark communication protocols perform authentication using a shared secret.
Related Name
spark.authenticate
Default Value
false
API Name
spark_authenticate
Required
true
Spark Driver Log Location (HDFS)🔗
Description
The location of Spark driver logs in HDFS when Spark application runs in client mode. Changing this value will not move existing logs to the new location.
Related Name
spark.driver.log.dfsDir
Default Value
/user/spark/driverLogs
API Name
spark_driver_log_dfs_dir
Required
true
Persist Driver Logs to Dfs🔗
Description
If enabled, driver logs in YARN client mode will be persisted to the configured Spark Driver Log Location (HDFS).
Related Name
spark.driver.log.persistToDfs.enabled
Default Value
true
API Name
spark_driver_log_persist_to_dfs
Required
true
Spark History Location🔗
Description
The location of Spark application history logs. Changing this value will not move existing logs to the new location.
Related Name
spark.eventLog.dir
Default Value
hdfs:///user/spark/applicationHistory
API Name
spark_history_log_dir
Required
true
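Properties in this group ultimately land in spark-defaults.conf as ordinary key-value lines. A sketch rendering a few of them (the values shown are the documented defaults; the rendering helper is an illustration, not how Cloudera Manager emits the file):

```python
def render_spark_defaults(props):
    """Render properties as spark-defaults.conf lines: key, space, value."""
    return "\n".join(f"{k} {v}" for k, v in sorted(props.items()))

conf = render_spark_defaults({
    "spark.eventLog.dir": "hdfs:///user/spark/applicationHistory",
    "spark.driver.log.dfsDir": "/user/spark/driverLogs",
    "spark.driver.log.persistToDfs.enabled": "true",
})
```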
Shuffle Service AES Encryption🔗
Description
Whether to enable AES-based authentication and encryption in the shuffle service. Requires authentication to be enabled to take effect.
Related Name
spark_shuffle_aes_enabled
Default Value
true
API Name
spark_shuffle_aes_enabled
Required
true
YARN Service🔗
Description
Name of the YARN service that this Spark service instance depends on
Related Name
Default Value
API Name
yarn_service
Required
true
Ports and Addresses🔗
Spark Shuffle Service Port🔗
Description
The port on which the Spark Shuffle Service listens for fetch requests.
Related Name
spark.shuffle.service.port
Default Value
7337
API Name
spark_shuffle_service_port
Required
true
Security🔗
Kerberos Principal🔗
Description
Kerberos principal short name used by all roles of this service.
Related Name
Default Value
spark
API Name
kerberos_princ_name
Required
true
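A Kerberos short name like the 'spark' default expands to a full service principal by appending a host and realm. A hedged sketch of that expansion (the host and EXAMPLE.COM realm are placeholders):

```python
def full_principal(short_name, host, realm):
    """Expand a short name such as 'spark' to 'spark/host@REALM'."""
    return f"{short_name}/{host}@{realm}"

full_principal("spark", "shs.example.com", "EXAMPLE.COM")
# 'spark/shs.example.com@EXAMPLE.COM'
```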
Suppressions🔗
Suppress Configuration Validator: CDH Version Validator🔗
Description
Whether to suppress configuration warnings produced by the CDH Version Validator configuration validator.
Suppress Configuration Validator: JMX Exporter configuration YAML🔗
Description
Whether to suppress configuration warnings produced by the JMX Exporter configuration YAML configuration validator.
Related Name
Default Value
false
API Name
role_config_suppression_jmx_exporter_yaml
Required
true
Suppress Configuration Validator: Local Storage Directory🔗
Description
Whether to suppress configuration warnings produced by the Local Storage Directory configuration validator.
Related Name
Default Value
false
API Name
role_config_suppression_local_storage_dir
Required
true
Suppress Configuration Validator: History Server Logging Advanced Configuration Snippet (Safety Valve)🔗
Description
Whether to suppress configuration warnings produced by the History Server Logging Advanced Configuration Snippet (Safety Valve) configuration validator.
Related Name
Default Value
false
API Name
role_config_suppression_log4j_safety_valve
Required
true
Suppress Configuration Validator: History Server Log Directory🔗
Description
Whether to suppress configuration warnings produced by the History Server Log Directory configuration validator.
Suppress Configuration Validator: OpenTelemetry Collector Remote Write Username🔗
Description
Whether to suppress configuration warnings produced by the OpenTelemetry Collector Remote Write Username configuration validator.
Related Name
Default Value
false
API Name
role_config_suppression_otelcol_remote_write_user
Required
true
Suppress Configuration Validator: Real-Time Monitoring for Jobs / Queries with OpenTelemetry - Exporters Section🔗
Description
Whether to suppress configuration warnings produced by the Real-Time Monitoring for Jobs / Queries with OpenTelemetry - Exporters Section configuration validator.
Suppress Configuration Validator: Real-Time Monitoring for Jobs / Queries with OpenTelemetry - Extensions Section🔗
Description
Whether to suppress configuration warnings produced by the Real-Time Monitoring for Jobs / Queries with OpenTelemetry - Extensions Section configuration validator.
Suppress Configuration Validator: Real-Time Monitoring for Jobs / Queries with OpenTelemetry - Processors Section🔗
Description
Whether to suppress configuration warnings produced by the Real-Time Monitoring for Jobs / Queries with OpenTelemetry - Processors Section configuration validator.
Suppress Configuration Validator: Real-Time Monitoring for Jobs / Queries with OpenTelemetry - Receivers Section🔗
Description
Whether to suppress configuration warnings produced by the Real-Time Monitoring for Jobs / Queries with OpenTelemetry - Receivers Section configuration validator.
Suppress Configuration Validator: Real-Time Monitoring for Jobs / Queries with OpenTelemetry - Service Section🔗
Description
Whether to suppress configuration warnings produced by the Real-Time Monitoring for Jobs / Queries with OpenTelemetry - Service Section configuration validator.
Related Name
Default Value
false
API Name
role_config_suppression_otelcol_rtm_logs_service
Required
true
Suppress Configuration Validator: OpenTelemetry Collector Service Section🔗
Description
Whether to suppress configuration warnings produced by the OpenTelemetry Collector Service Section configuration validator.
Related Name
Default Value
false
API Name
role_config_suppression_otelcol_service
Required
true
Suppress Configuration Validator: Custom Control Group Resources (overrides Cgroup settings)🔗
Description
Whether to suppress configuration warnings produced by the Custom Control Group Resources (overrides Cgroup settings) configuration validator.
Related Name
Default Value
false
API Name
role_config_suppression_rm_custom_resources
Required
true
Suppress Configuration Validator: Role Triggers🔗
Description
Whether to suppress configuration warnings produced by the Role Triggers configuration validator.
Suppress Configuration Validator: Spark Client Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-defaults.conf🔗
Description
Whether to suppress configuration warnings produced by the Spark Client Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-defaults.conf configuration validator.
Suppress Configuration Validator: Spark Client Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh🔗
Description
Whether to suppress configuration warnings produced by the Spark Client Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh configuration validator.
Suppress Configuration Validator: History Server Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh🔗
Description
Whether to suppress configuration warnings produced by the History Server Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh configuration validator.
Suppress Configuration Validator: History Server Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-history-server.conf🔗
Description
Whether to suppress configuration warnings produced by the History Server Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-history-server.conf configuration validator.
Suppress Configuration Validator: History Server Environment Advanced Configuration Snippet (Safety Valve)🔗
Description
Whether to suppress configuration warnings produced by the History Server Environment Advanced Configuration Snippet (Safety Valve) configuration validator.
Suppress Parameter Validation: Service Monitor Derived Configs Advanced Configuration Snippet (Safety Valve)🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the Service Monitor Derived Configs Advanced Configuration Snippet (Safety Valve) parameter.
Suppress Parameter Validation: Spark Service Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the Spark Service Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh parameter.
Suppress Parameter Validation: Spark Service Environment Advanced Configuration Snippet (Safety Valve)🔗
Description
Whether to suppress configuration warnings produced by the built-in parameter validation for the Spark Service Environment Advanced Configuration Snippet (Safety Valve) parameter.
Suppress Health Test: History Server Health🔗
Description
Whether to suppress the results of the History Server Health health test. The results of suppressed health tests are ignored when computing the overall health of the associated host, role or service, so suppressed health tests will not generate alerts.