Spark Properties in CDH 5.7.0
Role groups:
gateway
Advanced
Display Name | Description | Related Name | Default Value | API Name | Required |
---|---|---|---|---|---|
Deploy Directory | The directory where the client configs will be deployed | | /etc/spark | client_config_root_dir | true |
Gateway Logging Advanced Configuration Snippet (Safety Valve) | For advanced use only, a string to be inserted into log4j.properties for this role only. | | | log4j_safety_valve | false |
Spark Client Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-defaults.conf | For advanced use only, a string to be inserted into the client configuration for spark-conf/spark-defaults.conf. | | | spark-conf/spark-defaults.conf_client_config_safety_valve | false |
Spark Client Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh | For advanced use only, a string to be inserted into the client configuration for spark-conf/spark-env.sh. | | | spark-conf/spark-env.sh_client_config_safety_valve | false |
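As a sketch of how the spark-defaults.conf safety valve above is typically used: whatever is entered is appended verbatim to the generated client configuration. The property names below are standard Spark settings; the specific values are examples only, not CDH defaults.

```
# Appended as-is to spark-defaults.conf under the deploy directory
# (e.g. /etc/spark/conf) on gateway hosts.
spark.io.compression.codec=snappy
spark.rdd.compress=true
```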
Logs
Display Name | Description | Related Name | Default Value | API Name | Required |
---|---|---|---|---|---|
Gateway Logging Threshold | The minimum log level for Gateway logs | | INFO | log_threshold | false |
Monitoring
Display Name | Description | Related Name | Default Value | API Name | Required |
---|---|---|---|---|---|
Enable Configuration Change Alerts | When set, Cloudera Manager will send alerts when this entity's configuration changes. | | false | enable_config_alerts | false |
Other
Display Name | Description | Related Name | Default Value | API Name | Required |
---|---|---|---|---|---|
Alternatives Priority | The priority level that the client configuration will have in the Alternatives system on the hosts. Higher priority levels will cause Alternatives to prefer this configuration over any others. | | 51 | client_config_priority | true |
Spark Data Serializer | Name of class implementing org.apache.spark.serializer.Serializer to use in Spark applications. | spark.serializer | org.apache.spark.serializer.KryoSerializer | spark_data_serializer | true |
Default Application Deploy Mode | Which deploy mode to use by default. Can be overridden by users when launching applications. | spark_deploy_mode | client | spark_deploy_mode | false |
Caching Executor Idle Timeout | When dynamic allocation is enabled, time after which idle executors with cached RDD blocks will be stopped. By default, they are never stopped. This configuration is only available starting in CDH 5.5. | spark.dynamicAllocation.cachedExecutorIdleTimeout | | spark_dynamic_allocation_cached_idle_timeout | false |
Enable Dynamic Allocation | Enable dynamic allocation of executors in Spark applications. | spark.dynamicAllocation.enabled | true | spark_dynamic_allocation_enabled | false |
Executor Idle Timeout | When dynamic allocation is enabled, time after which idle executors will be stopped. | spark.dynamicAllocation.executorIdleTimeout | 1 minute(s) | spark_dynamic_allocation_idle_timeout | false |
Initial Executor Count | When dynamic allocation is enabled, number of executors to allocate when the application starts. By default, this is the same value as the minimum number of executors. | spark.dynamicAllocation.initialExecutors | | spark_dynamic_allocation_initial_executors | false |
Maximum Executor Count | When dynamic allocation is enabled, maximum number of executors to allocate. By default, Spark relies on YARN to control the maximum number of executors for the application. | spark.dynamicAllocation.maxExecutors | | spark_dynamic_allocation_max_executors | false |
Minimum Executor Count | When dynamic allocation is enabled, minimum number of executors to keep alive while the application is running. | spark.dynamicAllocation.minExecutors | 0 | spark_dynamic_allocation_min_executors | false |
Scheduler Backlog Timeout | When dynamic allocation is enabled, timeout before requesting new executors when there are backlogged tasks. | spark.dynamicAllocation.schedulerBacklogTimeout | 1 second(s) | spark_dynamic_allocation_scheduler_backlog_timeout | false |
Sustained Scheduler Backlog Timeout | When dynamic allocation is enabled, timeout before requesting new executors after the initial backlog timeout has already expired. By default this is the same value as the initial backlog timeout. | spark.dynamicAllocation.sustainedSchedulerBacklogTimeout | | spark_dynamic_allocation_sustained_scheduler_backlog_timeout | false |
Shell Logging Threshold | The minimum log level for the Spark shell. | spark_gateway_shell_logging_threshold | WARN | spark_gateway_shell_logging_threshold | true |
Enable Kill From UI | Whether to allow users to kill running stages from the Spark Web UI. | spark.ui.killEnabled | true | spark_gateway_ui_kill_enabled | true |
Enable History | Write Spark application history logs to HDFS. | spark.eventLog.enabled | true | spark_history_enabled | false |
Extra Python Path | Python library paths to add to PySpark applications. | spark_python_path | | spark_python_path | false |
Enable Shuffle Service | Enables the external shuffle service. The external shuffle service preserves shuffle files written by executors so that the executors can be deallocated without losing work. Must be enabled if Enable Dynamic Allocation is enabled. Recommended and enabled by default for CDH 5.5 and higher. | spark.shuffle.service.enabled | true | spark_shuffle_service_enabled | true |
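Taken together, the dynamic-allocation properties above map onto spark-defaults.conf entries such as the following. This is an illustrative sketch, not the CDH defaults: the executor cap and idle timeout values here are examples only.

```
spark.dynamicAllocation.enabled=true
spark.shuffle.service.enabled=true             # required when dynamic allocation is on
spark.dynamicAllocation.minExecutors=0
spark.dynamicAllocation.maxExecutors=20        # example cap; by default Spark defers to YARN
spark.dynamicAllocation.executorIdleTimeout=60s
```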
Suppressions
Display Name | Description | Related Name | Default Value | API Name | Required |
---|---|---|---|---|---|
Suppress Configuration Validator: CDH Version Validator | Whether to suppress configuration warnings produced by the CDH Version Validator configuration validator. | | false | role_config_suppression_cdh_version_validator | true |
Suppress Parameter Validation: Deploy Directory | Whether to suppress configuration warnings produced by the built-in parameter validation for the Deploy Directory parameter. | | false | role_config_suppression_client_config_root_dir | true |
Suppress Parameter Validation: Gateway Logging Advanced Configuration Snippet (Safety Valve) | Whether to suppress configuration warnings produced by the built-in parameter validation for the Gateway Logging Advanced Configuration Snippet (Safety Valve) parameter. | | false | role_config_suppression_log4j_safety_valve | true |
Suppress Parameter Validation: Spark Client Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-defaults.conf | Whether to suppress configuration warnings produced by the built-in parameter validation for the Spark Client Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-defaults.conf parameter. | | false | role_config_suppression_spark-conf/spark-defaults.conf_client_config_safety_valve | true |
Suppress Parameter Validation: Spark Client Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh | Whether to suppress configuration warnings produced by the built-in parameter validation for the Spark Client Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh parameter. | | false | role_config_suppression_spark-conf/spark-env.sh_client_config_safety_valve | true |
Suppress Parameter Validation: Spark Data Serializer | Whether to suppress configuration warnings produced by the built-in parameter validation for the Spark Data Serializer parameter. | | false | role_config_suppression_spark_data_serializer | true |
Suppress Parameter Validation: Extra Python Path | Whether to suppress configuration warnings produced by the built-in parameter validation for the Extra Python Path parameter. | | false | role_config_suppression_spark_python_path | true |
historyserver
Advanced
Display Name | Description | Related Name | Default Value | API Name | Required |
---|---|---|---|---|---|
History Server Logging Advanced Configuration Snippet (Safety Valve) | For advanced use only, a string to be inserted into log4j.properties for this role only. | | | log4j_safety_valve | false |
Heap Dump Directory | Path to the directory where heap dumps are generated when java.lang.OutOfMemoryError is thrown. This directory is automatically created if it does not exist. If this directory already exists, the role user must have write access to it. If this directory is shared among multiple roles, it should have 1777 permissions. The heap dump files are created with 600 permissions and are owned by the role user. The amount of free space in this directory should be greater than the maximum Java process heap size configured for this role. | oom_heap_dump_dir | /tmp | oom_heap_dump_dir | false |
Dump Heap When Out of Memory | When set, generates a heap dump file when java.lang.OutOfMemoryError is thrown. | | true | oom_heap_dump_enabled | true |
Kill When Out of Memory | When set, a SIGKILL signal is sent to the role process when java.lang.OutOfMemoryError is thrown. | | true | oom_sigkill_enabled | true |
Automatically Restart Process | When set, this role's process is automatically (and transparently) restarted in the event of an unexpected failure. | | false | process_auto_restart | true |
Enable Metric Collection | The Cloudera Manager Agent monitors each service and each of its roles by publishing metrics to the Cloudera Manager Service Monitor. Setting this to false stops the agent from publishing any metrics for the corresponding service or roles. This is usually helpful for services that generate a large volume of metrics that the Service Monitor is not able to process. | | true | process_should_monitor | true |
History Server Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh | For advanced use only. A string to be inserted into spark-conf/spark-env.sh for this role only. | | | spark-conf/spark-env.sh_role_safety_valve | false |
History Server Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-history-server.conf | For advanced use only. A string to be inserted into spark-conf/spark-history-server.conf for this role only. | | | spark-conf/spark-history-server.conf_role_safety_valve | false |
History Server Environment Advanced Configuration Snippet (Safety Valve) | For advanced use only, key-value pairs (one on each line) to be inserted into a role's environment. Applies to configurations of this role except client configuration. | | | SPARK_YARN_HISTORY_SERVER_role_env_safety_valve | false |
Logs
Display Name | Description | Related Name | Default Value | API Name | Required |
---|---|---|---|---|---|
History Server Log Directory | The log directory for log files of the role History Server. | log_dir | /var/log/spark | log_dir | false |
History Server Logging Threshold | The minimum log level for History Server logs | | INFO | log_threshold | false |
History Server Maximum Log File Backups | The maximum number of rolled log files to keep for History Server logs. Typically used by log4j or logback. | | 10 | max_log_backup_index | false |
History Server Max Log Size | The maximum size, in megabytes, per log file for History Server logs. Typically used by log4j or logback. | | 200 MiB | max_log_size | false |
Monitoring
Display Name | Description | Related Name | Default Value | API Name | Required |
---|---|---|---|---|---|
Enable Health Alerts for this Role | When set, Cloudera Manager will send alerts when the health of this role reaches the threshold specified by the EventServer setting eventserver_health_events_alert_threshold. | | true | enable_alerts | false |
Enable Configuration Change Alerts | When set, Cloudera Manager will send alerts when this entity's configuration changes. | | false | enable_config_alerts | false |
Process Swap Memory Thresholds | The health test thresholds on the swap memory usage of the process. | | Warning: Any, Critical: Never | process_swap_memory_thresholds | false |
Role Triggers | The configured triggers for this role. This is a JSON-formatted list of triggers. These triggers are evaluated as part of the health system. Every trigger expression is parsed, and if the trigger condition is met, the list of actions provided in the trigger expression is executed. | | [] | role_triggers | true |
File Descriptor Monitoring Thresholds | The health test thresholds of the number of file descriptors used. Specified as a percentage of the file descriptor limit. | | Warning: 50.0 %, Critical: 70.0 % | spark_yarn_history_server_fd_thresholds | false |
History Server Host Health Test | When computing the overall History Server health, consider the host's health. | | true | spark_yarn_history_server_host_health_enabled | false |
History Server Process Health Test | Enables the health test that checks that the History Server's process state is consistent with the role configuration. | | true | spark_yarn_history_server_scm_health_enabled | false |
Unexpected Exits Thresholds | The health test thresholds for unexpected exits encountered within a recent period specified by the unexpected_exits_window configuration for the role. | | Warning: Never, Critical: Any | unexpected_exits_thresholds | false |
Unexpected Exits Monitoring Period | The period to review when computing unexpected exits. | | 5 minute(s) | unexpected_exits_window | false |
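The Role Triggers value is a JSON list. Below is a minimal sketch of one trigger using Cloudera Manager's documented trigger fields (triggerName, triggerExpression, streamThreshold, enabled); the tsquery expression and threshold are hypothetical examples, not shipped defaults.

```python
import json

# Hypothetical trigger: mark the role's health bad when open file
# descriptors exceed 500. Field names follow the Cloudera Manager
# trigger schema; the expression and values are examples only.
role_triggers = [
    {
        "triggerName": "sample-fd-trigger",
        "triggerExpression": (
            "IF (SELECT fd_open WHERE roleName=$ROLENAME "
            "AND last(fd_open) > 500) DO health:bad"
        ),
        "streamThreshold": 0,
        "enabled": "true",
    }
]

# The value pasted into the Role Triggers field is the serialized list.
print(json.dumps(role_triggers))
```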
Other
Display Name | Description | Related Name | Default Value | API Name | Required |
---|---|---|---|---|---|
Enable Event Log Cleaner | Specifies whether the History Server should periodically clean up event logs from storage. | spark.history.fs.cleaner.enabled | false | event_log_cleaner_enabled | false |
Event Log Cleaner Interval | How often the History Server will clean up event log files. | spark.history.fs.cleaner.interval | 1 day(s) | event_log_cleaner_interval | false |
Maximum Event Log Age | Specifies the maximum age of the event logs. | spark.history.fs.cleaner.maxAge | 7 day(s) | event_log_cleaner_max_age | false |
HDFS Polling Interval | How often to poll HDFS for new applications. | spark.history.fs.update.interval.seconds | 10 second(s) | history_server_fs_poll_interval | false |
Java Heap Size of History Server in Bytes | Maximum size for the Java process heap memory. Passed to Java -Xmx. Measured in bytes. | history_server_max_heapsize | 512 MiB | history_server_max_heapsize | true |
Retained App Count | Max number of application UIs to keep in the History Server's memory. All applications will still be available, but may take longer to load if they're not in memory. | spark.history.retainedApplications | 50 | history_server_retained_apps | false |
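The event-log cleanup settings above correspond to spark-history-server.conf entries like the following, shown with the listed defaults (except the cleaner itself, which defaults to off). The duration syntax assumes Spark's standard suffixes (s, m, d).

```
spark.history.fs.cleaner.enabled=true    # CDH default is false
spark.history.fs.cleaner.interval=1d
spark.history.fs.cleaner.maxAge=7d
```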
Performance
Display Name | Description | Related Name | Default Value | API Name | Required |
---|---|---|---|---|---|
Maximum Process File Descriptors | If configured, overrides the process soft and hard rlimits (also called ulimits) for file descriptors to the configured value. | | | rlimit_fds | false |
Ports and Addresses
Display Name | Description | Related Name | Default Value | API Name | Required |
---|---|---|---|---|---|
History Server WebUI Port | The port of the history server WebUI | spark.history.ui.port | 18088 | history_server_web_port | true |
Resource Management
Display Name | Description | Related Name | Default Value | API Name | Required |
---|---|---|---|---|---|
Cgroup CPU Shares | Number of CPU shares to assign to this role. The greater the number of shares, the larger the share of the host's CPUs that will be given to this role when the host experiences CPU contention. Must be between 2 and 262144. Defaults to 1024 for processes not managed by Cloudera Manager. | cpu.shares | 1024 | rm_cpu_shares | true |
Cgroup I/O Weight | Weight for the read I/O requests issued by this role. The greater the weight, the higher the priority of the requests when the host experiences I/O contention. Must be between 100 and 1000. Defaults to 1000 for processes not managed by Cloudera Manager. | blkio.weight | 500 | rm_io_weight | true |
Cgroup Memory Hard Limit | Hard memory limit to assign to this role, enforced by the Linux kernel. When the limit is reached, the kernel will reclaim pages charged to the process. If reclaiming fails, the kernel may kill the process. Both anonymous and page cache pages contribute to the limit. Use a value of -1 B to specify no limit. By default, processes not managed by Cloudera Manager have no limit. | memory.limit_in_bytes | -1 MiB | rm_memory_hard_limit | true |
Cgroup Memory Soft Limit | Soft memory limit to assign to this role, enforced by the Linux kernel. When the limit is reached, the kernel will reclaim pages charged to the process only if the host is facing memory pressure. If reclaiming fails, the kernel may kill the process. Both anonymous and page cache pages contribute to the limit. Use a value of -1 B to specify no limit. By default, processes not managed by Cloudera Manager have no limit. | memory.soft_limit_in_bytes | -1 MiB | rm_memory_soft_limit | true |
Stacks Collection
Display Name | Description | Related Name | Default Value | API Name | Required |
---|---|---|---|---|---|
Stacks Collection Data Retention | The amount of stacks data that is retained. After the retention limit is reached, the oldest data is deleted. | stacks_collection_data_retention | 100 MiB | stacks_collection_data_retention | false |
Stacks Collection Directory | The directory in which stacks logs are placed. If not set, stacks are logged into a stacks subdirectory of the role's log directory. | stacks_collection_directory | | stacks_collection_directory | false |
Stacks Collection Enabled | Whether or not periodic stacks collection is enabled. | stacks_collection_enabled | false | stacks_collection_enabled | true |
Stacks Collection Frequency | The frequency with which stacks are collected. | stacks_collection_frequency | 5.0 second(s) | stacks_collection_frequency | false |
Stacks Collection Method | The method used to collect stacks. The jstack option involves periodically running the jstack command against the role's daemon process. The servlet method is available for those roles that have an HTTP server endpoint exposing the current stacks traces of all threads. When the servlet method is selected, that HTTP endpoint is periodically scraped. | stacks_collection_method | jstack | stacks_collection_method | false |
Suppressions
Display Name | Description | Related Name | Default Value | API Name | Required |
---|---|---|---|---|---|
Suppress Configuration Validator: CDH Version Validator | Whether to suppress configuration warnings produced by the CDH Version Validator configuration validator. | | false | role_config_suppression_cdh_version_validator | true |
Suppress Parameter Validation: History Server Logging Advanced Configuration Snippet (Safety Valve) | Whether to suppress configuration warnings produced by the built-in parameter validation for the History Server Logging Advanced Configuration Snippet (Safety Valve) parameter. | | false | role_config_suppression_log4j_safety_valve | true |
Suppress Parameter Validation: History Server Log Directory | Whether to suppress configuration warnings produced by the built-in parameter validation for the History Server Log Directory parameter. | | false | role_config_suppression_log_dir | true |
Suppress Parameter Validation: Heap Dump Directory | Whether to suppress configuration warnings produced by the built-in parameter validation for the Heap Dump Directory parameter. | | false | role_config_suppression_oom_heap_dump_dir | true |
Suppress Parameter Validation: Role Triggers | Whether to suppress configuration warnings produced by the built-in parameter validation for the Role Triggers parameter. | | false | role_config_suppression_role_triggers | true |
Suppress Parameter Validation: History Server Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh | Whether to suppress configuration warnings produced by the built-in parameter validation for the History Server Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh parameter. | | false | role_config_suppression_spark-conf/spark-env.sh_role_safety_valve | true |
Suppress Parameter Validation: History Server Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-history-server.conf | Whether to suppress configuration warnings produced by the built-in parameter validation for the History Server Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-history-server.conf parameter. | | false | role_config_suppression_spark-conf/spark-history-server.conf_role_safety_valve | true |
Suppress Parameter Validation: History Server Environment Advanced Configuration Snippet (Safety Valve) | Whether to suppress configuration warnings produced by the built-in parameter validation for the History Server Environment Advanced Configuration Snippet (Safety Valve) parameter. | | false | role_config_suppression_spark_yarn_history_server_role_env_safety_valve | true |
Suppress Parameter Validation: Stacks Collection Directory | Whether to suppress configuration warnings produced by the built-in parameter validation for the Stacks Collection Directory parameter. | | false | role_config_suppression_stacks_collection_directory | true |
Suppress Health Test: Audit Pipeline Test | Whether to suppress the results of the Audit Pipeline Test health test. The results of suppressed health tests are ignored when computing the overall health of the associated host, role or service, so suppressed health tests will not generate alerts. | | false | role_health_suppression_spark_on_yarn_spark_yarn_history_server_audit_health | true |
Suppress Health Test: File Descriptors | Whether to suppress the results of the File Descriptors health test. The results of suppressed health tests are ignored when computing the overall health of the associated host, role or service, so suppressed health tests will not generate alerts. | | false | role_health_suppression_spark_on_yarn_spark_yarn_history_server_file_descriptor | true |
Suppress Health Test: Host Health | Whether to suppress the results of the Host Health health test. The results of suppressed health tests are ignored when computing the overall health of the associated host, role or service, so suppressed health tests will not generate alerts. | | false | role_health_suppression_spark_on_yarn_spark_yarn_history_server_host_health | true |
Suppress Health Test: Process Status | Whether to suppress the results of the Process Status health test. The results of suppressed health tests are ignored when computing the overall health of the associated host, role or service, so suppressed health tests will not generate alerts. | | false | role_health_suppression_spark_on_yarn_spark_yarn_history_server_scm_health | true |
Suppress Health Test: Swap Memory Usage | Whether to suppress the results of the Swap Memory Usage health test. The results of suppressed health tests are ignored when computing the overall health of the associated host, role or service, so suppressed health tests will not generate alerts. | | false | role_health_suppression_spark_on_yarn_spark_yarn_history_server_swap_memory_usage | true |
Suppress Health Test: Unexpected Exits | Whether to suppress the results of the Unexpected Exits health test. The results of suppressed health tests are ignored when computing the overall health of the associated host, role or service, so suppressed health tests will not generate alerts. | | false | role_health_suppression_spark_on_yarn_spark_yarn_history_server_unexpected_exits | true |
service_wide
Advanced
Display Name | Description | Related Name | Default Value | API Name | Required |
---|---|---|---|---|---|
System Group | The group that this service's processes should run as. | | spark | process_groupname | true |
System User | The user that this service's processes should run as. | | spark | process_username | true |
Spark Service Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh | For advanced use only, a string to be inserted into spark-conf/spark-env.sh. Applies to configurations of all roles in this service except client configuration. | | | spark-conf/spark-env.sh_service_safety_valve | false |
Spark Service Environment Advanced Configuration Snippet (Safety Valve) | For advanced use only, key-value pairs (one on each line) to be inserted into a role's environment. Applies to configurations of all roles in this service except client configuration. | | | SPARK_ON_YARN_service_env_safety_valve | false |
Monitoring
Display Name | Description | Related Name | Default Value | API Name | Required |
---|---|---|---|---|---|
Enable Service Level Health Alerts | When set, Cloudera Manager will send alerts when the health of this service reaches the threshold specified by the EventServer setting eventserver_health_events_alert_threshold. | | true | enable_alerts | false |
Enable Configuration Change Alerts | When set, Cloudera Manager will send alerts when this entity's configuration changes. | | false | enable_config_alerts | false |
Service Triggers | The configured triggers for this service. This is a JSON-formatted list of triggers. These triggers are evaluated as part of the health system. Every trigger expression is parsed, and if the trigger condition is met, the list of actions provided in the trigger expression is executed. | | [] | service_triggers | true |
Service Monitor Derived Configs Advanced Configuration Snippet (Safety Valve) | For advanced use only, a list of derived configuration properties that will be used by the Service Monitor instead of the default ones. | | | smon_derived_configs_safety_valve | false |
Other
Display Name | Description | Related Name | Default Value | API Name | Required |
---|---|---|---|---|---|
HBase Service | Name of the HBase service that this Spark service instance depends on. | | | hbase_service | false |
Spark Authentication | Whether the Spark communication protocols authenticate using a shared secret. If using Spark 2, ensure that the value of this property is the same in both services. | spark.authenticate | false | spark_authenticate | true |
Spark History Location (HDFS) | The location of Spark application history logs in HDFS. Changing this value will not move existing logs to the new location. | spark.eventLog.dir | /user/spark/applicationHistory | spark_history_log_dir | true |
Spark JAR Location (HDFS) | The location of the Spark JAR in HDFS. If left blank, Cloudera Manager will use the Spark JAR installed on the cluster nodes. | spark_jar_hdfs_path | | spark_jar_hdfs_path | false |
YARN (MR2 Included) Service | Name of the YARN (MR2 Included) service that this Spark service instance depends on. | | | yarn_service | true |
Ports and Addresses
Display Name | Description | Related Name | Default Value | API Name | Required |
---|---|---|---|---|---|
Spark Shuffle Service Port | The port on which the Spark Shuffle Service listens for fetch requests. If using Spark 2, ensure that the value of this property is the same in both services. | spark.shuffle.service.port | 7337 | spark_shuffle_service_port | true |
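Because the external shuffle service runs inside the YARN NodeManager, the port configured here must match the port executors are told to use. As a sketch, the Spark-side spark-defaults.conf entries would look like this (7337 is the listed default):

```
spark.shuffle.service.enabled=true
spark.shuffle.service.port=7337   # must match the NodeManager's shuffle service port
```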
Security
Display Name | Description | Related Name | Default Value | API Name | Required |
---|---|---|---|---|---|
Kerberos Principal | Kerberos principal short name used by all roles of this service. | | spark | kerberos_princ_name | true |
Suppressions
Display Name | Description | Related Name | Default Value | API Name | Required |
---|---|---|---|---|---|
Suppress Configuration Validator: Gateway Count Validator | Whether to suppress configuration warnings produced by the Gateway Count Validator configuration validator. | | false | service_config_suppression_gateway_count_validator | true |
Suppress Parameter Validation: Kerberos Principal | Whether to suppress configuration warnings produced by the built-in parameter validation for the Kerberos Principal parameter. | | false | service_config_suppression_kerberos_princ_name | true |
Suppress Parameter Validation: System Group | Whether to suppress configuration warnings produced by the built-in parameter validation for the System Group parameter. | | false | service_config_suppression_process_groupname | true |
Suppress Parameter Validation: System User | Whether to suppress configuration warnings produced by the built-in parameter validation for the System User parameter. | | false | service_config_suppression_process_username | true |
Suppress Parameter Validation: Service Triggers | Whether to suppress configuration warnings produced by the built-in parameter validation for the Service Triggers parameter. | | false | service_config_suppression_service_triggers | true |
Suppress Parameter Validation: Service Monitor Derived Configs Advanced Configuration Snippet (Safety Valve) | Whether to suppress configuration warnings produced by the built-in parameter validation for the Service Monitor Derived Configs Advanced Configuration Snippet (Safety Valve) parameter. | | false | service_config_suppression_smon_derived_configs_safety_valve | true |
Suppress Parameter Validation: Spark Service Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh | Whether to suppress configuration warnings produced by the built-in parameter validation for the Spark Service Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh parameter. | | false | service_config_suppression_spark-conf/spark-env.sh_service_safety_valve | true |
Suppress Parameter Validation: Spark History Location (HDFS) | Whether to suppress configuration warnings produced by the built-in parameter validation for the Spark History Location (HDFS) parameter. | | false | service_config_suppression_spark_history_log_dir | true |
Suppress Parameter Validation: Spark JAR Location (HDFS) | Whether to suppress configuration warnings produced by the built-in parameter validation for the Spark JAR Location (HDFS) parameter. | | false | service_config_suppression_spark_jar_hdfs_path | true |
Suppress Parameter Validation: Spark Service Environment Advanced Configuration Snippet (Safety Valve) | Whether to suppress configuration warnings produced by the built-in parameter validation for the Spark Service Environment Advanced Configuration Snippet (Safety Valve) parameter. | | false | service_config_suppression_spark_on_yarn_service_env_safety_valve | true |
Suppress Configuration Validator: History Server Count Validator | Whether to suppress configuration warnings produced by the History Server Count Validator configuration validator. | | false | service_config_suppression_spark_yarn_history_server_count_validator | true |