Spark Properties in CDH 5.1.0

Gateway Default Group

Advanced

Display Name | Description | Related Name | Default Value | API Name | Required
Deploy Directory | The directory where the client configs will be deployed. | | /etc/spark | client_config_root_dir | true
Spark Client Advanced Configuration Snippet (Safety Valve) for spark-conf/log4j.properties | For advanced use only, a string to be inserted into the client configuration for spark-conf/log4j.properties. | | | spark-conf/log4j.properties_client_config_safety_valve | false
Spark Client Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-defaults.conf | For advanced use only, a string to be inserted into the client configuration for spark-conf/spark-defaults.conf. | | | spark-conf/spark-defaults.conf_client_config_safety_valve | false
Spark Client Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh | For advanced use only, a string to be inserted into the client configuration for spark-conf/spark-env.sh. | | | spark-conf/spark-env.sh_client_config_safety_valve | false
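To illustrate the snippet format: text entered in the spark-conf/spark-defaults.conf safety valve is inserted into the generated spark-defaults.conf of the deployed client configuration. The two properties below are standard Spark settings used purely as a hypothetical example, not recommended values:

    # Hypothetical safety-valve text for spark-conf/spark-defaults.conf
    # (illustration only; choose properties appropriate to your workload)
    spark.serializer        org.apache.spark.serializer.KryoSerializer
    spark.executor.memory   2g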

Monitoring

Display Name | Description | Related Name | Default Value | API Name | Required
Enable Configuration Change Alerts | When set, Cloudera Manager will send alerts when this entity's configuration changes. | | false | enable_config_alerts | false

Other

Display Name | Description | Related Name | Default Value | API Name | Required
Alternatives Priority | The priority level that the client configuration will have in the Alternatives system on the hosts. Higher priority levels will cause Alternatives to prefer this configuration over any others. | | 51 | client_config_priority | true
Enable History | Write Spark application history logs to HDFS. | spark.eventLog.enabled | true | spark_history_enabled | false
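For orientation, a sketch of how Enable History surfaces in the generated spark-defaults.conf; the exact rendering is produced by Cloudera Manager and may differ, and the directory shown is the default Spark History Location (HDFS) listed under the service-wide properties further below:

    # Sketch of the generated client configuration (illustrative only)
    spark.eventLog.enabled  true
    # Default Spark History Location (HDFS); a fully qualified hdfs:// URI may also appear here.
    spark.eventLog.dir      /user/spark/applicationHistory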

History Server Default Group

Advanced

Display Name | Description | Related Name | Default Value | API Name | Required
History Server Environment Advanced Configuration Snippet (Safety Valve) | For advanced use only, key-value pairs (one on each line) to be inserted into a role's environment. Applies to configurations of this role except client configuration. | | | SPARK_YARN_HISTORY_SERVER_role_env_safety_valve | false
History Server Logging Advanced Configuration Snippet (Safety Valve) | For advanced use only, a string to be inserted into log4j.properties for this role only. | | | log4j_safety_valve | false
Automatically Restart Process | When set, this role's process is automatically (and transparently) restarted in the event of an unexpected failure. | | false | process_auto_restart | true
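As an illustration of the environment snippet format (key-value pairs, one per line), a hypothetical entry is shown below; SPARK_DAEMON_MEMORY and SPARK_LOG_DIR are standard Spark environment variables, but whether they are appropriate, or honored by a Cloudera Manager-managed role, depends on your deployment:

    # Hypothetical contents of the History Server environment safety valve;
    # one KEY=value pair per line, names shown for illustration only.
    SPARK_DAEMON_MEMORY=512m
    SPARK_LOG_DIR=/var/log/spark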

Logs

Display Name | Description | Related Name | Default Value | API Name | Required
History Server Log Directory | The log directory for log files of the role History Server. | log_dir | /var/log/spark | log_dir | false
History Server Logging Threshold | The minimum log level for History Server logs. | | INFO | log_threshold | false
History Server Maximum Log File Backups | The maximum number of rolled log files to keep for History Server logs. Typically used by log4j. | | 10 | max_log_backup_index | false
History Server Max Log Size | The maximum size, in megabytes, per log file for History Server logs. Typically used by log4j. | | 200 MiB | max_log_size | false

Monitoring

Display Name | Description | Related Name | Default Value | API Name | Required
Enable Health Alerts for this Role | When set, Cloudera Manager will send alerts when the health of this role reaches the threshold specified by the EventServer setting eventserver_health_events_alert_threshold. | | true | enable_alerts | false
Enable Configuration Change Alerts | When set, Cloudera Manager will send alerts when this entity's configuration changes. | | false | enable_config_alerts | false
Log Directory Free Space Monitoring Absolute Thresholds | The health test thresholds for monitoring of free space on the filesystem that contains this role's log directory. | | Warning: 10 GiB, Critical: 5 GiB | log_directory_free_space_absolute_thresholds | false
Log Directory Free Space Monitoring Percentage Thresholds | The health test thresholds for monitoring of free space on the filesystem that contains this role's log directory, specified as a percentage of the capacity of that filesystem. This setting is not used if a Log Directory Free Space Monitoring Absolute Thresholds setting is configured. | | Warning: Never, Critical: Never | log_directory_free_space_percentage_thresholds | false
Role Triggers | The configured triggers for this role. This is a JSON-formatted list of triggers. These triggers are evaluated as part of the health system: every trigger expression is parsed, and if the trigger condition is met, the list of actions provided in the trigger expression is executed. Each trigger has all of the fields listed after this table. | | [] | role_triggers | true
File Descriptor Monitoring Thresholds | The health test thresholds of the number of file descriptors used. Specified as a percentage of the file descriptor limit. | | Warning: 50.0 %, Critical: 70.0 % | spark_yarn_history_server_fd_thresholds | false
History Server Host Health Test | When computing the overall History Server health, consider the host's health. | | true | spark_yarn_history_server_host_health_enabled | false
History Server Process Health Test | Enables the health test that the History Server's process state is consistent with the role configuration. | | true | spark_yarn_history_server_scm_health_enabled | false
Unexpected Exits Thresholds | The health test thresholds for unexpected exits encountered within a recent period specified by the unexpected_exits_window configuration for the role. | | Warning: Never, Critical: Any | unexpected_exits_thresholds | false
Unexpected Exits Monitoring Period | The period to review when computing unexpected exits. | | 5 minute(s) | unexpected_exits_window | false

Each role trigger has all of the following fields:
  • triggerName (mandatory) - the name of the trigger. This value must be unique for the specific role.
  • triggerExpression (mandatory) - a tsquery expression representing the trigger.
  • streamThreshold (optional) - the maximum number of streams that can satisfy a condition of a trigger before the condition fires. By default set to 0, so any stream returned causes the condition to fire.
  • enabled (optional) - by default set to 'true'. If set to 'false', the trigger is not evaluated.
For example, here is a JSON-formatted trigger, configured for a DataNode, that fires if the DataNode has more than 1500 file descriptors open: [{"triggerName": "sample-trigger", "triggerExpression": "IF (SELECT fd_open WHERE roleName=$ROLENAME and last(fd_open) > 1500) DO health:bad", "streamThreshold": 0, "enabled": "true"}]. Consult the trigger rules documentation for more details on how to write triggers using tsquery. The JSON format is evolving and may change in the future; as a result, backward compatibility between releases is not guaranteed at this time.
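The sample role trigger from the example above, pretty-printed for readability (identical content):

    [
      {
        "triggerName": "sample-trigger",
        "triggerExpression": "IF (SELECT fd_open WHERE roleName=$ROLENAME and last(fd_open) > 1500) DO health:bad",
        "streamThreshold": 0,
        "enabled": "true"
      }
    ]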

Other

Display Name | Description | Related Name | Default Value | API Name | Required
Java Heap Size of History Server in Bytes | Maximum size for the Java process heap memory. Passed to Java -Xmx. Measured in bytes. | history_server_max_heapsize | 256 MiB | history_server_max_heapsize | true

Performance

Display Name | Description | Related Name | Default Value | API Name | Required
Maximum Process File Descriptors | If configured, overrides the process soft and hard rlimits (also called ulimits) for file descriptors to the configured value. | | | rlimit_fds | false

Ports and Addresses

Display Name | Description | Related Name | Default Value | API Name | Required
History Server WebUI Port | The port of the History Server web UI. | history.port | 18088 | history_server_web_port | true
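A quick reachability check for the web UI, assuming the default port and a placeholder hostname for the host running the History Server role:

    # Prints the HTTP status code; 200 means the History Server UI is responding.
    # "history-server-host" is a placeholder -- substitute the actual host.
    curl -s -o /dev/null -w "%{http_code}\n" http://history-server-host:18088/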

Resource Management

Display Name | Description | Related Name | Default Value | API Name | Required
Cgroup CPU Shares | Number of CPU shares to assign to this role. The greater the number of shares, the larger the share of the host's CPUs that will be given to this role when the host experiences CPU contention. Must be between 2 and 262144. Defaults to 1024 for processes not managed by Cloudera Manager. | cpu.shares | 1024 | rm_cpu_shares | true
Cgroup I/O Weight | Weight for the read I/O requests issued by this role. The greater the weight, the higher the priority of the requests when the host experiences I/O contention. Must be between 100 and 1000. Defaults to 1000 for processes not managed by Cloudera Manager. | blkio.weight | 500 | rm_io_weight | true
Cgroup Memory Hard Limit | Hard memory limit to assign to this role, enforced by the Linux kernel. When the limit is reached, the kernel will reclaim pages charged to the process. If reclaiming fails, the kernel may kill the process. Both anonymous and page cache pages contribute to the limit. Use a value of -1 B to specify no limit. By default, processes not managed by Cloudera Manager have no limit. | memory.limit_in_bytes | -1 MiB | rm_memory_hard_limit | true
Cgroup Memory Soft Limit | Soft memory limit to assign to this role, enforced by the Linux kernel. When the limit is reached, the kernel will reclaim pages charged to the process only if the host is facing memory pressure. If reclaiming fails, the kernel may kill the process. Both anonymous and page cache pages contribute to the limit. Use a value of -1 B to specify no limit. By default, processes not managed by Cloudera Manager have no limit. | memory.soft_limit_in_bytes | -1 MiB | rm_memory_soft_limit | true
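These four properties correspond to standard Linux cgroup (v1) control files named after their Related Name. A hedged sketch of inspecting the live values on a host follows; the cgroup path Cloudera Manager assigns to the role process is deployment-specific, so it is read from /proc rather than assumed, and the pgrep pattern is a guess:

    # Locate the History Server process and the cgroups it belongs to.
    PID=$(pgrep -f org.apache.spark.deploy.history.HistoryServer | head -n 1)
    cat /proc/$PID/cgroup
    # Then read the control files under the reported <path> for each subsystem:
    #   /sys/fs/cgroup/cpu/<path>/cpu.shares                    -> Cgroup CPU Shares
    #   /sys/fs/cgroup/blkio/<path>/blkio.weight                -> Cgroup I/O Weight
    #   /sys/fs/cgroup/memory/<path>/memory.limit_in_bytes      -> Cgroup Memory Hard Limit
    #   /sys/fs/cgroup/memory/<path>/memory.soft_limit_in_bytes -> Cgroup Memory Soft Limit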

Service-Wide

Advanced

Display Name | Description | Related Name | Default Value | API Name | Required
Spark Service Environment Advanced Configuration Snippet (Safety Valve) | For advanced use only, key-value pairs (one on each line) to be inserted into a role's environment. Applies to configurations of all roles in this service except client configuration. | | | SPARK_ON_YARN_service_env_safety_valve | false
System Group | The group that this service's processes should run as. | | spark | process_groupname | true
System User | The user that this service's processes should run as. | | spark | process_username | true

Monitoring

Display Name | Description | Related Name | Default Value | API Name | Required
Enable Service Level Health Alerts | When set, Cloudera Manager will send alerts when the health of this service reaches the threshold specified by the EventServer setting eventserver_health_events_alert_threshold. | | true | enable_alerts | false
Enable Configuration Change Alerts | When set, Cloudera Manager will send alerts when this entity's configuration changes. | | false | enable_config_alerts | false
Service Triggers | The configured triggers for this service. This is a JSON-formatted list of triggers. These triggers are evaluated as part of the health system: every trigger expression is parsed, and if the trigger condition is met, the list of actions provided in the trigger expression is executed. Each trigger has all of the fields listed after this table. | | [] | service_triggers | true
Service Monitor Derived Configs Advanced Configuration Snippet (Safety Valve) | For advanced use only, a list of derived configuration properties that will be used by the Service Monitor instead of the default ones. | | | smon_derived_configs_safety_valve | false

Each service trigger has all of the following fields:
  • triggerName (mandatory) - the name of the trigger. This value must be unique for the specific service.
  • triggerExpression (mandatory) - a tsquery expression representing the trigger.
  • streamThreshold (optional) - the maximum number of streams that can satisfy a condition of a trigger before the condition fires. By default set to 0, so any stream returned causes the condition to fire.
  • enabled (optional) - by default set to 'true'. If set to 'false', the trigger is not evaluated.
For example, here is a JSON-formatted trigger that fires if there are more than 10 DataNodes with more than 500 file descriptors open: [{"triggerName": "sample-trigger", "triggerExpression": "IF (SELECT fd_open WHERE roleType = DataNode and last(fd_open) > 500) DO health:bad", "streamThreshold": 10, "enabled": "true"}]. Consult the trigger rules documentation for more details on how to write triggers using tsquery. The JSON format is evolving and may change in the future; as a result, backward compatibility between releases is not guaranteed at this time.

Other

Display Name | Description | Related Name | Default Value | API Name | Required
Spark History Location (HDFS) | The location of Spark application history logs in HDFS. Changing this value will not move existing logs to the new location. | spark.eventLog.dir | /user/spark/applicationHistory | spark_history_log_dir | true
Spark Jar Location (HDFS) | The location of the Spark jar in HDFS. | spark_jar_hdfs_path | /user/spark/share/lib/spark-assembly.jar | spark_jar_hdfs_path | true
YARN (MR2 Included) Service | Name of the YARN (MR2 Included) service that this Spark service instance depends on. | | | yarn_service | true
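A minimal sketch of verifying (and, if missing, creating) the HDFS locations referenced above, assuming the default paths and the spark System User and Group from this page; run from a host with an HDFS client configuration and adjust to your deployment:

    # Event log directory used for Spark application history (default from the table above)
    sudo -u hdfs hdfs dfs -mkdir -p /user/spark/applicationHistory
    sudo -u hdfs hdfs dfs -chown -R spark:spark /user/spark
    # Confirm the Spark assembly jar exists at the configured location
    hdfs dfs -ls /user/spark/share/lib/spark-assembly.jar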