Known Issues in MapReduce and YARN

Learn about the known issues in MapReduce and YARN, the impact or changes to the functionality, and the workarounds.

Known Issues

This issue occurs when a user tries to submit an application to a non-existent queue specified in parent.leaf format. The application is submitted, receives an ID, and its state is stored in the state store. After that, a NullPointerException (NPE) occurs and the ResourceManager stops, with the application data still stored for recovery. When the ResourceManager is restarted, it tries to recover the applications, but as soon as it reaches the one that was submitted to the unknown queue, the NPE occurs again and the ResourceManager exits again.

Resolution 1:

  1. Create a queue somewhere in the structure (for example, root.parent.leaf if the unknown queue was parent.leaf). This way the application still is not launched, but the NPE does not occur and the ResourceManager can start.
  2. Delete the recovery data for the failing application. These workarounds, however, do not protect the customer from experiencing the NPE again.

Resolution 2:

  1. The permanent solution is to change the configuration in the following way: Enable yarn.scheduler.capacity.queue-mappings-override.enable (exposed as Override Queue Mappings in Queue Manager, or set it in the Capacity Scheduler safety valve).
  2. Add a new mapping rule in the first position that matches all users, places the application in the queue specified at runtime, and, if the target queue does not exist, skips this rule (moves on to the next one). A sketch of such a rule follows these steps.
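For illustration, a minimal sketch of such a rule, assuming the JSON-based mapping rule format is in use (for example through the yarn.scheduler.capacity.mapping-rule-json property or the equivalent Queue Manager placement rule editor); adapt it to the rules already present in your configuration:

{
  "rules": [
    {
      "type": "user",
      "matches": "*",
      "policy": "specified",
      "fallbackResult": "skip"
    }
  ]
}

With "fallbackResult": "skip", an application submitted to a non-existent queue falls through to the next rule instead of leaving the ResourceManager in the failing state described above.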

Resolution 3: A custom fix can be provided, as YARN-10571 is a large refactoring patch.

COMPX-5817: Queue Manager UI will not be able to present a view of the pre-upgrade queue structure. CM Store is not supported, and therefore YARN will not have any of the pre-upgrade queue structure preserved.
When a Data Hub cluster is deleted, all saved configurations are also deleted. All YARN configurations are saved in CM Store, which is not yet supported in Data Hub and Cloudera Manager. Hence, the YARN queue structure is also lost when a Data Hub cluster is deleted, upgraded, or restored.
COMPX-5240: Restarting parent queue does not restart child queues in weight mode
When a dynamic auto child creation enabled parent queue is stopped in weight mode, its static and dynamically created child queues are also stopped. However, when the dynamic auto child creation enabled parent queue is restarted, its child queues remain stopped. In addition, the dynamically created child queues cannot be restarted manually through the YARN Queue Manager UI either.
Delete the dynamic auto child creation enabled parent queue. This action also deletes all its child queues, both static and dynamically created child queues, including the stopped dynamic queues. Then recreate the parent queue, enable the dynamic auto child creation feature for it and add any required static child queues.
COMPX-5244: Root queue should not be enabled for auto-queue creation
After dynamic auto child creation is enabled for a queue using the YARN Queue Manager UI, you cannot disable it using the YARN Queue Manager UI. That can cause problems when you want to switch between resource allocation modes, for example from weight mode to relative mode. The YARN Queue Manager UI does not let you switch resource allocation mode if there is at least one dynamic auto child creation enabled parent queue in your queue hierarchy.
If the dynamic auto child creation enabled parent queue is NOT the root or the root.default queue: Stop and remove the dynamic auto child creation enabled parent queue. Note that this stops and removes all of its child queues as well.
If the dynamic auto child creation enabled parent queue is the root or the root.default queue: You cannot stop and remove either the root or the root.default queue. You have to change the configuration in the applicable configuration file:
  1. In Cloudera Manager, navigate to YARN > Configuration.
  2. Search for capacity scheduler and find the Capacity Scheduler Configuration Advanced Configuration Snippet (Safety Valve) property.
  3. Add the following configuration: yarn.scheduler.capacity.<queue-path>.auto-queue-creation-v2.enabled=false (for example, yarn.scheduler.capacity.root.default.auto-queue-creation-v2.enabled=false). Alternatively, you can remove the yarn.scheduler.capacity.<queue-path>.auto-queue-creation-v2.enabled property from the configuration file. A sketch of the snippet follows these steps.
  4. Restart the Resource Manager.
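For illustration, a minimal sketch of the safety valve entry in capacity-scheduler.xml property format, using the root.default example above:

<property>
  <name>yarn.scheduler.capacity.root.default.auto-queue-creation-v2.enabled</name>
  <value>false</value>
</property>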
COMPX-5589: Unable to add new queue to leaf queue with partition capacity in Weight/Absolute mode
Scenario
  1. The user creates one or more partitions.
  2. Assigns a partition to a parent with children.
  3. Switches to the partition to distribute the capacities.
  4. Creates a new child queue under one of the leaf queues, but the following error is displayed:
Error:
2021-03-05 17:21:26,734 ERROR 
com.cloudera.cpx.server.api.repositories.SchedulerRepository: Validation failed for Add queue 
operation. Error message: CapacityScheduler configuration validation failed:java.io.IOException: 
Failed to re-init queues : Parent queue 'root.test2' have children queue used mixed of  weight 
mode, percentage and absolute mode, it is not allowed, please double check, details:
{Queue=root.test2.test2childNew, label= uses weight mode}. {Queue=root.test2.test2childNew, 
label=partition uses percentage mode}             
To create new queues under leaf queues without hitting this error, use one of the following procedures:
  1. Switch to Relative mode
  2. Create the required queues
  3. Create the required partitions
  4. Assign partitions and set capacities
  5. Switch back to Weight mode
Alternatively:
  1. Create the entire queue structure
  2. Create the required partitions
  3. Assign partition to queues
  4. Set partition capacities
COMPX-5264: Unable to switch to Weight mode on creating a managed parent queue in Relative mode
In the current implementation, if there is an existing managed queue in Relative mode, then conversion to Weight mode is not allowed.
To proceed with the conversion from Relative mode to Weight mode, there must not be any managed queues. You must first delete the managed queues before the conversion. In Weight mode, a parent queue can be converted into a managed parent queue.
COMPX-5549: Queue Manager UI sets maximum-capacity to null when you switch mode with multiple partitions
If you associate a partition with one or more queues and then switch the allocation mode before assigning capacities to the queues, an Operation Failed error is displayed as the max-capacity is set to null.

After you associate a partition with one or more queues, in the YARN Queue Manager UI, click Overview > <Partition name> from the dropdown list and distribute capacity to the queues before switching allocation mode or creating placement rules.

COMPX-4992: Unable to switch to absolute mode after deleting a partition using YARN Queue Manager
If you delete a partition (node label) which has been associated with queues and those queues have capacities configured for that partition (node label), the CS.xml still contains the partition (node label) information. Hence, you cannot switch to absolute mode after deleting the partition (node label).
It is recommended not to delete a partition (node label) which has been associated with queues and those queues have capacities configured for that partition (node label).
COMPX-3181: Application logs do not work for Azure and AWS clusters
YARN application log aggregation fails for any YARN job (MR, Tez, Spark, and so on) that does not use cloud storage, or that uses a cloud storage location other than the one configured for YARN logs (yarn.nodemanager.remote-app-log-dir).
Configure the following:
  • For MapReduce jobs, set mapreduce.job.hdfs-servers in the mapred-site.xml file with all filesystems required for the job, including the one set in yarn.nodemanager.remote-app-log-dir, such as hdfs://nn1/,hdfs://nn2/ (a sketch follows this list).

  • For Spark jobs, set spark.yarn.access.hadoopFileSystems at the job level with all filesystems required for the job, including the one set in yarn.nodemanager.remote-app-log-dir, such as hdfs://nn1/,hdfs://nn2/, and pass it through the --conf option of spark-submit.

  • For jobs submitted using the hadoop command, place a separate core-site.xml file, with fs.defaultFS set to the filesystem configured in yarn.nodemanager.remote-app-log-dir, in a directory. Pass that directory path with the --config option when executing the hadoop command.
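
For illustration, a minimal sketch of the mapred-site.xml entry and the corresponding spark-submit invocation, using the example filesystems from the list above (hdfs://nn1/ and hdfs://nn2/ are placeholders):

<property>
  <name>mapreduce.job.hdfs-servers</name>
  <value>hdfs://nn1/,hdfs://nn2/</value>
</property>

spark-submit --conf spark.yarn.access.hadoopFileSystems=hdfs://nn1/,hdfs://nn2/ ...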

COMPX-1445: Queue Manager operations fail when Queue Manager is installed separately from YARN
If Queue Manager is not selected during YARN installation, Queue Manager operations fail: Queue Manager reports that 0 queues are configured and several failures are present. This is because the ZooKeeper configuration store is not enabled.
  1. In Cloudera Manager, select the YARN service.
  2. Click the Configuration tab.
  3. Find the Queue Manager Service property.
  4. Select the Queue Manager service that the YARN service instance depends on.
  5. Click Save Changes.
  6. Restart all services that are marked stale in Cloudera Manager.
COMPX-1451: Queue Manager does not support multiple ResourceManagers
When YARN High Availability is enabled, there are multiple ResourceManagers. Queue Manager receives multiple ResourceManager URLs for a High Availability cluster. It picks the active ResourceManager URL only when the Queue Manager page is loaded. Queue Manager cannot handle it gracefully when the currently active ResourceManager goes down while the user is still using the Queue Manager UI.
Reload the Queue Manager page manually.
COMPX-3329: Autorestart is not enabled for Queue Manager in Data Hub
In a Data Hub cluster, Queue Manager is installed with autorestart disabled. Hence, if Queue Manager goes down, it will not restart automatically.
If Queue Manager goes down in a Data Hub cluster, you must go to the Cloudera Manager Dashboard and restart the Queue Manager service.
Third-party applications do not launch if the MapReduce framework path is not included in the client configuration
The MapReduce application framework is loaded from HDFS instead of being installed on the NodeManagers. By default, the mapreduce.application.framework.path property is set to the appropriate value, but third-party applications that ship their own configurations will not launch.
Set the mapreduce.application.framework.path property to the appropriate value in the configuration used by third-party applications (see the sketch below).
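For illustration only, the expected entry has roughly the following form; the archive path is a placeholder and must match the MapReduce framework archive actually available in HDFS on your cluster:

<property>
  <name>mapreduce.application.framework.path</name>
  <value>hdfs:///path/to/mr-framework.tar.gz#mr-framework</value>
</property>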
OPSAPS-57067: The YARN service in Cloudera Manager reports the stale configuration yarn.cluster.scaling.recommendation.enable.
This issue does not affect the functionality. Restarting the YARN service fixes this issue.
JobHistory URL mismatch after server relocation
After moving the JobHistory Server to a new host, the URLs listed for the JobHistory Server on the ResourceManager web UI still point to the old JobHistory Server. This affects existing jobs only. New jobs started after the move are not affected.
For any existing jobs that have the incorrect JobHistory Server URL, there is no option other than to allow the jobs to roll off the history over time. For new jobs, make sure that all clients have the updated mapred-site.xml that references the correct JobHistory Server.
CDH-49165: History link in ResourceManager web UI broken for killed Spark applications
When a Spark application is killed, the history link in the ResourceManager web UI does not work.
To view the history for a killed Spark application, see the Spark HistoryServer web UI instead.
CDH-6808: Routable IP address required by ResourceManager
ResourceManager requires routable host:port addresses for yarn.resourcemanager.scheduler.address, and does not support using the wildcard 0.0.0.0 address.
Set the address, in the form host:port, either in the client-side configuration or on the command line when you submit the job (see the sketch below).
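For illustration, a client-side yarn-site.xml entry of the following form; the host name is a placeholder and 8030 is the conventional default scheduler port:

<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>rm-host.example.com:8030</value>
</property>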
OPSAPS-52066: Stacks under the Logs Directory for Hadoop daemons are not accessible from Knox Gateway.
Stacks under the Logs directory for Hadoop daemons, such as NameNode, DataNode, ResourceManager, NodeManager, and JobHistoryServer, are not accessible from Knox Gateway.
Administrators can SSH directly to the Hadoop Daemon machine to collect stacks under the Logs directory.
CDPD-2936: Application logs are not accessible in WebUI2 or Cloudera Manager
Logs of running containers in the NodeManager local directory cannot be accessed either in Cloudera Manager or in WebUI2 due to log aggregation.
Use the YARN log CLI to access application logs. For example:
yarn logs -applicationId <Application ID>
Apache Issue: YARN-9725
OPSAPS-50291: Environment variables HADOOP_HOME, PATH, LANG, and TZ are not getting whitelisted
It is possible to include the environment variables HADOOP_HOME, PATH, LANG, and TZ in the allowlist, but the container launch environments do not have these variables set up automatically.
You can manually add the required environment variables to the allowlist using Cloudera Manager.
  1. In Cloudera Manager, select the YARN service.
  2. Click the Configuration tab.
  3. Search for Containers Environment Variable Whitelist.
  4. Add the required environment variables (HADOOP_HOME, PATH, LANG, TZ) to the list (see the sketch after these steps).
  5. Click Save Changes.
  6. Restart all NodeManagers.
  7. Check the YARN aggregated logs to ensure that newly whitelisted environment variables are set up for container launch.
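For illustration only, the resulting yarn-site.xml entry would look roughly like the following; the pre-existing entries shown here are common defaults and may differ on your cluster:

<property>
  <name>yarn.nodemanager.env-whitelist</name>
  <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_HOME,PATH,LANG,TZ</value>
</property>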
YARN cannot start if Kerberos principal name is changed
If the Kerberos principal name is changed in Cloudera Manager after launch, YARN is not able to start. In such a case the keytabs are generated correctly, but YARN cannot access ZooKeeper with the new Kerberos principal name because of the old ACLs.
There are two possible workarounds:
  • Delete the znode and restart the YARN service (see the sketch below).
  • Use the reset ZK ACLs command. This also sets the znodes below /rmstore/ZKRMStateRoot to world:anyone:cdrwa, which is less secure.
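A minimal sketch of the first workaround, assuming the default state-store path and the zookeeper-client shell shipped with the cluster; authenticate to ZooKeeper as required, then restart the YARN service afterwards:

zookeeper-client -server <zookeeper-host>:2181
# at the ZooKeeper CLI prompt:
deleteall /rmstore/ZKRMStateRoot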
COMPX-8687: Missing access check for getAppAttempts
When the Job ACL feature is enabled using Cloudera Manager (YARN > Configuration > Enable JOB ACL property), the mapreduce.cluster.acls.enabled property is not generated into all configuration files, including the yarn-site.xml configuration file. As a result, the ResourceManager process uses the default value of this property, which is false.
Workaround: Enable the Job ACL feature using an advanced configuration snippet (see the sketch after these steps):
  1. In Cloudera Manager select the YARN service.
  2. Click Configuration.
  3. Find the YARN Service MapReduce Advanced Configuration Snippet (Safety Valve) property.
  4. Click the plus icon and add the following:
    • Name: mapreduce.cluster.acls.enabled
    • Value: true
  5. Click Save Changes.
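After the change, the generated configuration should contain an entry equivalent to the following, shown here in XML property form for illustration:

<property>
  <name>mapreduce.cluster.acls.enabled</name>
  <value>true</value>
</property>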

Unsupported Features

The following YARN features are currently not supported in Cloudera Data Platform:
  • Application Timeline Server (ATS 2 and ATS 1.5)
  • Container Resizing
  • Distributed or Centralized Allocation of Opportunistic Containers
  • Distributed Scheduling
  • Docker on YARN (DockerContainerExecutor) on Data Hub clusters
  • Dynamic Resource Pools
  • Fair Scheduler
  • GPU support for Docker
  • Hadoop Pipes
  • Moving jobs between queues
  • Native Services
  • Pluggable Scheduler Configuration
  • Queue Priority Support
  • Reservation REST APIs
  • Resource Estimator Service
  • Resource Profiles
  • (non-ZooKeeper) ResourceManager State Store
  • Rolling Log Aggregation
  • Shared Cache
  • YARN Federation