Known Issues in MapReduce and YARN
Learn about the known issues in MapReduce and YARN, the impact or changes to the functionality, and the workaround.
Known Issues
- This issue comes up when a user tries to submit an application to a non-existing queue in the parent.leaf format. The application gets submitted, receives an ID, and its state is stored in the state store. After that, an NPE occurs and the ResourceManager stops, with the application data still stored for recovery. When the ResourceManager is restarted, it tries to recover the applications, but once it reaches the one that was submitted to the unknown queue, the NPE happens again, causing the RM to exit again.
Resolution 1:
- A queue needs to be created somewhere in the structure (for example, root.parent.leaf if the unknown queue was parent.leaf). This way the application still won't be launched, but the NPE will not occur and the RM can start.
- The recovery data for the failing application needs to be deleted. These workarounds, however, do not protect the customer from experiencing the NPE again.
Resolution 2:
- The permanent solution is to change the configuration in the following way (a configuration sketch follows the resolutions below): Enable the yarn.scheduler.capacity.queue-mappings-override.enable property (in Queue Manager as Override Queue Mappings, or in the Capacity Scheduler safety valve).
- Add a new mapping rule in the first place that matches all users and places the application in the queue specified at runtime.
- If required, add more placement rules to ensure that jobs are submitted to the correct queue.
- Create a last rule to reject the application if the target queue does not exist.
Resolution 3: A custom fix can be provided, as YARN-10571 is a large refactor patch.
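The following capacity-scheduler.xml fragment is a minimal sketch of Resolution 2. Queue Manager normally generates this configuration; the user and queue names in the mapping rule are illustrative assumptions only, and the exact rule syntax for "place in the specified queue" and "reject unknown queues" depends on the CDP release.
<!-- Sketch only: let mapping rules take precedence over the queue named at submit time. -->
<property>
  <name>yarn.scheduler.capacity.queue-mappings-override.enable</name>
  <value>true</value>
</property>
<!-- Hypothetical legacy-format rule mapping user "alice" to an existing queue;
     in practice this property is maintained by Queue Manager. -->
<property>
  <name>yarn.scheduler.capacity.queue-mappings</name>
  <value>u:alice:root.parent.leaf</value>
</property>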
- COMPX-5240: Restarting parent queue does not restart child queues in weight mode
- When a dynamic auto child creation enabled parent queue is stopped in weight mode, its static and dynamically created child queues are also stopped. However, when the dynamic auto child creation enabled parent queue is restarted, its child queues remain stopped. In addition, the dynamically created child queues cannot be restarted manually through the YARN Queue Manager UI either.
- COMPX-5244: Root queue should not be enabled for auto-queue creation
- After dynamic auto child creation is enabled for a queue using the YARN Queue Manager UI, you cannot disable it using the YARN Queue Manager UI. That can cause problems when you want to switch between resource allocation modes, for example from weight mode to relative mode. The YARN Queue Manager UI does not let you switch resource allocation mode if there is at least one dynamic auto child creation enabled parent queue in your queue hierarchy.
- COMPX-5589: Unable to add new queue to leaf queue with partition capacity in Weight/Absolute mode
- Scenario:
- User creates one or more partitions.
- Assigns a partition to a parent with children.
- Switches to the partition to distribute the capacities.
- Creates a new child queue under one of the leaf queues; however, the following error is displayed:
Error : 2021-03-05 17:21:26,734 ERROR com.cloudera.cpx.server.api.repositories.SchedulerRepository: Validation failed for Add queue operation. Error message: CapacityScheduler configuration validation failed:java.io.IOException: Failed to re-init queues : Parent queue 'root.test2' have children queue used mixed of weight mode, percentage and absolute mode, it is not allowed, please double check, details: {Queue=root.test2.test2childNew, label= uses weight mode}. {Queue=root.test2.test2childNew, label=partition uses percentage mode}
- COMPX-5264: Unable to switch to Weight mode on creating a managed parent queue in Relative mode
- In the current implementation, if there is an existing managed queue in Relative mode, conversion to Weight mode is not allowed.
- COMPX-5549: Queue Manager UI sets maximum-capacity to null when you switch mode with multiple partitions
- If you associate a partition with one or more queues and then switch the allocation mode before assigning capacities to the queues, an Operation Failed error is displayed as the max-capacity is set to null.
- COMPX-4992: Unable to switch to absolute mode after deleting a partition using YARN Queue Manager
- If you delete a partition (node label) which has been associated with queues and those queues have capacities configured for that partition (node label), the CS.xml still contains the partition (node label) information. Hence, you cannot switch to absolute mode after deleting the partition (node label).
- COMPX-3181: Application logs do not work for AZURE and AWS clusters
- YARN Application Log Aggregation will fail for any YARN job (MR, Tez, Spark, and so on) that does not use cloud storage, or uses a cloud storage location other than the one configured for YARN logs (yarn.nodemanager.remote-app-log-dir).
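For context, the aggregated log location is controlled by the yarn.nodemanager.remote-app-log-dir property in yarn-site.xml; the value below is only an illustrative assumption of a cloud storage location.
<property>
  <!-- Hypothetical example: aggregated YARN logs written to cloud storage.
       Jobs that use a different storage location hit the issue described above. -->
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>s3a://example-bucket/yarn-logs</value>
</property>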
- COMPX-1445: Queue Manager operations are failing when Queue Manager is installed separately from YARN
- If Queue Manager is not selected during YARN installation, Queue Manager operations fail: Queue Manager reports that 0 queues are configured and several failures are present. This is because the ZooKeeper configuration store is not enabled.
- COMPX-1451: Queue Manager does not support multiple ResourceManagers
- When YARN High Availability is enabled, there are multiple ResourceManagers. Queue Manager receives multiple ResourceManager URLs for a High Availability cluster. It picks the active ResourceManager URL only when the Queue Manager page is loaded. Queue Manager cannot handle it gracefully when the currently active ResourceManager goes down while the user is still using the Queue Manager UI.
- COMPX-3329: Autorestart is not enabled for Queue Manager in Data Hub
- In a Data Hub cluster, Queue Manager is installed with autorestart disabled. Hence, if Queue Manager goes down, it will not restart automatically.
- Third party applications do not launch if MapReduce framework path is not included in the client configuration
- MapReduce application framework is loaded from HDFS instead of being present on the NodeManagers. By default the mapreduce.application.framework.path property is set to the appropriate value, but third party applications with their own configurations will not launch.
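For reference, the property belongs in the MapReduce client configuration (mapred-site.xml); the path below is a hypothetical example, not the value Cloudera Manager generates.
<property>
  <!-- Hypothetical HDFS location of the MapReduce framework archive. Third party
       applications that ship their own configuration must include this property. -->
  <name>mapreduce.application.framework.path</name>
  <value>hdfs:///user/yarn/mapreduce/mr-framework.tar.gz#mr-framework</value>
</property>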
- OPSAPS-57067: Yarn Service in Cloudera Manager reports stale configuration yarn.cluster.scaling.recommendation.enable.
- This issue does not affect the functionality. Restarting the YARN service fixes this issue.
- JobHistory URL mismatch after server relocation
- After moving the JobHistory Server to a new host, the URLs listed for the JobHistory Server on the ResourceManager web UI still point to the old JobHistory Server. This affects existing jobs only. New jobs started after the move are not affected.
- CDH-49165: History link in ResourceManager web UI broken for killed Spark applications
- When a Spark application is killed, the history link in the ResourceManager web UI does not work.
- CDH-6808: Routable IP address required by ResourceManager
- ResourceManager requires routable host:port addresses for yarn.resourcemanager.scheduler.address, and does not support using the wildcard 0.0.0.0 address.
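A minimal yarn-site.xml sketch of the distinction follows; the hostname is an assumption, and 8030 is the default scheduler port.
<!-- Supported: a routable host:port address. -->
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>rm-host.example.com:8030</value>
</property>
<!-- Not supported: a wildcard address such as 0.0.0.0:8030. -->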
- OPSAPS-52066: Stacks under Logs Directory for Hadoop daemons are not accessible from Knox Gateway.
- Stacks under the Logs directory for Hadoop daemons, such as NameNode, DataNode, ResourceManager, NodeManager, and JobHistoryServer are not accessible from Knox Gateway.
- CDPD-2936: Application logs are not accessible in WebUI2 or Cloudera Manager
- Logs of running containers in the NodeManager local directory cannot be accessed either in Cloudera Manager or in WebUI2 due to log aggregation.
- YARN cannot start if Kerberos principal name is changed
- If the Kerberos principal name is changed in Cloudera Manager after launch, YARN will not be able to start. In such a case, the keytabs are generated correctly, but YARN cannot access ZooKeeper with the new Kerberos principal name and the old ACLs.
- COMPX-8687: Missing access check for getAppAttempts
- When the Job ACL feature is enabled using Cloudera Manager (mapreduce.cluster.acls.enabled property), the property is not generated to all configuration files, including the yarn-site.xml configuration file. As a result the ResourceManager process will use the default value of this property. The default value of mapreduce.cluster.acls.enabled is false.
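As an illustration only (a sketch, not a documented workaround), the ResourceManager would pick up the setting if the property were present in yarn-site.xml:
<property>
  <!-- Sketch: without this entry in yarn-site.xml the ResourceManager falls back
       to the default value, false, even when the Job ACL feature is enabled. -->
  <name>mapreduce.cluster.acls.enabled</name>
  <value>true</value>
</property>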
- COMPX-7493: YARN Tracking URL that is shown in the command line does not work when Knox is enabled
- When Knox is configured for YARN, the Tracking URL printed in the command line of a YARN application such as spark-submit shows the direct URL instead of the Knox Gateway URL.
Unsupported Features
- The following YARN features are currently not supported in Cloudera Data Platform:
- Application Timeline Server v2 (ATSv2)
- Container Resizing
- Distributed or Centralized Allocation of Opportunistic Containers
- Distributed Scheduling
- Docker on YARN (DockerContainerExecutor) on Data Hub clusters
- Dynamic Resource Pools
- Fair Scheduler
- GPU support for Docker
- Hadoop Pipes
- Native Services
- Pluggable Scheduler Configuration
- Queue Priority Support
- Reservation REST APIs
- Resource Estimator Service
- Resource Profiles
- Non-ZooKeeper ResourceManager State Store
- Rolling Log Aggregation
- Shared Cache
- YARN Federation
- Moving jobs between queues
Technical Service Bulletins
- TSB 2021-539: Capacity Scheduler queue pending metrics can become negative in certain production workload scenarios causing blocked queues
- The pending metrics of Capacity Scheduler queues can become negative in certain production workload scenarios. Once this metric becomes negative, the scheduler is unable to schedule any further resource requests on the specific queue. As a result, new applications are stuck in the ACCEPTED state unless the YARN ResourceManager is restarted or failed over.
- Knowledge article
- For the latest update on this issue see the corresponding Knowledge article: TSB 2021-539: Capacity Scheduler queue pending metrics can become negative in certain production workload scenarios causing blocked queues