Known Issues in MapReduce and YARN
This topic describes known issues, unsupported features, and limitations for using MapReduce and YARN in this release of Cloudera Runtime.
Known Issues
- OPSAPS-57067: YARN service in Cloudera Manager reports the stale configuration yarn.cluster.scaling.recommendation.enable.
- Workaround: This issue does not affect functionality. Restarting the YARN service fixes this issue.
- JobHistory URL mismatch after server relocation
- After moving the JobHistory Server to a new host, the URLs listed for the JobHistory Server on the ResourceManager web UI still point to the old JobHistory Server. This affects existing jobs only. New jobs started after the move are not affected.
- CDH-49165: History link in ResourceManager web UI broken for killed Spark applications
- When a Spark application is killed, the history link in the ResourceManager web UI does not work.
- CDH-6808: Routable IP address required by ResourceManager
- ResourceManager requires routable host:port addresses for yarn.resourcemanager.scheduler.address, and does not support using the wildcard 0.0.0.0 address.
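- For illustration only, a routable scheduler address in yarn-site.xml might look like the following; the hostname is a placeholder, not a value from this release:
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <!-- Placeholder: use a hostname that cluster hosts can route to, not the
         0.0.0.0 wildcard address. Port 8030 is the conventional scheduler port. -->
    <value>resourcemanager.example.com:8030</value>
  </property>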
- OPSAPS-52066: Stacks under Logs Directory for Hadoop daemons are not accessible from Knox Gateway.
- Stacks under the Logs directory for Hadoop daemons, such as NameNode, DataNode, ResourceManager, NodeManager, and JobHistoryServer, are not accessible from the Knox Gateway.
- CDPD-2936: Application logs are not accessible in WebUI2 or Cloudera Manager
- Logs of running containers stored in the NodeManager local directory cannot be accessed either in Cloudera Manager or in WebUI2 due to log aggregation.
- OPSAPS-50291: Environment variables HADOOP_HOME, PATH, LANG, and TZ are not getting whitelisted
- It is possible to whitelist the environment variables HADOOP_HOME, PATH, LANG, and TZ, but the container launch environments do not have these variables set up automatically.
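- As an illustrative sketch of the whitelisting described above, these variables are normally listed in the yarn.nodemanager.env-whitelist property in yarn-site.xml; because of this issue, containers may still not receive them even when they are listed:
  <property>
    <name>yarn.nodemanager.env-whitelist</name>
    <!-- Illustrative value: environment variables the NodeManager may pass
         through to containers. Per OPSAPS-50291, HADOOP_HOME, PATH, LANG, and
         TZ may still be missing from the container launch environment. -->
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,HADOOP_YARN_HOME,HADOOP_HOME,PATH,LANG,TZ</value>
  </property>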
- COMPX-1445: Queue Manager operations fail when Queue Manager is installed separately from YARN
- If Queue Manager is not selected during YARN installation, Queue Manager operations fail. Queue Manager reports that 0 queues are configured and shows several failures. This happens because the ZooKeeper configuration store is not enabled.
- COMPX-1451: Queue Manager does not support multiple ResourceManagers
- When YARN High Availability is enabled, there are multiple ResourceManagers. Queue Manager receives multiple ResourceManager URLs for a High Availability cluster, but it picks the active ResourceManager URL only when the Queue Manager page is loaded. Queue Manager cannot handle it gracefully when the currently active ResourceManager goes down while the user is still using the Queue Manager UI.
- COMPX-3329: Autorestart is not enabled for Queue Manager in Data Hub
- In a Data Hub cluster, Queue Manager is installed with autorestart disabled. Hence, if Queue Manager goes down, it will not restart automatically.
- Third party applications do not launch if MapReduce framework path is not included in the client configuration
- The MapReduce application framework is loaded from HDFS instead of being present on the NodeManagers. By default, the mapreduce.application.framework.path property is set to the appropriate value, but third party applications with their own configurations will not launch.
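- For illustration only, a client configuration that keeps the framework path might carry an entry like the following in mapred-site.xml; the archive path is a placeholder, not the value shipped with this release:
  <property>
    <name>mapreduce.application.framework.path</name>
    <!-- Placeholder path: the MapReduce framework archive in HDFS. Third party
         applications that supply their own configuration must retain this
         property, or their jobs will not launch. -->
    <value>hdfs:///user/yarn/mapreduce/mr-framework/mr-framework.tar.gz#mr-framework</value>
  </property>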
- COMPX-3181: Log aggregation fails for YARN jobs not using the cloud storage configured by yarn.nodemanager.remote-app-log-dir
- Log aggregation fails for any YARN job (MapReduce, Tez, Spark, and so on) which does not use cloud storage, or does not use the cloud storage that is configured using the yarn.nodemanager.remote-app-log-dir property.
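- For illustration only, the remote log location is controlled by the following yarn-site.xml property; the bucket path is a placeholder and assumes an S3-backed cluster:
  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <!-- Placeholder location: aggregated logs are written here, so jobs that use
         a different store, or no cloud store at all, fail log aggregation. -->
    <value>s3a://example-logs-bucket/yarn-app-logs</value>
  </property>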
- YARN cannot start if Kerberos principal name is changed
- If the Kerberos principal name is changed in Cloudera Manager after launch, YARN will not be able to start. In such a case, the keytabs are generated correctly, but YARN cannot access ZooKeeper with the new Kerberos principal name and the old ACLs.
- COMPX-8687: Missing access check for getAppAttempts
- When the Job ACL feature is enabled using Cloudera Manager (the mapreduce.cluster.acls.enabled property), the mapreduce.cluster.acls.enabled property is not generated to all configuration files, including the yarn-site.xml configuration file. As a result, the ResourceManager process uses the default value of this property. The default value of mapreduce.cluster.acls.enabled is false.
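- As a sketch only, not a documented workaround, the missing entry would look like the following if you chose to propagate it to yarn-site.xml yourself (for example, through a configuration snippet):
  <property>
    <!-- Per COMPX-8687, this property is not generated into yarn-site.xml, so
         the ResourceManager falls back to the default value, false. -->
    <name>mapreduce.cluster.acls.enabled</name>
    <value>true</value>
  </property>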
Unsupported Features
The following YARN features are currently not supported in Cloudera Data Platform:
- GPU support for Docker
- Hadoop Pipes
- Fair Scheduler
- Application Timeline Server (ATS 2 and ATS 1.5)
- Container Resizing
- Distributed or Centralized Allocation of Opportunistic Containers
- Distributed Scheduling
- Native Services
- Pluggable Scheduler Configuration
- Queue Priority Support
- Reservation REST APIs
- Resource Estimator Service
- Resource Profiles
- (non-ZooKeeper) ResourceManager State Store
- Shared Cache
- YARN Federation
- Rolling Log Aggregation
- Docker on YARN (DockerContainerExecutor) on Data Hub clusters
- Moving jobs between queues
- Dynamic Resource Pools