Known Issues in MapReduce, Apache Hadoop YARN, and YARN Queue Manager

Learn about the known issues in MapReduce, YARN, and YARN Queue Manager, the impact or changes to the functionality, and the workarounds.

Known Issues

A fresh install of YARN Queue Manager 7.2.18 does not allow users to bypass the Setup Database screen for YARN Queue Manager
YARN Queue Manager in Cloudera Data Platform (CDP) Private Cloud Base 7.2.18 does not require a PostgreSQL database to be installed; therefore, you should not see the Setup Database screen and should be able to skip it. With this known issue, users performing a fresh install of 7.2.18 cannot bypass the Setup Database screen as expected.
  1. When conducting a fresh install of YARN Queue Manager in 7.2.18, ensure that both CDP and Cloudera Manager are upgraded to 7.2.18.
  2. When you reach the Setup Database screen in the Cloudera Manager installation wizard for Queue Manager, enter any dummy values for the following fields:
    1. Database name: configstore
    2. Database Username: dbuser
    3. Database Password: dbpassword
    YARN Queue Manager will not connect to PostgreSQL with the above details and will fall back to the embedded database.
  3. Run the following command in the browser console to enable the Continue button:

    document.querySelector('.btn.next').removeAttribute('disabled');

  4. Click Continue and proceed with the YARN Queue Manager installation.
  5. After installation is complete, SSH into the host that has Queue Manager installed, and run the following command:

    sed -i 's/migrationCompleted=true/migrationCompleted=false/' /var/lib/hadoop-yarn/migration.properties

  6. Restart YARN Queue Manager.
CDPD-46685: NodeManager logs are filled with messages similar to the following:
  2022-11-28 03:42:39,587 WARN org.apache.hadoop.ipc.Client: Address change detected. Old: deh-34631355-niv-master1.e2e-797.dze1-y40r.int.cldr.work/10.114.128.84:8031 New: deh-34631355-niv-master1.e2e-797.dze1-y40r.int.cldr.work/10.114.128.63:8031
  2022-11-28 03:43:01,425 WARN org.apache.hadoop.ipc.Client: Address change detected. Old: deh-34631355-niv-master0.e2e-797.dze1-y40r.int.cldr.work/10.114.128.79:8031 New: deh-34631355-niv-master0.e2e-797.dze1-y40r.int.cldr.work/10.114.128.65:8031
Restart all YARN NodeManagers. They should come up without issues, and Cloudera Manager should recognize them as healthy nodes once their status is refreshed after the restart.
YARN cannot start if Kerberos principal name is changed
If the Kerberos principal name is changed in Cloudera Manager after launch, YARN cannot start. In such a case, the keytabs are generated correctly, but YARN cannot access ZooKeeper with the new Kerberos principal name because of the old ACLs.
There are two possible workarounds:
  • Delete the znode and restart the YARN service (see the example after this list).
  • Use the reset ZK ACLs command. This also sets the znodes below /rmstore/ZKRMStateRoot to world:anyone:cdrwa, which is less secure.
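For the first option, the following is a minimal sketch of deleting the ResourceManager state store znode with the ZooKeeper CLI. It assumes the default state store path /rmstore, that the zookeeper-client shell is available on the host, and that the session is already authenticated to ZooKeeper; the host name and port are placeholders.

  # Open a ZooKeeper CLI session (placeholder host, default client port)
  zookeeper-client -server zk-host.example.com:2181

  # Inside the ZooKeeper shell: recursively delete the ResourceManager state store znode
  deleteall /rmstore

After deleting the znode, restart the YARN service from Cloudera Manager.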
Third-party applications do not launch if the MapReduce framework path is not included in the client configuration
The MapReduce application framework is loaded from HDFS instead of being installed locally on the NodeManagers. By default, the mapreduce.application.framework.path property is set to the appropriate value, but third-party applications that ship their own configurations will not launch.
Set the mapreduce.application.framework.path property to the appropriate value in the configuration of third-party applications.
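For example, a client-side mapred-site.xml entry might look like the following sketch; the archive path and the #mr-framework alias are illustrative and must match the MapReduce framework tarball actually uploaded to HDFS on your cluster, and mapreduce.application.classpath must reference the same alias.

  <property>
    <name>mapreduce.application.framework.path</name>
    <!-- Illustrative path; point this at the MR framework archive on your cluster -->
    <value>hdfs:///user/yarn/mapreduce/mr-framework/mr-framework.tar.gz#mr-framework</value>
  </property>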
JobHistory URL mismatch after server relocation
After moving the JobHistory Server to a new host, the URLs listed for the JobHistory Server on the ResourceManager web UI still point to the old JobHistory Server. This affects existing jobs only. New jobs started after the move are not affected.
For any existing jobs that have the incorrect JobHistory Server URL, there is no option other than to allow the jobs to roll off the history over time. For new jobs, make sure that all clients have the updated mapred-site.xml that references the correct JobHistory Server.
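A minimal sketch of the client-side mapred-site.xml entries that reference the relocated JobHistory Server follows; the host name is a placeholder, and the ports shown are the Hadoop defaults.

  <property>
    <name>mapreduce.jobhistory.address</name>
    <!-- Placeholder host; 10020 is the default JobHistory Server RPC port -->
    <value>jhs-host.example.com:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <!-- Placeholder host; 19888 is the default JobHistory Server web UI port -->
    <value>jhs-host.example.com:19888</value>
  </property>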
CDH-6808: Routable IP address required by ResourceManager
ResourceManager requires routable host:port addresses for yarn.resourcemanager.scheduler.address, and does not support using the wildcard 0.0.0.0 address.
Set the address, in the form host:port, either in the client-side configuration, or on the command line when you submit the job.
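For example, a job submission that sets the address on the command line might look like the following sketch; the JAR name, driver class, host name, and input/output paths are placeholders, 8030 is the default scheduler port, and the example assumes the application parses generic options through ToolRunner.

  # Placeholder host; 8030 is the default ResourceManager scheduler port
  yarn jar my-app.jar MyDriver \
    -Dyarn.resourcemanager.scheduler.address=rm-host.example.com:8030 \
    input output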
CDH-49165: History link in ResourceManager web UI broken for killed Spark applications
When a Spark application is killed, the history link in the ResourceManager web UI does not work.
To view the history for a killed Spark application, see the Spark HistoryServer web UI instead.
COMPX-3329: Autorestart is not enabled for Queue Manager in Data Hub
In a Data Hub cluster, Queue Manager is installed with autorestart disabled. Hence, if Queue Manager goes down, it will not restart automatically.
If Queue Manager goes down in a Data Hub cluster, you must go to the Cloudera Manager Dashboard and restart the Queue Manager service.
COMPX-4644: Queue capacity rounding problem when configuration is initially set via YARN
When the Capacity Scheduler configuration is set through the YARN or Cloudera Manager configuration, capacity values may contain multiple decimal places. This leads to rounding and floating-point precision discrepancies in the UI when it validates that all sibling capacities add up to 100%: the numbers appear to add up to 100, but the validation still displays an error and does not allow you to save the capacities. In the backend, the capacity may be calculated as, for example, 99.9999999991.
To avoid this problem, do one of the following:
  • Create queues within the UI, or
  • Ensure that capacities configured through the Capacity Scheduler safety valve do not have more than one decimal place, as in the example after this list.
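The following Capacity Scheduler safety valve snippet is a minimal sketch that keeps every sibling capacity to one decimal place; the queue names and percentages are illustrative.

  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default,analytics,batch</value>
  </property>
  <property>
    <!-- One decimal place per sibling queue; the three values sum to exactly 100 -->
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>33.3</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.analytics.capacity</name>
    <value>33.3</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.batch.capacity</name>
    <value>33.4</value>
  </property>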
COMPX-5817: Queue Manager UI cannot present a view of the pre-upgrade queue structure. CM Store is not supported, so YARN does not preserve any of the pre-upgrade queue structure.
When a Data Hub cluster is deleted, all saved configurations are also deleted. All YARN configurations are saved in CM Store, which is not yet supported in Data Hub and Cloudera Manager. Hence, the YARN queue structure is also lost when a Data Hub cluster is deleted, upgraded, or restored.
CDPD-67150: During the restore procedure, a node can become unhealthy because the default YARN Capacity Scheduler configuration is loaded onto the node during the restart.
Perform an additional restart of the ResourceManager role that has the incorrect configuration to restore the correct configuration.

Unsupported Features

The following YARN features are currently not supported in Cloudera Data Platform:
  • Application Timeline Server (ATSv2 and ATSv1)
  • Container Resizing
  • Distributed or Centralized Allocation of Opportunistic Containers
  • Distributed Scheduling
  • Docker on YARN (DockerContainerExecutor) on Data Hub clusters
  • Fair Scheduler
  • GPU support for Docker
  • Hadoop Pipes
  • Native Services
  • Pluggable Scheduler Configuration
  • Queue Priority Support
  • Reservation REST APIs
  • Resource Estimator Service
  • Resource Profiles
  • (non-ZooKeeper) ResourceManager State Store
  • Rolling Log Aggregation
  • Shared Cache
  • YARN Federation
  • Moving jobs between queues