This is the documentation for Cloudera Manager 5.1.x. Documentation for other versions is available at Cloudera Documentation.

Fixed Issues

Fixed Issues in Cloudera Manager 5.1.6

Sensitive Information in Cloudera Manager Diagnostic Support Bundles

Cloudera Manager is designed to transmit certain diagnostic data (or “bundles”) to Cloudera. These diagnostic bundles are used by the Cloudera support team to reproduce, debug, and address technical issues for our customers. Cloudera internally discovered a potential vulnerability in this feature, which could cause any sensitive data stored as “advanced configuration snippets (ACS)” (formerly called “safety valves”) to be included in diagnostic bundles and transmitted to Cloudera. Notwithstanding any possible transmission, such sensitive data is not used by Cloudera for any purpose.

Cloudera has taken the following actions: (1) modified Cloudera Manager so that it no longer transmits advanced configuration snippets containing the sensitive data, and (2) modified Cloudera Manager SSL configuration to increase the protection level of the encrypted communication.

Cloudera strives to follow and also help establish best practices for the protection of customer information. In this effort, we continually review and improve our security practices, infrastructure, and data handling policies.

Users affected:
  • Users storing sensitive data in advanced configuration snippets

Severity: High

Impact: Possible transmission of sensitive data

CVE: CVE-2015-6495

Immediate Action Required:
  • Upgrade Cloudera Manager to one of the following releases: Cloudera Manager 5.4.6, 5.3.7, 5.2.7, 5.1.6, 5.0.7, 4.8.6

Cloudera Manager Agent may become slow or get stuck when responding to commands and when sending heartbeats to Cloudera Manager Server

This issue can occur when Cloudera Navigator auditing is turned on. The auditing code reads audit logs and sends them to the Audit Server. It acquires a lock to protect the list of roles being audited. The same list is also modified by the Cloudera Manager Agent's main thread when a role is started or stopped. If the Audit thread takes too much time to send audits to the Audit Server (which can happen if there is a backlog of audit logs), it starves the main Agent thread. This causes the main Agent thread to not send heartbeats and to not respond to commands from the Cloudera Manager Server.

Fixed Issues in Cloudera Manager 5.1.5

Slow staleness calculation can lead to ZooKeeper data loss when new servers are added

In Cloudera Manager 5, starting new ZooKeeper Servers shortly after adding them can cause ZooKeeper data loss when the number of new servers exceeds the number of old servers.

Permissions set incorrectly on YARN Keytab files

Permissions on YARN Keytab files for NodeManager were set incorrectly to allow read access to any user.

Fixed Issues in Cloudera Manager 5.1.4

“POODLE” vulnerability on SSL/TLS enabled ports

The POODLE (Padding Oracle On Downgraded Legacy Encryption) attack takes advantage of a cryptographic flaw in the obsolete SSLv3 protocol, after first forcing the use of that protocol. The only solution is to disable SSLv3 entirely. This requires changes across a wide variety of components of CDH and Cloudera Manager. Cloudera Manager 5.1.4 provides these changes for Cloudera Manager 5.1.x deployments. All Cloudera Manager 5.1.x users should upgrade to 5.1.4 as soon as possible. For more information, see the Cloudera Security Bulletin.
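
For illustration only (Cloudera Manager 5.1.4 applies the necessary changes itself), disabling SSLv3 on a TLS-enabled service generally means restricting the allowed protocol list. With Apache httpd's mod_ssl, for example, the directive would look like:

```apache
# Accept all protocols except the broken SSLv2 and SSLv3.
SSLProtocol All -SSLv2 -SSLv3
```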

Fixed Issues in Cloudera Manager 5.1.3

Improved speed and heap usage when deleting hosts on cluster with long history

Speed and heap usage have been improved when deleting hosts on clusters that have been running for a long time.

When there are multiple clusters, each cluster's topology files and validation for legal topology is limited to hosts in that cluster

Each cluster's topology files and topology validation are now limited to hosts in that cluster. Most commands will now fail up front if the cluster's topology is invalid.

The size of the statement cache has been reduced for Oracle databases

For users of Oracle databases, the size of the statement cache has been reduced to help with memory consumption.

Improvements to memory usage of "cluster diagnostics collection" for large clusters.

Memory usage of "cluster diagnostics collection" has been improved for large clusters.

Fixed Issues in Cloudera Manager 5.1.2

If a NodeManager that is used as ApplicationMaster is decommissioned, YARN jobs will hang

Jobs can hang on NodeManager decommission due to a race condition when continuous scheduling is enabled.

Workaround:
  1. Go to the YARN service.
  2. Expand the ResourceManager Default Group > Resource Management category.
  3. Uncheck the Enable Fair Scheduler Continuous Scheduling checkbox.
  4. Click Save Changes to commit the changes.
  5. Restart the YARN service.

Hive replication fails with TLS enabled

Hive replication will fail when the source Cloudera Manager instance has TLS enabled, even though the required certificates have been added to the target Cloudera Manager's trust store.

Workaround: Add the required Certificate Authority or self-signed certificates to the default Java trust store, which is typically a copy of the cacerts file named jssecacerts in the $JAVA_HOME/jre/lib/security/ path of your installed JDK. Use keytool to import your private CA certificates into the jssecacerts file.
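
A sketch of that keytool step follows; the alias and certificate path are examples to adjust for your deployment, and changeit is the default trust store password:

```shell
# Example only: adjust JAVA_HOME, the alias, and the certificate path.
cd $JAVA_HOME/jre/lib/security
sudo cp cacerts jssecacerts     # start from a copy of the default trust store
sudo keytool -importcert -trustcacerts \
  -keystore jssecacerts -storepass changeit \
  -alias source-cm-ca -file /path/to/source-cm-ca.pem
```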

AWS Installation wizard requires Java 7u45 to be installed on Cloudera Manager Server host

Cloudera Manager 5.1 installs Java 7u55 by default. However, the AWS installation wizard does not work with Java 7u55 due to a bug in the jClouds version packaged with Cloudera Manager.

Workaround:
  1. Stop the Cloudera Manager Server.
    $ sudo service cloudera-scm-server stop 
  2. Uninstall Java 7u55 from the Cloudera Manager Server host.
  3. Install Java 7u45 (which you can download from http://www.oracle.com/technetwork/java/javase/downloads/java-archive-downloads-javase7-521261.html#jdk-7u45-oth-JPR) on the Cloudera Manager Server host.
  4. Start the Cloudera Manager Server.
    $ sudo service cloudera-scm-server start
  5. Run the AWS installation wizard.
  Note: Due to a bug in Java 7u45 (http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8014618), SSL connections between the Cloudera Manager Server and Cloudera Manager Agents and between the Cloudera Management Service and CDH processes break intermittently. If you do not have SSL enabled on your cluster, there is no impact.

Could not find a healthy host with CDH 5 on it to create HiveServer2 error during upgrade

When upgrading from CDH 4 to CDH 5, if no parcel is active then the error message "Could not find a healthy host with CDH5 on it to create HiveServer2" displays. This can happen when transitioning from packages to parcels, or if you explicitly deactivate the CDH 4 parcel (which is not necessary) before upgrade.

Workaround: Wait 30 seconds and retry the upgrade.

The YARN property ApplicationMaster Max Retries has no effect in CDH 5

The issue arises because yarn.resourcemanager.am.max-retries was replaced with yarn.resourcemanager.am.max-attempts.

Workaround:
  1. Add the following to ResourceManager Advanced Configuration Snippet for yarn-site.xml, replacing MAX_ATTEMPTS with the desired maximum number of attempts:
    <property>
      <name>yarn.resourcemanager.am.max-attempts</name>
      <value>MAX_ATTEMPTS</value>
    </property>
  2. Restart the ResourceManager(s) to pick up the change.

(BDR) Replications can be affected by other replications or commands running at the same time

Replications can be affected by other replications or commands running at the same time, causing them to fail unexpectedly or even to be silently skipped. When this occurs, a StaleObjectException is logged in the Cloudera Manager logs. This is known to occur with as few as four replications starting at the same time.

Fixed Issues in Cloudera Manager 5.1.1

Checking "Install Java Unlimited Strength Encryption Policy Files" During Add Cluster or Add/Upgrade Host Wizard on RPM based distributions if JDK 7 or above is pre-installed will cause Cloudera Manager and CDH to fail

If you have manually installed Oracle's official JDK 7 or 8 RPM on one or more hosts, and you check the Install Java Unlimited Strength Encryption Policy Files checkbox in the Add Cluster or Add Host wizard when installing Cloudera Manager on those hosts, or when upgrading Cloudera Manager to 5.1, Cloudera Manager installs JDK 6 policy files, which prevents any Java program from running against that JDK. In addition, Cloudera Manager and CDH choose that JDK as the default to run against, so Cloudera Manager and CDH fail to start, logging the following message: Caused by: java.lang.SecurityException: The jurisdiction policy files are not signed by a trusted signer!

Workaround: Do not select the Install Java Unlimited Strength Encryption Policy Files checkbox in these wizards. Instead, download and install the policy files manually, following the instructions on Oracle's website.
  Note: To return to the default limited strength files, reinstall the original Oracle rpm:
  • yum - yum reinstall jdk
  • zypper - zypper in -f jdk
  • rpm - rpm -iv --replacepkgs filename, where filename is jdk-7u65-linux-x64.rpm or jdk-8u11-linux-x64.rpm

Fixed Issues in Cloudera Manager 5.1.0

  Important: Cloudera Manager 5.1.0 is no longer available for download from the Cloudera website or from archive.cloudera.com due to the JCE policy file issue described in the Fixed Issues in Cloudera 5.1.1 section of the Release Notes. The download URL at archive.cloudera.com for Cloudera Manager 5.1.0 now forwards to Cloudera Manager 5.1.1 for the RPM-based distributions for Linux RHEL and SLES.

(BDR) Multiple concurrent Hive replications can result in time-out errors.

Many Hive replications running at the same time can cause some of them to fail with "Read timed out" errors.

(BDR) Concurrent Hive replications can result in some indicating success without replicating the metadata

When multiple Hive replications are started at exactly the same time (typically triggered by replication schedules), one or more of them may complete successfully without actually replicating the desired Hive metadata.

Changes to the yarn.nodemanager.remote-app-log-dir property are not included in the JobHistory Server yarn-site.xml and Gateway logging file

When "Remote App Log Directory" is changed in the YARN configuration, the yarn.nodemanager.remote-app-log-dir property is not included in the JobHistory Server yarn-site.xml or the Gateway logging file.

Workaround: Set JobHistory Server Advanced Configuration Snippet for yarn-site.xml and Gateway Logging Advanced Configuration Snippet (Safety Valve) to:
<property>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>/path/to/logs</value>
</property>

Secure CDH 4.1 clusters can't have Hue and Impala share the same Hive

In a secure CDH 4.1 cluster, Hue and Impala cannot share the same Hive instance. If "Bypass Hive Metastore Server" is disabled on the Hive service, then Hue will not be able to talk to Hive. Conversely, if "Bypass Hive Metastore Server" is enabled on the Hive service, then Impala will have a validation error.

Severity: High

Workaround: Upgrade to CDH 4.2.

The command history has an option to select the number of commands, but doesn't always return the number you request

Workaround: None.

Hue doesn't support YARN ResourceManager High Availability

Workaround: Configure the Hue Server to point to the active ResourceManager:
  1. Go to the Hue service.
  2. Click the Configuration tab.
  3. Click Hue Server Default Group > Advanced.
  4. In the Hue Server Advanced Configuration Snippet for hue_safety_valve_server.ini field, add the following:
    [hadoop]
    [[ yarn_clusters ]]
    [[[default]]]
    resourcemanager_host=<hostname of active ResourceManager>
    resourcemanager_api_url=http://<hostname of active resource manager>:<web port of active resource manager>
    proxy_api_url=http://<hostname of active resource manager>:<web port of active resource manager>
    The default web port of Resource Manager is 8088.
  5. Click Save Changes to have these configurations take effect.
  6. Restart the Hue service.

Cloudera Manager does not support encrypted shuffle.

Encrypted shuffle was introduced in CDH 4.1, but it is not currently possible to enable it through Cloudera Manager.

Severity: Medium

Workaround: None.

Hive CLI does not work in CDH 4 when "Bypass Hive Metastore Server" is enabled

Hive CLI does not work in CDH 4 when "Bypass Hive Metastore Server" is enabled.

Workaround: Configure Hive and disable the "Bypass Hive Metastore Server" option.

Alternatively, you can use an approach that disables the "Hive Auxiliary JARs Directory" feature but allows basic Hive commands to work. Add the following to "Gateway Client Environment Advanced Configuration Snippet for hive-env.sh," then redeploy the Hive client configuration:

HIVE_AUX_JARS_PATH=""
AUX_CLASSPATH=/usr/share/java/mysql-connector-java.jar:/usr/share/java/oracle-connector-java.jar:$(find /usr/share/cmf/lib/postgresql-jdbc.jar 2> /dev/null | tail -n 1)

Incorrect Absolute Path to topology.py in Downloaded YARN Client Configuration

The downloaded client configuration for YARN includes the topology.py script. The location of this script is given by the net.topology.script.file.name property in core-site.xml. But the core-site.xml file downloaded with the client configuration has an incorrect absolute path to /etc/hadoop/... for topology.py. This can cause clients that run against this configuration to fail (including Spark clients run in yarn-client mode, as well as YARN clients).

Workaround: Edit core-site.xml to change the value of the net.topology.script.file.name property to the path where the downloaded copy of topology.py is located. This property must be set to an absolute path.
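
For example (the path shown is a placeholder for wherever your downloaded copy of topology.py actually resides), the corrected entry in core-site.xml would look like:

```xml
<property>
  <name>net.topology.script.file.name</name>
  <!-- Must be an absolute path to the downloaded topology.py. -->
  <value>/path/to/client-config/topology.py</value>
</property>
```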

search_bind_authentication for Hue is not included in .ini file

When search_bind_authentication is set to false, Cloudera Manager does not include it in hue.ini.

Workaround: Add the following to the Hue Service Advanced Configuration Snippet (Safety Valve) for hue_safety_valve.ini:
[desktop]
[[ldap]]
search_bind_authentication=false

Erroneous warning displayed on the HBase configuration page on CDH 4.1 in Cloudera Manager 5.0.0

An erroneous "Failed parameter validation" warning is displayed on the HBase configuration page on CDH 4.1 in Cloudera Manager 5.0.0.

Severity: Low

Workaround: Use CDH 4.2 or higher, or ignore the warning.

Host recommissioning and decommissioning should occur independently

In large clusters, when problems appear with a host or role, administrators may choose to decommission the host or role to fix it and then recommission it to put it back in production. Decommissioning, especially host decommissioning, is slow, hence the importance of parallelization: it should be possible to initiate host recommissioning before decommissioning is done.

Fixed Issues in Cloudera Manager 5.0.7

Sensitive Information in Cloudera Manager Diagnostic Support Bundles

Cloudera Manager is designed to transmit certain diagnostic data (or “bundles”) to Cloudera. These diagnostic bundles are used by the Cloudera support team to reproduce, debug, and address technical issues for our customers. Cloudera internally discovered a potential vulnerability in this feature, which could cause any sensitive data stored as “advanced configuration snippets (ACS)” (formerly called “safety valves”) to be included in diagnostic bundles and transmitted to Cloudera. Notwithstanding any possible transmission, such sensitive data is not used by Cloudera for any purpose.

Cloudera has taken the following actions: (1) modified Cloudera Manager so that it no longer transmits advanced configuration snippets containing the sensitive data, and (2) modified Cloudera Manager SSL configuration to increase the protection level of the encrypted communication.

Cloudera strives to follow and also help establish best practices for the protection of customer information. In this effort, we continually review and improve our security practices, infrastructure, and data handling policies.

Users affected:
  • Users storing sensitive data in advanced configuration snippets

Severity: High

Impact: Possible transmission of sensitive data

CVE: CVE-2015-6495

Immediate Action Required:
  • Upgrade Cloudera Manager to one of the following releases: Cloudera Manager 5.4.6, 5.3.7, 5.2.7, 5.1.6, 5.0.7, 4.8.6

Cloudera Manager Agent may become slow or get stuck when responding to commands and when sending heartbeats to Cloudera Manager Server

This issue can occur when Cloudera Navigator auditing is turned on. The auditing code reads audit logs and sends them to the Audit Server. It acquires a lock to protect the list of roles being audited. The same list is also modified by the Cloudera Manager Agent's main thread when a role is started or stopped. If the Audit thread takes too much time to send audits to the Audit Server (which can happen if there is a backlog of audit logs), it starves the main Agent thread. This causes the main Agent thread to not send heartbeats and to not respond to commands from the Cloudera Manager Server.

Fixed Issues in Cloudera Manager 5.0.6

Slow staleness calculation can lead to ZooKeeper data loss when new servers are added

In Cloudera Manager 5, starting new ZooKeeper Servers shortly after adding them can cause ZooKeeper data loss when the number of new servers exceeds the number of old servers.

Fixed Issues in Cloudera Manager 5.0.5

“POODLE” vulnerability on SSL/TLS enabled ports

The POODLE (Padding Oracle On Downgraded Legacy Encryption) attack takes advantage of a cryptographic flaw in the obsolete SSLv3 protocol, after first forcing the use of that protocol. The only solution is to disable SSLv3 entirely. This requires changes across a wide variety of components of CDH and Cloudera Manager. Cloudera Manager 5.0.5 provides these changes for Cloudera Manager 5.0.x deployments. All Cloudera Manager 5.0.x users should upgrade to 5.0.5 as soon as possible. For more information, see the Cloudera Security Bulletin.

Fixed Issues in Cloudera Manager 5.0.2

Cloudera Manager Impala Query Monitoring does not work with Impala 1.3.1

Impala 1.3.1 contains changes to the runtime profile format that break the Cloudera Manager Query Monitoring feature. This leads to exceptions in the Cloudera Manager Service Monitor logs, and Impala queries no longer appear in the Cloudera Manager UI or API. The issue affects Cloudera Manager 5.0 and 4.6 through 4.8.2.

Workaround: None. The issue will be fixed in Cloudera Manager 4.8.3 and Cloudera Manager 5.0.1. To avoid the Service Monitor exceptions, turn off the Cloudera Manager Query Monitoring feature by going to Impala Daemon > Monitoring and setting the Query Monitoring Period to 0 seconds. Note that the Impala Daemons must be restarted when changing this setting, and the setting must be restored once the fix is deployed to turn the query monitoring feature back on. Impala queries will then appear again in Cloudera Manager’s Impala query monitoring feature.

Fixed Issues in Cloudera Manager 5.0.1

If installing CDH 4 packages, the Impala 1.3.0 option does not work because Impala 1.3 is not yet released for CDH 4.

If installing CDH 4 packages, the Impala 1.3.0 option listed in the install wizard does not work because Impala 1.3.0 is not yet released for CDH 4.

Workaround: Install using parcels (where the unreleased version of Impala does not appear), or select a different version of Impala when installing with packages.

Upgrade of secure cluster requires installation of JCE policy files

When upgrading a secure cluster via Cloudera Manager, the upgrade initially fails because the JDK does not have the Java Cryptography Extension (JCE) unlimited strength policy files. This happens because Cloudera Manager installs a copy of the Java 7 JDK during the upgrade, and that copy does not include the unlimited strength policy files.

Workaround: Install the unlimited strength JCE policy files immediately after completing the Cloudera Manager Upgrade Wizard and before taking any other action in Cloudera Manager.

When updating dynamic resource pools, Cloudera Manager updates roles but may fail to update role information displayed in the UI

When updating dynamic resource pools, Cloudera Manager automatically refreshes the affected roles, but they sometimes get marked incorrectly as running with outdated configurations and requiring a refresh.

Workaround: Invoke the Refresh Cluster command from the cluster actions drop-down menu.

Reset non-default HDFS File Block Storage Location Timeout value after upgrade from CDH 4 to CDH 5

During an upgrade from CDH 4 to CDH 5, if the HDFS File Block Storage Locations Timeout was previously set to a custom value, it will now be set to 10 seconds or the custom value, whichever is higher. This is required for Impala to start in CDH 5, and any value under 10 seconds is now a validation error. This configuration is only emitted for Impala and no services should be adversely impacted.

Workaround: None.

HDFS NFS gateway works only on RHEL and similar systems

Because of a bug in native versions of portmap/rpcbind, the HDFS NFS gateway does not work on SLES, Ubuntu, or Debian systems. It does work on supported versions of RHEL-compatible systems on which rpcbind-0.2.0-10.el6 or later is installed.

Bug: 731542 (Red Hat), 823364 (SLES), 594880 (Debian)

Severity: High

Workaround:
  • On Red Hat and similar systems, make sure rpcbind-0.2.0-10.el6 or later is installed.
  • On SLES, Debian, and Ubuntu systems, you can use the gateway by running rpcbind in insecure mode, using the -i option, but keep in mind that this allows anyone from a remote host to bind to the portmap.
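
A sketch of the SLES/Debian/Ubuntu workaround follows; service names vary by distribution, and note the security trade-off:

```shell
# Example only: restart rpcbind in insecure mode so the HDFS NFS gateway can register.
# The -i flag lets remote hosts bind to the portmap -- a deliberate security trade-off.
sudo service rpcbind stop
sudo rpcbind -i
```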

Sensitive configuration values exposed in Cloudera Manager

Certain configuration values that are stored in Cloudera Manager are considered sensitive, such as database passwords. These configuration values should be inaccessible to non-administrator users, and this is enforced in the Cloudera Manager Administration Console. However, these configuration values are not redacted when they are read through the API, possibly making them accessible to users who should not have such access.

Failure of HDFS Replication between clusters with YARN

HDFS replication between clusters in different Kerberos realms fails when using YARN if the target cluster is CDH 5.

Workaround: Use MapReduce (MRv1) instead of YARN.

The Details page for MapReduce jobs displays the wrong id for YARN-based replications

The Details link for MapReduce jobs is wrong for YARN-based replications.

Workaround: Find the job id in the link and then go to the YARN Applications page and look for the job there.

Gateway role configurations not respected when deploying client configurations

Gateway configurations set for gateway role groups other than the default one or at the role level were not being respected.

Documentation reflects requirement to enable at least Level 1 encryption before enabling Kerberos authentication

The Cloudera Manager manual Configuring Hadoop Security with Cloudera Manager now indicates that before enabling Kerberos authentication you should first enable at least Level 1 encryption.

HDFS NFS gateway does not work on all Cloudera-supported platforms

The NFS gateway cannot be started on some Cloudera-supported platforms.

Workaround: None. Fixed in Cloudera Manager 5.0.1.

Replace YARN_HOME with HADOOP_YARN_HOME during upgrade

If yarn.application.classpath was set to a non-default value on a CDH 4 cluster, and that cluster is upgraded to CDH 5, the classpath is not updated to reflect that $YARN_HOME was replaced with $HADOOP_YARN_HOME. This will cause YARN jobs to fail.

Workaround: Reset yarn.application.classpath to the default, then re-apply your classpath customizations if needed.
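
For reference, the stock Hadoop 2 default (which Cloudera Manager may extend, so verify the value in your own deployment) uses $HADOOP_YARN_HOME rather than $YARN_HOME:

```xml
<property>
  <name>yarn.application.classpath</name>
  <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,$HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,$HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,$HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*</value>
</property>
```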

Insufficient password hashing in Cloudera Manager

In versions of Cloudera Manager earlier than 4.8.3 and earlier than 5.0.1, user passwords are hashed only once. Passwords should be hashed multiple times to increase the cost of dictionary-based attacks, in which an attacker tries many candidate passwords to find a match. The issue only affects user accounts that are stored in the Cloudera Manager database. User accounts that are managed externally (for example, with LDAP or Active Directory) are not affected.

In addition, because of this issue, Cloudera Manager 4.8.3 cannot be upgraded to Cloudera Manager 5.0.0. Cloudera Manager 4.8.3 must be upgraded to 5.0.1 or later.

Workaround: Upgrade to Cloudera Manager 5.0.1.

Upgrade to Cloudera Manager 5.0.0 from SLES older than Service Pack 3 with PostgreSQL older than 8.4 fails

Upgrading to Cloudera Manager 5.0.0 from SUSE Linux Enterprise Server (SLES) older than Service Pack 3 will fail if the embedded PostgreSQL database is in use and the installed version of PostgreSQL is less than 8.4.

Workaround: Either migrate away from the embedded PostgreSQL database (use MySQL or Oracle) or upgrade PostgreSQL to 8.4 or greater.

MR1 to MR2 import fails on a secure cluster

When running the MR1 to MR2 import on a secure cluster, YARN jobs will fail to find container-executor.cfg.

Workaround: Restart YARN after the import.

After upgrade from CDH 4 to CDH 5, Oozie is missing workflow extension schemas

After an upgrade from CDH 4 to CDH 5, Oozie does not pick up the new workflow extension schemas automatically. Users must update oozie.service.SchemaService.wf.ext.schemas manually to add the schemas introduced in CDH 5: shell-action-0.3.xsd, sqoop-action-0.4.xsd, distcp-action-0.2.xsd, oozie-sla-0.1.xsd, oozie-sla-0.2.xsd. Note: Existing jobs are not affected by this bug; only new workflows that require the new schemas are.

Workaround: Add the new workflow extension schemas to Oozie manually by editing oozie.service.SchemaService.wf.ext.schemas.
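
A sketch of the edited property (the leading placeholder stands in for whatever schemas your configuration already lists; only the five CDH 5 schemas above are taken from this note):

```xml
<property>
  <name>oozie.service.SchemaService.wf.ext.schemas</name>
  <!-- Keep your existing comma-separated entries and append the CDH 5 schemas. -->
  <value>...existing schemas...,shell-action-0.3.xsd,sqoop-action-0.4.xsd,distcp-action-0.2.xsd,oozie-sla-0.1.xsd,oozie-sla-0.2.xsd</value>
</property>
```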

Fixed Issues in Cloudera Manager 5.0.0

Cannot Restore a Snapshot of a deleted HBase Table

If you take a snapshot of an HBase table, and then delete that table in HBase, you will not be able to restore the snapshot.

Severity: Medium

Workaround: Use the "Restore As" command to recreate the table in HBase.

Stop dependent HBase services before enabling HDFS Automatic Failover.

When enabling HDFS Automatic Failover, you need to first stop any dependent HBase services. The Automatic Failover configuration workflow restarts both NameNodes, which could cause HBase to become unavailable.

Severity: Medium

New schema extensions have been introduced for Oozie in CDH 4.1

In CDH 4.1, Oozie introduced new versions of the shell, Hive, and Sqoop workflow action schemas. To use them, you must add the new schema extensions to the Oozie SchemaService Workflow Extension Schemas configuration property in Cloudera Manager.

Severity: Low

Workaround: In Cloudera Manager, do the following:

  1. Go to the CDH 4 Oozie service page.
  2. Go to the Configuration tab, View and Edit.
  3. Search for "Oozie Schema". This should show the Oozie SchemaService Workflow Extension Schemas property.
  4. Add the following to the Oozie SchemaService Workflow Extension Schemas property:
    shell-action-0.2.xsd
    hive-action-0.3.xsd
    sqoop-action-0.3.xsd
  5. Save these changes.

YARN Resource Scheduler defaults to FairScheduler rather than FIFO.

Cloudera Manager 5.0.0 sets the default YARN Resource Scheduler to FairScheduler. If a cluster was previously running YARN with the FIFO scheduler, it will be changed to FairScheduler the next time YARN restarts. The FairScheduler is only supported with CDH 4.2.1 and later; older clusters may hit failures and need the scheduler manually changed to FIFO or CapacityScheduler.

Severity: Medium

Workaround: For clusters running CDH 4 prior to CDH 4.2.1:
  1. Go to the YARN service Configuration page.
  2. Search for "scheduler.class".
  3. Click in the Value field and select the scheduler you want to use.
  4. Save your changes and restart YARN to update your configuration.

Resource Pools Summary is incorrect if time range is too large.

The Resource Pools Summary does not show correct information if the Time Range selector is set to show 6 hours or more.

Severity: Medium

Workaround: None.

When running the MR1 to MR2 import on a secure cluster, YARN jobs will fail to find container-executor.cfg

Workaround: Restart YARN after the import steps finish. This causes the file to be created under the YARN configuration path, and the jobs now work.

When upgrading to Cloudera Manager 5.0.0, the "Dynamic Resource Pools" page is not accessible

When upgrading to Cloudera Manager 5.0.0, users will not be able to directly access the "Dynamic Resource Pools" page. Instead, they will be presented with a dialog saying that the Fair Scheduler XML Advanced Configuration Snippet is set.

Workaround:
  1. Go to the YARN service.
  2. Click the Configuration tab.
  3. Copy the value of the Fair Scheduler XML Advanced Configuration Snippet into a file.
  4. Clear the value of Fair Scheduler XML Advanced Configuration Snippet.
  5. Recreate the desired Fair Scheduler allocations in the Dynamic Resource Pools page, using the saved file for reference.

New Cloudera Enterprise licensing is not reflected in the wizard and license page

Workaround: None.

The AWS Cloud wizard fails to install Spark due to missing roles

Workaround: Do one of the following:
  • Use the traditional install wizard
  • Open a new window, click the Spark service, click on the Instances tab, click Add, add all required roles to Spark. Once the roles are successfully added, click the Retry button on the First Run page in the wizard.

Spark on YARN requires manual configuration

Spark on YARN requires the following manual configuration to work correctly: modify the YARN Application Classpath by adding /etc/hadoop/conf, making it the very first entry.

Workaround: Add /etc/hadoop/conf as the first entry in the YARN Application classpath.

Monitoring works with Solr and Sentry only after configuration updates

Cloudera Manager monitoring does not work out of the box with Solr and Sentry on Cloudera Manager 5. The Solr service is in Bad health, and all Solr Servers have a failing "Solr Server API Liveness" health check.

Severity: Medium

Workaround: Complete the configuration steps below:

  1. Create "HTTP" user and group on all machines in the cluster (with useradd 'HTTP' on RHEL-type systems).
  2. The instructions that follow assume there is no existing Solr Sentry policy file in use. In that case, first create the policy file in /tmp and then copy it to the appropriate location in HDFS that the Solr Servers check. If a Solr Sentry policy file is already in use, modify it instead to add the following [groups] and [roles] entries for 'HTTP'. Create a file (for example, /tmp/cm-authz-solr-sentry-policy.ini) with the following contents:
    [groups]
    HTTP = HTTP
    [roles]
    HTTP = collection=admin->action=query
  3. Copy this file to the location for the "Sentry Global Policy File" for Solr. The associated configuration property is sentry.solr.provider.resource; you can see its current value under the Sentry sub-category of the Service-Wide configuration for Solr in the Cloudera Manager UI. The default value, /user/solr/sentry/sentry-provider.ini, refers to a path in HDFS.
  4. Check whether the parent directories exist in HDFS:
    sudo -u hdfs hadoop fs -ls /user
  5. You may need to create the appropriate parent directories if they are not present. For example:
    sudo -u hdfs hadoop fs -mkdir /user/solr/sentry
  6. After ensuring the parent directory is present, copy the file created in step 2 to this location, as follows:
    sudo -u hdfs hadoop fs -put /tmp/cm-authz-solr-sentry-policy.ini /user/solr/sentry/sentry-provider.ini
  7. Ensure that this file is owned/readable by the solr user (this is what the Solr Server runs as):
    sudo -u hdfs hadoop fs -chown solr /user/solr/sentry/sentry-provider.ini
  8. Restart the Solr service. If both Kerberos and Sentry are being enabled for Solr, the Cloudera Management Service roles also need to be restarted. The Solr Server liveness health checks should clear once the Service Monitor has had a chance to contact the servers and retrieve metrics.
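
The policy file from step 2 can also be generated programmatically before copying it into HDFS. A minimal Python sketch (the group and role names match the steps above; the output path is the example path from step 2):

```python
def solr_sentry_policy(group: str = "HTTP", role: str = "HTTP") -> str:
    """Build the minimal Sentry policy file contents from step 2,
    mapping the HTTP group to a role with query access on the admin collection."""
    return "\n".join([
        "[groups]",
        f"{group} = {role}",
        "[roles]",
        f"{role} = collection=admin->action=query",
        "",
    ])

# Write the file where step 2 expects it, ready for the hadoop fs -put in step 6
with open("/tmp/cm-authz-solr-sentry-policy.ini", "w") as f:
    f.write(solr_sentry_policy())
```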

Out-of-memory errors may occur when using the Reports Manager

Out-of-memory errors may occur when using the Cloudera Manager Reports Manager.

Workaround: Set the value of the "Java Heap Size of Reports Manager" property to at least the size of the HDFS filesystem image (fsimage) and restart the Reports Manager.
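
As a rough illustration of this sizing rule, the sketch below computes a suggested heap from the fsimage size; the 1 GiB floor is an arbitrary example for illustration, not Cloudera guidance:

```python
def suggested_heap_bytes(fsimage_bytes: int, floor_bytes: int = 1 << 30) -> int:
    """The Reports Manager heap should be at least as large as the
    HDFS fsimage; apply a minimum floor for small filesystems."""
    return max(fsimage_bytes, floor_bytes)

# A 3 GiB fsimage calls for at least a 3 GiB heap
print(suggested_heap_bytes(3 << 30))
```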

Applying license key using Internet Explorer 9 and Safari fails

Cloudera Manager is designed to work with Internet Explorer 9 and above and with Safari. However, the file upload widget used to upload a license does not currently work with IE 9 or Safari, so installing an enterprise license fails.

Workaround: Use another supported browser.

Fixed Issues in Cloudera Manager 5.0.0 Beta 2

The Sqoop Upgrade command in Cloudera Manager may report success even when the upgrade fails

Workaround: Do one of the following, then verify that the upgrade did not fail:
  • Click the Sqoop service and then the Instances tab. Click the Sqoop server role, then the Commands tab, and click the stdout link for the Sqoop Upgrade command.
  • In the All Recent Commands page, click the stdout link for the latest Sqoop Upgrade command.

The HDFS Canary Test is disabled for secured CDH 5 services

Due to a bug in Hadoop's handling of multiple RPC clients with distinct configurations within a single process when Kerberos security is enabled, Cloudera Manager disables the HDFS canary test when security is enabled, to prevent interference with Cloudera Manager's MapReduce monitoring functionality.

Severity: Medium

Workaround: None

Not all monitoring configurations are migrated from MR1 to MR2

When MapReduce v1 configurations are imported for use by YARN (MR2), not all of the monitoring configuration values are currently migrated. Users may need to reconfigure custom values for properties such as thresholds.

Severity: Medium

Workaround: Manually reconfigure any missing property values.

"Access Denied" may appear for some features after adding a license or starting a trial.

After starting a 60-day trial or installing a license for Enterprise Edition, you may see an "access denied" message when attempting to access certain Enterprise Edition-only features such as the Reports Manager. You need to log out of the Admin Console and log back in to access these features.

Severity: Low

Workaround: Log out of the Admin Console and log in again.

Hue must set impersonation on when using Impala with impersonation

When using Impala with impersonation, the impersonation_enabled flag must be present and configured in the hue.ini file. If impersonation is enabled in Impala (that is, Impala is using Sentry), this flag must be set to true. If Impala is not using impersonation, it should be set to false (the default).

Workaround: Set advanced configuration snippet value for hue.ini as follows:
  1. Go to the Hue Service Configuration Advanced Configuration Snippet for hue_safety_valve.ini under the Hue service Configuration settings, Service-Wide > Advanced category.
  2. Add the following, then uncomment the setting and set the value True or False as appropriate:
    #################################################################
    # Settings to configure Impala
    #################################################################
    
    [impala]
      ....
      # Turn on/off impersonation mechanism when talking to Impala
      ## impersonation_enabled=False

Cloudera Manager Server may fail to start when upgrading using a PostgreSQL database

If you are upgrading to Cloudera Manager 5.0.0 Beta 1 and are using a PostgreSQL database, the Cloudera Manager Server may fail to start with a message similar to the following:
ERROR [main:dbutil.JavaRunner@57] Exception while executing
com.cloudera.cmf.model.migration.MigrateConfigRevisions
java.lang.RuntimeException: java.sql.SQLException: Batch entry <xxx> insert into REVISIONS
(REVISION_ID, OPTIMISTIC_LOCK_VERSION, USER_ID, TIMESTAMP, MESSAGE) values (...)
was aborted. Call getNextException to see the cause.
Workaround: Use psql to connect directly to the server's database and issue the following SQL command:
alter table REVISIONS alter column MESSAGE type varchar(1048576);
After that, the Cloudera Manager Server should start normally.

Fixed Issues in Cloudera Manager 5.0.0 Beta 1

After an upgrade from Cloudera Manager 4.6.3 to 4.7, Impala does not start

After an upgrade from Cloudera Manager 4.6.3 to 4.7 when Cloudera Navigator is used, Impala fails to start because the upgrade procedure does not set the Audit Log Directory property.

Severity: Low.

Workaround: Manually set the property to /var/log/impalad/audit. See the Service Auditing Properties section of the Cloudera Navigator Installation and User Guide for more information.

Page generated September 3, 2015.