Known Issues and Workarounds in Cloudera Director

The following sections describe the current known issues in Cloudera Director.

Cloudera Director Does Not Recognize Cloudera Manager Password Changes

Cloudera Director does not recognize changes in the admin password in Cloudera Manager unless the username associated with the new password is also changed.

Workaround: To update Cloudera Director with a new password for Cloudera Manager, perform the following steps:
  1. Change the password for admin in Cloudera Manager.
  2. Create a new user in Cloudera Manager with Full Administrator privileges.
  3. Change Cloudera Director's credentials to this new user, either with the update-deployment.py script or with the Update Cloudera Manager Credentials command in the Add Cluster dropdown menu on the deployment's page in the Cloudera Director UI. You can leave Cloudera Director configured to use this new user, or change the Cloudera Director credentials back to admin with the new password.

Cloudera Director resize script cannot resize XFS partitions

Cloudera Director is unable to resize XFS partitions, so bootstrap fails when it creates an instance from an image that uses the XFS filesystem.

Workaround: Use an image with an ext filesystem such as ext2, ext3, or ext4.
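
The filesystem comes from the image named in the instance template of the configuration file. The following is a minimal sketch of an AWS instance template; the template name and AMI ID are placeholders, and the point is to select an image whose root filesystem is ext2, ext3, or ext4:

  instances {
      workers {
          type: m4.xlarge
          # Placeholder AMI ID; choose an image formatted with ext2, ext3, or
          # ext4 so the Cloudera Director resize script can grow the partition.
          image: ami-12345678
      }
  }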

Incorrect yum repo definitions for Google Compute Engine RHEL images

The default RHEL 6 image defined in director-google-plugin version 1.0.1 and lower has an incorrect yum repo definition. This causes yum commands to fail after yum caches are cleared. See the Google Compute Engine issue tracker for issue details.

Workaround: Use the image rhel-6-20160119 or higher.
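
As a sketch, the fixed image can be named in the instance template section of the configuration file for the Google Cloud Platform plugin. The template name and machine type below are placeholders, and depending on the plugin version the image may need to be referenced through an image alias rather than by name:

  instances {
      rhel6 {
          type: n1-standard-4
          # Use rhel-6-20160119 or a later image so the yum repo
          # definitions are correct.
          image: rhel-6-20160119
      }
  }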

Cloudera Director does not set up external databases for Sqoop2

Cloudera Director cannot set up external databases for Sqoop2.

Workaround: Set up databases for this service as described in Cloudera Manager and Managed Service Databases.

Long version string required for Kafka

Kafka requires a nonintuitive version string to be specified in the configuration file or UI.

Workaround: In the cluster configuration section of the configuration file, or in the UI, specify the Kafka version using the product version string rather than the Kafka release number. For example, to deploy Kafka 1.4 in a cluster, specify 0.8.2.0-1.kafka1.4 or 0.8, instead of 1.4.
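
For example, the products block of the cluster configuration section might look like the following sketch when deploying Kafka 1.4 alongside CDH (the surrounding cluster settings are omitted):

  cluster {
      products {
          CDH: 5
          # Use the long product version string, or the short form "0.8";
          # specifying "1.4" here does not work.
          KAFKA: "0.8.2.0-1.kafka1.4"
      }
      # ... remaining cluster settings (parcelRepositories, services, and so on)
  }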

Metrics not displayed for clusters deployed in Cloudera Manager 5.4 and earlier

Clusters deployed in Cloudera Manager version 5.4 and lower might not have metrics displayed in the UI if they share a name with previously deleted clusters.

Workaround: Use Cloudera Manager 5.5 or higher.

Modifying a cluster can leave some roles marked as stale in Cloudera Manager

When growing or shrinking a cluster, you have the option of restarting the cluster. The restart operation should only restart roles that are marked stale by Cloudera Manager—that is, roles that need to be restarted. This prevents unnecessary cluster downtime. However, with Cloudera Manager 5.5.x and lower, some stale roles might not be restarted, even if you select the Restart Cluster option.

Workaround: Go to Cloudera Manager, select the roles marked as stale, and restart them. This will be fixed in a future release.

Validation error after initial setup with high availability

When you set up HDFS high availability using Cloudera Director, the secondary NameNode is not configured, because it is not required for high availability. Because of a Cloudera Manager bug, the absence of a secondary NameNode causes a spurious validation error to appear in Cloudera Manager under HDFS > Configuration > HDFS Checkpoint Directories.

Workaround: Update the field with a value for the checkpoint directory—for example, /data/dfs/snn (the value isn't important, because it is not used)—and save.

Default memory autoconfiguration for monitoring services may be suboptimal

Depending on the size of your cluster and your instance types, you may need to manually increase the memory limits for the Host Monitor and Service Monitor. Cloudera Manager displays a configuration validation warning or error if the memory limits are insufficient.

Workaround: Override firehose_heapsize for the HOSTMONITOR and SERVICEMONITOR roles with a larger value in bytes (for example, 536900000 for ~512 MB). Cloudera also recommends using instances with at least 15 GB of memory for management roles (30 GB recommended).
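
For example, the overrides can be placed in the configs block of the cloudera-manager section of the configuration file. This is a minimal sketch, assuming the configs block accepts the HOSTMONITOR and SERVICEMONITOR role types; adjust the values to suit your cluster size:

  cloudera-manager {
      # ... instance and other deployment settings
      configs {
          HOSTMONITOR {
              firehose_heapsize: 536900000   # ~512 MB, in bytes
          }
          SERVICEMONITOR {
              firehose_heapsize: 536900000   # ~512 MB, in bytes
          }
      }
  }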

Changes to Cloudera Manager username and password must also be made in Cloudera Director

If the Cloudera Manager username and password are changed directly in Cloudera Manager, Cloudera Director can no longer add new instances or authenticate with Cloudera Manager. Username and password changes must be implemented in Cloudera Director as well.

Workaround: Use the Cloudera Director UI to update the Cloudera Manager username and password.

Cloudera Director does not sync with cluster changes made in Cloudera Manager

Modifying a cluster in Cloudera Manager after it is bootstrapped does not cause the cluster state to be synchronized with Cloudera Director. Services that have been added or removed in Cloudera Manager are not reflected in Cloudera Director when growing the cluster.

Workaround: None.

Cloudera Director may use AWS credentials from the Cloudera Director server instance

Cloudera Director Server uses the AWS credentials from a configured Environment, as defined in a client configuration file or through the Cloudera Director UI. If the Environment is not configured with credentials in Cloudera Director, the Cloudera Director server instead uses the AWS credentials that are configured on the instance on which the Cloudera Director server is running. When those credentials differ from the intended ones, EC2 instances may be allocated under unexpected accounts. Ensure that the Cloudera Director server instance is not configured with AWS credentials.

Severity: Medium

Workaround: Ensure that the Cloudera Director Environment has correct values for the keys. Alternatively, use IAM profiles for the Cloudera Director server instance.
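
For example, in a client configuration file the environment's keys belong in the provider section. The values below are placeholders; with the keys set explicitly, Cloudera Director does not fall back to credentials found on its own instance:

  provider {
      type: aws
      region: us-east-1
      # Placeholder values; supply the keys for the intended account, or omit
      # them and rely on an IAM profile attached to the Cloudera Director
      # server instance.
      accessKeyId: AKIAXXXXXXXXXXXXXXXX
      secretAccessKey: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  }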

Root partition resize fails on CentOS 6.5 (HVM)

Cloudera Director cannot resize the root partition on CentOS 6.5 HVM AMIs. This is caused by a bug in the AMIs. For more information, see the CentOS Bug Tracker.

Workaround: None.

Clusters that are terminated while bootstrapping must be terminated twice for the instances to be terminated

Terminating a cluster that is bootstrapping stops ongoing processes but keeps the cluster in the bootstrapping phase.

Severity: Low

Workaround: To transition the cluster to the Terminated phase, terminate the cluster again.

When using RDS and MySQL, Hive Metastore canary may fail in Cloudera Manager

If you include Hive in your clusters and configure the Hive metastore to be installed on MySQL, Cloudera Manager may report, "The Hive Metastore canary failed to create a database." This is caused by a bug in MySQL 5.6.5 or higher that is exposed when MySQL is used with version 5.1.19 or lower of the MySQL JDBC driver (the driver used by Cloudera Director). For information on the MySQL bug, see the MySQL bug description.

Workaround: If the driver version installed by Cloudera Director from your platform's software repositories is 5.1.19 or lower, select a MySQL version lower than 5.6.5, which does not have this bug.
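
As a rough sketch, the MySQL engine version for an external database server on RDS is selected where the database server is defined in the configuration file. The field names below, in particular engineVersion, are illustrative and may differ in your Cloudera Director release; the point is to choose a MySQL version that does not have the bug:

  databaseServers {
      hivemetastoredb {
          type: mysql
          user: root
          password: a-placeholder-password
          # Illustrative field: pick a MySQL engine version that does not
          # expose the Hive Metastore canary bug with older JDBC drivers.
          engineVersion: a-placeholder-version
      }
  }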