Known Issues and Workarounds in Cloudera Director 1

The following sections describe the current known issues in Cloudera Director 1.

Default Memory Autoconfiguration for Monitoring Services May Be Suboptimal

Depending on the size of your cluster and your choice of instance types, you may need to manually increase the memory limits for the Host Monitor and Service Monitor. Cloudera Manager displays a configuration validation warning or error if the memory limits are insufficient.

Workaround: Override firehose_heapsize for the HOSTMONITOR and SERVICEMONITOR role types with a larger value in bytes (for example, 536900000 for approximately 512 MB). Cloudera also recommends using instances with a minimum of 15 GB of memory (30 GB preferred) for management roles.
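As an illustration only, the override might look like the following in a Cloudera Director configuration file. This is a sketch, not a verified template: it assumes the cloudera-manager block of your deployment template accepts per-role-type configs sections named after the management role types, so verify the layout against your own configuration file before using it.

```hocon
cloudera-manager {
    # ... instance, licensing, and other deployment settings ...
    configs {
        # Hypothetical example: raise both monitor heaps to ~512 MB.
        # firehose_heapsize values are in bytes.
        HOSTMONITOR {
            firehose_heapsize: 536900000
        }
        SERVICEMONITOR {
            firehose_heapsize: 536900000
        }
    }
}
```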

Changes to Cloudera Manager Username and Password Must Also Be Made in Cloudera Director

If the Cloudera Manager username and password are changed directly in Cloudera Manager, Cloudera Director can no longer add new instances or authenticate with Cloudera Manager. Username and password changes must be implemented in Cloudera Director as well.

Workaround: Use the update-deployment.py script to update the Cloudera Manager credentials in Cloudera Director:
$ wget https://raw.githubusercontent.com/cloudera/director-scripts/master/util/update-deployment.py
$ sudo pip install cloudera-director-python-client
$ python update-deployment.py --admin-username admin --admin-password admin \
    --server "http://<director_server_host>:7189" \
    --environment <environment_name> --deployment <deployment_name> \
    --deployment-password newPassword

In the example, the deployment username, admin, is not changed. The password is changed to newPassword.

Cloning and Growing a Kerberos-Enabled Cluster Fails

Cloning a cluster that uses Kerberos authentication fails, whether the cluster is cloned manually or with the kerberize-cluster.py script. Growing a cluster that uses Kerberos authentication also fails.

Workaround: None.

Cloudera Director Does Not Sync With Cluster Changes Made in Cloudera Manager

Modifying a cluster in Cloudera Manager after it is bootstrapped does not cause the cluster's state to be synchronized with Cloudera Director. Services that have been added or removed in Cloudera Manager do not appear in Cloudera Director when you grow the cluster.

Workaround: None.

Kafka With a Cloudera Manager Version of 5.4.x or Lower Causes Failure

Kafka installed with Cloudera Manager version 5.4.x or lower causes the Cloudera Manager first-run wizard, and therefore the bootstrap process, to fail unless you override the broker_max_heap_size configuration setting.

Workaround: Override broker_max_heap_size by setting it to at least 256 MB.
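As a sketch of what this override might look like in a Director cluster template, assuming an instance group named brokers and the KAFKA service and KAFKA_BROKER role type names from the Kafka CSD (verify both against your own template):

```hocon
brokers {
    count: 3
    # ... instance settings ...
    roles {
        KAFKA: [KAFKA_BROKER]
    }
    configs {
        KAFKA {
            KAFKA_BROKER {
                # Hypothetical example: broker_max_heap_size is specified
                # in MB; 256 is the suggested minimum.
                broker_max_heap_size: 256
            }
        }
    }
}
```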

Cloudera Director May Use AWS Credentials From the Cloudera Director Server Instance

The Cloudera Director server uses the AWS credentials from a configured Environment, as defined in a client configuration file or through the Cloudera Director UI. If the Environment is not configured with credentials, the server instead falls back to the AWS credentials configured on the instance on which it runs. When those credentials differ from the intended ones, EC2 instances may be allocated under unexpected accounts. Ensure that the Cloudera Director server instance is not itself configured with AWS credentials.

Severity: Medium

Workaround: Ensure that the Cloudera Director Environment has correct values for the keys. Alternatively, use IAM profiles for the Cloudera Director server instance.
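To spot-check an instance for stray credentials, the following minimal sketch (not a Cloudera tool) inspects two common sources in the AWS SDK default credential chain, the credential environment variables and the shared credentials file; a complete check would also cover Java system properties and any other SDK-specific locations in use on your system:

```shell
#!/bin/sh
# Sketch: warn when static AWS credentials are present on the Director
# server instance. Checks credential sources that the AWS SDK default
# chain consults before falling back to an IAM instance profile.
check_aws_creds() {
  # $1 (optional): path of the shared credentials file to inspect;
  # defaults to ~/.aws/credentials.
  creds_file="${1:-$HOME/.aws/credentials}"
  found=0
  if [ -n "$AWS_ACCESS_KEY_ID" ] || [ -n "$AWS_SECRET_ACCESS_KEY" ]; then
    echo "warning: AWS credential environment variables are set"
    found=1
  fi
  if [ -f "$creds_file" ]; then
    echo "warning: static credentials file $creds_file exists"
    found=1
  fi
  if [ "$found" -eq 0 ]; then
    echo "ok: no static AWS credentials found"
  fi
  return "$found"
}
```

If the check reports warnings, remove the offending credentials from the instance and instead configure the Environment explicitly in Cloudera Director or attach an IAM instance profile.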

Root Partition Resize Fails on CentOS 6.5 (HVM)

Cloudera Director cannot resize the root partition on CentOS 6.5 HVM AMIs because of a bug in those AMIs. For more information, see the CentOS Bug Tracker.

Workaround: None.

Cloudera Director Does Not Set Up External Databases for Oozie, Hue, and Sqoop2

Cloudera Director cannot set up external databases for Oozie, Hue, and Sqoop2.

Workaround: Set up the databases for these services as described in Cloudera Manager and Managed Service Databases. Provide the database properties such as host address and username to Cloudera Director in the relevant Oozie service configuration section.
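For illustration, the Oozie database properties could be supplied in the cluster template roughly as follows. This is a sketch under assumptions: the oozie_database_* keys are Cloudera Manager configuration names for the Oozie Server role (verify them against your Cloudera Manager version), the instance group name is arbitrary, and the host and credentials shown are placeholders:

```hocon
masters {
    # Instance group that runs the Oozie server.
    roles {
        OOZIE: [OOZIE_SERVER]
    }
    configs {
        OOZIE {
            OOZIE_SERVER {
                # Placeholder values for an external database set up
                # in advance, per Cloudera Manager and Managed Service
                # Databases.
                oozie_database_type: "mysql"
                oozie_database_host: "db.example.com:3306"
                oozie_database_name: "oozie"
                oozie_database_user: "oozie"
                oozie_database_password: "oozie_password"
            }
        }
    }
}
```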

Clusters That Are Bootstrapping Must Be Terminated Twice for the Instances to Be Terminated

Terminating a cluster that is bootstrapping stops ongoing processes but keeps the cluster in the bootstrapping phase.

Severity: Low

Workaround: To transition the cluster to the Terminated phase, terminate the cluster again.

When Using RDS and MySQL, Hive Metastore Canary May Fail in Cloudera Manager

If you include Hive in your clusters and configure the Hive metastore to be installed on MySQL, Cloudera Manager may report, "The Hive Metastore canary failed to create a database." This is caused by a MySQL bug in MySQL 5.6.5 or later that is exposed when used with the MySQL JDBC driver (used by Cloudera Director) version 5.1.19 or earlier. For information on the MySQL bug, see the MySQL bug description.

Workaround: If the driver installed by Cloudera Director from your platform's software repositories is version 5.1.19 or earlier, select a MySQL version older than 5.6.5, which does not have this bug.