Cloudbreak Release Notes

Known issues

Cloudbreak 2.9.0 includes the following known issues:

Note
As of December 31, 2021, Cloudbreak reached end of support. For more information, see Support lifecycle policy. Cloudera recommends that you migrate your workloads to CDP Public Cloud.

Known issues: Cloudbreak

Issue: BUG-114632
Description: If you started your cluster on Azure with Cloudbreak version 2.8.0, 2.7.2, or 2.4.3 or earlier, instances in any host group sized 0 or 1 nodes were not placed in an availability set and received no rack information other than 'Default-rack'.
Workaround: If rack information for the affected host group is important to you, terminate the affected cluster and relaunch it with Cloudbreak 2.9.0.

Issue: BUG-116919
Description: When defining network security group rules on Google Cloud, it is possible to specify an inverted port range such as "5555-3333", causing cluster deployment to fail with an error similar to:

Infrastructure creation failed.
Reason: Invalid value for field 'resource.allowed[8].ports[0]': '5555-3333'.
Second port cannot be smaller than the first port.

Workaround: When defining network security group rules on Google Cloud, make sure the first port of each range is not greater than the second, as in the fragment sketched below.

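A valid rule might look like the following fragment of a CLI cluster template. This is only an illustrative sketch: the exact field names depend on the Cloudbreak CLI JSON schema of your version, so verify them against a template generated from your own Cloudbreak instance.

  "securityGroup": {
    "securityRules": [
      {
        "subnet": "0.0.0.0/0",
        "ports": "3333-5555",
        "protocol": "tcp"
      }
    ]
  }
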
Issue: BUG-117004
Description: When network security rules are defined with the ICMP protocol during cluster creation on Azure, cluster creation fails with an error similar to:

Infrastructure creation failed.
Reason: Stack provisioning failed,
status code InvalidTemplateDeployment,
error message The template deployment is not valid according to the validation procedure.
See inner errors for details. Please see https://aka.ms/arm-deploy for usage details,
details: Security rule has invalid Protocol. Clues provided: Icmp Allowed values: Tcp,Udp

Workaround: When defining network security rules during cluster creation on Azure, do not use the ICMP protocol.

Issue: BUG-117005
Description: When network security rules are defined via CLI with the ICMP protocol and a port during cluster creation on Google Cloud, cluster creation fails with an error similar to:

Infrastructure creation failed.
Reason: Invalid value for field 'resource.allowed[6].ports[0]': '43543'.
Ports may only be specified on rules whose protocol is one of [TCP, UDP, SCTP].

This happens because no ports may be specified when the ICMP protocol is used. The UI enforces this automatically, but the CLI allows a port to be specified together with the ICMP protocol.
Workaround: When defining network security rules via CLI during cluster creation on Google Cloud, do not specify any ports in rules that use the ICMP protocol, as sketched below.

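An ICMP rule in the CLI JSON should omit the ports field entirely, along the lines of the following fragment (field names are again indicative only; check them against a template exported from your Cloudbreak instance):

  {
    "subnet": "0.0.0.0/0",
    "protocol": "icmp"
  }
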
Issue: BUG-110998
Description: When creating a cluster, the Cloud Storage page of the create cluster wizard includes the option "Path to Ranger Audit Logs for Hive Property" when "Configure Storage Locations" is enabled. This option should only be available for data lakes, not for workload clusters.
Workaround: Select "Do not configure" for this option.

Issue: BUG-99581
Description: The Event History in the Cloudbreak web UI displays the following message:

Manual recovery is needed for the following failed nodes: []

This message is displayed when the Ambari agent does not send a heartbeat and Cloudbreak therefore considers the host unhealthy. However, if all services are green and healthy in the Ambari web UI, the status displayed by Cloudbreak is likely incorrect.
Workaround: If all services are green and healthy in the Ambari web UI, syncing the cluster should fix the problem (a command sketch follows).

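Assuming you use the Cloudbreak CLI, the sync can be triggered along the following lines (the command follows the Cloudbreak 2.x CLI conventions; verify it with cb cluster sync --help):

  # "my-cluster" is a placeholder for your cluster name
  cb cluster sync --name my-cluster
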
Issue: BUG-110999
Description: The auto-import of HDP/HDF images on OpenStack does not work, so HDP and HDF clusters cannot be created on OpenStack until the required images are available.
Workaround: Your OpenStack admin must import these images manually by using the instructions in Import HDP and HDF images to OpenStack.

Issue: BUG-112787
Description: When a cluster with the same name as specified in the CLI JSON already exists, the CLI returns:

ERROR: status code: 403, message: Access is denied.

Workaround: To avoid this error, pass the cluster name as a parameter with cb cluster create instead of including the cluster name in the CLI JSON definition, for example as sketched below.

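A minimal sketch, assuming your JSON definition is saved as cluster.json and omits the cluster name (the flag names follow the Cloudbreak 2.x CLI; verify them with cb cluster create --help):

  # "my-unique-cluster" is a placeholder for a name not yet in use
  cb cluster create --cli-input-json cluster.json --name my-unique-cluster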

Known issues: HDP

The known issues described here were discovered when testing Cloudbreak with HDP versions that are used by default in Cloudbreak. For general HDP known issues, refer to HDP release notes published at https://docs.hortonworks.com/.

There are no known issues related to HDP.

Known issues: HDF

The known issues described here were discovered when testing Cloudbreak with HDF versions that are used by default in Cloudbreak. For general HDF known issues, refer to HDF release notes published at https://docs.hortonworks.com/.

Issue: BUG-98865
Description: Configuration parameters set in the blueprint are not applied when scaling an HDF cluster. One example that affects all NiFi users is that after an HDF cluster upscale or downscale, the nifi.web.proxy.host blueprint parameter does not get updated to include the new hosts, and as a result the NiFi UI is not reachable from these hosts.
Workaround: After scaling, manually update the nifi.web.proxy.host property so that it lists every NiFi host in the cluster, in the format:

HOST1-IP:PORT,HOST2-IP:PORT,HOST3-IP:PORT

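One way to apply the update is through Ambari, either in the Ambari web UI (NiFi service configuration) or with Ambari's bundled configs.py script. The following is a sketch only: the script path and flags vary by Ambari version, the nifi-properties config type name is an assumption for HDF's Ambari management pack, and the credentials and host names are placeholders.

  # Placeholders: admin credentials, AMBARI-HOST, CLUSTER-NAME, host list
  /var/lib/ambari-server/resources/scripts/configs.py \
    -u admin -p admin -s http -l AMBARI-HOST -t 8080 \
    -n CLUSTER-NAME -a set -c nifi-properties \
    -k nifi.web.proxy.host -v "HOST1-IP:PORT,HOST2-IP:PORT,HOST3-IP:PORT"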

Known issues: Data lake

Issue: BUG-109369
Description: Hive does not start on an HDP 2.6 data lake when Kerberos is enabled.
Workaround:
  1. Modify /etc/hadoop/<Ambari-version>/0/core-site.xml and /etc/hadoop/conf.backup/core-site.xml by adding the following:
    <configuration>
     <property>
      <name>hadoop.security.authentication</name>
      <value>kerberos</value>
     </property>
    </configuration>
  2. Restart affected services.
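
To confirm the property is present, you can check the modified files, for example:

  # A simple verification sketch; the paths are the ones listed in step 1
  grep -A 1 "hadoop.security.authentication" /etc/hadoop/conf.backup/core-site.xml
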
Issue: BUG-116913, BUG-114150
Description: HiveServer2 does not start on an HDP 3.1 cluster attached to a data lake. The following error is printed to the Ambari logs:

ERROR client.ServiceClient:
Error on destroy 'llap0': not found.
Failed: org.apache.hadoop.security.AccessControlException:
/user/hive/.yarn/package/LLAP (is not a directory)

Workaround:
  1. Delete the "/user/hive/.yarn/package/LLAP" file, and then create a new directory in this location with the relevant permissions for the hive user (a sketch follows).
  2. Start HiveServer2.
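
On a Kerberos-enabled cluster, run these commands as a user with HDFS superuser rights. A minimal sketch; the hive:hadoop ownership is an assumption, so adjust the owner, group, and permissions to your environment:

  # Remove the file that blocks LLAP, then recreate the path as a directory
  hdfs dfs -rm /user/hive/.yarn/package/LLAP
  hdfs dfs -mkdir /user/hive/.yarn/package/LLAP
  # Ownership for the hive user is required; the hadoop group is an assumption
  hdfs dfs -chown hive:hadoop /user/hive/.yarn/package/LLAP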