Release notes
2.7.0
Cloudbreak 2.7.0 is a general availability release, which is suitable for production deployments.
New features
Launching Cloudbreak from Templates
Cloudbreak 2.7.0 introduces a new way to launch Cloudbreak from cloud provider templates on AWS and Google Cloud; on Azure, this option was previously available. These are quickstart options and are not suitable for production. To launch Cloudbreak by using these quickstart options, refer to Quickstart on AWS, Quickstart on Azure, and Quickstart on GCP. To review current Cloudbreak deployment options (quickstart and production), refer to Deployment options.
Protected Gateway Powered by Apache Knox
To access HDP cluster resources, a gateway powered by Apache Knox is configured. When creating a cluster, you can optionally instruct Cloudbreak to install and configure this gateway to protect access to the cluster resources. By default, transport layer security on the gateway endpoint is via a self-signed SSL certificate on port 8443. By default, Ambari is proxied through the gateway. For more information, refer to Configuring a Gateway.
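As an illustration (a sketch only; the path segments shown are placeholders and depend on your gateway configuration), the proxied Ambari UI is typically reached through a Knox-style URL such as:
https://<cluster-ip>:8443/<gateway-path>/<topology-name>/ambari/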
Microsoft Azure ADLS and WASB Cloud Storage
When creating a cluster on Azure, you can configure access to ADLS and WASB from the Cloud Storage page of the advanced create cluster wizard. For more information, refer to Access data in ADLS and Access data in WASB.
Google GCS Cloud Storage
When creating a cluster on Google Cloud, you can configure access to Google Cloud Storage from the Cloud Storage page of the advanced create cluster wizard. Authentication with GCS is via a service account. For more information, refer to Access data in GCS.
Base Cloud Storage Locations
After configuring access to S3, ADLS, WASB, or GCS, you can optionally use these locations as a base storage location, primarily for the Hive Warehouse Directory. For more information, refer to the instructions for configuring storage locations on Amazon S3, ADLS, WASB, and GCS.
Custom Properties
Cloudbreak allows you to add custom property variables (using mustache template syntax) in your blueprint for replacement, and to then set the values of these custom properties on a per-cluster basis. For more information, refer to Custom properties.
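For example (a minimal sketch; the property and variable names below are placeholders rather than part of any built-in blueprint), a blueprint configuration value can be written as a mustache placeholder:
"my.custom.property": "{{{ my.custom.property.value }}}"
The value for my.custom.property.value is then supplied for each cluster at creation time.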
Dynamic Blueprints
Cloudbreak allows you to create special "dynamic" blueprints which include templating: the values of the variables specified in the blueprint are dynamically replaced in the cluster creation phase, picking up the parameter values that you provided in the Cloudbreak UI or CLI.
Dynamic blueprints offer the ability to manage external sources (such as RDBMS and LDAP/AD) outside of your blueprint.
For more information, refer to Dynamic Blueprints and Creating dynamic blueprints.
HDF Clusters
Cloudbreak introduces the ability to create HDF flow management clusters with Apache NiFi and NiFi Registry, as well as HDF messaging clusters with Apache Kafka. To help you get started, Cloudbreak provides two new built-in blueprints:
- Flow Management: Apache NiFi
- HDF Messaging Management: Apache Kafka
For more information, refer to Creating HDF clusters.
External Databases for Cluster Services
You can register an existing external RDBMS in the Cloudbreak UI or CLI so that it can be used for those cluster components which have support for it. After the RDBMS has been registered with Cloudbreak, it is available during cluster creation and can be reused with multiple clusters. For more information, refer to Register an External Database.
External Authentication Source (LDAP/AD) for Clusters
You can configure an existing LDAP/AD authentication source in the Cloudbreak UI or CLI so that it can later be associated with one or more Cloudbreak-managed clusters. After the authentication source has been registered with Cloudbreak, it is available during cluster creation and can be reused with multiple clusters. For more information, refer to Register an Authentication Source.
Using an Existing LDAP/AD for Cloudbreak
You can configure Cloudbreak to use your existing LDAP/AD so that you can authenticate Cloudbreak users against an existing LDAP/AD server. For more information, refer to Configuring Cloudbreak for LDAP/AD Authentication.
Launching Cloudbreak in Environments with Restricted Internet Access or Required Use of Proxy
You can launch Cloudbreak in environments with limited or restricted internet access and/or required use of a proxy to obtain internet access. For more information, refer to Configure Outbound Internet Access and Proxy.
Management Packs
Cloudbreak 2.7 introduces support for using management packs, allowing you to register them in the Cloudbreak web UI or CLI and then select them for installation as part of cluster creation. For more information, refer to Using management packs.
Modifying Existing Cloudbreak Credentials
Cloudbreak allows you to modify existing credentials by using the edit option available in the Cloudbreak UI or by using the credential modify command in the CLI. For more information, refer to Modify an Existing Credential.
Retrying Failed Clusters
When stack provisioning or cluster creation fails, the new "retry" UI option allows you to resume the process from the last failed step. A corresponding cb cluster retry CLI command has been introduced. For more information, refer to Retry a cluster and the CLI documentation.
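For example (a sketch; the cluster name is a placeholder and the flag syntax may differ slightly between CLI versions), retrying a failed cluster from the CLI looks like this:
cb cluster retry --name my-cluster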
Root Volume Size Configuration
When creating a cluster, you can modify the root volume size. This option is available on the advanced Hardware and Storage page of the create cluster wizard. Default is 50 GB for AWS and GCP, and 30 GB for Azure. This option is useful if your custom image requires more space than provided by default.
JSON Key on Google Cloud
Cloudbreak introduces support for Google Cloud's service account JSON key. Since activating service accounts with P12 private keys has been deprecated in the Cloud SDK, we recommend using JSON private keys. For updated instructions for creating a Cloudbreak credential, refer to Create Cloudbreak credential.
Viewing Cluster Blueprints
Cloudbreak includes a useful option to view blueprints of a future cluster (from the create cluster wizard) or an existing cluster (from cluster details). For more information, refer to View cluster blueprints.
CLI Autocomplete
Cloudbreak CLI now includes an autocomplete option. For more information, refer to Configure CLI autocomplete.
Instructions for Using Custom Hostnames on AWS
New documentation is available for using custom hostnames based on DNS for clusters running on AWS. For instructions, refer to Using custom hostnames based on DNS.
New features (TP)
The following features are introduced in Cloudbreak 2.7 as technical preview; these features are for evaluation only and are not suitable for production.
Data Lake Technical Preview
Cloudbreak allows you to create a long-running data lake cluster and attach it to a short-running cluster. To get started, refer to Setting up a data lake.
Gateway SSO Technical Preview
As part of Apache Knox-powered gateway introduced in Cloudbreak 2.7, you can configure the gateway as the SSO identity provider. For more information, refer to Configure single sign-on (SSO).
Behavioral changes
Removal of Prebuilt Cloudbreak Deployer Images
In earlier versions of Cloudbreak, Cloudbreak deployer images for AWS, Google Cloud, and OpenStack were provided for each release. Instead, Cloudbreak 2.7.0 introduces a new way to launch Cloudbreak from cloud provider templates. To review current Cloudbreak deployment options, refer to Deployment options.
Auto-import of HDP/HDF Images on OpenStack
When using Cloudbreak on OpenStack, you no longer need to import HDP and HDF images manually: during your first attempt to create a cluster, Cloudbreak automatically imports the HDP and HDF images into your OpenStack environment. Only the Cloudbreak image must be imported manually.
Installing the MySQL Connector as a Recipe
Starting with Ambari version 2.6, if you have the 'MYSQL_SERVER' component in your blueprint, you have to manually install and register 'mysql-connector-java.jar'. If you would like to automate this process in Cloudbreak, you can apply a recipe such as the one sketched below.
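A minimal sketch of such a recipe (assuming a yum-based image with the Ambari server pre-installed and that the recipe runs on the Ambari server host group, for example as a pre-ambari-start recipe; adjust the package name, driver path, and recipe type to your environment):
#!/bin/bash
# Install the MySQL JDBC driver and register it with Ambari
yum install -y mysql-connector-java
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar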
Sorting and Filtering Resource Tables in the UI
Cloudbreak introduces the ability to sort and filter resource tables that list resources such as clusters, cluster hardware, blueprints, recipes, and so on, in the Cloudbreak web UI.
Redesigned Hardware and Storage UI
The UI of the Hardware and Storage page in the create cluster wizard and in cluster details was redesigned for better user experience.
New Image Settings Page in Create Cluster wizard
All cluster options related to image settings, image catalog selection, and Ambari and HDP/HDF repository specification were moved to a separate Image Settings page in the create cluster wizard.
File System Cluster Page Was Renamed to Cloud Storage
The File System page of the advanced create cluster wizard was renamed to Cloud Storage.
Image Catalog Option Was Moved to External Sources
The options related to registering a custom image catalog and selecting a default image catalog were removed from the Settings navigation menu option and are now available under External Sources > Image Catalogs.
Recipes Option Was Moved to Cluster Extension
The Recipes navigation menu option was moved under Cluster Extensions, so to find recipe-related settings, select Cluster Extensions > Recipes from the navigation menu.
Image catalog updates
Default versions provided with Cloudbreak 2.7:
- Default Ambari version: 2.6.2.0
- Default HDP version: 2.6.5.0-292
- Default HDF version: 3.1.1.0-35
Fixed issues
Issue | Issue description | Category |
---|---|---|
BUG-105440 | LDAPS does not work. | Stability |
BUG-105191 | LLAP is enabled with EDW-ETL blueprint. | Stability |
BUG-105061 | NullPointerException when kerberized cluster is being terminated. | Stability |
BUG-105057 | Cloudbreak recipe in the "pre-termination" stage does not run to completion because the machine shuts down in the meantime. | Stability |
BUG-104950 | cbd update causes data loss. | Stability |
BUG-104949 | NullPointerException during upscale. | Stability |
BUG-104948 | Cannot delete instance when the upscale failed. | Stability |
BUG-104947 | Cluster status should be updated to AVAILABLE even when there are stopped services. | Stability |
BUG-104931 | When it aborts scaling, Cloudbreak should return a message showing which services are stopped. | Stability |
BUG-104930 | Upscale needs to be limited to 100 instances per one request. | Stability |
BUG-104915 | Cluster termination failed when kerberos is enabled. | Stability |
BUG-104889 | NullPointerException during stack creation. | Stability |
BUG-104790 | NullPointerException when scaling up an HDF cluster to 1000 nodes on OpenStack. | Stability |
BUG-104789 | Some error messages in the CLI and UI are hard to understand. | Usability |
BUG-104787 | UI menu is not scrollable (unusable in case of small window). | Usability |
BUG-104786 | "Copy JSON" button text is invalid | Usability |
BUG-104785 | In the UI, the sync option is disabled but clickable (on a stopped cluster). | Usability |
BUG-104782 | Wrong region is selected after cluster creation. | Stability |
BUG-104779 | EDW-Analytics blueprint fails on AWS. | Stability |
BUG-104759 | Cannot load custom image catalog. | Usability |
BUG-104758 | Credential error causes invalid error message: "Failed to VM types for the cloud provider". | Usability |
BUG-104544 | In create stack request, improve support for deprecated gateway requests. | Stability |
BUG-104530 | The host groups in the validation [Services,ZooKeeper,NiFi] must match the host groups in the request [Services,NiFi] | Stability |
BUG-104529 | Request to acquire token failed. | Stability |
BUG-104480 | JPA has too many connections. | Stability |
BUG-104475 | Unable to acquire JDBC connection. | Stability |
BUG-104473 | Missing node configuration in the blueprint causes NullPointerException. | Stability |
BUG-104469 | Failed to get platform networks java.lang.IllegalArgumentException: No region provided. | Stability |
BUG-104451 | Could not verify credential [credential: 'temp-user-credential'], detailed message: Unauthorized. | Stability |
BUG-104450 | Failed to get platform networks com.google.api.client.googleapis.json.GoogleJsonResponseException. | Stability |
BUG-104445 | Error during stack termination flow: java.lang.NullPointerException. | Stability |
BUG-104275 | Structured events contain passwords and sensitive data. | Security |
BUG-104274 | HDP cluster version is incorrect in structured events. | Stability |
BUG-104235 | Filter valid images by provider. | Usability |
BUG-104124 | Registering Postgres RDS causes Null Pointer Exception. | Stability |
BUG-104120 | Back-and-forth navigation in the create cluster wizard confuses the UI and opens the wrong port (443 instead of 8443). | Usability |
BUG-103678 | Remove instanceProfileStrategy recommendation from the CLI. | Stability |
BUG-102890 | Periscope result returns more than one element for a cluster. | Stability |
BUG-102884 | Saved instance groups will be removed if credential has changed. | Usability |
BUG-102732 | CLI can't reinstall a cluster without the --blueprint-name flag. | Usability |
BUG-102730 | Null Pointer Exception during stack repair. | Stability |
BUG-102714 | Cannot update the status of cluster 'X' to STARTED, because the stack is not in AVAILABLE state. | Stability |
BUG-102711 | Null Pointer Exception during downscale if Ambari is not reachable. | Stability |
BUG-102441 | Can't create Azure cluster after wrong ssh-rsa key was submitted. | Stability |
BUG-102296 | Openstack4j glance V2 error. | Stability |
BUG-102201 | Images are shown for regions that have no available images. | Usability |
BUG-101988 | Match tag restrictions with cloud provider's restrictions. | Usability |
BUG-101746 | Duplicated error message when backend is down. | Usability |
BUG-101483 | Sometimes the hostnames are incorrect after cluster installation. | Stability |
BUG-101473 | Cloudbreak CLI doesn't show any error if the Cloudbreak host is wrong. | Stability |
BUG-101236 | Time to live is not calculated or displayed correctly. | Usability |
BUG-101231 | After Recipe delete: page not found. | Usability |
BUG-101230 | The curl command listed on the Download CLI page for Windows does not work on Windows and therefore should be removed or replaced. | Usability |
BUG-101228 | History for the same start and end date filters out cluster. | Usability |
BUG-101225 | CLI cb cluster repair does not work as expected. | Usability |
BUG-101223 | After stopping and starting a cluster, cluster state is incorrectly listed as "Unhealty", even though the nodes are healthy. | Stability |
BUG-101222 | Filter by button should be removed from UI External sources > Authentication configs. | Usability |
BUG-101204 | Using the instanceProfileStrategy parameter in the CLI JSON for creating an instance profile does not work as expected. | Usability |
BUG-100844 | In the Cloudbreak UI create cluster wizard the side menu is incorrect for Cluster Extensions. | Usability |
BUG-100684 | Zeppelin Shiro config is wrong. | Stability |
BUG-100549 | GCP cluster creation failed if existing subnet is defined. | Stability |
BUG-100468 | Images from a custom image catalog are not listed in the UI after Cloudbreak version changed. | Stability |
BUG-100110 | Availability zones are not refreshed in Cloudbreak to match the actual AWS availability zones. | Usability |
BUG-100027 | Change default instance/storage settings in AWS Paris region. | Usability |
BUG-99168 | All clusters created on Google Cloud Platform fail. | Stability |
BUG-99400 | Time-based cluster autoscaling does not work. | Stability |
BUG-99505 | Sync does not work for an AWS instance that was terminated a long time ago. | Stability |
BUG-98277 | Network interface handling in Cloudbreak should be improved. | Stability |
BUG-97395 | Networks are duplicated on networks tab of the cluster create wizard. | Stability |
BUG-97259 | "Update failed" status after downscale failed, even though cluster was not modified and its status should be "Running". | Stability |
BUG-97207 | Changing lifecycle management on YARN causes NPE. | Stability |
BUG-99189 | ImageCatalog PUT endpoint is not secured. | Security |
BUG-97895 | LDAP password should be removed from Cloudbreak logs. | Security |
BUG-97300 | Cloudbreak should show proper error messages when the given credential is not valid anymore. | Usability |
BUG-97296 | GCP credential creation should validate whether resources are available with the credential. | Usability |
BUG-97660 | Ignore repository warnings checkbox is missing after changing the base image Ambari or HDP repository to a custom one. | Usability |
BUG-97307 | Ignore repository warnings checkbox is not selectable after changing the HDP VDF URL. | Usability |
BUG-96764 | "Failed to remove instance" error when using the delete icon. | Usability |
BUG-97390 | Cloudbreak should support longer resource IDs on AWS. | Usability |
BUG-99512 | Azure ES_v3 instances should support premium storage. | Usability |
BUG-97206 | Backend should return only images for enabled platforms. | Usability |
Known issues
Known issues: Cloudbreak
(BUG-96788) Azure Availability Set Option Is Not Available for Instance Count of 1
When creating a cluster, the Azure availability set feature is not available for host groups with the instance count of 1.
Workaround:
If you would like to use the Azure availability sets feature, you must add at least 2 instances to the host group for which you want to use it. The Azure availability sets option is available on the advanced Hardware and Storage page of the create cluster wizard.
(BUG-92605) Cluster Creation Fails with ResourceInError
Cluster creation fails with the following error:
Infrastructure creation failed. Reason: Failed to create the stack for CloudContext{id=3689, name='test-exisitngnetwork', platform='StringType{value='OPENSTACK'}', owner='e0307f96-bd7d-4641-8c8f-b95f2667d9c6'} due to: Resource CREATE failed: ResourceInError: resources.ambari_volume_master_0_0: Went to status error due to "Unknown"
Workaround:
This may mean that the volumes that you requested exceed volumes available on your cloud provider account. When creating a cluster, on the advanced Hardware and Storage page of the create cluster wizard, try reducing the amount of requested storage. If you need more storage, try using a different region or ask your cloud provider admin to increase the resource quota for volumes.
(BUG-105188) On OpenStack, Cluster Creation Hangs with Stack Has Flows Under Operation
Cluster creation hangs with the 'Stack has flows under operation' message.
(BUG-105439) When a Master Node Goes Down, the Cluster Status Remains Available
When the master node goes down or is removed, the cluster remains in available status.
(BUG-104825) Upscaling the Compute Host Group is Not Possible on AWS
When using Cloudbreak with Ambari 2.6.2, upscaling the compute host group on AWS fails with the following error due to Ambari being unable to install the HIVE_CLIENT:
New node(s) could not be added to the cluster. Reason com.sequenceiq.cloudbreak.core.ClusterException: Ambari operation failed: component: 'UPSCALE_REQUEST', requestID: '8'
(BUG-93241) Error When Scaling Multiple Host Groups
Scaling of multiple host groups fails with the following error:
Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1; nested exception is org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
Workaround:
Scaling multiple host groups at once is not supported. If you would like to scale multiple host groups: scale the first host group and wait until scaling has completed, then scale the second host group, and so on.
(BUG-101226) Scaling Is Ambiguous When Nodes Are in Created State
In some cases, when a node is in Created status (which means that the node is healthy but has not yet joined the cluster), such a node is not counted when the cluster is scaled. This can lead to additional nodes being added to the cluster when some nodes are in Created status.
(BUG-105312) Wrong Notification After Deleting an Autoscaling Policy
When you delete an existing scaling policy or an existing alert, you will see the following confirmation message: "Scaling policy / Alert has been saved". This message should state: "Scaling policy / Alert has been deleted". The deletion occurs correctly, but the confirmation message is incorrect.
(BUG-105205) Auto Repair Does Not Work with a HA Cluster
When the Ambari server host was removed from an HA cluster with auto repair enabled on the Ambari server host group, no auto repair occurred.
(BUG-105308) Exception When a Pending Cluster is Terminated
In some cases, when a cluster is terminated while its creation process is still pending, the cluster termination fails with the following exception:
Unable to find com.sequenceiq.cloudbreak.domain.Constraint with id 2250
Workaround:
Try terminating the cluster again.
(BUG-105309) Error When Deleting a Custom Blueprint
In some cases, when a cluster is terminated while its creation process is still pending, the blueprint remains attached to the cluster, causing the following error when you try to delete the blueprint:
There is a cluster 'perftest-d0e5q5y7lj' which uses blueprint 'multinode-hdfs-yarn-ez0plywf2c'. Please remove this cluster before deleting the blueprint
(BUG-105090) Unable to Edit Optional Credential Parameters
When modifying an existing Cloudbreak credential, it is possible to modify mandatory parameters, but it is not possible to save changes to the optional parameters because the Save button remains disabled.
(BUG-105065) Error Message is Missing When LDAP Server Has No Port
When registering an LDAP with the Cloudbreak CLI, if you do not specify the port as part of the --ldap-server parameter value, no error is returned even though the configuration is incorrect. For example, the following parameter value is incorrect:
--ldap-server ldap://hwxad-7f9ecb4d75206b09.elb.eu-west-1.amazonaws.com
In this case, Cloudbreak will not save the configuration.
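A value that includes the port (for example, 389 for LDAP or 636 for LDAPS) is required, for instance:
--ldap-server ldap://hwxad-7f9ecb4d75206b09.elb.eu-west-1.amazonaws.com:389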
(BUG-97044) Show CLI Command Copy JSON Button Does Not Work
When using the Show CLI Command > Copy the JSON or Copy the Command button with Firefox, the content does not get copied if an adblock or other advertisement-blocking plugin is present.
Workaround:
Use a browser without an adblock plugin.
(BUG-93257) Clusters Are Missing From History
After changing the dates on the History page multiple times, the results displayed may sometimes be incorrect.
Workaround:
Refresh the page if you think that the history displayed may be incorrect.
Known issues: Ambari and HDP
The known issues described here were discovered when testing Cloudbreak with Ambari and HDP versions that are used by default in Cloudbreak. For general Ambari and HDP known issues, refer to Ambari and HDP release notes.
(BUG-96707) Druid Overlord Does Not Start
The Druid Overlord fails to start with the following error when using Ambari 2.6.1.3 and HDP 2.6.4.0:
ERROR [main] io.druid.cli.CliOverlord - Error when starting up. Failing. com.google.inject.ProvisionException: Unable to provision
(BUG-97080) Ambari Fails in Some Cases When an Mpack Is Installed
If you set the following properties, cluster install may fail (in 20-30% of cases) because the Ambari agent cache is updated concurrently:
- In /etc/ambari-server/conf/ambari.properties: agent.auto.cache.update=true
- In /etc/ambari-agent/conf/ambari-agent.ini: parallel_execution=1
(AMBARI-14149) In Ambari, Cluster Cannot Be Started After Stop
When using Ambari version 2.5.0.3, after stopping and starting a cluster, Event History shows the following error:
Ambari cluster could not be started. Reason: Failed to start Hadoop services. 2/7/2018, 12:47:05 PM Starting Ambari services. 2/7/2018, 12:47:04 PM Manual recovery is needed for the following failed nodes: [host-10-0-0-4.openstacklocal, host-10-0-0-3.openstacklocal, host-10-0-0-5.openstacklocal
Ambari dashboard shows that nodes are not sending heartbeats.
Workaround:
This issue is fixed in Ambari version 2.5.1.0 and newer.
Known issues: HDF
The known issues described here were discovered when testing Cloudbreak with the HDF version used by default in Cloudbreak. For general HDF known issues, refer to HDF release notes.
(BUG-98865) Scaling HDF Clusters Does Not Update Configurations on New Nodes
Blueprint configuration parameters are not applied when scaling an HDF cluster. One example that affects all users is that after an HDF cluster upscale or downscale, the nifi.web.proxy.host blueprint parameter does not get updated to include the new nodes, and as a result the NiFi UI is not reachable from these nodes.
Workaround:
To make the NiFi UI reachable from the newly added nodes, manually update the nifi.web.proxy.host property in the NiFi configuration so that it includes the new nodes, using the following format:
HOST1-IP:PORT,HOST2-IP:PORT,HOST3-IP:PORT