Known Issues in Cloudera Manager 7.0.3
This topic describes known issues and workarounds for Cloudera Manager.
Installation and Upgrade Known Issues
- Installation and Upgrade Limitations
- In this release, Cloudera Manager 7.0.3 supports only the installation of clusters running Cloudera Runtime 7.0.3 components.
- Ranger Setup Issues
- OPSAPS-52016: When running the Cloudera Manager Add Service wizard, passwords for the Ranger service must conform to the following restrictions:
- Passwords must be at least 8 characters.
- Passwords must contain at least one alphabetic and one numeric character.
- The following characters cannot be used in passwords:
" ' \ ` ´ .
Other Known Issues
- Connections to External Data Sources
- The Microsoft ADLS connector is not supported in this release.
- OPSAPS-53304: During Hive replication, while importing column statistics, the system administrator must provide a valid engine type for the statistics to be usable.
- Workaround: The system administrator must set the HIVE_REPL_STATS_ENGINE property in the Hive Replication Environment Advanced Configuration Snippet (Safety Valve) to the correct engine type for the column statistics to be usable. The valid values for the engine type are: hive, impala, and spark.
- OPSAPS-53604: JDK Support
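The safety-valve entry for the workaround above is a single key-value pair; `hive` below is one of the three valid values, so substitute the engine your cluster actually uses:

```
HIVE_REPL_STATS_ENGINE=hive
```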
- Only OpenJDK 8 is supported in CDP Data Center 7.0.
- CDPD-5756: Hive replication fails at metastore import step with "java.net.BindException:Cannot assign requested address"
- Workaround: You must run the Replication Manager service with a single thread. In the Advanced tab, the value for Number of concurrent HMS connections must be set to 0.
- OPSAPS-52546: Using Hue with HBase requires additional configurations
- You must enable the following configuration parameters in the
HBase service:
- Enable HBase Thrift Http Server
- Enable HBase Thrift Proxy Users
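The two parameters above can also be set through the Cloudera Manager REST API instead of the Admin Console. This is a sketch only: the endpoint follows the standard `/api/vN/clusters/{cluster}/services/{service}/config` pattern, but the internal property names used below are assumptions; confirm them with `GET .../services/hbase/config?view=full` on your cluster before use.

```shell
# Assumed internal names for "Enable HBase Thrift Http Server" and
# "Enable HBase Thrift Proxy Users" -- verify on your cluster first.
payload='{"items": [
  {"name": "hbase_thriftserver_http", "value": "true"},
  {"name": "hbase_thriftserver_support_proxyuser", "value": "true"}
]}'

# Hypothetical helper; $1 = CM host, $2 = cluster name.
enable_hbase_thrift() {
  curl -s -u admin:admin -X PUT \
    -H 'Content-Type: application/json' \
    -d "$payload" \
    "https://$1:7183/api/v40/clusters/$2/services/hbase/config"
}
```

Restart the HBase service afterwards so the change takes effect.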
- OPSAPS-53731:
- When adding a new cluster, a new service, YARN Queue Manager, is added by default. This service is useful for configuring the Capacity Scheduler. The Add Cluster and Add Service wizards will prompt users for the following credentials:
- Existing Cloudera Manager API Client Username
- Existing Cloudera Manager API Client Password
Enter the credentials for a user that is authorized to make configuration changes using the Cloudera Manager API.
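A quick way to confirm that the credentials you plan to enter are accepted by the Cloudera Manager API is to call a read-only endpoint such as `/api/version`. The helper name, host, and port below are placeholders:

```shell
# Hypothetical smoke test for the CM API credentials requested by the wizard.
# $1 = CM host, $2 = username, $3 = password.
check_cm_credentials() {
  curl -sf -u "$2:$3" "https://$1:7183/api/version" >/dev/null \
    && echo "credentials accepted" \
    || echo "authentication failed"
}
```

Note that this only verifies authentication; authorization to change configuration still depends on the user's Cloudera Manager role.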
- OPSAPS-53214: HBase Hook for Atlas not enabled by default
- When installing Atlas and HBase in a cluster, you must enable the HBase hook by doing the following in the Cloudera Manager Admin Console:
- Go to the HBase service page.
- Click the Configuration tab.
- Search for the "Enable Atlas Hook" configuration property.
- Select Enable Atlas Hook.
- Restart the HBase service.
- OPSAPS-52454: Extra steps required to enable Ranger authorization in the Solr instance used by Ranger
- After installing Ranger, do the following:
- CDPD-4139: Enabling TLS/SSL after creating a collection in Solr results in Solr not knowing that the node hosting the shard is the same.
- A cluster with indices stored on HDFS created before enabling TLS will get the following error message after enabling TLS and starting the cluster:
"Will not load SolrCore SOLR_CORE_NAME because it has been replaced due to failover."
- OPSAPS-51224: Atlas custom properties ignored in client services
- When adding a custom property to atlas-application.properties in Atlas hook-based services such as Hive, HBase, and Impala, the custom property is not reflected in the actual configuration file that Cloudera Manager generates, causing these properties to be ignored.
- CDPD-6022 Accumulo not supported
- Apache Accumulo is currently not supported in this version of Cloudera Runtime. Although you can access Accumulo from the command-line interface, you must not use this component in production because Cloudera does not support it.
- Apache Flume is no longer supported.
- Apache Pig is no longer supported.
- Virtual Private Clusters (VPC) are not recommended for use in production environments.
- You can still create Virtual Private Clusters (Base clusters and Data contexts) using Cloudera Manager, but you should only use these in development or testing environments.
- OPSAPS-54299: Installing Hive on Tez and HMS in the incorrect order causes HiveServer failure
- You must install Hive on Tez and HMS in the correct order, and you must add additional HiveServer roles to Hive on Tez, not to the Hive service; otherwise, HiveServer fails. See Installing Hive on Tez for the correct procedures.
- OPSAPS-65189: Accessing Cloudera Manager through Knox displays the following error:
Bad Message 431 reason: Request Header Fields Too Large
Workaround: Modify the Cloudera Manager Server configuration file /etc/default/cloudera-scm-server to increase the header size in the Java options from the default of 8 KB to 64 KB (65536 bytes), as shown below:
export CMF_JAVA_OPTS="...existing options... -Dcom.cloudera.server.cmf.WebServerImpl.HTTP_HEADER_SIZE_BYTES=65536 -Dcom.cloudera.server.cmf.WebServerImpl.HTTPS_HEADER_SIZE_BYTES=65536"
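A minimal sketch of applying this workaround from a shell. The helper name is illustrative, and the optional path argument lets you rehearse against a copy of the file before touching the real one:

```shell
# Appends the two header-size options to the server defaults file.
# Referencing $CMF_JAVA_OPTS inside the quoted value preserves whatever
# existing options are set earlier in the file, since the file is sourced.
append_header_size_opts() {
  local conf="${1:-/etc/default/cloudera-scm-server}"
  cat >> "$conf" <<'EOF'
export CMF_JAVA_OPTS="$CMF_JAVA_OPTS -Dcom.cloudera.server.cmf.WebServerImpl.HTTP_HEADER_SIZE_BYTES=65536 -Dcom.cloudera.server.cmf.WebServerImpl.HTTPS_HEADER_SIZE_BYTES=65536"
EOF
}
```

Restart the Cloudera Manager Server after editing the file so the new Java options take effect.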
Technical Service Bulletins (TSB)
- TSB 2022-507 Certificate expiry issue in CDP
- The Transport Layer Security (TLS) keystore needs to be manually rotated due to an issue with certificate rotation.
- Knowledge article
- For the latest update on this issue, please see the corresponding Knowledge article: TSB 2022-507: Certificate expiry issue in CDP
- TSB 2021-530: Local File Inclusion (LFI) Vulnerability in Navigator
- After successful user authentication to the Navigator Metadata Server, and with dev mode enabled on the Navigator Metadata Server, local file inclusion can be performed through Navigator's embedded Solr web UI. Any file that the cloudera-scm OS user can open can be read. This is related to Apache Solr CVE-2020-13941.
- Knowledge article
- For the latest update on this issue see the corresponding Knowledge article: TSB 2021-530: CVE-2021-30131 - Local File Inclusion (LFI) Vulnerability in Navigator