Known Issues in Cloudera Manager 7.6.0
- Cloudera bug: OPSAPS-59764: Memory leak in the Cloudera Manager agent while downloading parcels.
- Using the M2Crypto library in the Cloudera Manager agent to download parcels causes a memory leak.
The Cloudera Manager server requires parcels to install a cluster. If any of the parcel URLs are modified, the server passes that information to the Cloudera Manager agent processes installed on each cluster host.
Each Cloudera Manager agent then checks for updates regularly by downloading the manifest file available under each of those URLs. If a URL is invalid or unreachable, the Cloudera Manager agent reports a 404 error message and the memory of the agent process grows because of a memory leak in the agent's file downloader code.
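If the agent keeps reporting 404 errors, it can help to confirm from a cluster host whether each configured parcel repository URL is actually reachable. A minimal check, using a hypothetical repository URL as a placeholder, could look like this:
# Fetch only the headers of the manifest file the agent polls; the URL below is an example placeholder.
curl -I https://parcels.example.com/cdh7/7.x/manifest.json
# A 404 or connection error here means the agent will keep retrying and, on affected versions, slowly leak memory.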
- Cloudera bug: OPSAPS-63881: When CDP Private Cloud Base is running on RHEL/CentOS/Oracle Linux 8.4, services fail to start because service directories under the /var/lib directory are created with 700 permissions instead of 755.
- Workaround: Run the following command on all managed hosts to change the permissions to 755. Run the command for each service directory under /var/lib:
chmod -R 755 [***path_to_service_dir***]
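If several service directories are affected, the same command can be wrapped in a small loop. The directory names below are only examples; replace them with the service directories that actually exist under /var/lib on your hosts:
# Hypothetical list of affected service directories; adjust to match your cluster.
for dir in hadoop-hdfs hadoop-yarn zookeeper; do
  chmod -R 755 "/var/lib/$dir"
done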
- OPSAPS-65189: Accessing Cloudera Manager through Knox displays the following error:
Bad Message 431 reason: Request Header Fields Too Large
Workaround: Modify the Cloudera Manager Server configuration file /etc/default/cloudera-scm-server to increase the header size in the Java options from the default of 8 KB to 65 KB, as shown below:
export CMF_JAVA_OPTS="...existing options... -Dcom.cloudera.server.cmf.WebServerImpl.HTTP_HEADER_SIZE_BYTES=65536 -Dcom.cloudera.server.cmf.WebServerImpl.HTTPS_HEADER_SIZE_BYTES=65536"
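After saving the change, restart the Cloudera Manager Server and confirm that the new options are present on the running process. A quick check, assuming a systemd-managed installation, is:
systemctl restart cloudera-scm-server
# The server's command line should now include the increased header sizes.
ps -ef | grep cloudera-scm-server | grep HTTP_HEADER_SIZE_BYTES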
- OPSAPS-65213: Ending maintenance mode for a commissioned host with either an Ozone DataNode role or a Kafka Broker role running on it might result in an error.
- You may see the following error if you end maintenance mode for the Ozone and Kafka services from Cloudera Manager while the roles are not decommissioned on the host:
Execute command Recommission and Start on service OZONE-1 Failed to execute command Recommission and Start on service OZONE-1 Recommission and Start Command Recommission and Start is not currently available for execution.
Technical Service Bulletins
- TSB 2022-597: Cloudera Manager Event server does not clean up old events
- The Event Server in Cloudera Manager (CM) does not clean up old events from its index, which can fill up the disk. This leads to incorrect “Event Store Size” health check results.
- Component affected:
  - Event Server
- Products affected:
  - Cloudera Data Platform (CDP) Private Cloud Base
  - CDP Public Cloud
- Releases affected:
  - CDP Public Cloud 7.2.14 (CM 7.6.0) and 7.2.15 (CM 7.6.2)
  - CDP Private Cloud Base 7.1.7 Service Pack (SP) 1 (CM 7.6.1)
- Users affected:
  - Users who have the Event Server running
- Impact:
  - The Event Server’s index eventually fills up the disk it is stored on.
- Action required:
  - Patch: Please contact support for a patch to address this issue.
- Monitoring:
  - CM by default has thresholds to monitor the Event Server space using the [eventserver_index_directory_free_space_percentage_thresholds] parameter. You can adjust these thresholds by following the Cloudera Manager documentation. A quick manual disk-space check is shown after this list.
- Knowledge article:
  - For the latest update on this issue, see the corresponding Knowledge article: TSB 2022-597: Cloudera Manager Event server does not clean up old events
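Alongside the built-in threshold mentioned above, you can check how much space the Event Server index is using directly on the Event Server host. A minimal sketch, assuming the index lives in the default location /var/lib/cloudera-scm-eventserver (adjust the path if your Event Server Index Directory is configured elsewhere):
# Size of the Event Server index and free space on its filesystem.
du -sh /var/lib/cloudera-scm-eventserver
df -h /var/lib/cloudera-scm-eventserver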