Cloudera Manager 7.11.3 Cumulative hotfix 12

Learn more about Cloudera Manager 7.11.3 cumulative hotfix 12.

This cumulative hotfix was released on February 20, 2025.

New features and changed behavior for Cloudera Manager 7.11.3 CHF 12 (version: 7.11.3.31-62995507):
OPSAPS-71872: FedRAMP-Compliant TLS Cipher Configuration for Kudu
Earlier, the default TLS ciphers for Kudu were not FedRAMP compliant and could not be configured through Cloudera Manager. To address this issue, the default TLS cipher values have been updated to align with FedRAMP compliance, and Kudu now allows configuring the minimum TLS version and cipher suite preferences directly through Cloudera Manager. This enhancement ensures improved security and compliance for TLS-secured RPC connections.
Following is the list of known issues and their corresponding workarounds for Cloudera Manager 7.11.3 CHF 12 (version: 7.11.3.31-62995507):
ENGESC-30503, OPSAPS-74868: Cloudera Manager limited support for custom external repository requiring basic authentication
Cloudera Manager currently does not support custom external repositories that require basic authentication (the Cloudera Manager Wizard supports either HTTP (non-secured) repositories or the Cloudera repository at https://archive.cloudera.com only). Customers who want to use a custom external repository with basic authentication might get errors.

The assumption is that you can access the external custom repository (such as Nexus, JFrog, or others) using LDAP credentials. If an applicative user is used to fetch the external content (as is done in Data Services with the docker image repository), ensure that this applicative user is located under the users' base search path from which real users are retrieved during the LDAP authentication check, so that the external repository can find it and allow it to fetch the files.

Once done, you can use the current custom URL fields in the Cloudera Manager Wizard and enter the URL for the RPMs, parcels, or other files in the format "https://USERNAME:PASSWORD@server.example.com/XX".

When entering the password, use only the printable ASCII character range (excluding space). Any special character (not a letter or number) must be replaced with its percent-encoded hex value. For example, replace Aa1234$ with Aa1234%24, because '%24' is the encoding of the $ sign.
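The hex replacement described above can be produced with, for example, Python's standard urllib.parse.quote (a sketch; any percent-encoding tool works):

```python
from urllib.parse import quote

# Percent-encode every character that is not a letter, digit, or one of
# the always-safe characters "_.-~", so the password can be embedded in
# an https://USERNAME:PASSWORD@server URL.
password = "Aa1234$"          # example password from the text above
encoded = quote(password, safe="")
print(encoded)                # Aa1234%24
```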

OPSAPS-60726: Newly saved parcel URLs are not showing up in the parcels page in the Cloudera Manager HA cluster.
To safely manage parcels in a Cloudera Manager HA environment, follow these steps:
  1. Shut down the Passive Cloudera Manager Server.
  2. Add and manage the parcel as usual, as described in Install Parcels.
  3. Restart the Passive Cloudera Manager Server after the parcel operations are complete.
OPSAPS-72756: The runOzoneCommand API endpoint fails during the Ozone replication policy run
The /clusters/{clusterName}/runOzoneCommand Cloudera Manager API endpoint fails when the API is called with the getOzoneBucketInfo command. In this scenario, the Ozone replication policy runs also fail if the following conditions are true:
  • The source Cloudera Manager version is 7.11.3 CHF11 or 7.11.3 CHF12.
  • The target Cloudera Manager is version 7.11.3 through 7.11.3 CHF10 or 7.13.0.0 or later where the feature flag API_OZONE_REPLICATION_USING_PROXY_USER is disabled.
Choose one of the following methods as a workaround:
  • Upgrade the target Cloudera Manager before you upgrade the source Cloudera Manager (applicable to the 7.11.3 CHF12 version only).
  • Pause all replication policies, upgrade source Cloudera Manager, upgrade destination Cloudera Manager, and resume the replication policies' job runs.
  • Upgrade source Cloudera Manager, upgrade target Cloudera Manager, and rerun the failed Ozone replication policies between the source and target clusters.
OPSAPS-73211: Cloudera Manager 7.11.3 does not clean up the Python path, preventing Hue from starting

When you upgrade from Cloudera Manager 7.7.1 or lower versions to Cloudera Manager 7.11.3 or higher versions with CDP Private Cloud Base 7.1.7.x, Hue does not start because Cloudera Manager forces Hue to start with Python 3.8, while Hue needs Python 2.7.

This issue occurs because Cloudera Manager does not clean up the Python path, so when Hue tries to start, the Python path points to Python 3.8, which Hue does not support on CDP Private Cloud Base 7.1.7.x.

To resolve this issue temporarily, you must perform the following steps:

  1. Locate the hue.sh in /opt/cloudera/cm-agent/service/hue/.
  2. Add the following line after export HADOOP_CONF_DIR=$CONF_DIR/hadoop-conf:
    export PYTHONPATH=/opt/cloudera/parcels/CDH/lib/hue/build/env/lib64/python2.7/site-packages
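After the edit, the relevant section of hue.sh should look like the following (an excerpt; the rest of the script is unchanged):

```shell
# Existing line in hue.sh:
export HADOOP_CONF_DIR=$CONF_DIR/hadoop-conf
# Line added by the workaround, pointing Hue at the Python 2.7 packages:
export PYTHONPATH=/opt/cloudera/parcels/CDH/lib/hue/build/env/lib64/python2.7/site-packages
```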
OPSAPS-73655: Cloud replication fails after the delegation token is issued
HDFS and Hive external table replication policies from an on-premises cluster to cloud fail when the following conditions are true:
  1. You choose the Advanced Options > Delete Policy > Delete Permanently option during the replication policy creation process.
  2. Incremental replication is in progress, that is, the source paths of the replication are snapshottable directories and the bootstrap replication run is complete.
None
OPSAPS-72984: Alerts due to change in hostname-fetching functionality in JDK 8 and JDK 11

Upgrading Java from JDK 8 to JDK 11 creates the following alert in CMS:

Bad : CMSERVER:pit666.slayer.mayank: Reaching Cloudera Manager Server failed

This happens due to a change in hostname-fetching behavior in JDK 11, as shown in the following example:
[root@pit666.slayer ~]# /usr/lib/jvm/java-1.8.0/bin/java GetHostName
Hostname: pit666.slayer.mayank

[root@pit666.slayer ~]# /usr/lib/jvm/java-11/bin/java GetHostName
Hostname: pit666.slayer

Notice that with JDK 11 the hostname is a short name instead of the FQDN.

The current workaround is to set the operating system hostname to the FQDN.
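As a quick check on each host (a sketch using Python's standard library, not a Cloudera tool), you can compare the short hostname with the resolver's fully qualified name; if they differ, set the OS hostname to the FQDN:

```python
import socket

short_name = socket.gethostname()  # what a short-name lookup returns
fqdn = socket.getfqdn()            # fully qualified name from the resolver

print("short:", short_name)
print("fqdn :", fqdn)
if short_name != fqdn:
    # hostnamectl is the usual way to persist the FQDN on systemd hosts
    print("Hostname is not the FQDN; consider: hostnamectl set-hostname", fqdn)
```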

OPSAPS-72784: Upgrades from CDH6 to CDP Private Cloud Base 7.1.9 SP1 or higher versions fail with a health check timeout exception
If you are using Cloudera Manager 7.11.3 cumulative hotfix 12 and upgrading from CDH 6 to CDP Private Cloud Base 7.1.9 SP1 or higher versions, the upgrade fails with a CMUpgradeHealthException timeout exception. This is because upgrades from CDH 6 to CDP Private Cloud Base 7.1.9 SP1 or to any of its cumulative hotfix versions are not supported.
None.
OPSAPS-73011: Wrong parameter in the /etc/default/cloudera-scm-server file
When Cloudera Manager needs to be installed in High Availability mode (two or more nodes), the CMF_SERVER_ARGS parameter in the /etc/default/cloudera-scm-server file is missing the word "export" before it (the file contains only CMF_SERVER_ARGS= instead of export CMF_SERVER_ARGS=), so the parameter cannot be utilized correctly.
Edit the /etc/default/cloudera-scm-server file with root privileges and add the word "export" before the CMF_SERVER_ARGS= parameter.
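For example, the relevant line in /etc/default/cloudera-scm-server should read as follows after the edit:

```
# Before (the value is not exported to the server process environment):
CMF_SERVER_ARGS=
# After:
export CMF_SERVER_ARGS=
```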
OPSAPS-65377: Cloudera Manager Host Inspector does not find Psycopg2 on Ubuntu 20 or Red Hat 8.x when Psycopg2 version 2.9.3 is installed

Host Inspector fails with a Psycopg2 version error while upgrading to Cloudera Manager 7.11.3 or Cloudera Manager 7.11.3 CHF-x versions. When you run the Host Inspector, you get a Not finding Psycopg2 error, even though Psycopg2 is installed on all hosts.

None
OPSAPS-68340: Zeppelin paragraph execution fails with the User not allowed to impersonate error.

Starting from Cloudera Manager 7.11.3, Cloudera Manager auto-configures the livy_admin_users configuration when Livy is run for the first time. If you add Zeppelin or Knox services later to the existing cluster and do not manually update the service user, the User not allowed to impersonate error is displayed.

If you add Zeppelin or Knox services later to the existing cluster, you must manually add the respective service user to the livy_admin_users configuration in the Livy configuration page.

OPSAPS-69847:Replication policies might fail if source and target use different Kerberos encryption types

Replication policies might fail if the source and target Cloudera Manager instances use different encryption types in Kerberos because of different Java versions. For example, Java 11 and higher versions might use the aes256-cts encryption type, and versions lower than Java 11 might use the rc4-hmac encryption type.

Ensure that both instances use the same Java version. If it is not possible to have the same Java version on both instances, ensure that they use the same encryption type for Kerberos. To check the encryption type in Cloudera Manager, search for krb_enc_types on the Cloudera Manager > Administration > Settings page.

OPSAPS-69342: Access issues identified in MariaDB 10.6 were causing discrepancies in High Availability (HA) mode

MariaDB 10.6, by default, includes the property require_secure_transport=ON in the configuration file (/etc/my.cnf), which is absent in MariaDB 10.4. This setting prohibits non-TLS connections, leading to access issues. This problem is observed in High Availability (HA) mode, where certain operations may not be using the same connection.

To resolve the issue temporarily, you can either comment out or disable the line require_secure_transport in the configuration file located at /etc/my.cnf.
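For example, the relevant line in /etc/my.cnf can be commented out as follows (you may need to restart MariaDB for the change to take effect):

```
[mysqld]
# require_secure_transport=ON
```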

OPSAPS-70771: Running Ozone replication policy does not show performance reports
During an Ozone replication policy run, the error message "A server error has occurred. See Cloudera Manager server log for details" appears when you click:
  • Performance Reports > OZONE Performance Summary or Performance Reports > OZONE Performance Full on the Replication Policies page.
  • Download CSV on the Replication History page to download any report.
None
OPSAPS-70713: Error appears when running Atlas replication policy if source or target clusters use Dell EMC Isilon storage
You cannot create an Atlas replication policy between clusters if one or both the clusters use Dell EMC Isilon storage.
None
CDPD-53185: Clear REPL_TXN_MAP table on target cluster when deleting a Hive ACID replication policy
The entry in the REPL_TXN_MAP table on the target cluster is retained when the following conditions are true:
  1. A Hive ACID replication policy is replicating a transaction that requires multiple replication cycles to complete.
  2. The replication policy and databases used in it get deleted on the source and target cluster even before the transaction is completely replicated.

In this scenario, if you create a database using the same name as the deleted database on the source cluster, and then use the same name for the new Hive ACID replication policy to replicate the database, the replicated database on the target cluster is tagged as ‘database incompatible’. This happens after the housekeeper thread process (that runs every 11 days for an entry) deletes the retained entry.

Create another Hive ACID replication policy with a different name for the new database.
DMX-3973: Ozone replication policy with linked bucket as destination fails intermittently
When you create an Ozone replication policy using a linked/non-linked source cluster bucket and a linked target bucket, the replication policy fails during the "Trigger a OZONE replication job on one of the available OZONE roles" step.
None
OPSAPS-68143:Ozone replication policy fails for empty source OBS bucket
An Ozone incremental replication policy for an OBS bucket fails during the “Run File Listing on Peer cluster” step when the source bucket is empty.
None
OPSAPS-72447, CDPD-76705: Ozone incremental replication fails to copy renamed directory
Ozone incremental replication using Ozone replication policies might report success but fail to sync nested renames for FSO buckets.
When a directory and its contents are renamed between replication runs, the outer-level rename is synced, but the contents under the previous name are not.
None
CDPD-53160: Incorrect job run status appears for subsequent Hive ACID replication policy runs after the replication policy fails
When a Hive ACID replication policy run fails with the FAILED_ADMIN status, the subsequent Hive ACID replication policy runs incorrectly show the SKIPPED status instead of FAILED_ADMIN on the Cloudera Manager > Replication Manager > Replication Policies > Actions > Show History page. It is recommended that you check the Hive ACID replication policy runs if multiple subsequent policy runs show the SKIPPED status.
None
OPSAPS-73138, OPSAPS-72435: Ozone OBS-to-OBS replication policies create directories in the target cluster even when no such directories exist on the source cluster
Ozone OBS-to-OBS replication uses the Hadoop S3A connector to access data on the OBS buckets. Depending on the runtime version and settings in the clusters:
  • directory marker keys (associated with the parent directories) appear in the destination bucket even when they are not present in the source bucket.
  • delete requests for non-existing keys are submitted to the destination storage, which results in `Key delete failed` messages appearing in the Ozone Manager log.

The OBS buckets are flat namespaces with independent keys, and the character '/' has no special significance in key names. In FSO buckets, by contrast, each bucket is a hierarchical namespace with filesystem-like semantics, where the '/'-separated components become the path in the hierarchy. The S3A connector provides filesystem-like semantics over object stores: it mimics directory behavior by creating and optionally deleting "empty directory markers". These markers are created when the S3A connector creates an empty directory. Depending on the runtime (S3A connector) version and settings, these markers may or may not be deleted when a descendant path is created.

Empty directory marker creation is inherent to S3A connector. Empty directory marker deletion behavior can be adjusted using the fs.s3a.directory.marker.retention = keep or delete key-value pair. For information about configuring the key-value pair, see Controlling the S3A Directory Marker Behavior.
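For illustration, the key-value pair expressed as a core-site.xml property (a sketch; choose keep or delete based on your environment):

```xml
<property>
  <name>fs.s3a.directory.marker.retention</name>
  <value>keep</value>
</property>
```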
OPSAPS-72804: For recurring policies, the interval is overwritten to 1 after the replication policy is edited
When you edit an Atlas, Iceberg, Ozone, or Ranger replication policy that has a recurring schedule on the Replication Manager UI, the Edit Replication Policy modal window appears as expected. However, the frequency of the policy is reset to "1" unit, where the unit depends on what you have set in the replication policy. For example, if you have set the replication policy to run every four hours, it is reset to one hour when you edit the replication policy.
After you edit the replication policy as required, you must ensure that you manually set the frequency to the original scheduled frequency, and then save the replication policy.
OPSAPS-74398: Ozone and HDFS replication policies might fail when you use different destination proxy user and source proxy user
HDFS on-premises to on-premises replication fails when the following conditions are true:
  • You configure different Run As Username and Run on Peer as Username during the replication policy creation process.
  • The user configured in Run As Username does not have the permission to access the source path on the source HDFS.
Ozone replication fails when the following conditions are true:
  • FSO-to-FSO replication or an OBS-to-OBS replication with Incremental with fallback to full file listing or Incremental only replication type.
  • You configured different Run As Username and Run on Peer as Username during the replication policy creation process.
  • The user configured in Run As Username does not have the permission to access the source bucket on the source Ozone.
Provide the same permissions to the user configured in Run As Username as the permissions of Run on Peer as Username on the source cluster.
OPSAPS-69622: Cannot view the correct number of files copied for Ozone replication policies
The last run of an Ozone replication policy does not show the correct number of the files copied during the policy run when you load the Cloudera Manager > Replication Manager > Replication Policies page after the Ozone replication policy run completes successfully.
None
Following is the list of fixed issues shipped for Cloudera Manager 7.11.3 CHF 12 (version: 7.11.3.31-62995507):
OPSAPS-70449: After creating a new Dashboard from the Cloudera Manager UI, the Chart Title field was allowing Javascript as input
In the Cloudera Manager UI, while creating a new plot object, the Chart Title field allowed JavaScript as input. This let the user execute a script, resulting in an XSS attack. This issue is now fixed.
OPSAPS-72215: ECS Cloudera Manager UI Config for docker cert CANNOT accept the new line - unable to update new registry cert in correct format
Currently, there is no direct way to update the external docker certificate in the Cloudera Manager UI for ECS, because newlines are removed when the field is saved. Certificates can be uploaded by adding the literal characters '\n' for each newline.
When you update the docker certificate through the Cloudera Manager UI configuration, insert '\n' to specify each newline character in the certificate.
For example:

-----BEGIN CERTIFICATE-----\nMIIERTCCAy2gAwIBAgIUIL8o1MjD5he7nZKKa/C8rx9uPjcwDQYJKoZIhvcNAQEL\nBQAwXTELMAkGA1
UEBhMCVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExEzARBgNVBAcM\nClNhbnRhQ2xhcmExETAPBgN
VBAoMCENsb3VkZXJhMREwDwYDVQQLDAhDbG91ZGVy\nYTAeFw0yNDAzMTExMjU5NDVaFw0zNDA
zMDkxMjU5NDVaMF0xCzAJBgNVBAYTAlVT\nMRMwEQYDVQQIDApDYWxpZm9ybmlhMRMwEQYDVQ
QHDApTYW50YUNsYXJhMREwDwYD\nVQQKDAhDbG91ZGVyYTERMA8GA1UECwwIQ2xvdWRlcmEwg
gEiMA0GCSqGSIb3DQEB\nAQUAA4IBDwAwggEKAoIBAQDcuxGszWmzVnWCwDICnlxUBtO+tI6RPs2jx
Q7C7kIj\nTHTaQ2kGl/ZzQOJBpYT/jFmiQGPSKb4iLSxed+Xk5xAOkNWDIL+Hlf5txjkw/FTf\nHiyWep9Da
QDF07M/Cl3nb8JmpRyA5fKYpVbJAFIEXOhTxrcnH/4o5ubLM7mHVXwY\nafoPD5AuiOD/I+xxmqb/x+fKt
HzY1eEzDb2vjjDJBRqxpHvg/S4hHsgZJ7wU7wg+\nPk4uPV3O83h9NI+b4SOwXunuKRCCh4dRKm8/Q
w4f7tDFdCAIubvO1AGtfyJJp9xR\npMIjhIuna1K2TnPQomdoIy/KqrFFzVaHevyinEnRLG2NAgMBAAGjgfw
wgfkwHQYD\nVR0OBBYEFHWX21/BhL5J5kNpxmb8FmDchlmBMIGaBgNVHSMEgZIwgY+AFHWX21/B
\nhL5J5kNpxmb8FmDchlmBoWGkXzBdMQswCQYDVQQGEwJVUzETMBEGA1UECAwKQ2Fs\naWZvcm
5pYTETMBEGA1UEBwwKU2FudGFDbGFyYTERMA8GA1UECgwIQ2xvdWRlcmEx\nETAPBgNVBAsMCEN
sb3VkZXJhghQgvyjUyMPmF7udkopr8LyvH24+NzAMBgNVHRME\nBTADAQH/MAsGA1UdDwQEAwIC/
DAPBgNVHREECDAGhwQKgW26MA8GA1UdEgQIMAaH\nBAqBbbowDQYJKoZIhvcNAQELBQADggEBA
Mks+sY+ETaPzFLg2PolUTT4GeXEqnGl\nSmZzIkiA6l2DCYQD/7mTLd5Ea63oI78fxatRnG5MLf5aHVLs4
W+WYhoP6B7HLPUo\nNGPJviRBHtUDRYVpD5Q0hhQtHB4Q1H+sgrE53VmbIQqLPOAxvpM//oJCFDT8
NbOI\n+bTJ48N34ujosjNaiP6x09xbzRzOnYd6VyhZ/pgsiRZ4qlZsVyv1TImP9VpHcC7P\nukxNuBdXBS3j
EXcvEV1Eq4Di+z6PIWoPIHUunQ9P0akYEvbXuL88knM5FNhS6YBP\nGd91KkGdz6srRIVRiF+XP0e6IwZC70kkWiw
f8vX/CuR64ZQxc30ot70=\n-----END CERTIFICATE-----\n
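The escaped form above can be produced mechanically, for example with this short Python sketch (not a Cloudera tool; the certificate body is a placeholder):

```python
# Convert a multi-line PEM certificate into the single-line,
# backslash-n form that the Cloudera Manager UI field expects.
# The certificate body below is a placeholder, not a real certificate.
pem = (
    "-----BEGIN CERTIFICATE-----\n"
    "MIIBplaceholder\n"
    "-----END CERTIFICATE-----\n"
)
escaped = pem.replace("\n", "\\n")
print(escaped)
# -----BEGIN CERTIFICATE-----\nMIIBplaceholder\n-----END CERTIFICATE-----\n
```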
OPSAPS-71659: Ranger replication policy failed because of incorrect source to destination service name mapping
Ranger replication policy failed during the transform step because of incorrect source to destination service name mapping. This issue is fixed.
OPSAPS-71592: Replication Manager does not read the default value of “ozone_replication_core_site_safety_valve” during Ozone replication policy run
When the ozone_replication_core_site_safety_valve advanced configuration snippet is set to its default value, Replication Manager does not read its value during the Ozone replication policy run. To mitigate this issue, the default value of ozone_replication_core_site_safety_valve has been set to an empty value. If you have set any key-value pairs for ozone_replication_core_site_safety_valve, then these values are written to core-site.xml during the Ozone replication policy run.
OPSAPS-71424: The 'configuration sanity check' step ignores the replication advanced configuration snippet values during the Ozone replication policy job run
The OBS-to-OBS Ozone replication policy jobs failed when the S3 property values for fs.s3a.endpoint, fs.s3a.secret.key, and fs.s3a.access.key were empty in Ozone Service Advanced Configuration Snippet (Safety Valve) for ozone-conf/ozone-site.xml even when these properties were defined in Ozone Replication Advanced Configuration Snippet (Safety Valve) for core-site.xml. This issue is fixed.
OPSAPS-72559: Incorrect error messages appear for Hive ACID replication policies
Replication Manager now shows correct error messages for every Hive ACID replication policy run on the Cloudera Manager > Replication Manager > Replication Policies > Actions > Show History page as expected.
OPSAPS-72558, OPSAPS-72505: Replication Manager chooses incorrect target cluster for Iceberg, Atlas, and Hive ACID replication policies
When a Cloudera Manager instance managed multiple clusters, Replication Manager picked the first cluster in the list as the Destination during the Iceberg, Atlas, and Hive ACID replication policy creation process, and the Destination field was non-editable. You can now edit the replication policy to change the target cluster in these scenarios.
OPSAPS-72468: Subsequent Ozone OBS-to-OBS replication policy runs do not skip replicated files during replication
Replication Manager now skips the replicated files during subsequent Ozone replication policy runs after you add the following key-value pairs in Cloudera Manager > Clusters > Ozone service > Configuration > Ozone Replication Advanced Configuration Snippet (Safety Valve) for core-site.xml:
  • com.cloudera.enterprise.distcp.ozone-schedules-with-unsafe-equality-check = [***ENTER COMMA-SEPARATED LIST OF OZONE REPLICATION POLICIES’ ID or ENTER all TO APPLY TO ALL OZONE REPLICATION POLICIES***]

    The advanced snippet skips the already replicated files when the relative file path, file name, and file size are equal and ignores the modification times.

  • com.cloudera.enterprise.distcp.require-source-before-target-modtime-in-unsafe-equality-check = [***ENTER true OR false***]

When you add both key-value pairs, the subsequent Ozone replication policy runs skip replicating files when the matching file on the target has the same relative file path, file name, and file size, and the source file's modification time is less than or equal to the target file's modification time.
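For illustration, the two key-value pairs expressed as core-site.xml properties (a sketch; replace all with a comma-separated list of policy IDs if you do not want the behavior applied to every Ozone replication policy):

```xml
<property>
  <name>com.cloudera.enterprise.distcp.ozone-schedules-with-unsafe-equality-check</name>
  <value>all</value>
</property>
<property>
  <name>com.cloudera.enterprise.distcp.require-source-before-target-modtime-in-unsafe-equality-check</name>
  <value>true</value>
</property>
```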

OPSAPS-72276: Cannot edit Ozone replication policy if the MapReduce service is stale
You could not edit an Ozone replication policy in Replication Manager if the MapReduce service did not load completely. This issue is fixed.
OPSAPS-72214: Cannot create a Ranger replication policy if the source and target cluster names are not the same
You could not create a Ranger replication policy if the source cluster and target cluster names were not the same. This issue is fixed.
OPSAPS-67498: The Replication Policies page takes a long time to load
To ensure that the Cloudera Manager > Replication Manager > Replication Policies page loads faster, new query parameters have been added to the internal REST API calls that fetch the policies for the page, which improves pagination. Replication Manager also caches internal API responses to speed up the page load.
OPSAPS-72143: Atlas replication policies fail if the source and target clusters support FIPS
The Atlas replication policies fail during the Exporting atlas entities from remote atlas service step if the source and target clusters support FIPS.
OPSAPS-72111: Directory creation fails during Hive ACID replication policy creation if the target cluster uses Dell EMC Isilon storage
Directory creation failed during the Hive ACID replication policy creation process if the target cluster used Dell EMC Isilon storage. To mitigate this issue, ensure that the hive user and the hive group have 0755 permissions on the staging location.
OPSAPS-72509: Hive metadata transfer to GCS fails with ClassNotFoundException
Hive external table replication policies from an on-premises cluster to cloud failed during the Transfer Metadata Files step when the target is on Google Cloud and the source Cloudera Manager version is 7.11.3 CHF7, 7.11.3 CHF8, 7.11.3 CHF9, 7.11.3 CHF9.1, 7.11.3 CHF10, or 7.11.3 CHF11. This issue is fixed.
OPSAPS-71105: Expose or set YARN cgroup v2 settings in Cloudera Manager
Cgroup v2 support is now enabled by default, and YARN detects and uses the correct cgroup handling code.
OPSAPS-72427: Node Managers fail to start with the No cgroup controllers file found error
Previously, when cgroup was enabled, cgroup v2 was enabled automatically. This caused NodeManager startup failures on cgroup v1-only clusters due to a missing cgroup.controllers file. This issue is now resolved, and cgroup v2 support falls back to v1 when there are no v2 controllers.
Fixed Common Vulnerabilities and Exposures
For information about Common Vulnerabilities and Exposures (CVE) that are fixed in Cloudera Manager 7.11.3 cumulative hotfix 12, see Fixed Common Vulnerabilities and Exposures in Cloudera Manager 7.11.3 cumulative hotfixes.

The repositories for Cloudera Manager 7.11.3-CHF 12 are listed in the following table:

Table 1. Cloudera Manager 7.11.3-CHF 12
RHEL 9 Compatible
  Repository: https://username:password@archive.cloudera.com/p/cm7/7.11.3.31/redhat9/yum
  Repository File: https://username:password@archive.cloudera.com/p/cm7/7.11.3.31/redhat9/yum/cloudera-manager.repo
RHEL 8 Compatible
  Repository: https://username:password@archive.cloudera.com/p/cm7/7.11.3.31/redhat8/yum
  Repository File: https://username:password@archive.cloudera.com/p/cm7/7.11.3.31/redhat8/yum/cloudera-manager.repo
RHEL 7 Compatible
  Repository: https://username:password@archive.cloudera.com/p/cm7/7.11.3.31/redhat7/yum
  Repository File: https://username:password@archive.cloudera.com/p/cm7/7.11.3.31/redhat7/yum/cloudera-manager.repo
SLES 15
  Repository: https://username:password@archive.cloudera.com/p/cm7/7.11.3.31/sles15/yum
  Repository File: https://username:password@archive.cloudera.com/p/cm7/7.11.3.31/sles15/yum/cloudera-manager.repo
SLES 12
  Repository: https://username:password@archive.cloudera.com/p/cm7/7.11.3.31/sles12/yum
  Repository File: https://username:password@archive.cloudera.com/p/cm7/7.11.3.31/sles12/yum/cloudera-manager.repo
Ubuntu 22
  Repository: https://username:password@archive.cloudera.com/p/cm7/7.11.3.31/ubuntu2204/apt
  Repository File: https://username:password@archive.cloudera.com/p/cm7/7.11.3.31/ubuntu2204/apt/cloudera-manager.list
Ubuntu 20
  Repository: https://username:password@archive.cloudera.com/p/cm7/7.11.3.31/ubuntu2004/apt
  Repository File: https://username:password@archive.cloudera.com/p/cm7/7.11.3.31/ubuntu2004/apt/cloudera-manager.list