Cloudera Security Bulletins
This topic lists the security bulletins that have been released to address vulnerabilities in Cloudera Enterprise and its components.
- Cloudera Enterprise
- Cloudera Data Science Workbench
- Apache Hadoop
- Apache HBase
- Apache Hive
- Hue
- Apache Impala
- Apache Kafka
- Cloudera Manager
- Cloudera Navigator
- Cloudera Navigator Key Trustee
- Apache Oozie
- Cloudera Search
- Apache Sentry
- Apache Spark
- Cloudera Distribution of Apache Spark 2
- Apache ZooKeeper
Cloudera Enterprise
This section lists security bulletins for vulnerabilities that potentially affect the entire Cloudera Enterprise product suite. Bulletins specific to a single component, such as Cloudera Manager, Impala, or Spark, can be found in the sections that follow.
Potentially Sensitive Information in Cloudera Diagnostic Support Bundles
Cloudera Manager transmits certain diagnostic data (or "bundles") to Cloudera. These diagnostic bundles are used by the Cloudera support team to reproduce, debug, and address technical issues for customers.
Cloudera support discovered that potentially sensitive data may be included in diagnostic bundles and transmitted to Cloudera. This sensitive data cannot be used by Cloudera for any purpose.
Cloudera has modified Cloudera Manager so that known sensitive data is redacted from the bundles before transmission to Cloudera. Work is in progress in Cloudera CDH components to remove logging and output of known potentially sensitive properties and configurations.
See Cloudera Manager Release Notes, specifically, What's New in Cloudera Manager 5.9.0 for more information (scroll to Diagnostic Bundles). Also see Sensitive Data Redaction in the Cloudera Security Guide for more information about bundles and redaction.
Cloudera strives to establish and follow best practices for the protection of customer information. Cloudera continually reviews and improves security practices, infrastructure, and data-handling policies.
Products affected: Cloudera CDH and Enterprise Editions
Releases affected: All Cloudera CDH and Enterprise Edition releases lower than 5.9.0
Users affected: All users
Date/time of detection: June 20th, 2016
Severity (Low/Medium/High): Medium
Impact: Possible logging and transmission of sensitive data
CVE: CVE-2016-5724
Immediate action required: Upgrade to Cloudera CDH and Enterprise Editions 5.9
Addressed in release/refresh/patch: Cloudera CDH and Enterprise Editions 5.9 and higher
For updates about this issue, see the Cloudera Knowledge article, TSB 2016-166: Potentially Sensitive Information in Cloudera Diagnostic Support Bundles.
Apache Commons Collections Deserialization Vulnerability
Cloudera has learned of a potential security vulnerability in a third-party library called the Apache Commons Collections. This library is used in products distributed and supported by Cloudera (“Cloudera Products”), including core Apache Hadoop. The Apache Commons Collections library is also in widespread use beyond the Hadoop ecosystem. At this time, no specific attack vector for this vulnerability has been identified as present in Cloudera Products.
In an abundance of caution, we are currently in the process of incorporating a version of the Apache Commons Collections library with a fix into the Cloudera Products. In most cases, this will require coordination with the projects in the Apache community. One example of this is tracked by HADOOP-12577.
The Apache Commons Collections potential security vulnerability is titled “Arbitrary remote code execution with InvokerTransformer” and is tracked by COLLECTIONS-580. MITRE has not issued a CVE, but related CVE-2015-4852 has been filed for the vulnerability. CERT has issued Vulnerability Note #576313 for this issue.
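Until a fixed release is installed, it can be useful to locate where the library is bundled so its version can be reviewed. A minimal sketch, assuming a parcel-based installation under the default /opt/cloudera/parcels directory:
# List bundled Apache Commons Collections jars so their versions can be reviewed
find /opt/cloudera/parcels -name 'commons-collections*.jar' 2>/dev/null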
Cloudera Products affected: Cloudera Manager, Cloudera Navigator, Cloudera Director, CDH
Releases affected: CDH 5.5.0, CDH 5.4.8 and lower, Cloudera Manager 5.5.0, Cloudera Manager 5.4.8 and lower, Cloudera Navigator 2.4.0, Cloudera Navigator 2.3.8 and lower, Director 1.5.1 and lower
Users affected: All
Date/time of detection: Nov 7, 2015
Severity (Low/Medium/High): High
Impact: This potential vulnerability might enable an attacker to run arbitrary code from a remote machine without requiring authentication.
Immediate action required: Upgrade to the latest suitable version containing this fix when it is available.
Addressed in release/refresh/patch: Beginning with CDH 5.5.1, 5.4.9, and 5.3.9, Cloudera Manager 5.5.1, 5.4.9, and 5.3.9, Cloudera Navigator 2.4.1, 2.3.9 and 2.2.9, and Director 1.5.2, the new Apache Commons Collections library version is included in all Cloudera products.
Heartbleed Vulnerability in OpenSSL
The Heartbleed vulnerability is a serious vulnerability in OpenSSL as described at http://heartbleed.com/ (OpenSSL TLS heartbeat read overrun, CVE-2014-0160). Cloudera products do not ship with OpenSSL, but some components use this library. Customers using OpenSSL with Cloudera products need to update their OpenSSL library to one that doesn’t contain the vulnerability.
Releases affected: All versions of OpenSSL 1.0.1 prior to 1.0.1g.
Cloudera components use OpenSSL in the following scenarios:
- Hadoop Pipes uses OpenSSL.
- If SSL encryption is enabled for Impala's RPC implementation (by setting --ssl_server_certificate). This applies to any of the three Impala daemon processes: impalad, catalogd, and statestored.
- If HTTPS is enabled for Impala's debug web server pages (by setting --webserver_certificate_file). This applies to any of the three Impala daemon processes: impalad, catalogd, and statestored.
- If HTTPS is used with Hue.
- Cloudera Manager agents, with TLS turned on, use OpenSSL.
Users affected: All users of the above scenarios.
Severity: High (If using the scenarios above)
CVE: CVE-2014-0160
Immediate action required: Ensure your Linux distribution's OpenSSL version does not have the vulnerability.
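One way to check is shown below; note that many Linux distributions backport the Heartbleed fix without changing the base 1.0.1 version string, so the package changelog is the more reliable signal. A sketch for an RPM-based system:
openssl version
# Check whether the distribution backported the CVE-2014-0160 fix
rpm -q --changelog openssl | grep -i 'CVE-2014-0160'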
“POODLE” Vulnerability on SSL/TLS enabled ports
The POODLE (Padding Oracle On Downgraded Legacy Encryption) attack, announced by Bodo Möller, Thai Duong, and Krzysztof Kotowicz at Google, forces the use of the obsolete SSLv3 protocol and then exploits a cryptographic flaw in SSLv3. The result is that an attacker on the same network as the victim can potentially decrypt parts of an otherwise encrypted channel.
SSLv3 has been obsolete, and known to have vulnerabilities, for many years now, but its retirement has been slow because of backward-compatibility concerns. SSLv3 has in the meantime been replaced by TLSv1, TLSv1.1, and TLSv1.2. Under normal circumstances, the strongest protocol version that both sides support is negotiated at the start of the connection. However, an attacker can introduce errors into this negotiation and force a fallback to the weakest protocol version -- SSLv3.
The only solution to the POODLE attack is to completely disable SSLv3. This requires changes across a wide variety of components of CDH, and in Cloudera Manager.
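To verify that SSLv3 is actually disabled on a given service port, you can attempt an SSLv3-only handshake with the OpenSSL client. A sketch; the host and port are placeholders:
# The handshake should fail once SSLv3 has been disabled on the server
# (requires a local OpenSSL client still built with SSLv3 support)
openssl s_client -connect host.example.com:7183 -ssl3 < /dev/null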
Products affected: Cloudera Manager and CDH.
Releases affected: All releases prior to:
- Cloudera Manager and CDH 5.2.1
- Cloudera Manager and CDH 5.1.4
- Cloudera Manager and CDH 5.0.5
- CDH 4.7.1
- Cloudera Manager 4.8.5
Users affected: All users
Date and time of detection: October 14th, 2014.
Severity (Low/Medium/High): Medium. NIST rates the severity at 4.3 out of 10.
Impact: Allows unauthorized disclosure of information; allows component impersonation.
CVE: CVE-2014-3566
Immediate action required:
- If you are running Cloudera Manager and CDH 5.2.0, upgrade to Cloudera Manager and CDH 5.2.1
- If you are running Cloudera Manager and CDH 5.1.0 through 5.1.3, upgrade to Cloudera Manager and CDH 5.1.4
- If you are running Cloudera Manager and CDH 5.0.0 through 5.0.4, upgrade to Cloudera Manager and CDH 5.0.5
- If you are running a CDH version earlier than 4.7.1, upgrade to CDH 4.7.1
- If you are running a Cloudera Manager version earlier than 4.8.5, upgrade to Cloudera Manager 4.8.5
Cloudera Data Science Workbench
This section lists the security bulletins that have been released for Cloudera Data Science Workbench.
- TSB-349: SQL Injection Vulnerability in Cloudera Data Science Workbench
- TSB-350: Risk of Data Loss During Cloudera Data Science Workbench (CDSW) Shutdown and Restart
- TSB-351: Unauthorized Project Access in Cloudera Data Science Workbench
- TSB-346: Risk of Data Loss During Cloudera Data Science Workbench (CDSW) Shutdown and Restart
- TSB-328: Unauthenticated User Enumeration in Cloudera Data Science Workbench
- TSB-313: Remote Command Execution and Information Disclosure in Cloudera Data Science Workbench
- TSB-248: Privilege Escalation and Database Exposure in Cloudera Data Science Workbench
TSB-349: SQL Injection Vulnerability in Cloudera Data Science Workbench
An SQL injection vulnerability was found in Cloudera Data Science Workbench. This would allow any authenticated user to run arbitrary queries against CDSW’s internal database. The database contains user contact information, bcrypt-hashed CDSW passwords (in the case of local authentication), API keys, and stored Kerberos keytabs.
Products affected: Cloudera Data Science Workbench (CDSW)
Releases affected: CDSW 1.4.0, 1.4.1, 1.4.2
Users affected: All
Date/time of detection: 2018-10-18
Detected by: Milan Magyar (Cloudera)
Severity (Low/Medium/High): Critical (9.9): CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H
Impact: An authenticated CDSW user can arbitrarily access and modify the CDSW internal database. This allows privilege escalation in CDSW, Kubernetes, and the Linux host; creation, deletion, modification, and exfiltration of data, code, and credentials; denial of service; and data loss.
CVE: CVE-2018-20091
Immediate action required:
- Strongly consider performing a backup before beginning. We advise you to have a backup before performing any upgrade and before beginning this remediation work.
- Upgrade to Cloudera Data Science Workbench 1.4.3 (or higher).
- In an abundance of caution, Cloudera recommends that you revoke credentials and secrets stored by CDSW. To revoke these credentials:
  - Change the password for any account with a keytab or Kerberos credential that has been stored in CDSW. This includes the Kerberos principals for the associated CDH cluster if entered on the CDSW "Hadoop Authentication" user settings page.
  - With Cloudera Data Science Workbench 1.4.3 running, run the following remediation script on each CDSW node, including the master and all workers: Remediation Script for TSB-349.
    Note: Cloudera Data Science Workbench will become unavailable during this time.
    The script performs the following actions:
    - If using local user authentication, logs out every user and resets their CDSW password.
    - Regenerates or deletes various keys for every user.
    - Resets secrets used for internal communications.
  - Fully stop and start Cloudera Data Science Workbench (a restart is not sufficient):
    - For CSD-based deployments, restart the CDSW service in Cloudera Manager.
    - Or, for RPM-based deployments, run cdsw stop followed by cdsw start on the CDSW master node.
  - If using internal TLS termination: revoke and regenerate the CDSW TLS certificate and key.
  - For each user, revoke the previous CDSW-generated SSH public key for git integration on the git side (the private key in CDSW has already been deleted). A new SSH key pair has already been generated and should be installed in the old key's place.
  - Revoke and regenerate any credential stored within a CDSW project, including any passwords stored in projects' environment variables.
- Verify all CDSW settings to ensure they are unchanged (for example, SMTP server, authentication settings, custom docker images, host mounts).
- Treat all CDSW hosts as potentially compromised with root access. Remediate per your policy.
Addressed in release/refresh/patch: Cloudera Data Science Workbench 1.4.3
For the latest update on this issue, see the corresponding Knowledge article.
TSB-350: Risk of Data Loss During Cloudera Data Science Workbench (CDSW) Shutdown and Restart
Stopping Cloudera Data Science Workbench involves unmounting the NFS volumes that store CDSW project directories and then cleaning up a folder where CDSW stores its temporary state. However, due to a race condition, this NFS unmount process can take too long or fail altogether. If this happens, any CDSW projects that remain mounted will be deleted.
TSB-2018-346 was released in the time frame of CDSW 1.4.2 to fix this issue, but it turned out to be only a partial fix. With CDSW 1.4.3, we have fixed the issue permanently. However, the script provided with TSB-2018-346 still ensures that data loss is prevented, and it must be used to shut down or restart all of the affected CDSW releases listed below. The same script is also available under the Immediate action required section below.
Products affected: Cloudera Data Science Workbench
Releases affected:
- 1.0.x
- 1.1.x
- 1.2.x
- 1.3.0, 1.3.1
- 1.4.0, 1.4.1, 1.4.2
Users affected: This potentially affects all CDSW users.
Detected by: Nehmé Tohmé (Cloudera)
Severity (Low/Medium/High): High
Impact: Potential data loss.
CVE: N/A
Immediate action required: If you are running any of the affected Cloudera Data Science Workbench versions, you must run the following script on the CDSW master node every time before you stop or restart Cloudera Data Science Workbench. Failure to do so can result in data loss.
This script should also be run before initiating a Cloudera Data Science Workbench upgrade. As always, we recommend creating a full backup prior to beginning an upgrade.
cdsw_protect_stop_restart.sh - Available for download at: cdsw_protect_stop_restart.sh.
#!/bin/bash
set -e

cat << EXPLANATION
This script is a workaround for Cloudera TSB-346 and TSB-350. It protects your
CDSW projects from a rare race condition that can result in data loss.

Run this script before stopping the CDSW service, irrespective of whether the
stop precedes a restart, upgrade, or any other task.

Run this script only on the master node of your CDSW cluster.

You will be asked to specify a target folder on the master node where the
script will save a backup of all your project files. Make sure the target
folder has enough free space to accommodate all of your project files. To
determine how much space is required, run 'du -hs /var/lib/cdsw/current/projects'
on the CDSW master node.

This script will first back up your project files to the specified target
folder. It will then temporarily move your project files aside to protect
against the data loss condition. At that point, it is safe to stop the CDSW
service. After CDSW has stopped, the script will move the project files back
into place.

Note: This workaround is not required for CDSW 1.4.3 and higher.
EXPLANATION

read -p "Enter target folder for backups: " backup_target

echo "Backing up to $backup_target..."
rsync -azp /var/lib/cdsw/current/projects "$backup_target"

read -n 1 -p "Backup complete. Press enter when you are ready to stop CDSW: "

echo "Deleting all Kubernetes resources..."
kubectl delete configmaps,deployments,daemonsets,replicasets,services,ingress,secrets,persistentvolumes,persistentvolumeclaims,jobs --all
kubectl delete pods --all

echo "Temporarily saving project files to /var/lib/cdsw/current/projects_tmp..."
mkdir /var/lib/cdsw/current/projects_tmp
mv /var/lib/cdsw/current/projects/* /var/lib/cdsw/current/projects_tmp

echo -e "Please stop the CDSW service."
read -n 1 -p "Press enter when CDSW has stopped: "

echo "Moving projects back into place..."
mv /var/lib/cdsw/current/projects_tmp/* /var/lib/cdsw/current/projects
rm -rf /var/lib/cdsw/current/projects_tmp

echo -e "Done. You may now upgrade or start the CDSW service."
echo -e "When CDSW is running, if desired, you may delete the backup data at $backup_target"
Addressed in release/refresh/patch: This issue is fixed in Cloudera Data Science Workbench 1.4.3.
Note that you are required to run the workaround script above when you upgrade from an affected version to a release with the fix. This helps guard against data loss when the affected version needs to be shut down during the upgrade process.
For the latest update on this issue see the corresponding Knowledge article:
TSB 2019-350: Risk of Data Loss During Cloudera Data Science Workbench (CDSW) Shutdown and Restart
TSB-351: Unauthorized Project Access in Cloudera Data Science Workbench
Malicious CDSW users can bypass project permission checks and gain read-write access to any project folder in CDSW.
Products affected: Cloudera Data Science Workbench
Releases affected: Cloudera Data Science Workbench 1.4.0, 1.4.1, 1.4.2
Users affected: All CDSW Users
Date/time of detection: 10/29/2018
Detected by: Che-Yuan Liang (Cloudera)
Severity (Low/Medium/High): High (8.3: CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:L)
Impact: Project data can be read or written (changed, destroyed) by any Cloudera Data Science Workbench user.
CVE: CVE-2018-20090
Immediate action required:
Upgrade to a version of Cloudera Data Science Workbench with the fix (version 1.4.3, 1.5.0, or higher).
Addressed in release/refresh/patch: Cloudera Data Science Workbench 1.4.3 (and higher)
For the latest update on this issue see the corresponding Knowledge article:
TSB 2019-351: Unauthorized Project Access in Cloudera Data Science Workbench
TSB-346: Risk of Data Loss During Cloudera Data Science Workbench (CDSW) Shutdown and Restart
Stopping Cloudera Data Science Workbench involves unmounting the NFS volumes that store CDSW project directories and then cleaning up a folder where the kubelet stores its temporary state. However, due to a race condition, this NFS unmount process can take too long or fail altogether. If this happens, CDSW projects that remain mounted will be deleted by the cleanup step.
Products affected: Cloudera Data Science Workbench
Releases affected:
- 1.0.x
- 1.1.x
- 1.2.x
- 1.3.0, 1.3.1
- 1.4.0, 1.4.1
Users affected: This potentially affects all CDSW users.
Detected by: Nehmé Tohmé (Cloudera)
Severity (Low/Medium/High): High
Impact: If the NFS unmount fails during shutdown, data loss can occur. All CDSW project files might be deleted.
CVE: N/A
Immediate action required: If you are running any of the affected Cloudera Data Science Workbench versions, you must run the following script on the CDSW master node every time before you stop or restart Cloudera Data Science Workbench. Failure to do so can result in data loss.
This script should also be run before initiating a Cloudera Data Science Workbench upgrade. As always, we recommend creating a full backup prior to beginning an upgrade.
cdsw_protect_stop_restart.sh - Available for download at: cdsw_protect_stop_restart.sh.
#!/bin/bash
set -e

cat << EXPLANATION
This script is a workaround for Cloudera TSB-346. It protects your
CDSW projects from a rare race condition that can result in data loss.

Run this script before stopping the CDSW service, irrespective of whether the
stop precedes a restart, upgrade, or any other task.

Run this script only on the master node of your CDSW cluster.

You will be asked to specify a target folder on the master node where the
script will save a backup of all your project files. Make sure the target
folder has enough free space to accommodate all of your project files. To
determine how much space is required, run 'du -hs /var/lib/cdsw/current/projects'
on the CDSW master node.

This script will first back up your project files to the specified target
folder. It will then temporarily move your project files aside to protect
against the data loss condition. At that point, it is safe to stop the CDSW
service. After CDSW has stopped, the script will move the project files back
into place.

Note: This workaround is not required for CDSW 1.4.2 and higher.
EXPLANATION

read -p "Enter target folder for backups: " backup_target

echo "Backing up to $backup_target..."
rsync -azp /var/lib/cdsw/current/projects "$backup_target"

read -n 1 -p "Backup complete. Press enter when you are ready to stop CDSW: "

echo "Deleting all Kubernetes resources..."
kubectl delete configmaps,deployments,daemonsets,replicasets,services,ingress,secrets,persistentvolumes,persistentvolumeclaims,jobs --all
kubectl delete pods --all

echo "Temporarily saving project files to /var/lib/cdsw/current/projects_tmp..."
mkdir /var/lib/cdsw/current/projects_tmp
mv /var/lib/cdsw/current/projects/* /var/lib/cdsw/current/projects_tmp

echo -e "Please stop the CDSW service."
read -n 1 -p "Press enter when CDSW has stopped: "

echo "Moving projects back into place..."
mv /var/lib/cdsw/current/projects_tmp/* /var/lib/cdsw/current/projects
rm -rf /var/lib/cdsw/current/projects_tmp

echo -e "Done. You may now upgrade or start the CDSW service."
echo -e "When CDSW is running, if desired, you may delete the backup data at $backup_target"
Addressed in release/refresh/patch: This issue is fixed in Cloudera Data Science Workbench 1.4.2.
Note that you are required to run the workaround script above when you upgrade from an affected version to a release with the fix. This helps guard against data loss when the affected version needs to be shut down during the upgrade process.
For the latest update on this issue see the corresponding Knowledge article:
TSB 2018-346: Risk of Data Loss During Cloudera Data Science Workbench (CDSW) Shutdown and Restart
TSB-328: Unauthenticated User Enumeration in Cloudera Data Science Workbench
Unauthenticated users can get a list of user accounts of Cloudera Data Science Workbench.
Products affected: Cloudera Data Science Workbench
Releases affected: Cloudera Data Science Workbench 1.4.0 (and lower)
Users affected: All users of Cloudera Data Science Workbench 1.4.0 (and lower)
Date/time of detection: June 11, 2018
Severity (Low/Medium/High): 5.3 (Medium) CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N
Impact: Unauthenticated user enumeration in Cloudera Data Science Workbench.
CVE: CVE-2018-15665
Immediate action required: Upgrade to the latest version of Cloudera Data Science Workbench (1.4.2 or higher).
Note that Cloudera Data Science Workbench 1.4.1 is no longer publicly available due to TSB 2018-346: Risk of Data Loss During Cloudera Data Science Workbench (CDSW) Shutdown and Restart.
Addressed in release/refresh/patch: Cloudera Data Science Workbench 1.4.2 (and higher)
For the latest update on this issue see the corresponding Knowledge article:
TSB 2018-318: Unauthenticated User Enumeration in Cloudera Data Science Workbench
TSB-313: Remote Command Execution and Information Disclosure in Cloudera Data Science Workbench
A configuration issue in Kubernetes used by Cloudera Data Science Workbench can allow remote command execution and privilege escalation in CDSW. A separate information permissions issue can cause the LDAP bind password to be exposed to authenticated CDSW users when LDAP bind search is enabled.
Products affected: Cloudera Data Science Workbench
Releases affected: Cloudera Data Science Workbench 1.3.0 (and lower)
Users affected: All users of Cloudera Data Science Workbench 1.3.0 (and lower)
Date/time of detection: May 16, 2018
Severity (Low/Medium/High): High
Impact: Remote command execution and information disclosure
CVE: CVE-2018-11215
Immediate action required: Upgrade to the latest version of Cloudera Data Science Workbench (1.3.1 or higher) and change the LDAP bind password if previously configured in Cloudera Data Science Workbench.
Addressed in release/refresh/patch: Cloudera Data Science Workbench 1.3.1 (and higher)
For the latest update on this issue, see the corresponding Knowledge Base article.
TSB-248: Privilege Escalation and Database Exposure in Cloudera Data Science Workbench
Several web application vulnerabilities allow malicious authenticated Cloudera Data Science Workbench (CDSW) users to escalate privileges in CDSW. In combination, such users can exploit these vulnerabilities to gain root access to CDSW nodes, gain access to the CDSW database, which includes Kerberos keytabs of CDSW users and bcrypt-hashed passwords, and obtain other privileged information such as session tokens, invitation tokens, and environment variables.
Products affected: Cloudera Data Science Workbench
Releases affected: Cloudera Data Science Workbench 1.0.0, 1.0.1, 1.1.0, 1.1.1
Users affected: All users of Cloudera Data Science Workbench 1.0.0, 1.0.1, 1.1.0, 1.1.1
Date/time of detection: September 1, 2017
Detected by: NCC Group
Severity (Low/Medium/High): High
Impact: Privilege escalation and database exposure.
CVE: CVE-2017-15536
Immediate action required: Upgrade to the latest version of Cloudera Data Science Workbench.
Addressed in release/refresh/patch: Cloudera Data Science Workbench 1.2.0 or higher.
Apache Hadoop
This section lists the security bulletins that have been released for Apache Hadoop.
- XSS Cloudera Manager
- CVE-2018-1296 Permissive Apache Hadoop HDFS listXAttr Authorization Exposes Extended Attribute Key/Value Pairs
- Hadoop YARN Privilege Escalation
- Zip Slip Vulnerability
- Apache Hadoop MapReduce Job History Server (JHS) vulnerability
- No security exposure due to CVE-2017-3162 for Cloudera Hadoop clusters
- Cross-site scripting exposure (CVE-2017-3161) not an issue for Cloudera Hadoop
- Apache YARN NodeManager Password Exposure
- Short-Circuit Read Vulnerability
- Apache Hadoop Privilege Escalation Vulnerability
- Encrypted MapReduce spill data on the local file system is vulnerable to unauthorized disclosure
- Critical Security Related Files in YARN NodeManager Configuration Directories Accessible to Any User
- Apache Hadoop Distributed Cache Vulnerability
- Some DataNode Admin Commands Do Not Check If Caller Is An HDFS Admin
- JobHistory Server Does Not Enforce ACLs When Web Authentication is Enabled
- Apache Hadoop and Apache HBase "Man-in-the-Middle" Vulnerability
- DataNode Client Authentication Disabled After NameNode Restart or HA Enable
- Several Authentication Token Types Use Secret Key of Insufficient Length
- MapReduce with Security
XSS Cloudera Manager
Malicious Impala queries can result in Cross Site Scripting (XSS) when viewed in Cloudera Manager.
Products affected: Impala
Releases affected:
- Cloudera Manager 5.13.x, 5.14.x, 5.15.1, 5.15.2, 5.16.1
- Cloudera Manager 6.0.0, 6.0.1, 6.1.0
Users affected: All Cloudera Manager Users
Date/time of detection: November 2018
Severity: High
Impact: When a malicious user generates a piece of JavaScript in the impala-shell and then goes to the Queries tab of the Impala service in Cloudera Manager, that piece of JavaScript code gets evaluated, resulting in an XSS.
CVE: CVE-2019-14449
Upgrade: Update to a version of Cloudera Manager containing the fix.
Workaround: There is no workaround; upgrade to the latest available maintenance release.
Addressed in release/refresh/patch:
- Cloudera Manager 5.16.2
- Cloudera Manager 6.0.2, 6.1.1, 6.2.0, 6.3.0
For the latest update on this issue, see the corresponding Knowledge article.
CVE-2018-1296 Permissive Apache Hadoop HDFS listXAttr Authorization Exposes Extended Attribute Key/Value Pairs
HDFS exposes extended attribute key/value pairs during listXAttrs, verifying only path-level search access to the directory rather than path-level read permission to the referent.
Products affected: HDFS
Releases affected:
- CDH 5.4.1-5.15.1, 5.16.0
- CDH 6.0.0, 6.0.1, 6.1.0
Users affected: Users who store sensitive data in extended attributes, such as users of HDFS encryption.
Detected by: Rushabh Shah, Yahoo! Inc., Hadoop committer
Date/time of detection: December 12, 2017
Severity: Medium
Impact: HDFS exposes extended attribute key/value pairs during listXAttrs, verifying only path-level search access to the directory rather than path-level read permission to the referent. This affects features that store sensitive data in extended attributes.
CVE: CVE-2018-1296
Upgrade: Update to a version of CDH containing the fix.
Workaround: If a file contains sensitive data in extended attributes, users and admins need to change the permission to prevent others from listing the directory that contains the file.
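For example (the path below is hypothetical), removing group and other permissions means only the directory's owner can list it, which blocks other users from reaching listXAttrs on the files inside:
# Restrict directory listing to the owner
hdfs dfs -chmod 700 /user/alice/encrypted-data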
Addressed in release/refresh/patch:
- CDH 5.16.2, 5.16.1
- CDH 6.0.1, CDH 6.1.0 and higher
For the latest update on this issue, see the corresponding Knowledge article.
Hadoop YARN Privilege Escalation
A vulnerability in Hadoop YARN allows a user who can escalate to the yarn user to possibly run arbitrary commands as the root user.
Products affected: Hadoop YARN
Releases affected:
- All releases prior to CDH 5.16.0
- CDH 5.16.0 and CDH 5.16.1
- CDH 6.0.0
Users affected: Users running the Hadoop YARN service.
Detected by: Cloudera
Date/time of detection: 05/07/2018
Severity: High
Impact: The vulnerability allows a user who has access to a node in the cluster running a YARN NodeManager and who can escalate to the yarn user to run arbitrary commands as the root user even if the user is not allowed to escalate directly to the root user.
CVE: CVE-2018-8029
Upgrade: Upgrade to a release where the issue is fixed.
Workaround: The vulnerability can be mitigated by restricting access to the nodes where the YARN NodeManagers are deployed, and by ensuring that only the yarn user is a member of the yarn group and can sudo to yarn. Please consult with your internal system administration team and adhere to your internal security policy when evaluating the feasibility of the above mitigation steps.
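A sketch of such a review on a NodeManager host; the admin user name below is a placeholder:
# Only the yarn user itself should appear in the yarn group
getent group yarn
# Review which users may escalate to yarn through sudo (run as root)
sudo -l -U admin_user | grep -i yarn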
Addressed in release/refresh/patch:
- CDH 5.16.2
- CDH 6.0.1, CDH 6.1.0 and higher
For the latest update on this issue see the corresponding Knowledge article:
TSB 2019-318: CVE-2018-8029 Hadoop YARN privilege escalation
Zip Slip Vulnerability
“Zip Slip” is a widespread arbitrary file overwrite critical vulnerability, which typically results in remote command execution. It was discovered and responsibly disclosed by the Snyk Security team ahead of a public disclosure on June 5, 2018, and affects thousands of projects.
Cloudera has analyzed our use of zip-related software, and has determined that only Apache Hadoop is vulnerable to this class of vulnerability in CDH 5. This has been fixed in upstream Hadoop as CVE-2018-8009.
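The vulnerability class relies on archive entries whose names contain traversal sequences such as ../. As a rough screen before extracting an untrusted archive (the file names below are placeholders), you can inspect the entry names:
# Flag entries that would escape the extraction directory
tar -tf suspect-archive.tar.gz | grep -F '../'
unzip -l suspect-archive.zip | grep -F '../'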
Products affected: Hadoop
Releases affected:
- CDH 5.12.x and all prior releases
- CDH 5.13.0, 5.13.1, 5.13.2, 5.13.3
- CDH 5.14.0, 5.14.2, 5.14.3
- CDH 5.15.0
Users affected: All
Date of detection: April 19, 2018
Detected by: Snyk
Severity: High
Impact: Zip Slip is a form of directory traversal that can be exploited by extracting files from an archive. The premise of the directory traversal vulnerability is that an attacker can gain access to parts of the file system outside of the target folder in which they should reside. The attacker can then overwrite executable files and either invoke them remotely or wait for the system or user to call them, thus achieving remote command execution on the victim’s machine. The vulnerability can also cause damage by overwriting configuration files or other sensitive resources, and can be exploited on both client (user) machines and servers.
CVE: CVE-2018-8009
Immediate action required: Upgrade to a version that contains the fix.
Addressed in release/refresh/patch: CDH 5.14.4
For the latest update on this issue see the corresponding Knowledge article:
TSB 2018-307: Zip Slip Vulnerability
Apache Hadoop MapReduce Job History Server (JHS) vulnerability
A vulnerability in Hadoop’s Job History Server allows a cluster user to expose private files owned by the user running the MapReduce Job History Server (JHS) process. See http://seclists.org/oss-sec/2018/q1/79 for reference.
Products affected: Apache Hadoop MapReduce
Releases affected: All releases prior to CDH 5.12.0. CDH 5.12.0, CDH 5.12.1, CDH 5.12.2, CDH 5.13.0, CDH 5.13.1, CDH 5.14.0
Users affected: Users running the MapReduce Job History Server (JHS) daemon
Date/time of detection: November 8, 2017
Detected by: Man Yue Mo of lgtm.com
Severity (Low/Medium/High): High
Impact: The vulnerability allows a cluster user to expose private files owned by the user running the MapReduce Job History Server (JHS) process. The malicious user can construct a configuration file containing XML directives that reference sensitive files on the MapReduce Job History Server (JHS) host.
CVE: CVE-2017-15713
Immediate action required: Upgrade to a release where the issue is fixed.
Addressed in release/refresh/patch: CDH 5.13.2
No security exposure due to CVE-2017-3162 for Cloudera Hadoop clusters
Information only. No action required. In the spirit of being overly cautious, CVE-2017-3162 was filed by the Apache Hadoop community to document the ability of the HDFS client (in the CDH 5.x code base) to browse the HDFS namespace without validating the NameNode supplied as a query parameter.
This benign exposure was discovered independently by Cloudera (as well as by other members of the Hadoop community) during routine static source code analysis. It is considered benign because there are no known attack vectors for this vulnerability.
Products affected: N/A
Releases affected: CDH 5.x and prior.
Users affected: None
Severity (Low/Medium/High): None
Impact: No impact to Cloudera customers or others running Hadoop clusters.
CVE: CVE-2017-3162
Immediate action required: No action required.
Addressed in release/refresh/patch: Not applicable.
Cross-site scripting exposure (CVE-2017-3161) not an issue for Cloudera Hadoop
Information only: No action required. A vulnerability recently uncovered by the wider security community had already been caught and resolved by Cloudera.
Products affected: Hadoop
Releases affected: CDH releases prior to 5.2.6; specifically, the HDFS web UI would have been exposed to this vulnerability.
Users affected: None
Severity (Low/Medium/High): N/A
Impact: No impact to Cloudera customers or others running Hadoop clusters.
CVE: CVE-2017-3161
Immediate action required: No action required.
Addressed in release/refresh/patch:
- CDH 5.2.6
- CDH 5.3.4, CDH 5.3.5, CDH 5.3.6, CDH 5.3.8, CDH 5.3.9, CDH 5.3.10
- CDH 5.4.3, CDH 5.4.4, CDH 5.4.5, CDH 5.4.7, CDH 5.4.8, CDH 5.4.9, CDH 5.4.10, CDH 5.4.11
- CDH 5.5.0 and all higher releases
Apache YARN NodeManager Password Exposure
The YARN NodeManager in Apache Hadoop may leak the password for its credential store. This credential store is created by Cloudera Manager and contains sensitive information used by the NodeManager. Any container launched by that NodeManager can gain access to the password that protects the credential store.
Examples of sensitive information inside the credential store include a keystore password and an LDAP bind user password.
The credential store is also protected by Unix file permissions. When managed by Cloudera Manager, the credential store is readable only by the yarn user and the hadoop group. As a result, the scope of this leak is mitigated, making this a Low severity issue.
Products affected: YARN
Releases affected:
- CDH 5.4.0, 5.4.1, 5.4.2, 5.4.3, 5.4.4, 5.4.5, 5.4.7, 5.4.8, 5.4.9, 5.4.10
- CDH 5.5.0, 5.5.1, 5.5.2, 5.5.3, 5.5.4
- CDH 5.6.0, 5.6.1
- CDH 5.7.0, 5.7.1, 5.7.2
- CDH 5.8.0, 5.8.1
Users affected: Cloudera Manager users who configure YARN to connect to external services (such as LDAP) that require a password, or who have enabled TLS for YARN.
Date/time of detection: March 15, 2016
Detected by: Robert Kanter
Severity (Low/Medium/High): Low (The credential store itself has restrictive permissions.)
Impact: Potential sensitive data exposure
CVE: CVE-2016-3086
Immediate action required: Upgrade to a release in which this has been addressed or higher.
Addressed in release/refresh/patch: CDH 5.4.11, CDH 5.5.5, CDH 5.6.2, CDH 5.7.3, CDH 5.8.2
Short-Circuit Read Vulnerability
In HDFS short-circuit reads, a local user on an HDFS DataNode may be able to create a block token that grants unauthorized read access to random files by guessing certain fields in the token.
Products affected: HDFS
Releases affected:
- CDH 5.0.0, 5.0.1, 5.0.2, 5.0.3, 5.0.4, 5.0.5, 5.0.6
- CDH 5.1.0, 5.1.2, 5.1.3, 5.1.4, 5.1.5
- CDH 5.2.0, 5.2.1, 5.2.3, 5.2.4, 5.2.5, 5.2.6
- CDH 5.3.0, 5.3.2, 5.3.3, 5.3.4, 5.3.5, 5.3.6, 5.3.8, 5.3.9, 5.3.10
- CDH 5.4.0, 5.4.1, 5.4.3, 5.4.4, 5.4.5, 5.4.7, 5.4.8, 5.4.9, 5.4.10, 5.4.11
- CDH 5.5.0, 5.5.1, 5.5.2, 5.5.4, 5.5.5, 5.5.6
- CDH 5.6.0, 5.6.1
Users affected: All HDFS users
Detected by: Kihwal Lee of Yahoo Inc.
Severity (Low/Medium/High): Medium
Impact: A local user may be able to gain unauthorized read access to block data.
CVE: CVE-2016-5001
Immediate action required: Upgrade to a fixed version.
Addressed in release/refresh/patch: CDH 5.7.0 and higher, CDH 5.8.0 and higher, CDH 5.9.0 and higher.
For the latest update on this issue, see the corresponding Knowledge article.
Apache Hadoop Privilege Escalation Vulnerability
A remote user who can authenticate with the HDFS NameNode can possibly run arbitrary commands as the hdfs user.
See CVE-2016-5393 Apache Hadoop Privilege escalation vulnerability
Products affected: HDFS and YARN
Releases affected:
- CDH 5.0.0, 5.0.1, 5.0.2, 5.0.3, 5.0.4, 5.0.5, 5.0.6
- CDH 5.1.0, 5.1.2, 5.1.3, 5.1.4, 5.1.5
- CDH 5.2.0, 5.2.1, 5.2.3, 5.2.4, 5.2.5, 5.2.6
- CDH 5.3.0, 5.3.2, 5.3.3, 5.3.4, 5.3.5, 5.3.6, 5.3.8, 5.3.9, 5.3.10
- CDH 5.4.0, 5.4.1, 5.4.3, 5.4.4, 5.4.5, 5.4.7, 5.4.8, 5.4.9, 5.4.10
- CDH 5.5.0, 5.5.1, 5.5.2, 5.5.4
- CDH 5.6.0, 5.6.1
- CDH 5.7.0, 5.7.1, 5.7.2
- CDH 5.8.0
Users affected: All
Date/time of detection: July 26th, 2016
Severity (Low/Medium/High): High
Impact: A remote user who can authenticate with the HDFS NameNode can possibly run arbitrary commands with the same privileges as the HDFS service.
This vulnerability is critical because it is easy to exploit and compromises system-wide security. As a result, a remote user can potentially run any arbitrary command as the hdfs user. This bypasses all Hadoop security. There is no mitigation for this vulnerability.
CVE: CVE-2016-5393
Immediate action required: Upgrade immediately.
Addressed in release/refresh/patch: CDH 5.4.11, CDH 5.5.5, CDH 5.7.3, CDH 5.8.2, CDH 5.9.0 and higher.
Encrypted MapReduce spill data on the local file system is vulnerable to unauthorized disclosure
MapReduce spills intermediate data to the local disk. The encryption key used to encrypt this spill data is stored in clear text on the local filesystem along with the encrypted data itself. A malicious user with access to the file with these credentials can load the tokens from the file, read the key, and then decrypt the spill data.
See the upstream announcement on the Mitre site.
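For context, encryption of intermediate data is the per-job feature introduced upstream by MAPREDUCE-5890, enabled with the property shown below; this is a configuration sketch of the setting only, not of the fix:
<property>
  <name>mapreduce.job.encrypted-intermediate-data</name>
  <value>true</value>
</property>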
Products affected: MapReduce
Releases affected:
- CDH 5.2.0, CDH 5.2.1, CDH 5.2.3, CDH 5.2.4, CDH 5.2.5, CDH 5.2.6
- CDH 5.3.0, CDH 5.3.2, CDH 5.3.3, CDH 5.3.4, CDH 5.3.5, CDH 5.3.6, CDH 5.3.8, CDH 5.3.9
- CDH 5.4.0, CDH 5.4.1, CDH 5.4.3, CDH 5.4.4, CDH 5.4.5, CDH 5.4.7, CDH 5.4.8, CDH 5.4.9
- CDH 5.5.0, CDH 5.5.1, CDH 5.5.2
Users affected: Users who have enabled encryption of MapReduce intermediate/spilled data to the local filesystem
Severity (Low/Medium/High): High
CVE: CVE-2015-1776
Addressed in release/refresh/patch: CDH 5.3.10, CDH 5.4.10, CDH 5.5.4; CDH 5.6.0 and higher
Immediate action required: Upgrade to one of the above releases if you use spill data encryption. With this security fix, MapReduce ApplicationMaster failures are no longer tolerated when spill data is encrypted; after upgrading, an individual MapReduce job can fail if its ApplicationMaster goes down.
Critical Security Related Files in YARN NodeManager Configuration Directories Accessible to Any User
When Cloudera Manager starts a YARN NodeManager, it makes all files in its configuration directory (typically /var/run/cloudera-scm-agent/process) readable by all users. This includes the file containing the Kerberos keytabs (yarn.keytab) and the file containing passwords for the SSL keystore (ssl-server.xml).
Global read permissions must be removed on the NodeManager’s security-related files.
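A minimal check for world-readable security files, using the process directory named in this bulletin (exact subdirectory names vary per process run):
# List any keytabs or SSL configuration files readable by other users
find /var/run/cloudera-scm-agent/process -name 'yarn.keytab' -perm -004 -ls
find /var/run/cloudera-scm-agent/process -name 'ssl-server.xml' -perm -004 -ls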
Products affected: Cloudera Manager
Releases affected: All releases of Cloudera Manager 4.0 and higher.
Users affected: Customers who are using YARN in environments where Kerberos or SSL is enabled.
Date/time of detection: March 8, 2015
Severity (Low/Medium/High): High
Impact: Any user who can log in to a host where the YARN NodeManager is running can get access to the keytab file, use it to authenticate to the cluster, and perform unauthorized operations. If SSL is enabled, the user can also decrypt data transmitted over the network.
CVE: CVE-2015-2263
Immediate action required:
- If you are running YARN with Kerberos/SSL with Cloudera Manager 5.x, upgrade to the maintenance release with the security fix. If you are running YARN with Kerberos with Cloudera Manager 4.x, upgrade to any Cloudera Manager 5.x release with the security fix.
- Delete all "yarn" and "HTTP" principals from the KDC/Active Directory. After deleting them, regenerate them using Cloudera Manager.
- Regenerate the SSL keystores that you are using with the YARN service, using a new password.
ETA for resolution: Patches are available immediately with the release of this TSB.
Addressed in release/refresh/patch: Cloudera Manager releases 5.0.6, 5.1.5, 5.2.5, 5.3.3, and 5.4.0 have the fix for this bug.
For further updates on this issue, see the corresponding Knowledge article.
Apache Hadoop Distributed Cache Vulnerability
The Distributed Cache Vulnerability allows a malicious cluster user to expose private files owned by the user running the YARN NodeManager process. The malicious user can create a public tar archive containing a symbolic link to a local file on the host running the YARN NodeManager process.
Products affected: YARN in CDH 5.
Releases affected: All releases prior to:
- Cloudera Manager and CDH 5.2.1
- Cloudera Manager and CDH 5.1.4
- Cloudera Manager and CDH 5.0.5
Users affected: Users running the YARN NodeManager daemon with Kerberos authentication.
Severity (Low/Medium/High): High.
Impact: Allows unauthorized disclosure of information.
CVE: CVE-2014-3627
Immediate action required:
- If you are running Cloudera Manager and CDH 5.2.0, upgrade to Cloudera Manager and CDH 5.2.1
- If you are running Cloudera Manager and CDH 5.1.0 through 5.1.3, upgrade to Cloudera Manager and CDH 5.1.4
- If you are running Cloudera Manager and CDH 5.0.0 through 5.0.4, upgrade to Cloudera Manager and CDH 5.0.5
Some DataNode Admin Commands Do Not Check If Caller Is An HDFS Admin
Three HDFS admin commands (refreshNamenodes, deleteBlockPool, and shutdownDatanode) lack proper privilege checks in Apache Hadoop 0.23.x prior to 0.23.11 and 2.x prior to 2.4.1. This allows arbitrary users to make DataNodes unnecessarily refresh their federated NameNode configurations, delete inactive block pools, or shut down. The shutdownDatanode command was first introduced in 2.4.0; refreshNamenodes and deleteBlockPool were added in 0.23.0. The deleteBlockPool command does not actually remove any underlying data from affected DataNodes, so this vulnerability cannot cause data loss, although cluster operations can be severely disrupted.
Products affected: Hadoop HDFS
Releases affected: CDH 5.0.0 and CDH 5.0.1
Users affected: All users running an HDFS cluster configured with Kerberos security
Date/time of detection: April 30, 2014
Impact: Through HDFS admin command-line tools, non-admin users can shut down DataNodes or force them to perform unnecessary operations.
CVE: CVE-2014-0229
Immediate action required: Upgrade to CDH 5.0.2 or higher.
JobHistory Server Does Not Enforce ACLs When Web Authentication is Enabled
The JobHistory Server does not enforce job ACLs when web authentication is enabled. This means that any user can see details of all jobs. This only affects users who are using MRv2/YARN with HTTP authentication enabled.
Products affected: Hadoop
Releases affected:
- All versions of CDH 4.5.x up to 4.5.0
- All versions of CDH 4.4.x up to 4.4.0
- All versions of CDH 4.3.x up to 4.3.1
- All versions of CDH 4.2.x up to 4.2.2
- All versions of CDH 4.1.x up to 4.1.5
- All versions of CDH 4.0.x
- CDH 5.0.0 Beta 1
Users affected: Users of YARN who have web authentication enabled.
Date/time of detection: October 14, 2013
Impact: Low
CVE: CVE-2013-6446
Immediate action required:
- None, if you are not using MRv2/YARN with HTTP authentication.
- If you are using MRv2/YARN with HTTP authentication, upgrade to CDH 4.6.0 or CDH 5.0.0 Beta 2, or contact Cloudera for a patch.
ETA for resolution: Fixed in CDH 5.0.0 Beta 2 released on 2/10/2014 and CDH 4.6.0 released on 2/27/2014.
Addressed in release/refresh/patch: CDH 4.6.0 and CDH 5.0.0 Beta 2.
Verification:
This vulnerability affects the JobHistory Server Web Services; it does not affect the JobHistory Server Web UI.
- Create two non-admin users: 'A' and 'B'
- Submit a MapReduce job as user 'A'. For example:
$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-SNAPSHOT.jar pi 2 2
- From the output of the above submission, copy the job ID, for example: job_1389847214537_0001
- With a browser logged in to the JobHistory Server Web UI as user 'B', access the following URL:
http://<JHS_HOST>:19888/ws/v1/history/mapreduce/jobs/job_1389847214537_0001
If the vulnerability has been fixed, you should get an HTTP UNAUTHORIZED response; if the vulnerability has not been fixed, you should get an XML output with basic information about the job.
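The same check can be scripted with curl using SPNEGO authentication (assuming a curl build with GSS support); a sketch in which the host is a placeholder and 'B' is the non-admin test user:
kinit B
# -i prints the HTTP status line; --negotiate -u : performs SPNEGO authentication
curl -i --negotiate -u : "http://jhs-host.example.com:19888/ws/v1/history/mapreduce/jobs/job_1389847214537_0001"
# Fixed: HTTP 401 Unauthorized. Not fixed: XML with basic job information.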
Apache Hadoop and Apache HBase "Man-in-the-Middle" Vulnerability
The Apache Hadoop and HBase RPC protocols are intended to provide bi-directional authentication between clients and servers. However, a malicious server or network attacker can unilaterally disable these authentication checks. This allows for potential reduction in the configured quality of protection of the RPC traffic, and privilege escalation if authentication credentials are passed over RPC.
Products affected:
- Hadoop
- HBase
Releases affected:
- All versions of CDH 4.3.x prior to 4.3.1
- All versions of CDH 4.2.x prior to 4.2.2
- All versions of CDH 4.1.x prior to 4.1.5
- All versions of CDH 4.0.x
Users affected:
- Users of HDFS who have enabled Hadoop Kerberos security features and HDFS data encryption features.
- Users of MapReduce or YARN who have enabled Hadoop Kerberos security features.
- Users of HBase who have enabled HBase Kerberos security features and who run HBase co-located on a cluster with MapReduce or YARN.
Date/time of detection: June 10th, 2013
Severity: Severe
Impact: RPC traffic from Hadoop clients, potentially including authentication credentials, may be intercepted by any user who can submit jobs to Hadoop. RPC traffic from HBase clients to Region Servers may be intercepted by any user who can submit jobs to Hadoop.
CVE: CVE-2013-2192 (Hadoop) and CVE-2013-2193 (HBase)
Immediate action required:
- Users of CDH 4.3.0 should immediately upgrade to CDH 4.3.1 or higher.
- Users of CDH 4.2.x should immediately upgrade to CDH 4.2.2 or higher.
- Users of CDH 4.1.x should immediately upgrade to CDH 4.1.5 or higher.
ETA for resolution: August 23, 2013
Addressed in release/refresh/patch: CDH 4.1.5, CDH 4.2.2, and CDH 4.3.1.
Verification:
To verify that you are not affected by this vulnerability, ensure that you are running a version of CDH at or higher than the aforementioned versions. To verify that this is true, proceed as follows.
- On RPM-based systems (RHEL, SLES): rpm -qi hadoop | grep -i version
- On Debian-based systems: dpkg -s hadoop | grep -i version
DataNode Client Authentication Disabled After NameNode Restart or HA Enable
Products affected: HDFS
Releases affected: CDH 4.0.0
Users affected: Users of HDFS who have enabled HDFS Kerberos security features.
Date vulnerability discovered: June 26, 2012
Date vulnerability analysis and validation complete: June 29, 2012
Severity: Severe
Impact: Malicious clients may gain write access to data for which they have read-only permission, or gain read access to any data blocks whose IDs they can determine.
Mechanism: When Hadoop security features are enabled, clients authenticate to DataNodes using BlockTokens issued by the NameNode to the client. The DataNodes are able to verify the validity of a BlockToken, and will reject BlockTokens that were not issued by the NameNode. The DataNode determines whether or not it should check for BlockTokens when it registers with the NameNode.
Due to a bug in the DataNode/NameNode registration process, a DataNode which registers more than once for the same block pool will conclude that it thereafter no longer needs to check for BlockTokens sent by clients. That is, the client will continue to send BlockTokens as part of its communication with DataNodes, but the DataNodes will not check the validity of the tokens. A DataNode will register more than once for the same block pool whenever the NameNode restarts, or when HA is enabled.
Immediate action required:
- Understand the vulnerability introduced by restarting the NameNode, or enabling HA.
- Upgrade to CDH 4.0.1 as soon as it becomes available.
Resolution: July 6, 2012
Addressed in release/refresh/patch: CDH 4.0.1. This release addresses the vulnerability identified by CVE-2012-3376.
Verification: On the NameNode run one of the following:
- yum list hadoop-hdfs-namenode on RPM-based systems
- dpkg -l | grep hadoop-hdfs-namenode on Debian-based systems
- zypper info hadoop-hdfs-namenode for SLES11
On all DataNodes run one of the following:
- yum list hadoop-hdfs-datanode on RPM-based systems
- dpkg -l | grep hadoop-hdfs-datanode on Debian-based systems
- zypper info hadoop-hdfs-datanode for SLES11
The reported version should be >= 2.0.0+91-1.cdh4.0.1
Several Authentication Token Types Use Secret Key of Insufficient Length
Products Affected: HDFS, MapReduce, YARN, Hive, HBase
Releases Affected: If you use MapReduce, HDFS, HBase, or YARN, CDH4.0.x and all CDH3 versions between CDH3 Beta 3 and CDH3u5 refresh 1.
Users Affected: Users who have enabled Hadoop Kerberos security features.
Date/Time of Announcement: 10/12/2012 2:00pm PDT (upstream)
Verification: Verified upstream
Severity: High
Impact: Malicious users may crack the secret keys used to sign security tokens, granting access to modify data stored in HDFS, HBase, or Hive without authorization. HDFS Transport Encryption may also be brute-forced.
Mechanism: This vulnerability impacts a piece of security infrastructure in Hadoop Common, which affects the security of authentication tokens used by HDFS, MapReduce, YARN, HBase, and Hive.
Several components in Hadoop issue authentication tokens to clients in order to authenticate and authorize later access to a secured resource. These tokens consist of an identifier and a signature generated using the well-known HMAC scheme. The HMAC algorithm is based on a secret key shared between multiple server-side components.
For example, the HDFS NameNode issues block access tokens, which authorize a client to access a particular block with either read or write access. These tokens are then verified using a rotating secret key, which is shared between the NameNode and DataNodes. Similarly, MapReduce issues job-specific tokens, which allow reducer tasks to retrieve map output. HBase similarly issues authentication tokens to MapReduce tasks, allowing those tasks to access HBase data. Hive uses the same token scheme to authenticate access from MapReduce tasks to the Hive metastore.
The HMAC scheme relies on a shared secret key unknown to the client. In currently released versions of Hadoop, this key was created with an insufficient length (20 bits), which allows an attacker to obtain the secret key by brute force. This may allow an attacker to perform several actions without authorization, including accessing other users' data.
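As an illustration only (all values below are hypothetical), the scheme signs a token identifier with a shared secret using HMAC; anyone who recovers the secret, which brute force makes feasible when the key is short, can forge valid tokens:
# HMAC-SHA1 signature over a token identifier, as in Hadoop's token scheme
identifier='owner=alice,renewer=yarn,blockId=12345'
secret='abc' # a dangerously short secret; real keys must be long and random
echo -n "$identifier" | openssl dgst -sha1 -hmac "$secret"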
Immediate action required: If Security is enabled, upgrade to the latest CDH release.
ETA for resolution: As of 10/12/2012, this is patched in CDH4.1.0 and CDH3u5 refresh 2. Both are available now.
Addressed in release/refresh/patch: CDH4.1.0 and CDH3u5 refresh 2
Details: CDH Downloads
MapReduce with Security
Products affected: MapReduce
Releases affected: Hadoop 1.0.1 and below, Hadoop 0.23, CDH3u0-CDH3u2, CDH3u3 containing the hadoop-0.20-sbin package, version 0.20.2+923.195 and below.
Users affected: Users who have enabled Hadoop Kerberos/MapReduce security features.
Severity: Critical
Impact: Vulnerability allows an authenticated malicious user to impersonate any other user on the cluster.
Immediate action required: Upgrade the hadoop-0.20-sbin package to version 0.20.2+923.197 or higher on all TaskTrackers to address the vulnerability. Upgrading hadoop-0.20-sbin causes an upgrade of several related (but unchanged) hadoop packages. If you are using Cloudera Manager version 3.7.3 or below, you must also upgrade to Cloudera Manager 3.7.4 or higher before you can successfully run jobs with Kerberos enabled after upgrading the hadoop-0.20-sbin package.
Resolution: 3/21/2012
Addressed in release/refresh/patch: hadoop-0.20-sbin package, version 0.20.2+923.197. This release addresses the vulnerability identified by CVE-2012-1574.
Remediation verification: On all TaskTrackers run one of the following:
- yum list hadoop-0.20-sbin on RPM-based systems
- dpkg -l | grep hadoop-0.20-sbin on Debian-based systems
- zypper info hadoop-0.20-sbin for SLES11
The reported version should be >= 0.20.2+923.197.
If you are a Cloudera Enterprise customer and have further questions or need assistance, log a ticket with Cloudera Support through http://support.cloudera.com.
Apache HBase
This section lists the security bulletins that have been released for Apache HBase.
Incorrect User Authorization Applied by HBase REST Server
In CDH versions 6.0 to 6.1.1, authorization was incorrectly applied to users of the HBase REST server. Requests sent to the HBase REST server were executed with the permissions of the REST server itself, not with the permissions of the end-user. This problem does not affect previous CDH 5 releases.
Products affected: HBase
Releases affected: CDH 6.0.x, 6.1.0, 6.1.1
Date/time of detection: March, 2019
Users affected: Users of the HBase REST server with authentication and authorization enabled
Severity (Low/Medium/High): 7.3 (High) CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:L
Impact: Authorization was incorrectly applied to users of the HBase REST server. Requests sent to the HBase REST server were executed with the permissions of the REST server itself, instead of the permissions of the end-user. This issue is only relevant when HBase is configured with Kerberos authentication, HBase authorization is enabled, and the REST server is configured with SPNEGO authentication. This issue does not extend beyond the HBase REST server. The impact of this vulnerability is dependent on the authorizations granted to the HBase REST service user, but, typically, this user has significant authorization in HBase.
CVE: CVE-2019-0212
Immediate action required: Stop the HBase REST server to prevent any access of HBase with incorrect authorizations. Upgrade to a version of CDH with the vulnerability fixed, and restart the HBase REST service.
Addressed in release/refresh/patch: CDH 6.1.2, 6.2.0
Knowledge article: For the latest update on this issue see the corresponding Knowledge article: TSB 2019-367: Incorrect user authorization applied by HBase REST Server
Potential Privilege Escalation for User of HBase “Thrift 1” API Server over HTTP
CVE-2018-8025 describes an issue in Apache HBase that affects the optional "Thrift 1" API server when running over HTTP. There is a race-condition that could lead to authenticated sessions being incorrectly applied to users, e.g. one authenticated user would be considered a different user or an unauthenticated user would be treated as an authenticated user.
Products affected: HBase Thrift Server
Releases affected:
- CDH 5.4.x - 5.12.x
- CDH 5.13.0, 5.13.1, 5.13.2, 5.13.3
- CDH 5.14.0, 5.14.2, 5.14.3
- CDH 5.15.0
Fixed versions: CDH 5.14.4
Users affected: Users with the HBase Thrift 1 service role installed and configured to work in “thrift over HTTP” mode. For example, those using Hue with HBase impersonation enabled.
Severity (Low/Medium/High): High
Impact: Potential privilege escalation.
CVE: CVE-2018-8025
Immediate action required: Upgrade to a CDH version with the fix, or disable the HBase Thrift-over-HTTP service. Disabling the HBase Thrift-over-HTTP service renders Hue impersonation inoperable, and all HBase access through Hue is then performed as the "hue" user instead of the authenticated end user.
Knowledge article: For the latest update on this issue see the corresponding Knowledge article: TSB: 2018-315: Potential privilege escalation for user of HBase “Thrift 1” API Server over HTTP
HBase Metadata in ZooKeeper Can Lack Proper Authorization Controls
In certain circumstances, HBase does not properly set up access control in ZooKeeper. As a result, any user can modify this metadata and perform attacks, including service denial, or cause data loss in a replica cluster. Clusters configured using Cloudera Manager are not vulnerable.
Products affected: HBase
Releases affected: All CDH 4 and CDH 5 versions prior to 4.7.2, 5.0.7, 5.1.6, 5.2.6, 5.3.4, 5.4.3
Users affected: HBase users with security set up to use Kerberos
Date/time of detection: May 15, 2015
Severity (Low/Medium/High): High
Impact: An attacker could cause potential data loss in a replica cluster, or denial of service.
CVE: CVE-2015-1836
Immediate action required: To determine if your cluster is affected by this problem, open a ZooKeeper shell using hbase zkcli and check the permission on the /hbase znode, using getAcl /hbase.
If the output reads 'world,'anyone: cdrwa, any unauthenticated user can delete or modify HBase znodes.
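For reference, a vulnerable cluster produces output like the following (a sketch; the prompt varies by ZooKeeper version):
$ hbase zkcli
[zk: localhost:2181(CONNECTED) 0] getAcl /hbase
'world,'anyone
: cdrwa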
- Change the configuration to use hbase.zookeeper.client.keytab.file on Master and RegionServers.
- Edit hbase-site.xml (which should be in /etc/hbase/) and add:
<property>
  <name>hbase.zookeeper.client.keytab.file</name>
  <value>hbase.keytab</value>
</property>
- Do a rolling restart of HBase (Master and RegionServers), and wait until it has completed.
- To manually fix the ACLs, run the following commands from a zkcli session as the hbase user, so that world has only read access and sasl/hbase has cdrwa.
(Some znodes in the list might not be present in your setup, so ignore the "node not found" exceptions.)
$ hbase zkcli
setAcl /hbase world:anyone:r,sasl:hbase:cdrwa
setAcl /hbase/backup-masters sasl:hbase:cdrwa
setAcl /hbase/draining sasl:hbase:cdrwa
setAcl /hbase/flush-table-proc sasl:hbase:cdrwa
setAcl /hbase/hbaseid world:anyone:r,sasl:hbase:cdrwa
setAcl /hbase/master world:anyone:r,sasl:hbase:cdrwa
setAcl /hbase/meta-region-server world:anyone:r,sasl:hbase:cdrwa
setAcl /hbase/namespace sasl:hbase:cdrwa
setAcl /hbase/online-snapshot sasl:hbase:cdrwa
setAcl /hbase/region-in-transition sasl:hbase:cdrwa
setAcl /hbase/recovering-regions sasl:hbase:cdrwa
setAcl /hbase/replication sasl:hbase:cdrwa
setAcl /hbase/rs sasl:hbase:cdrwa
setAcl /hbase/running sasl:hbase:cdrwa
setAcl /hbase/splitWAL sasl:hbase:cdrwa
setAcl /hbase/table sasl:hbase:cdrwa
setAcl /hbase/table-lock sasl:hbase:cdrwa
setAcl /hbase/tokenauth sasl:hbase:cdrwa
Addressed in release/refresh/patch: An update will be provided when solutions are in place.
For updates on this issue, see the corresponding Knowledge article:
TSB 2015-65: HBase Metadata in ZooKeeper can lack proper Authorization Controls
Apache Hadoop and Apache HBase "Man-in-the-Middle" Vulnerability
The Apache Hadoop and HBase RPC protocols are intended to provide bi-directional authentication between clients and servers. However, a malicious server or network attacker can unilaterally disable these authentication checks. This allows for potential reduction in the configured quality of protection of the RPC traffic, and privilege escalation if authentication credentials are passed over RPC.
Products affected:
- Hadoop
- HBase
Releases affected:
- All versions of CDH 4.3.x prior to 4.3.1
- All versions of CDH 4.2.x prior to 4.2.2
- All versions of CDH 4.1.x prior to 4.1.5
- All versions of CDH 4.0.x
Users affected:
- Users of HDFS who have enabled Hadoop Kerberos security features and HDFS data encryption features.
- Users of MapReduce or YARN who have enabled Hadoop Kerberos security features.
- Users of HBase who have enabled HBase Kerberos security features and who run HBase co-located on a cluster with MapReduce or YARN.
Date/time of detection: June 10th, 2013
Severity: Severe
Impact:
RPC traffic from Hadoop clients, potentially including authentication credentials, may be intercepted by any user who can submit jobs to Hadoop. RPC traffic from HBase clients to Region Servers may be intercepted by any user who can submit jobs to Hadoop.
CVE: CVE-2013-2192 (Hadoop) and CVE-2013-2193 (HBase)
Immediate action required:
- Users of CDH 4.3.0 should immediately upgrade to CDH 4.3.1 or higher.
- Users of CDH 4.2.x should immediately upgrade to CDH 4.2.2 or higher.
- Users of CDH 4.1.x should immediately upgrade to CDH 4.1.5 or higher.
ETA for resolution: August 23, 2013
Addressed in release/refresh/patch: CDH 4.1.5, CDH 4.2.2, and CDH 4.3.1.
Verification:
To verify that you are not affected by this vulnerability, ensure that you are running a version of CDH at or higher than the aforementioned versions. To verify that this is true, proceed as follows.
- On RPM-based systems (RHEL, SLES):
rpm -qi hadoop | grep -i version
- On Debian-based systems:
dpkg -s hadoop | grep -i version
Apache Hive
This section lists the security bulletins that have been released for Apache Hive.
Apache Hive Vulnerabilities CVE-2018-1282 and CVE-2018-1284
This security bulletin covers two vulnerabilities discovered in Hive.
CVE-2018-1282: JDBC driver is susceptible to SQL injection attack if the input parameters are not properly cleaned
This vulnerability allows carefully crafted arguments to be used to bypass the argument-escaping and clean-up that the Apache Hive JDBC driver does with PreparedStatement objects.
If you use Hive in CDH, you have the option of using the Apache Hive JDBC driver or the Cloudera Hive JDBC driver, which is distributed by Cloudera for use with your JDBC applications. Cloudera strongly recommends that you use the Cloudera Hive JDBC driver and offers only limited support for the Apache Hive JDBC driver. If you use the Cloudera Hive JDBC driver you are not affected by this vulnerability.
Mitigation: Upgrade to use the Cloudera Hive JDBC driver, or perform the following actions in your Apache Hive JDBC client code or application when dealing with user provided input in PreparedStatement objects:
- Avoid passing user input with the PreparedStatement.setBinaryStream method, and
- Sanitize user input for the PreparedStatement.setString method by replacing all occurrences of \' (backslash, single quotation mark) with ' (single quotation mark), as shown in the sketch below.
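For illustration, here is a minimal sketch of that sanitization in client code. The connection URL, the table name, and the sanitize helper are hypothetical and only demonstrate the replacement described above:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class SanitizedHiveQuery {
    // Hypothetical helper: replace every \' (backslash, single quotation mark)
    // with ' (single quotation mark) before binding the value.
    static String sanitize(String userInput) {
        return userInput.replace("\\'", "'");
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical HiveServer2 host and table; the Hive JDBC driver must
        // be on the classpath for this connection to succeed.
        Connection conn = DriverManager.getConnection(
                "jdbc:hive2://hs2.example.com:10000/default");
        PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM sample_table WHERE name = ?");
        ps.setString(1, sanitize(args[0]));
        ps.executeQuery();
    }
}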
Detected by: CVE-2018-1282 was detected by Bear Giles of Snaplogic
CVE-2018-1284: Hive UDF series UDFXPathXXXX allows users to pass carefully crafted XML to access arbitrary files
If Hive impersonation is disabled and/or Apache Sentry is used, a malicious user might use any of the Hive xpath UDFs to expose the contents of a file owned by the HiveServer2 user (usually hive) on the node that is running HiveServer2.
Mitigation: Upgrade to a release where this is fixed. If xpath functions are not currently used, disable them with Cloudera Manager by setting the hive.server2.builtin.udf.blacklist property to xpath,xpath_short,xpath_int,xpath_long,xpath_float,xpath_double,xpath_number,xpath_string in the HiveServer2 Advanced Configuration Snippet (Safety Valve) for hive-site.xml. For more information about setting this property to blacklist Hive UDFs, see the Cloudera Documentation.
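For reference, the equivalent hive-site.xml snippet for the blacklist described above is:
<property>
  <name>hive.server2.builtin.udf.blacklist</name>
  <value>xpath,xpath_short,xpath_int,xpath_long,xpath_float,xpath_double,xpath_number,xpath_string</value>
</property>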
Products affected: Hive
Releases affected:
- CDH 5.12 and earlier
- CDH 5.13.0, 5.13.1, 5.13.2, 5.13.3
- CDH 5.14.0, 5.14.1, 5.14.2
Users affected: All
Severity (Low/Medium/High): High
Impact: SQL injection, compromise of the hive user account
CVE: CVE-2018-1282, CVE-2018-1284
Immediate action required: Upgrade to a CDH release with the fix or perform the above mitigations.
Addressed in release/refresh/patch: This will be fixed in a future release.
For the latest update on these issues, see the corresponding Knowledge article:
TSB 2018-299: Hive Vulnerabilities CVE-2018-1282 and CVE-2018-1284
Apache Hive SSL Vulnerability Bug Disclosure
You are not affected by this vulnerability if either of the following statements describes your deployment:
- SSL is not turned on, or
- SSL is turned on but only non-self-signed certificates are used.
If neither of the above statements describes your deployment, please read on.
In CDH 5.2 and later releases, the CVE-2016-3083: Apache Hive SSL vulnerability bug disclosure impacts applications and tools that use:
- Apache JDBC driver with SSL enabled, or
- Cloudera Hive JDBC drivers with self-signed certificates and SSL enabled
The certificate must be self-signed. A certificate signed by a trusted (or untrusted) Certificate Authority (CA) is not impacted by this vulnerability.
Cloudera does not recommend the use of self-signed certificates.
The CVE-2016-3083: Apache Hive SSL vulnerability is fixed by HIVE-13390 and is documented in the Apache community as follows:
"Apache Hive (JDBC + HiveServer2) implements SSL for plain TCP and HTTP connections (it supports both transport modes). While validating the server's certificate during the connection setup, the client doesn't seem to be verifying the common name attribute of the certificate. In this way, if a JDBC client sends an SSL request to server abc.example.com, and the server responds with a valid certificate (certified by CA) but issued to xyz.example.com, the client will accept that as a valid certificate and the SSL handshake will go through."
This means that it would be possible to set up a man-in-the-middle attack to intercept all SSL-protected JDBC communication.
CDH Hive users can deploy either the Apache Hive JDBC driver or the Cloudera Hive JDBC driver, which is distributed by Cloudera for use with JDBC applications. Cloudera has traditionally recommended the Cloudera Hive JDBC driver and offers only limited support for the Apache Hive JDBC driver. To determine which JDBC driver is in use, examine the JDBC jars in the CLASSPATH environment variable: if hive-jdbc-1.1.0-cdh<CDH_VERSION>.jar is included in the CLASSPATH, the Apache JDBC driver is being used; if HiveJDBC4.jar or HiveJDBC41.jar is in the CLASSPATH, the Cloudera Hive JDBC driver is being used.
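For example, one way to list the JDBC jars on the CLASSPATH is the following minimal shell check:
echo $CLASSPATH | tr ':' '\n' | grep -i jdbc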
JDBC drivers can also be used in an embedded mode. For example, when connecting to HiveServer2 by way of tools such as Beeline, the JDBC Client is invoked internally over the Thrift API. The JDBC driver in use by Beeline can also be determined by examining the driver version information printed after the connection is established.
If the output shows:
- Hive JDBC (version 1.1.0-cdh<CDH_VERSION>), the Apache JDBC driver is being used.
- Driver: HiveJDBC (version 02.05.18.1050), the Cloudera Hive JDBC Driver is being used.
SSL enablement for HiveServer2 can be confirmed by checking whether either of the following properties is set in the HiveServer2 configuration:
hive.server2.use.SSL=true
hive.server2.enable.SSL=true
This information can be used to decide whether a tool or application is impacted by this vulnerability.
For Cloudera Hive JDBC drivers with self-signed certificates and SSL enabled: Generate non-self-signed certificates according to the following documentation: https://www.cloudera.com/documentation/enterprise/latest/topics/cm_sg_create_deploy_certs.html
For Apache JDBC drivers with SSL enabled: You can switch to use the Cloudera Hive JDBC driver. Note that the Cloudera Hive JDBC driver only displays query results and skips displaying informational messages such as those logged by MapReduce jobs (that are invoked as part of executing the JDBC command). For example:
INFO : Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
INFO : 2017-06-06 14:19:41,115 Stage-1 map = 0%, reduce = 0%
INFO : 2017-06-06 14:19:48,427 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.87 sec
INFO : 2017-06-06 14:19:55,845 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 3.75 sec
INFO : MapReduce Total cumulative CPU time: 3 seconds 750 msec
INFO : Ended Job = job_1496750846200_0001
INFO : MapReduce Jobs Launched:
INFO : Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 3.75 sec HDFS Read: 31539 HDFS Write: 3 SUCCESS
INFO : Total MapReduce CPU Time Spent: 3 seconds 750 msec
The steps required to switch to using the Cloudera Hive JDBC driver for Beeline are:
- Download the latest Cloudera Hive JDBC driver from: https://www.cloudera.com/downloads/connectors/hive/jdbc/2-5-18.html
- Unzip the archive:
unzip Cloudera_HiveJDBC41_2.5.18.1050.zip
- Add the HiveJDBC41.jar to the beginning of the CLASSPATH:
export HIVE_CLASSPATH=/root/HiveJDBC41.jar:$HIVE_CLASSPATH
- Execute Beeline, but change the connection URL according to the Cloudera driver documentation at the following location: http://www.cloudera.com/documentation/other/connectors/hive-jdbc/latest/Cloudera-JDBC-Driver-for-Apache-Hive-Install-Guide.pdf
- Confirm the change by checking the driver version when connecting to HiveServer2 with Beeline:
Connecting to jdbc:hive2://<HOST>:10000
Connected to: Apache Hive (version 1.1.0-cdh<CDH_VERSION>)
Driver: HiveJDBC (version 02.05.18.1050)
- The following error message, if displayed, can be ignored:
Error: [Cloudera] [JDBC] (11975) Unsupported transaction isolation level: 4. (state=HY000,code=11975)
For other third-party tools and applications, replace the Apache JDBC driver as follows:
- Add the HiveJDBC41.jar to the beginning of the CLASSPATH for the application:
export CLASSPATH=/root/HiveJDBC41.jar:$CLASSPATH
- Change the JDBC connection URL according to the Cloudera driver documentation located at: http://www.cloudera.com/documentation/other/connectors/hive-jdbc/latest/Cloudera-JDBC-Driver-for-Apache-Hive-Install-Guide.pdf
Products affected: Hive
Releases affected:
- CDH 5.2.0, 5.2.1, 5.2.3, 5.2.4, 5.2.5, 5.2.6
- CDH 5.3.0, 5.3.1, 5.3.2, 5.3.3, 5.3.4, 5.3.5, 5.3.6, 5.3.8, 5.3.9, 5.3.10
- CDH 5.4.0, 5.4.1, 5.4.2, 5.4.3, 5.4.4, 5.4.5, 5.4.7, 5.4.8, 5.4.9, 5.4.10, 5.4.11
- CDH 5.5.0, 5.5.1, 5.5.2, 5.5.4, 5.5.5, 5.5.6
- CDH 5.6.0, 5.6.1
- CDH 5.7.0, 5.7.1, 5.7.2, 5.7.3, 5.7.4, 5.7.5, 5.7.6
- CDH 5.8.0, 5.8.2, 5.8.3, 5.8.4, 5.8.5
- CDH 5.9.0, 5.9.1, 5.9.2
- CDH 5.10.0, 5.10.1
- CDH 5.11.0
Users affected: JDBC (Apache Hive JDBC driver using SSL or Cloudera Hive JDBC driver with self-signed certificates) and HiveServer2 users
Detected by: Branden Crawford from Inteco Systems Limited
Severity (Low/Medium/High): Medium
Impact: As discussed above.
CVE: CVE-2016-3083
Immediate action required:
- For non-Beeline clients (including third-party tools or applications): If Apache Hive JDBC drivers are being used, switch to Cloudera JDBC drivers (and use externally signed CA certificates, as always recommended for production use).
- For Beeline (or Beeline-based clients, e.g. Oozie): Update Beeline’s embedded Apache JDBC driver to Cloudera JDBC driver as shown above. Alternatively, if these JDBC-based clients are invoked within a CDH cluster, upgrade the cluster to a release where the issue has been addressed.
Addressed in release/refresh/patch: CDH 5.11.1 and later
For the latest update on this issue, see the corresponding Knowledge article:
Hive built-in functions “reflect”, “reflect2”, and “java_method” not blocked by default in Sentry
Sentry does not block the execution of Hive built-in functions “reflect”, “reflect2”, and “java_method” by default in some CDH versions. These functions allow the execution of arbitrary user code, which is a security issue.
This issue is documented in SENTRY-960.
Products affected: Hive, Sentry
Releases affected:
CDH 5.4.0, CDH 5.4.1, CDH 5.4.2, CDH 5.4.3, CDH 5.4.4, CDH 5.4.5, CDH 5.4.6, CDH 5.4.7, CDH 5.4.8, CDH 5.5.0, CDH 5.5.1
Users affected: Users running Sentry with Hive.
Date/time of detection: November 13, 2015
Severity (Low/Medium/High): High
Impact: This potential vulnerability may enable an authenticated user to execute arbitrary code as a Hive superuser.
CVE: CVE-2016-0760
Immediate action required: Explicitly add the following to the blacklist property in hive-site.xml of Hive Server2:
<property>
  <name>hive.server2.builtin.udf.blacklist</name>
  <value>reflect,reflect2,java_method</value>
</property>
Addressed in release/refresh/patch: CDH 5.4.9, CDH 5.5.2, CDH 5.6.0 and higher
HiveServer2 LDAP Provider May Allow Authentication with Blank Password
Hive may allow a user to authenticate without entering a password, depending on the order in which classes are loaded.
Specifically, Hive's SaslPlainServerFactory checks passwords, but the same class provided in Hadoop does not. Therefore, if the Hadoop class is loaded first, users can authenticate with HiveServer2 without specifying the password.
Products affected: Hive
Releases affected:
- CDH 5.0, 5.0.1, 5.0.2, 5.0.3, 5.0.4, 5.0.5
- CDH 5.1, 5.1.2, 5.1.3, 5.1.4
- CDH 5.2, 5.2.1, 5.2.3, 5.2.4
- CDH 5.3, 5.3.1, 5.3.2
- CDH 5.4.1, 5.4.2, 5.4.3
Users affected: All users using Hive with LDAP authentication.
Date/time of detection: March 11, 2015
Severity (Low/Medium/High): High
Impact: A malicious user may be able to authenticate with HiveServer2 without specifying a password.
CVE: CVE-2015-1772
Immediate action required: Upgrade to CDH 5.4.4, 5.3.3, 5.2.5, 5.1.5, or 5.0.6
Addressed in release/refresh/patch: CDH 5.4.4, 5.3.3, 5.2.5, 5.1.5, or 5.0.6
For more updates on this issue, see the corresponding Knowledge article:
HiveServer2 LDAP Provider may Allow Authentication with Blank Password
Hue
This section lists the security bulletins that have been released for Hue.
Hue external users granted super user privileges in C6
When the LdapBackend or the SAML2Backend authentication backend is used in Hue, users that are created automatically on first login are granted superuser privileges in CDH 6. This does not apply to users created through the User Admin application in Hue.
Products affected: Hue
Releases affected: CDH 6.0.0, CDH 6.0.1, CDH 6.1.0
Users affected: All users
Date/time of detection: December 12, 2018
Severity (Low/Medium/High): Medium
Impact:
The superuser privilege is granted to any user that logs in to Hue when LDAP or SAML authentication is used. For example, if you have the create_users_on_login property set to true in the Hue Service Advanced Configuration Snippet (Safety Valve) for hue_safety_valve.ini, and you are using LDAP or SAML authentication, a user that logs in to Hue for the first time is created with superuser privileges and can perform the following actions:
- Create/Delete users and groups
- Assign users to groups
- Alter group permissions
- Synchronize Hue users with your LDAP server
- Create local users and groups (these local users can log in to Hue only if multi-backend authentication is set up as LdapBackend and AllowFirstUserDjangoBackend)
The superuser privilege is not granted in the following cases:
- When users are synced with your LDAP server manually by using the User Admin page in Hue.
- When you are using other authentication methods, for example:
- AllowFirstUserDjangoBackend
- Spnego
- PAM
- Oauth
- Local users, including users created by unexpected superusers, can log in through AllowFirstUserDjangoBackend.
- Local users in Hue that are created as hive, hdfs, or solr have privileges to access protected data and alter permissions in the Security app.
- Removing the AllowFirstUserDjangoBackend authentication backend can prevent local users from logging in to Hue, but doing so requires the administrator to have Cloudera Manager access.
CVE: CVE-2019-7319
Immediate action required: Upgrade and follow the instructions below.
Addressed in release/refresh/patch: CDH 6.1.1 and CDH 6.2.0
After upgrading, run the following SQL statement on the Hue database to update the creation_method value recorded for existing external users:
UPDATE useradmin_userprofile SET `creation_method` = 'EXTERNAL' WHERE `creation_method` = 'CreationMethod.EXTERNAL';
After executing the UPDATE statement, new Hue users are no longer automatically created as superusers.
To list existing superusers, run the following SQL query:
SELECT username FROM auth_user WHERE superuser = 1;
To remove the superuser privilege from a user:
- Log in to the Hue UI as an administrator.
- In the upper right corner of the page, click the user drop-down list and select Manage Users.
- In the User Admin page, make sure that the Users tab is selected and click the name of the user that you want to edit.
- In the Hue Users - Edit user page, click Step 3: Advanced.
- Clear the Superuser status checkbox.
- At the bottom of the page, click Update user to save the change.
For the latest update on this issue see the corresponding Knowledge article:
TSB 2019-360: Hue external users granted super user privileges in C6
Access control issue on /desktop/api endpoints on Hue
Hue, as shipped in the affected releases listed below, allows remote attackers to enumerate user accounts via a request to desktop/api/users/autocomplete.
Products affected: Hue
Releases affected:
- CDH 5.0.0, 5.0.1, 5.0.2, 5.0.3, 5.0.4, 5.0.5, 5.0.6
- CDH 5.1.0, 5.1.2, 5.1.3, 5.1.4, 5.1.5
- CDH 5.2.0, 5.2.1, 5.2.3, 5.2.4, 5.2.5, 5.2.6
- CDH 5.3.0, 5.3.1, 5.3.2, 5.3.3, 5.3.4, 5.3.5, 5.3.7, 5.3.8, 5.3.9, 5.3.10
- CDH 5.4.0, 5.4.1, 5.4.2, 5.4.3, 5.4.4, 5.4.5, 5.4.7, 5.4.8, 5.4.9, 5.4.10, 5.4.11
- CDH 5.5.0, 5.5.1, 5.5.2, 5.5.3, 5.5.4, 5.5.5, 5.5.6
- CDH 5.6.0, 5.6.1
- CDH 5.7.0, 5.7.1, 5.7.3, 5.7.4, 5.7.5, 5.7.6
- CDH 5.8.0, 5.8.1, 5.8.2
- CDH 5.9.0
Users affected: All Cloudera Hue users
Date/time of detection: May 20, 2016
Severity (Low/Medium/High): Medium
Impact: An attacker can leverage this issue to harvest valid user accounts and attempt to use the accounts in brute-force attacks.
CVE: CVE-2016-4947
Immediate action required: Upgrade to any of the following releases, which resolve this issue.
- CDH 5.8.3 and higher
- CDH 5.9.1 and higher
- CDH 5.10.0 and higher
Hue Document Privilege Escalation
A user with read-only access to a document in Hue can grant themselves write access to that document, and change that document’s privileges for other users. If the document is a Hive, Impala, or Oozie job, the user can inject arbitrary code that runs with the permissions of the next user that runs the job.
Products affected: Hue
Releases affected: CDH 5.0.0, CDH 5.0.1, CDH 5.0.2, CDH 5.0.3, CDH 5.0.4, CDH 5.0.5, CDH 5.0.6, CDH 5.1.0, CDH 5.1.2, CDH 5.1.3, CDH 5.1.4, CDH 5.1.5, CDH 5.2.0, CDH 5.2.1, CDH 5.2.3, CDH 5.2.4, CDH 5.2.5, CDH 5.2.6, CDH 5.3.0, CDH 5.3.2, CDH 5.3.3, CDH 5.3.4, CDH 5.3.5, CDH 5.3.6, CDH 5.3.8, CDH 5.3.9, CDH 5.4.0, CDH 5.4.1, CDH 5.4.3, CDH 5.4.4, CDH 5.4.5, CDH 5.4.7, CDH 5.4.8
Users affected: Customers using Hue
Date/time of detection: October 9, 2015
Severity (Low/Medium/High): Medium
Impact: Malicious users may be able to run arbitrary code with the permissions of another user.
CVE: CVE-2015-7831
Immediate action required: Upgrade to CDH 5.4.9 or CDH 5.5.0.
Addressed in release/refresh/patch: CDH 5.4.9 or CDH 5.5.0
Apache Impala
This section lists the security bulletins that have been released for Apache Impala.
- Impala/Sentry security roles mismatch after Catalog Server restart
- Using Impala with Sentry enabled, revoking ALL privilege from server with grant option does not revoke
- Missing authorization in Apache Impala may allow data injection
- Impala Statestore exposes plaintext data with SSL/TLS enabled
- Malicious server can cause Impala server to skip authentication checks
- Read Access to Impala Views in queries with WHERE-clause Subqueries
- Impala issued REVOKE ALL ON SERVER does not revoke all privileges
- Impala does not authorize authenticated Kerberos users who access internal APIs
Impala/Sentry security roles mismatch after Catalog Server restart
This issue occurs when Impala’s Catalog Server is restarted without also restarting all the Impala Daemons.
Impala uses generated numeric identifiers for roles. These identifiers are regenerated when catalogd restarts, so a role can receive a different identifier after the restart, possibly one that previously belonged to another role. An impalad’s metadata cache can still contain the old identifier-to-role pairs, so when the cache is updated with privileges that carry the new role identifiers from the catalog, a privilege can be attached to the wrong role: the one that previously had the same identifier.
Products affected: Apache Impala
Releases affected:
- CDH 5.14.4 and all prior releases
Users affected: Impala users with authorization enabled.
Date/time of detection: October 5, 2018
Severity (Low/Medium/High): 3.8 "Low"; CVSS:3.0/AV:N/AC:H/PR:H/UI:R/S:U/C:L/I:L/A:L
Impact: Users may get privileges of unrelated users.
CVE: CVE-2019-16381
Immediate action required: Update to a version of CDH containing the fix.
Addressed in release/refresh/patch: CDH 5.15.0
Knowledge article: For the latest update on this issue see the corresponding Knowledge article: TSB 2020-348: Impala/Sentry security roles mismatch after Catalog Server restart
Using Impala with Sentry enabled, revoking ALL privilege from server with grant option does not revoke
If you grant a role the ALL privilege at the SERVER scope and use the WITH GRANT OPTION clause, you cannot revoke the privilege. Although the show grant role command will show that the privilege has been revoked immediately after you run the command, the privilege has not been revoked.
For example, if you use the following command to grant the ALL privilege:
grant all on server to role <role name> with grant option;
And you revoke the privilege with this command:
revoke all on server from role <role name>;
Sentry will not revoke the ALL privilege from the role.
If you run the following command immediately after you revoke the privilege:
show grant role <role name>;
The ALL privilege will not appear as a privilege granted to the role. However, after Sentry refreshes, the ALL privilege will reappear when you run the show grant role command.
Products affected: Apache Impala when used with Apache Sentry
Releases affected:
- CDH 5.14.x and all prior releases
- CDH 5.15.0, CDH 5.15.1
- CDH 6.0.0, CDH 6.0.1
Users affected: Impala users
Date/time of detection: Sep 5, 2018
Detected by: Cloudera
Severity (Low/Medium/High): 3.9 "Low"; CVSS:3.0/AV:N/AC:H/PR:H/UI:R/S:U/C:L/I:L/A:L
Impact: Running the revoke command does not revoke privileges
CVE: CVE-2018-17860
Immediate action required: Upgrade to a CDH release with the fix. Once the privilege has been granted, the only way to remove it is to delete the role.
Addressed in release/refresh/patch: CDH 5.15.2, CDH 6.0.2
Missing authorization in Apache Impala may allow data injection
A malicious user who is authenticated with Kerberos may gain unauthorized access to the internal services that Impala uses to transfer intermediate data during query execution. If details of a running query (for example, the query ID or query plan) are available, such a user can craft RPC requests with custom software to inject data into the running query or end its execution prematurely, leading to incorrect query results.
Products affected: Apache Impala
Releases affected: CDH 5.15.0, CDH 5.15.1
Users affected: Any users of Impala who have configured Kerberos security
Date/time of detection: Aug 21, 2018
Detected by: Cloudera
Severity (Low/Medium/High): 4.5 "Medium"; CVSS:3.0/AV:A/AC:H/PR:L/UI:R/S:U/C:N/I:H/A:N/E:P/RL:T/RC:C/IR:H/MAV:A/MAC:H/MPR:L/MUI:R
Impact: Data injection may lead to incorrect query results.
CVE: CVE-2018-11785
Immediate action required: Upgrade to a version which contains the fix or as a workaround, disable KRPC by setting --use_krpc=false in the “Impala Command Line Argument Advanced Configuration Snippet (Safety Valve)”. The workaround will disable some improvements in stability and performance implemented in CDH 5.15.0 for highly concurrent workloads.
Addressed in release/refresh/patch: CDH 5.15.2, CDH 5.16.1 and higher
Impala Statestore exposes plaintext data with SSL/TLS enabled
During a security analysis, Cloudera found that despite TLS being enabled for “internal” Impala ports, the Statestore thrift port did not actually use TLS. This gap would allow an adversary with network access to eavesdrop and potentially modify the packets going to and coming from that port.
Products affected: Impala
Releases affected:
- CDH 5.7 and lower
- CDH 5.8.0, 5.8.1, 5.8.2, 5.8.3, 5.8.4
- CDH 5.9.0, 5.9.1, 5.9.2
- CDH 5.10.0, 5.10.1
- CDH 5.11.0
Users affected: Deployments that use “internal” TLS (TLS between Impala daemons).
Date/time of detection: April 27, 2017
Detected by: Cloudera
Severity (Low/Medium/High): High
Impact: Data on the wire may be intercepted or modified by an adversary with network access.
CVE: CVE-2017-5652
Immediate action required: Affected customers should upgrade to latest maintenance version with the fix.
Addressed in release/refresh/patch: CDH 5.8.5, CDH 5.9.3, CDH 5.10.2, CDH 5.11.1, CDH 5.12.0
Malicious server can cause Impala server to skip authentication checks
A malicious server which impersonates an Impala service (either Impala daemon, Catalog Server or Statestore) can cause a client (Impala daemon or Statestore) to skip its authentication checks when Kerberos is enabled. That malicious server may then intercept sensitive data intended for the Impala service.
Products affected: Impala
Releases affected:
- CDH 5.7 and lower
- CDH 5.8.0, 5.8.1, 5.8.2, 5.8.3, 5.8.4
- CDH 5.9.0, 5.9.1
- CDH 5.10.0
Users affected: Deployments that use Kerberos, but not TLS, for authentication between Impala daemons. Deployments that use TLS to secure communication between services are not affected by this issue.
Date/time of detection: February 27, 2017
Detected by: Cloudera
Severity (Low/Medium/High): Medium
Impact: Data on the wire may be intercepted by a malicious server.
CVE: CVE-2017-5640
Immediate action required: Affected customers should upgrade to the latest maintenance release with the fix, or enable TLS for connections between Impala services.
Addressed in release/refresh/patch: CDH 5.8.5, CDH 5.9.2, CDH 5.10.1, CDH 5.11.0.
Read Access to Impala Views in queries with WHERE-clause Subqueries
Impala bypasses Sentry authorization for views if the query or the view itself contains a subquery in any WHERE clause. This gives read access to the views to any user that would otherwise have insufficient privileges.
The underlying base tables of views are unaffected. Queries that do not have subqueries in the WHERE clause are unaffected (unless the view itself contains such a subquery).
Other operations, like accessing the view definition or altering the view, are unaffected.
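For illustration, a hypothetical query shape that triggers the bypass (the view and table names are made up):
SELECT name FROM restricted_view
WHERE id IN (SELECT id FROM lookup_table);
Because the WHERE clause contains a subquery, Sentry authorization on restricted_view is bypassed; the same query without the subquery is authorized normally.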
Products affected: Impala
Releases affected:
- CDH 5.2.0 and higher
- CDH 5.3.0 and higher
- CDH 5.4.0 and higher
- CDH 5.5.0 and higher
- CDH 5.6.0, 5.6.1
- CDH 5.7.0, 5.7.1, 5.7.2
- CDH 5.8.0
Users affected: Users who run Impala + Sentry and use views
Date/time of detection: July 26, 2016
Severity (Low/Medium/High): High
Impact: Users can bypass Sentry authorization for Impala views.
CVE: CVE-2016-6605
Immediate action required: Upgrade to a CDH version containing the fix.
Addressed in release/refresh/patch: CDH 5.9.0 and higher, CDH 5.8.2 and higher, CDH 5.7.3 and higher
For the latest update on this issue see the corresponding Knowledge article:
Read Access to Impala Views in the Presence of WHERE-clause Subqueries
Impala issued REVOKE ALL ON SERVER does not revoke all privileges
For Impala users that use Sentry for authorization, issuing a REVOKE ALL ON SERVER FROM <ROLE> statement does not remove all server-level privileges from the <ROLE>. Specifically, Sentry fails to revoke privileges that were issued to <ROLE> through a GRANT ALL ON SERVER TO <ROLE> statement. All other privileges are revoked, but <ROLE> still has ALL privileges at SERVER scope after the REVOKE ALL ON SERVER statement has been executed. The privileges are shown in the output of a SHOW GRANT statement.
Products affected: Impala, Sentry
Releases affected:
CDH 5.5.0, CDH 5.5.1, CDH 5.5.2, CDH 5.5.4
CDH 5.6.0, CDH 5.6.1
CDH 5.7.0
Users affected: Customers who use Sentry authorization in Impala
Date/time of detection: April 25, 2016
Severity (Low/Medium/High): Medium
Impact: Inability to revoke ALL privileges at SERVER scope from a specific role using Impala if they have been granted through a GRANT ALL ON SERVER statement.
CVE: CVE-2016-4572
Immediate action required: If the affected role has ALL privileges on SERVER, you can remove these privileges by dropping and re-creating the role. Alternatively, upgrade to 5.7.1, or 5.8.0 or higher.
Addressed in release/refresh/patch: CDH 5.7.1, CDH 5.8.0 and higher.
Impala does not authorize authenticated Kerberos users who access internal APIs
In an Impala deployment secured with Kerberos, a malicious authenticated user can create a program that bypasses Impala and Sentry authorization mechanisms to issue internal API calls directly. That user can then query tables to which they should not have access, or alter table metadata.
Products affected: Impala
Releases affected: All versions of CDH 5, except for those indicated in the ‘Addressed in release/refresh/patch’ section below.
Users affected: All users of Impala and Sentry with Kerberos enabled.
Date/time of detection: February 4, 2016
Severity (Low/Medium/High): High
CVE: CVE-2016-3131
Immediate action required: Upgrade to most recent maintenance release.
Addressed in release/refresh/patch: CDH 5.3.10 and higher, 5.4.10 and higher, 5.5.4 and higher, 5.6.1 and higher, 5.7.0 and higher
Apache Kafka
This section lists the security bulletins that have been released for Apache Kafka.
Potential to bypass transaction and idempotent ACL checks in Apache Kafka
It is possible to manually craft a Produce request which bypasses transaction and idempotent ACL validation. Only authenticated clients with Write permission on the respective topics are able to exploit this vulnerability.
Products affected:
- CDH
- CDK Powered by Apache Kafka
Releases affected:
- CDH versions 6.0.x, 6.1.x, 6.2.0
- CDK versions 3.0.x, 3.1.x, 4.0.x
Users affected: All users who run Kafka in CDH and CDK.
Date/time of detection: September, 2018
Severity (Low/Medium/High): 7.1 (High) (CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:L/I:H/A:H)
Impact: Attackers can exploit this issue to bypass certain security restrictions to perform unauthorized actions. This can aid in further attacks.
CVE: CVE-2018-17196
Immediate action required: Update to a version of CDH or CDK containing the fix.
Addressed in release/refresh/patch:
- CDH 6.2.1, 6.3.2
- CDK 4.1.0
Knowledge article: For the latest update on this issue see the corresponding Knowledge article: TSB 2020-378: Potential to bypass transaction and idempotent ACL checks in Apache Kafka
Authenticated Kafka clients may impersonate other users
Authenticated Kafka clients may impersonate any other user via a manually crafted protocol message with SASL/PLAIN or SASL/SCRAM authentication when using the built-in PLAIN or SCRAM server implementations in Apache Kafka.
Note that the SASL authentication mechanisms that apply to this issue are neither recommended nor supported by Cloudera. In Cloudera Manager (CM) there are four choices: PLAINTEXT, SSL, SASL_PLAINTEXT, and SASL_SSL. The SASL/PLAIN mechanism described in this issue is not the same as the SASL_PLAINTEXT option in CM; that option uses Kerberos and is not affected. As a result, it is highly unlikely that Kafka is susceptible to this issue when managed by CM, unless the authentication protocol has been overridden by an Advanced Configuration Snippet (Safety Valve).
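To illustrate the distinction, here is a minimal client-configuration sketch (the broker address is a placeholder): the security protocol and the SASL mechanism are separate settings, and only the PLAIN and SCRAM mechanisms are affected by this issue:
import java.util.Properties;

public class KafkaSaslConfigSketch {
    public static void main(String[] args) {
        // SASL_PLAINTEXT protocol with the Kerberos (GSSAPI) mechanism:
        // the configuration Cloudera Manager sets up; not affected by this issue.
        Properties kerberos = new Properties();
        kerberos.put("bootstrap.servers", "broker1.example.com:9092"); // placeholder
        kerberos.put("security.protocol", "SASL_PLAINTEXT");
        kerberos.put("sasl.mechanism", "GSSAPI");

        // SASL/PLAIN mechanism: affected by this issue and neither recommended
        // nor supported by Cloudera.
        Properties plain = new Properties();
        plain.put("bootstrap.servers", "broker1.example.com:9092"); // placeholder
        plain.put("security.protocol", "SASL_PLAINTEXT");
        plain.put("sasl.mechanism", "PLAIN");
    }
}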
Products affected: CDK Powered by Apache Kafka
Releases affected: CDK 2.1.0 to 2.2.0, CDK 3.0.0
Users affected: All users
Detected by: Rajini Sivaram (rsivaram@apache.org)
Severity (Low/Medium/High): 8.3 (High) (CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:L/I:H/A:H)
Impact: Privilege escalation.
CVE: CVE-2017-12610
Immediate action required: Upgrade to a newer version of CDK Powered by Apache Kafka where the issue has been fixed.
Addressed in release/refresh/patch: CDK 3.1, CDH 6.0 and higher
Knowledge article: For the latest update on this issue see the corresponding Knowledge article: TSB 2018-332: Two Kafka Security Vulnerabilities: Authenticated Kafka clients may impersonate other users and may interfere with data replication
Authenticated clients may interfere with data replication
Authenticated Kafka users may perform an action reserved for the broker via a manually crafted fetch request, interfering with data replication and potentially resulting in data loss.
Products affected: CDK Powered by Apache Kafka
Releases affected: CDK 2.0.0 to 2.2.0, CDK 3.0.0
Users affected: All users
Detected by: Rajini Sivaram (rsivaram@apache.org)
Severity (Low/Medium/High): 6.3 (Medium) (CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:L/I:L/A:L)
Impact: Potential data loss due to improper replication.
CVE: CVE-2018-1288
Immediate action required: Upgrade to a newer version of CDK Powered by Apache Kafka where the issue has been fixed.
Addressed in release/refresh/patch: CDK 3.1, CDH 6.0 and higher
Knowledge article: For the latest update on this issue see the corresponding Knowledge article: TSB 2018-332: Two Kafka Security Vulnerabilities: Authenticated Kafka clients may impersonate other users and may interfere with data replication
Cloudera Manager
This section lists the security bulletins that have been released for Cloudera Manager.
- ZooKeeper JMX did not support TLS when managed by Cloudera Manager
- Open Redirect and XSS in Cloudera Manager
- Hard Restart of Cloudera Manager Agents May Cause Subsequent Service Errors
- Cloudera Manager read-only user can access sensitive cluster information
- Cross-site scripting vulnerability in Cloudera Manager
- Cloudera Manager can inadvertently delete YARN's cgroup directory for container isolation
- Privilege Escalation in Cloudera Manager
- Sensitive data of processes managed by Cloudera Manager are not secured by file permissions
- Local Script Injection Vulnerability In Cloudera Manager
- Cross Site Scripting (XSS) Vulnerability in Cloudera Manager
- Sensitive Data Exposed in Plain-Text Readable Files
- Sensitive Information in Cloudera Manager Diagnostic Support Bundles
- Cross Site Scripting Vulnerabilities in Cloudera Manager
- Critical Security Related Files in YARN NodeManager Configuration Directories Accessible to Any User
- Cloudera Manager exposes sensitive data
- Sensitive configuration values exposed in Cloudera Manager
- Cloudera Manager installs taskcontroller.cfg in insecure mode
- Two links in the Cloudera Manager Admin Console allow read-only access to arbitrary files on managed hosts.
ZooKeeper JMX did not support TLS when managed by Cloudera Manager
Products affected: ZooKeeper, Cloudera Manager
Releases affected: Cloudera Manager 6.1.0 and lower, Cloudera Manager 5.16 and lower
Users affected: All
Date/time of detection: June 7, 2018
Severity (Low/Medium/High): High
Impact: The ZooKeeper service optionally exposes a JMX port used for reporting and metrics. By default, Cloudera Manager enables this port, but prior to Cloudera Manager 6.1.0, it did not support mutual TLS authentication on this connection. While JMX has a password-based authentication mechanism that Cloudera Manager enables by default, weaknesses have been found in the authentication mechanism, and Oracle now advises JMX connections to enable mutual TLS authentication in addition to password-based authentication. A successful attack may leak data, cause denial of service, or even allow arbitrary code execution on the Java process that exposes a JMX port. Starting in Cloudera Manager 6.1.0, it is possible to configure mutual TLS authentication on ZooKeeper’s JMX port.
CVE: CVE-2018-11744
Immediate action required: Upgrade to Cloudera Manager 6.1.0 and enable TLS for the ZooKeeper JMX port by turning on the configuration settings “Enable TLS/SSL for ZooKeeper JMX” and “Enable TLS client authentication for JMX port” on the ZooKeeper service and configuring the appropriate TLS settings. Alternatively, disable the ZooKeeper JMX port via the configuration setting “Enable JMX Agent” on the ZooKeeper service.
Addressed in release/refresh/patch: Cloudera Manager 6.1.0
Open Redirect and XSS in Cloudera Manager
Technical Service Bulletin 2018-321 (TSB)
One type of page in Cloudera Manager uses a returnUrl parameter to redirect the user to another page in Cloudera Manager once a wizard is completed. The validity of this parameter was not checked. As a result, the user could be automatically redirected to an attacker’s external site or perform a malicious JavaScript function that results in cross-site scripting (XSS).
With this fix, Cloudera Manager no longer allows any value in the returnUrl parameter with patterns such as http://, https://, //, or javascript. The only exceptions to this rule are the SAML login/logout URLs, since they are explicitly configured and are not passed via the returnUrl parameter.
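The following sketch illustrates the kind of check described; it is not Cloudera Manager's actual implementation:
public class ReturnUrlCheck {
    // Reject returnUrl values that could redirect off-site or execute script.
    static boolean isSafeReturnUrl(String url) {
        if (url == null || url.isEmpty()) {
            return true; // nothing to redirect to
        }
        String u = url.trim().toLowerCase();
        return !(u.startsWith("http://") || u.startsWith("https://")
                || u.startsWith("//") || u.startsWith("javascript"));
    }

    public static void main(String[] args) {
        System.out.println(isSafeReturnUrl("/cmf/home"));            // true
        System.out.println(isSafeReturnUrl("//evil.example.com"));   // false
        System.out.println(isSafeReturnUrl("javascript:alert(1)"));  // false
    }
}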
Products affected: Cloudera Manager
Releases affected:
- 5.15.0 and all earlier releases
Users affected: The following Cloudera Manager roles: “cluster administrator”, “full administrators”, and “configurators”.
Date/time of detection: June 20, 2018
Detected by: Mohit Rawat & Ekta Mittal
Severity (Low/Medium/High): 8.8 High (CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H)
Impact: Open redirects can silently redirect a victim to an attacker’s site. XSS vulnerabilities can be used to steal credentials or to perform arbitrary actions as the targeted user.
CVE: CVE-2018-15913
Immediate action required: Upgrade to Cloudera Manager 5.15.1 or higher
Addressed in release/refresh/patch:
- Cloudera Manager 5.15.1 and higher
- Cloudera Manager 6.0.0
Hard Restart of Cloudera Manager Agents May Cause Subsequent Service Errors
If a “hard restart” or “hard stop” operation is performed on a Cloudera Manager Agent, the restarted agent erroneously restarts roles that existed prior to the restart; 60 days later, these roles may experience errors or be killed.
Products affected: Cloudera Manager
Releases affected: All releases of Cloudera Manager 5
For the latest update on this issue, see the corresponding Knowledge article:
TSB 2018-308: Hard Restart of Cloudera Manager Agents May Cause Subsequent Service Errors
Cloudera Manager read-only user can access sensitive cluster information
Due to a security vulnerability, a Cloudera Manager read-only user could access sensitive cluster information.
Products affected: Cloudera Manager
Releases affected:
- Cloudera Manager 5.12 and all prior releases
- Cloudera Manager 5.13.0, 5.13.1, 5.13.2, 5.13.3
- Cloudera Manager 5.14.0, 5.14.1, 5.14.2, 5.14.3
Users affected: All
Date of detection: April 18th, 2018
Detected by: Cloudera
Severity: High
Impact: Sensitive Information Disclosure
CVE: CVE-2018-10815
Immediate action required: Upgrade Cloudera Manager to a release with the fix.
Addressed in release/refresh/patch: Cloudera Manager 5.15.0 and higher, 5.14.4
For the latest update on this issue, see the corresponding Knowledge article:
TSB 2018-306: Cloudera Manager Information Disclosure
Cross-site scripting vulnerability in Cloudera Manager
Several pages in the Cloudera Manager Admin console are vulnerable to a cross-site scripting attack.
Products affected: Cloudera Manager Admin Console
Releases affected:
- Cloudera Manager releases lower than 5.12
- Cloudera Manager 5.12.0, 5.12.1, 5.12.2
- Cloudera Manager 5.13.0, 5.13.1
- Cloudera Manager 5.14.0, 5.14.1
User roles affected: Cluster Administrator, Full Administrator
Date of detection: January 19, 2018
Detected by: Shafeeque Olassery Kunnikkal of Ingram Micro Asia Ltd, Security Consulting & Services
Severity (Low/Medium/High): High
Impact: A cross-site scripting vulnerability can be used by an attacker to perform malicious actions. One probable form of attack is to steal the credentials of a victim’s Cloudera Manager account.
CVE: CVE-2018-5798
Immediate action required: Upgrade to a release in which this issue has been fixed.
Addressed in release/refresh/patch:
- Cloudera Manager 5.13.2 and higher
- Cloudera Manager 5.14.2 and higher
For the latest update on this issue see the corresponding Knowledge article:
Cloudera Manager can inadvertently delete YARN's cgroup directory for container isolation
On systems that use YARN’s cgroup-based mechanism for CPU isolation, if a YARN NodeManager with active YARN workload (YARN containers running on the host) is started by Cloudera Manager within 30 seconds of being stopped, the NodeManager may be missing its root container cgroup directory, leaving that node unable to launch new YARN containers. YARN reschedules the affected containers to other hosts, but the affected hosts cannot perform any further processing, which slows the overall progress of existing workloads and can block new or existing YARN workloads from completing when many hosts are affected.
Products affected: Cloudera Manager
Releases affected:
- Cloudera Manager 5.8 and lower
- Cloudera Manager 5.9.0, 5.9.1, 5.9.2, 5.9.3
- Cloudera Manager 5.10.0, 5.10.1, 5.10.2
- Cloudera Manager 5.11.0, 5.11.1, 5.11.2
- Cloudera Manager 5.12.0, 5.12.1
- Cloudera Manager 5.13.0
Users affected: All users of Cloudera Manager who manage CDH clusters configured to use YARN’s cgroup-based CPU isolation. This is a non-default configuration. To check whether you are using this configuration, visit the YARN service configuration page in Cloudera Manager and check whether “use cgroups for resource management” is selected.
Severity (Low/Medium/High): High
Impact: Depending on the number of affected hosts, this can potentially slow or pause all YARN workloads on the cluster until resolved.
Immediate action required: If you experience this issue, restart the individual NodeManagers on the affected hosts using Cloudera Manager. The restart should be performed as a stop, wait 60 seconds, and then start. Additionally, performing restarts as “stop, wait 60 seconds, start” rather than using “restart” directly prevents this from occurring.
Addressed in release/refresh/patch: Cloudera Manager 5.12.2, 5.13.1 and higher.
Privilege Escalation in Cloudera Manager
Under certain circumstances, a read-only Cloudera Manager user can discover the usernames of other users and elevate the privileges of another user. A user cannot elevate their own privilege.
Products affected: Cloudera Manager
Releases affected:
- Cloudera Manager 5.0.0, 5.0.1, 5.0.2, 5.0.5, 5.0.6, 5.0.7
- Cloudera Manager 5.1.0, 5.1.1, 5.1.2, 5.1.3, 5.1.4, 5.1.5, 5.1.6
- Cloudera Manager 5.2.0, 5.2.1, 5.2.2, 5.2.4, 5.2.5, 5.2.6, 5.2.7
- Cloudera Manager 5.3.0, 5.3.1, 5.3.2, 5.3.3, 5.3.4, 5.3.6, 5.3.7, 5.3.8, 5.3.9, 5.3.10
- Cloudera Manager 5.4.0, 5.4.1, 5.4.3, 5.4.5, 5.4.6, 5.4.7, 5.4.8, 5.4.9, 5.4.10
- Cloudera Manager 5.5.0, 5.5.1, 5.5.2, 5.5.3, 5.5.4, 5.5.6
- Cloudera Manager 5.6.0, 5.6.1
- Cloudera Manager 5.7.0, 5.7.1, 5.7.2, 5.7.4, 5.7.5
- Cloudera Manager 5.8.0, 5.8.1, 5.8.3, 5.8.4
- Cloudera Manager 5.9.0, 5.9.1
- Cloudera Manager 5.10.0
Users affected: All Cloudera Manager users.
Severity (Low/Medium/High): High
CVE: CVE-2017-7399
Immediate action required: Upgrade Cloudera Manager to 5.8.5, 5.9.2, 5.10.1, 5.11.0 or higher.
Addressed in release/refresh/patch: Cloudera Manager 5.8.5, 5.9.2, 5.10.1, 5.11.0 or higher.
Sensitive data of processes managed by Cloudera Manager are not secured by file permissions
Products affected: Cloudera Manager
Releases affected: 5.9.2, 5.10.1, 5.11.0
Users affected: All users of Cloudera Manager on 5.9.2, 5.10.1, 5.11.0
Severity (Low/Medium/High): High
Impact: Sensitive data (such as passwords) might be exposed to users with direct access to cluster hosts due to overly-permissive local file system permissions for certain files created by Cloudera Manager.
The password is also visible in the Cloudera Manager Admin Console in the configuration files for the Spark History Server process.
CVE: CVE-2017-9327
Immediate action required: Upgrade Cloudera Manager to 5.9.3, 5.10.2, 5.11.1, 5.12.0 or higher
Addressed in release/refresh/patch: Cloudera Manager 5.9.3, 5.10.2, 5.11.1, 5.12.0 or higher
Local Script Injection Vulnerability In Cloudera Manager
There is a script injection vulnerability in Cloudera Manager’s help search box. A Cloudera Manager user can enter a script, but there is no way for an attacker to inject a script externally. Furthermore, the script entered into the search box must actually return valid search results for the script to execute.
Products affected: Cloudera Manager
Releases affected:
- Cloudera Manager 5.0.0, 5.0.1, 5.0.2, 5.0.5, 5.0.6, 5.0.7
- Cloudera Manager 5.1.0, 5.1.1, 5.1.2, 5.1.3, 5.1.4, 5.1.5, 5.1.6
- Cloudera Manager 5.2.0, 5.2.1, 5.2.2, 5.2.4, 5.2.5, 5.2.6, 5.2.7
- Cloudera Manager 5.3.0, 5.3.1, 5.3.2, 5.3.3, 5.3.4, 5.3.6, 5.3.7, 5.3.8, 5.3.9, 5.3.10
- Cloudera Manager 5.4.0, 5.4.1, 5.4.3, 5.4.5, 5.4.6, 5.4.7, 5.4.8, 5.4.9, 5.4.10
- Cloudera Manager 5.5.0, 5.5.1, 5.5.2, 5.5.3, 5.5.4, 5.5.6
- Cloudera Manager 5.6.0, 5.6.1
- Cloudera Manager 5.7.0, 5.7.1, 5.7.2, 5.7.4, 5.7.5
- Cloudera Manager 5.8.0, 5.8.1, 5.8.2, 5.8.3
- Cloudera Manager 5.9.0
Users affected: All Cloudera Manager users
Date/time of detection: November 10th, 2016
Severity (Low/Medium/High): Low
Impact: Possible override of client-side JavaScript controls.
CVE: CVE-2016-9271
Immediate action required: Upgrade to one of the releases below
- Cloudera Manager 5.7.6 and higher
- Cloudera Manager 5.8.4 and higher
- Cloudera Manager 5.9.1 and higher
- Cloudera Manager 5.10.0 and higher
Cross Site Scripting (XSS) Vulnerability in Cloudera Manager
Several pages in the Cloudera Manager UI are vulnerable to an XSS attack.
Products affected: Cloudera Manager
Releases affected: All versions of Cloudera Manager 5 except for those indicated in the ‘Addressed in release/refresh/patch’ section below.
Users affected: All customers who use Cloudera Manager.
Date/time of detection: May 19, 2016
Detected by: Solucom Advisory
Severity (Low/Medium/High): High
Impact: An XSS vulnerability can be used by an attacker to perform malicious actions. One probable form of attack is to steal the credentials of a victim’s Cloudera Manager account.
CVE: CVE-2016-4948
Immediate action required: Upgrade Cloudera Manager to version 5.7.2 or higher or 5.8.x
Addressed in release/refresh/patch: Cloudera Manager 5.7.2 and higher and 5.8.x.
Sensitive Data Exposed in Plain-Text Readable Files
Cloudera Manager Agent stores configuration information in various configuration files that are world-readable. Some of this configuration information may involve sensitive user data, including credentials values used for authentication with other services. These files are located in /var/run/cloudera-scm-agent/supervisor/include on every host. Cloudera Manager passes information such as credentials to Hadoop processes it manages via environment variables, which are written in configuration files in this directory.
Additionally, the response from Cloudera Manager Server to heartbeat messages sent by the Cloudera Manager Agent is stored in a world-readable file (/var/lib/cloudera-scm-agent/response.avro) on every host. This file may contain sensitive data.
With this fix, these files and directories are readable only by the user running the Cloudera Manager Agent, which by default is root.
Products affected: Cloudera Manager
Releases affected: All versions of Cloudera Manager 5, except for those indicated in the Addressed in release/refresh/patch section below.
Users affected: All users of Cloudera Manager using the releases affected above.
Date/time of detection: March 16, 2016
Severity (Low/Medium/High): High
Impact: An unauthorized user that gains access to an affected system may be able to leverage that access to subsequently authenticate with other services.
CVE: CVE-2016-3192
Immediate action required:
- Upgrade Cloudera Manager to one of the maintenance releases indicated below.
- Regenerate Kerberos principals used by all the services in the cluster.
- Regenerate SSL keystores used by all the services in the cluster, with a new password.
- If you are using a version of Cloudera Manager lower than 5.5.0, change the database passwords for all the CDH services, wherever applicable.
Addressed in release/refresh/patch: Cloudera Manager 5.5.4 and higher, 5.6.1 and higher, 5.7.1 and higher
Sensitive Information in Cloudera Manager Diagnostic Support Bundles
Cloudera Manager is designed to transmit certain diagnostic data (or "bundles") to Cloudera. These diagnostic bundles are used by the Cloudera support team to reproduce, debug, and address technical issues for our customers. Cloudera internally discovered a potential vulnerability in this feature, which could cause any sensitive data stored as "advanced configuration snippets (ACS)" (formerly called "safety valves") to be included in diagnostic bundles and transmitted to Cloudera. Notwithstanding any possible transmission, such sensitive data is not used by Cloudera for any purpose.
Cloudera has taken the following actions:
- Modified Cloudera Manager so that it no longer transmits advanced configuration snippets containing the sensitive data, and
- Modified Cloudera Manager SSL configuration to increase the protection level of the encrypted communication.
Cloudera strives to follow and also help establish best practices for the protection of customer information. In this effort, we continually review and improve our security practices, infrastructure, and data handling policies.
Products affected: Cloudera Manager
Releases affected:
- All Cloudera Manager releases prior to 4.8.6
- Cloudera Manager 5.0.x prior to Cloudera Manager 5.0.7
- Cloudera Manager 5.1.x prior to Cloudera Manager 5.1.6
- Cloudera Manager 5.2.x prior to Cloudera Manager 5.2.7
- Cloudera Manager 5.3.x prior to Cloudera Manager 5.3.7
- Cloudera Manager 5.4.x prior to Cloudera Manager 5.4.6
Users affected: Users storing sensitive data in advanced configuration snippets
Severity: High
Impact: Possible transmission of sensitive data
CVE: CVE-2015-6495
Immediate Action Required: Upgrade Cloudera Manager to one of the releases listed below.
ETA for resolution: September 1st, 2015
Addressed in release/refresh/patch:
- Cloudera Manager 4.8.6
- Cloudera Manager 5.0.7
- Cloudera Manager 5.1.6
- Cloudera Manager 5.2.7
- Cloudera Manager 5.3.7
- Cloudera Manager 5.4.6
Cross Site Scripting Vulnerabilities in Cloudera Manager
Multiple cross-site scripting (XSS) vulnerabilities in the Cloudera Manager UI before version 5.4.3 allow remote attackers to inject arbitrary web script or HTML using unspecified vectors. Authentication to Cloudera Manager is required to exploit these vulnerabilities.
Products affected: Cloudera Manager
Releases affected: All releases prior to 5.4.3
Users affected: All Cloudera Manager users
Date/time of detection: May 8th, 2015
Severity: (Low/Medium/High) Medium
Impact: Allows unauthorized modification.
CVE: CVE-2015-4457
Immediate action required: Upgrade to Cloudera Manager 5.4.3.
Addressed in release/refresh/patch: Cloudera Manager 5.4.3
Critical Security Related Files in YARN NodeManager Configuration Directories Accessible to Any User
When Cloudera Manager starts a YARN NodeManager, it makes all files in its configuration directory (typically /var/run/cloudera-scm-agent/process) readable by all users. This includes the file containing the Kerberos keytabs (yarn.keytab) and the file containing passwords for the SSL keystore (ssl-server.xml).
Global read permissions must be removed on the NodeManager’s security-related files.
Products affected: Cloudera Manager
Releases affected: All releases of Cloudera Manager 4.0 and higher.
Users affected: Customers who are using YARN in environments where Kerberos or SSL is enabled.
Date/time of detection: March 8, 2015
Severity (Low/Medium/High): High
Impact: Any user who can log in to a host where the YARN NodeManager is running can get access to the keytab file, use it to authenticate to the cluster, and perform unauthorized operations. If SSL is enabled, the user can also decrypt data transmitted over the network.
CVE: CVE-2015-2263
Immediate action required:
- If you are running YARN with Kerberos/SSL with Cloudera Manager 5.x, upgrade to the maintenance release with the security fix. If you are running YARN with Kerberos with Cloudera Manager 4.x, upgrade to any Cloudera Manager 5.x release with the security fix.
- Delete all “yarn” and “HTTP” principals from KDC/Active Directory. After deleting them, regenerate them using Cloudera Manager.
- Regenerate SSL keystores that you are using with the YARN service, using a new password.
ETA for resolution: Patches are available immediately with the release of this TSB.
Addressed in release/refresh/patch: Cloudera Manager releases 5.0.6, 5.1.5, 5.2.5, 5.3.3, and 5.4.0 have the fix for this bug.
For further updates on this issue see the corresponding Knowledge article:
Cloudera Manager exposes sensitive data
In the Cloudera Manager 5.2 release, the LDAP bind password was erroneously marked such that it would be written to the world-readable files in /etc/hadoop, in addition to the more private files in /var/run. Thus, any user on any host of a Cloudera Manager managed cluster could read the LDAP bind password.
The fix to this issue removes the LDAP bind password from the files in /etc/hadoop; it is only written to configuration files in /var/run. Those files are owned by and only readable by the appropriate service.
Cloudera Manager writes configuration parameters to several locations. Each service gets every parameter that it requires in a directory in /var/run, and the files in those directories are not world-readable. Clients (for example, the “hdfs” command) obtain their configuration parameters from files in /etc/hadoop. The files in /etc/hadoop are world-readable. Cloudera Manager keeps track of where each configuration parameter is to be written so as to expose each parameter only in the location where it is required.
Products affected: Cloudera Manager
Releases affected: Cloudera Manager 5.2.0, Cloudera Manager 5.2.1, Cloudera Manager 5.3.0
Users Affected: All users
Date/time of detection: December 30, 2014
Severity: High
Impact: Exposure of sensitive data
CVE: CVE-2014-8733
Immediate action required: Upgrade to Cloudera Manager 5.2.2 or higher, or Cloudera Manager 5.3.1 or higher.
Sensitive configuration values exposed in Cloudera Manager
Certain configuration values that are stored in Cloudera Manager are considered "sensitive", such as database passwords. These configuration values are expected to be inaccessible to non-admin users, and this is enforced in the Cloudera Manager Admin Console. However, these configuration values are not redacted when reading them through the API, possibly making them accessible to users who should not have such access.
Products affected: Cloudera Manager
Releases affected: Cloudera Manager 4.8.2 and lower, Cloudera Manager 5.0.0
Users Affected: Cloudera Manager installations with non-admin users
Date/time of detection: May 7, 2014
Severity: High
Impact: Through the API only, non-admin users can access potentially sensitive configuration information
CVE: CVE-2014-0220
Immediate action required: Upgrade to Cloudera Manager 4.8.3 or Cloudera Manager 5.0.1 or disable non-admin users if you do not want them to have this access.
ETA for resolution: May 13, 2014
Addressed in release/refresh/patch: Cloudera Manager 4.8.3 and Cloudera Manager 5.0.1
Cloudera Manager installs taskcontroller.cfg in insecure mode
Products affected: Cloudera Manager and Service and Configuration Manager
Releases affected: Cloudera Manager 3.7.0-3.7.4, Service and Configuration Manager 3.5 (in certain cases)
Users affected: Users on multi-user systems who have not enabled Hadoop Kerberos features. Users using the Hadoop security features are not affected.
Severity: Critical
Impact: Vulnerability allows a malicious user to impersonate other users on the systems running the Hadoop cluster.
Immediate action required: Upgrade to Cloudera Manager 3.7.5 and subsequently restart the MapReduce service.
Workarounds are available; any one of the following is sufficient.
- For CM 3.7.x (Enterprise Edition), edit the configuration "Minimum user ID for job submission" to a number higher than any UIDs on the system. 65535 is the largest value that Cloudera Manager will accept, and is typically sufficient. Restart the MapReduce service. To find the current maximum UID on your system, run
getent passwd | awk -F: '{ if ($3 > max) { max = $3; name = $1 } } END { print name, max }'
- For CM 3.7.x Free Edition, remove the file /usr/lib/hadoop-0.20/sbin/Linux-amd64-64/task-controller. This file is part of the hadoop-0.20-sbin package and is re-installed by upgrades.
- For SCM 3.5, if the cluster has been run in both secure and non-secure configurations, remove /etc/hadoop/conf/taskcontroller.cfg from all TaskTrackers. Repeat this in the future if you reconfigure the cluster from a Kerberized to a non-Kerberized configuration.
Resolution: Mar 27, 2012
Addressed in release/refresh/patch: Cloudera Manager 3.7.5
Verification: Verify that, in non-secure clusters, /etc/hadoop/conf/taskcontroller.cfg is unconfigured on all TaskTrackers. (A file with only lines starting with # is unconfigured.)
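A minimal shell check implementing that verification (path as given above):
# Prints nothing if taskcontroller.cfg is unconfigured, i.e. contains
# only comment lines (starting with #) and blank lines
grep -v '^#' /etc/hadoop/conf/taskcontroller.cfg | grep -v '^[[:space:]]*$'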
If you are a Cloudera Enterprise customer and have further questions or need assistance, log a ticket with Cloudera Support through http://support.cloudera.com.
Two links in the Cloudera Manager Admin Console allow read-only access to arbitrary files on managed hosts
Products affected: Cloudera Manager
Releases affected: Cloudera Manager 3.7.0 through 3.7.6, 4.0.0 (beta), and 4.0.1 (GA)
Users affected: All Cloudera Manager Users
Date vulnerability discovered: June 6, 2012
Date vulnerability analysis and validation complete: June 15, 2012
Severity: Medium
Impact: Any user, including non-admin users, logged in to the Cloudera Manager Admin Console can access any file on any host managed by Cloudera Manager.
Immediate action required: Upgrade to Cloudera Manager or Cloudera Manager Free Edition, version 3.7.7 or higher, or version 4.0.2 or higher.
Workaround: If an immediate upgrade is not possible, disable non-admin user access to Cloudera Manager to limit the vulnerability to Cloudera Manager admins.
Resolution: June 25, 2012
Addressed in release/refresh/patch: Cloudera Manager or Cloudera Manager Free Edition 3.7.7 or higher and 4.0.2 or higher.
Verification: Check the Cloudera Manager version number in the Cloudera Manager Admin Console.
If you are a Cloudera Enterprise customer and have further questions or need assistance, log a ticket with Cloudera Support at http://support.cloudera.com.
Apache Oozie
This section lists the security bulletins that have been released for Apache Oozie.
Apache Oozie Server Vulnerability
A vulnerability in the Oozie Server allows a cluster user to read private files owned by the user running the Oozie Server process.
Products affected: Oozie
Releases affected: All releases prior to CDH 5.12.0, plus CDH 5.12.0, CDH 5.12.1, CDH 5.12.2, CDH 5.13.0, CDH 5.13.1, and CDH 5.14.0
Users affected: Users running the Oozie Server
Date/time of detection: November 13, 2017
Detected by: Daryn Sharp and Jason Lowe of Oath (formerly Yahoo! Inc)
Severity (Low/Medium/High): High
Impact: The vulnerability allows a cluster user to read private files owned by the user running the Oozie Server process. The malicious user can construct a workflow XML file containing XML directives and configuration that reference sensitive files on the Oozie server host.
CVE: CVE-2017-15712
Immediate action required: Upgrade to a release where the issue is fixed:
- CDH 5.13.2 and higher
- CDH 5.14.2 and higher
- CDH 5.15.0 and higher
Cloudera Search
This section lists the security bulletins that have been released for Cloudera Search.
SSRF issue in Apache Solr
The "shards" parameter in the query URL does not have a corresponding whitelist mechanism,so it can request any URL..
Products affected: Apache Solr in CDH
Releases affected: CDH 5 versions before 5.16.2 and CDH 6 versions before 6.2.0
Users affected: Every Solr user
Date/time of detection: February 12, 2019
Detected by: dk from Chaitin Tech
Severity (Low/Medium/High): High
Impact: The "shards" parameter in Solr queries does not have a corresponding whitelist mechanism. A remote attacker with access to the server could make Solr perform an HTTP GET request to any reachable URL.
CVE: CVE-2017-3164
Immediate action required: Upgrade to the latest C5 and C6 maintenance releases. Ensure your network settings are configured so that only trusted traffic is allowed to ingress/egress your hosts running Solr.
Addressed in release/refresh/patch: CDH 6.2.0, CDH 5.16.2
For the latest update on this issue, see the corresponding Knowledge article.
Sample solrconfig.xml file for enabling Solr/Sentry Authorization is missing critical attribute
The solrconfig.xml.secure sample configuration which was provided with CDH, if used to create solrconfig.xml, does not enforce Sentry authorization on the request URI /update/json/docs because it is missing a necessary attribute.
Products affected: Solr (if Sentry enabled)
Releases affected:
- CDH 5.8 and lower
- CDH 5.9.2 and lower
- CDH 5.10.1 and lower
- CDH 5.11.1 and lower
Users affected: Those who are using Sentry authorization with Cloudera Search, have used the provided sample configuration, and have not specified the attribute shown below in their solrconfig.xml file.
Date/time of detection: May 18, 2017
Detected by: István Farkas, Hrishikesh Gadre
Severity (Low/Medium/High): High
Impact: Unauthorized users using the request URI /update/json/docs may insert, update, or delete documents.
CVE: CVE-2017-9325
Immediate action required: Every solrconfig.xml of a collection protected by Sentry should be updated in ZooKeeper. In each file, change:
<updateRequestProcessorChain name="updateIndexAuthorization">
to:
<updateRequestProcessorChain name="updateIndexAuthorization" default="true">
After updating the configuration in ZooKeeper, the collections must be reloaded.
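One way to apply the change is sketched below; the instance directory name, configuration path, collection name, and Solr host are placeholders, and a Kerberized cluster may additionally require kinit and --negotiate:
# Push the corrected configuration to ZooKeeper, then reload the collection
solrctl instancedir --update myCollectionConfig /path/to/corrected/conf
curl "http://solr-host.example.com:8983/solr/admin/collections?action=RELOAD&name=myCollection"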
Addressed in release/refresh/patch:
- CDH 5.9.3 and higher
- CDH 5.10.2 and higher
- CDH 5.11.2 and higher
- CDH 5.12.0 and higher
Upgrading will only correct the sample configuration file. The fix mentioned above will still need to be applied on the affected cluster.
Apache Solr ReplicationHandler Path Traversal Attack
When using the Index Replication feature, Solr nodes can pull index files from a master/leader node using an HTTP API that accepts a file name. However, Solr did not validate the file name, hence it was possible to craft a special request involving path traversal, leaving any file readable to the Solr server process exposed. Solr servers using Kerberos authentication are at less risk since only authenticated users can gain direct HTTP access.
See SOLR-10031 for details and the related public announcement.
Products affected: Cloudera Search
Releases affected:
- CDH 5.0.0, 5.0.1, 5.0.2, 5.0.3, 5.0.4, 5.0.5, 5.0.6
- CDH 5.1.0, 5.1.2, 5.1.3, 5.1.4, 5.1.5
- CDH 5.2.0, 5.2.1, 5.2.3, 5.2.4, 5.2.5, 5.2.6
- CDH 5.3.0, 5.3.2, 5.3.3, 5.3.4, 5.3.5, 5.3.6, 5.3.8, 5.3.9, 5.3.10
- CDH 5.4.0, 5.4.1, 5.4.3, 5.4.4, 5.4.5, 5.4.7, 5.4.8, 5.4.9, 5.4.10, 5.4.11
- CDH 5.5.0, 5.5.1, 5.5.2, 5.5.4, 5.5.5, 5.5.6
- CDH 5.6.0, 5.6.1
- CDH 5.7.0, 5.7.1, 5.7.2, 5.7.3, 5.7.4, 5.7.5
- CDH 5.8.0, 5.8.1, 5.8.2, 5.8.3
- CDH 5.9.0, 5.9.1
- CDH 5.10.0
Users affected: All users using Cloudera Search
Date/time of detection: January 25, 2017
Detected by: Hrishikesh Gadre (Cloudera Inc.)
Severity (Low/Medium/High): Medium
Impact: Moderate. This vulnerability will allow an authenticated remote user to read arbitrary files as the solr user.
CVE: CVE-2017-3163
Immediate action required: Upgrade to a release that addresses this issue. Also consider enabling Kerberos authentication and TLS for Solr.
Addressed in release/refresh/patch:
- CDH 5.7.6, 5.8.4, 5.9.2, 5.10.1, 5.11.0 (and higher releases)
For the latest update on this issue see the corresponding Knowledge article:
TSB 2017-222: Apache Solr ReplicationHandler path traversal attack
Solr Queries by document id can bypass Sentry document-level security via the RealTimeGetHandler
Solr RealTimeGet queries with the id or ids parameters are not checked by Sentry document-level security in versions prior to CDH 5.7.0. The id or ids parameters must be exact matches for document ids (wild-carding is not supported), and the document ids are not otherwise visible to users who are denied access by document-level security. However, a user with internal knowledge of the document id structure, or who is able to guess document ids, is able to access unauthorized documents. This issue is documented in SENTRY-989.
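For clarity, these are the request shapes involved; the host, collection, and document ids below are illustrative, and /get is the default endpoint for the RealTimeGetHandler:
# Real-time get by exact document id (bypasses document-level security
# in affected releases)
curl "http://solr-host.example.com:8983/solr/myCollection/get?id=doc_12345"
curl "http://solr-host.example.com:8983/solr/myCollection/get?ids=doc_1,doc_2"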
Products affected: Cloudera Search
Releases affected: All versions of CDH 5, except for those indicated in the Addressed in release/refresh/patch section below.
Users affected: Cloudera Search users implementing document-level security
Date/time of detection: December 17, 2015
Severity (Low/Medium/High): Medium
CVE: CVE-2016-6353
Immediate action required: Upgrade to CDH 5.7.0 or higher.
Addressed in release/refresh/patch: CDH 5.7.0 and higher.
Apache Sentry
This section lists the security bulletins that have been released for Apache Sentry.
- Using Impala with Sentry enabled, revoking ALL privilege from server with grant option does not revoke
- Sentry allows the ALTER TABLE EXCHANGE PARTITION operation on a restricted database
- Sample solrconfig.xml file for enabling Solr/Sentry Authorization is missing critical attribute
- Impala issued REVOKE ALL ON SERVER does not revoke all privileges
- Hive built-in functions “reflect”, “reflect2”, and “java_method” not blocked by default in Sentry
Using Impala with Sentry enabled, revoking ALL privilege from server with grant option does not revoke
If you grant a role the ALL privilege at the SERVER scope and use the WITH GRANT OPTION clause, you cannot revoke the privilege. Although the show grant role command will show that the privilege has been revoked immediately after you run the command, the privilege has not been revoked.
For example, if you use the following command to grant the ALL privilege:
grant all on server to role <role name> with grant option;
and you revoke the privilege with this command:
revoke all on server from role <role name>;
Sentry will not revoke the ALL privilege from the role. If you run the following command immediately after you revoke the privilege:
show grant role <role name>;
the ALL privilege will not appear as a privilege granted to the role. However, after Sentry refreshes, the ALL privilege will reappear when you run the show grant role command.
Products affected: Apache Impala when used with Apache Sentry
Releases affected:
- CDH 5.14.x and all prior releases
- CDH 5.15.0, CDH 5.15.1
- CDH 6.0.0, CDH 6.0.1
Users affected: Impala users
Date/time of detection: Sep 5, 2018
Detected by: Cloudera
Severity (Low/Medium/High): 3.9 "Low"; CVSS:3.0/AV:N/AC:H/PR:H/UI:R/S:U/C:L/I:L/A:L
Impact: Running the revoke command does not revoke privileges
CVE: CVE-2018-17860
Immediate action required: Upgrade to a CDH release with the fix. Once the privilege has been granted, the only way to remove it is to delete the role.
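A minimal SQL sketch of the delete-and-recreate workaround, with a hypothetical role, group, and privilege set (re-grant only the privileges the role should actually retain):
-- Drop the role carrying the stuck ALL-on-SERVER grant, then recreate it
DROP ROLE analyst_role;
CREATE ROLE analyst_role;
GRANT ROLE analyst_role TO GROUP analysts;
-- Re-grant the intended privileges, for example:
GRANT SELECT ON DATABASE sales TO ROLE analyst_role;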
Addressed in release/refresh/patch: CDH 5.15.2, CDH 6.0.2
Sentry allows the ALTER TABLE EXCHANGE PARTITION operation on a restricted database
If a user has ALL permissions on a database, the ALTER TABLE EXCHANGE PARTITION command allows the user to move partitions from a table that the user does not have access to. For example, if a user has ALL permissions on database A, but no permissions on database B, the user can create a table with a schema in database A that is identical to a table in database B. The user can then move partitions from database B into database A, which allows the user to view restricted data and remove that data from the source database.
After you upgrade to a version of CDH listed in the "Addressed in release" section below, a user that tries to use the EXCHANGE PARTITION command to move a partition from a restricted database will receive a "No valid privileges" error.
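As a hedged illustration of the operation being described (database, table, and partition names are hypothetical; the user has ALL on db_a and no privileges on db_b):
-- Create a table in db_a with a schema identical to the restricted table
CREATE TABLE db_a.events LIKE db_b.events;
-- In affected releases this moves the partition out of db_b into db_a;
-- fixed releases reject it with a "No valid privileges" error
ALTER TABLE db_a.events EXCHANGE PARTITION (ds='2018-05-01') WITH TABLE db_b.events;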
Products affected: Hive services running Sentry
Releases affected:
- CDH 5.13.x and below
- CDH 5.14.0, 5.14.2, 5.14.3
- CDH 5.15.0
Users affected: Hive users running Sentry
Date/time of detection: May 10, 2018
Severity (Low/Medium/High): 8.1 High (CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:N)
Impact: Sensitive data exposure
CVE: CVE-2018-8028
Immediate action required: Upgrade to a version of CDH with the fix.
Addressed in release/refresh/patch:
- CDH 5.14.4
- CDH 5.15.1
- CDH 6.0.0
For the latest update on this issue see the corresponding Knowledge article:
TSB 2018-312: Sentry allows the "ALTER TABLE EXCHANGE PARTITION" operation on a restricted database
Sample solrconfig.xml file for enabling Solr/Sentry Authorization is missing critical attribute
The solrconfig.xml.secure sample configuration which was provided with CDH, if used to create solrconfig.xml, does not enforce Sentry authorization on the request URI /update/json/docs because it is missing a necessary attribute.
Products affected: Solr (if Sentry enabled)
Releases affected:
- CDH 5.8 and lower
- CDH 5.9.2 and lower
- CDH 5.10.1 and lower
- CDH 5.11.1 and lower
Users affected: Those who are using Sentry authorization with Cloudera Search, have used the provided sample configuration, and have not specified the attribute shown below in their solrconfig.xml file.
Date/time of detection: May 18, 2017
Detected by: István Farkas, Hrishikesh Gadre
Severity (Low/Medium/High): High
Impact: Unauthorized users using the request URI /update/json/docs may insert, update, or delete documents.
CVE: CVE-2017-9325
Immediate action required: Every solrconfig.xml of a collection protected by Sentry should be updated in ZooKeeper. In each file, change:
<updateRequestProcessorChain name="updateIndexAuthorization">
to:
<updateRequestProcessorChain name="updateIndexAuthorization" default="true">
After updating the configuration in ZooKeeper, the collections must be reloaded.
Addressed in release/refresh/patch:
- CDH 5.9.3 and higher
- CDH 5.10.2 and higher
- CDH 5.11.2 and higher
- CDH 5.12.0 and higher
Upgrading will only correct the sample configuration file. The fix mentioned above will still need to be applied on the affected cluster.
Impala issued REVOKE ALL ON SERVER does not revoke all privileges
For Impala users that use Sentry for authorization, issuing a REVOKE ALL ON SERVER FROM <ROLE> statement does not remove all server-level privileges from the <ROLE>. Specifically, Sentry fails to revoke privileges that were issued to <ROLE> through a GRANT ALL ON SERVER TO <ROLE> statement. All other privileges are revoked, but <ROLE> still has ALL privileges at SERVER scope after the REVOKE ALL ON SERVER statement has been executed. The privileges are shown in the output of a SHOW GRANT statement.
Products affected: Impala, Sentry
Releases affected:
CDH 5.5.0, CDH 5.5.1, CDH 5.5.2, CDH 5.5.4
CDH 5.6.0, CDH 5.6.1
CDH 5.7.0
Users affected: Customers who use Sentry authorization in Impala
Date/time of detection: April 25, 2016
Severity (Low/Medium/High): Medium
Impact: Inability to revoke ALL SERVER privileges from a specific role using Impala if they have been granted through a GRANT ALL ON SERVER statement.
CVE: CVE-2016-4572
Immediate action required: If the affected role has ALL privileges on SERVER, you can remove these privileges by dropping and re-creating the role. Alternatively, upgrade to 5.7.1, or 5.8.0 or higher.
Addressed in release/refresh/patch: CDH 5.7.1, CDH 5.8.0 and higher.
Hive built-in functions “reflect”, “reflect2”, and “java_method” not blocked by default in Sentry
Sentry does not block the execution of Hive built-in functions “reflect”, “reflect2”, and “java_method” by default in some CDH versions. These functions allow the execution of arbitrary user code, which is a security issue.
This issue is documented in SENTRY-960.
Products affected: Hive, Sentry
Releases affected:
CDH 5.4.0, CDH 5.4.1, CDH 5.4.2, CDH 5.4.3, CDH 5.4.4, CDH 5.4.5, CDH 5.4.6, CDH 5.4.7, CDH 5.4.8, CDH 5.5.0, CDH 5.5.1
Users affected: Users running Sentry with Hive.
Date/time of detection: November 13, 2015
Severity (Low/Medium/High): High
Impact: This potential vulnerability may enable an authenticated user to execute arbitrary code as a Hive superuser.
CVE: CVE-2016-0760
Immediate action required: Explicitly add the following to the blacklist property in the hive-site.xml of HiveServer2:
<property>
  <name>hive.server2.builtin.udf.blacklist</name>
  <value>reflect,reflect2,java_method</value>
</property>
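After restarting HiveServer2, the setting can be confirmed from a Beeline session; this is a sketch, and the reflect call is only a probe that should now be rejected:
-- Print the effective blacklist; expect reflect,reflect2,java_method
SET hive.server2.builtin.udf.blacklist;
-- This probe should now fail with an error indicating the UDF is blocked
SELECT reflect('java.lang.System', 'getProperty', 'user.name');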
Addressed in release/refresh/patch: CDH 5.4.9, CDH 5.5.2, CDH 5.6.0 and higher
Apache Spark
This section lists the security bulletins that have been released for Apache Spark.
Apache Spark local files left unencrypted
Certain operations in Spark leave local files unencrypted on disk, even when local file encryption is enabled with “spark.io.encryption.enabled”:
- In SparkR, when parallelize is used
- In PySpark, when broadcast and parallelize are used
- In PySpark, when Python UDFs are used
Products affected:
- CDH
- CDS Powered by Apache Spark
Releases affected:
- CDH 5.15.1 and earlier
- CDH 6.0.0
- CDS 2.0.0 releases
- CDS 2.1.0 release 1 and 2
- CDS 2.2.0 release 1 and 2
- CDS 2.3.0 release 3
Users affected: All users who run Spark on CDH and CDS in a multi-user environment.
Date/time of detection: July 2018
Severity (Low/Medium/High): 6.3 Medium (CVSS AV:L/AC:H/PR:N/UI:R/S:U/C:H/I:H/A:N)
Impact: Unencrypted data accessible.
CVE: CVE-2019-10099
Immediate action required: Update to a version of CDH containing the fix.
Workaround: Do not use PySpark or the fetch-to-disk options.
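For reference, the setting this bulletin refers to is enabled as below (a minimal spark-defaults.conf sketch); note that in the affected releases the operations listed above still write temporary files to local disk in the clear even with it on:
# spark-defaults.conf: enable encryption of local disk I/O
spark.io.encryption.enabled true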
Addressed in release/refresh/patch:
- CDH 5.15.2
- CDH 5.16.0
- CDH 6.0.1
- CDS 2.1.0 release 3
- CDS 2.2.0 release 3
- CDS 2.3.0 release 4
For the latest update on this issue, see the corresponding Knowledge article: TSB 2020-336: Apache Spark local files left unencrypted.
Apache Spark XSS vulnerability in UI
A malicious user can construct a URL pointing to a Spark UI's job and stage info pages that can be used to execute arbitrary scripts and expose user information, if this URL is accessed by another user who is unaware of the malicious intent.
Products affected: CDS Powered By Apache Spark
- CDS 2.1.0 release 1 and release 2
- CDS 2.2.0 release 1 and release 2
- CDS 2.3.0 release 2
Users affected: Potentially any user who uses the Spark UI.
Date/time of detection: May 28, 2018
Detected by: Spencer Gietzen (Rhino Security Labs)
Severity (Low/Medium/High): High
Impact: XSS vulnerabilities can be used to steal credentials or to perform arbitrary actions as the targeted user.
CVE: CVE-2018-8024
Immediate action required: Upgrade to a version of CDS Powered by Apache Spark where this issue is fixed, or as a workaround, disable the Spark UI for jobs and the Spark History Server.
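As a sketch of the per-job part of that workaround (the property name is standard Spark configuration; stopping the Spark History Server itself would be done through Cloudera Manager):
# spark-defaults.conf: disable the per-job Spark UI as a stopgap
spark.ui.enabled false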
Addressed in release/refresh/patch: CDS 2.3.0 release 3
Unsafe deserialization in Apache Spark launcher API
In Apache Spark 1.6.0 until 2.1.1, the launcher API performs unsafe deserialization of data received by its socket. This makes applications launched programmatically using the SparkLauncher#startApplication() API potentially vulnerable to arbitrary code execution by an attacker with access to any user account on the local machine. It does not affect apps run by spark-submit or spark-shell. The attacker would be able to execute code as the user that ran the Spark application. Users are encouraged to update to Spark version 2.2.0 or later.
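To clarify which usage pattern is affected, here is a minimal, hedged Java sketch of the programmatic launcher API; the application jar, main class, and master below are placeholders:
import org.apache.spark.launcher.SparkAppHandle;
import org.apache.spark.launcher.SparkLauncher;

public class LauncherExample {
    public static void main(String[] args) throws Exception {
        // startApplication() opens the local socket whose deserialization
        // is unsafe in the affected releases; spark-submit and spark-shell
        // do not use this path.
        SparkAppHandle handle = new SparkLauncher()
                .setAppResource("/path/to/app.jar")  // hypothetical path
                .setMainClass("com.example.MyApp")   // hypothetical class
                .setMaster("yarn")
                .startApplication();
        while (!handle.getState().isFinal()) {
            Thread.sleep(1000);  // wait for the launched app to finish
        }
    }
}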
Products affected: Cloudera Distribution of Apache Spark 2 and Spark in CDH.
Releases affected:
- CDH: all 5.7.x, all 5.8.x, 5.9.0-5.9.2, 5.10.0-5.10.1, 5.11.0-5.11.1
- Cloudera's Distribution of Apache Spark: 2.0 Release 1, 2.0 Release 2, 2.1 Release 1
Users affected: All
Date/time of detection: June 1, 2017
Detected by: Aditya Sharad (Semmle)
Severity (Low/Medium/High): Medium
Impact: Privilege escalation to the user who ran the Spark application.
CVE: CVE-2017-12612
Immediate action required: Affected customers should upgrade to latest maintenance version with the fix:
- Spark 1.x users: Update to CDH 5.9.3, 5.10.2, 5.11.2, or 5.12.0 or later
- Spark 2.x users: Update to Cloudera's Distribution of Apache Spark 2.1 Release 2 or later, or 2.2 Release 1 or later
- Or, discontinue use of the programmatic launcher API.
Addressed in release/refresh/patch:
- CDH: 5.9.3, CDH 5.10.2, CDH 5.11.2, CDH 5.12.0
- Cloudera's Distribution of Apache Spark: 2.1 Release 2, 2.2 Release 1
Keystore password for Spark History Server not properly secured
Products affected: Cloudera Manager, Spark
Releases affected: 5.11.0
Users affected: All users with TLS enabled for the Spark History Server.
Date/time of detection: April 18, 2017
Severity (Low/Medium/High): Medium
Impact: The keystore password for the Spark History Server is exposed in a world-readable file on the machine running the Spark History Server. The keystore file itself is not exposed.
The password is also visible in the Cloudera Manager Admin Console in the configuration files for the Spark History Server process.
CVE: CVE-2017-9326
Immediate action required: Upgrade to Cloudera Manager 5.11.1.
Addressed in release/refresh/patch: 5.11.1 or higher.
For the latest update on this issue see the Cloudera Knowledge article, TSB 2017-237: Keystore password for the Spark History Server not properly secured.
Cloudera Distribution of Apache Spark 2
This section lists the security bulletins that have been released for Cloudera Distribution of Apache Spark 2.
Unsafe deserialization in Apache Spark launcher API
In Apache Spark 1.6.0 until 2.1.1, the launcher API performs unsafe deserialization of data received by its socket. This makes applications launched programmatically using the SparkLauncher#startApplication() API potentially vulnerable to arbitrary code execution by an attacker with access to any user account on the local machine. It does not affect apps run by spark-submit or spark-shell. The attacker would be able to execute code as the user that ran the Spark application. Users are encouraged to update to Spark version 2.2.0 or later.
Products affected: Cloudera Distribution of Apache Spark 2 and Spark in CDH.
Releases affected:
- CDH: all 5.7.x, all 5.8.x, 5.9.0-5.9.2, 5.10.0-5.10.1, 5.11.0-5.11.1
- Cloudera's Distribution of Apache Spark: 2.0 Release 1, 2.0 Release 2, 2.1 Release 1
Users affected: All
Date/time of detection: June 1, 2017
Detected by: Aditya Sharad (Semmle)
Severity (Low/Medium/High): Medium
Impact: Privilege escalation to the user who ran the Spark application.
CVE: CVE-2017-12612
Immediate action required: Affected customers should upgrade to latest maintenance version with the fix:
- Spark 1.x users: Update to CDH 5.9.3, 5.10.2, 5.11.2, or 5.12.0 or later
- Spark 2.x users: Update to Cloudera's Distribution of Apache Spark 2.1 Release 2 or later, or 2.2 Release 1 or later
- Or, discontinue use of the programmatic launcher API.
Addressed in release/refresh/patch:
- CDH: 5.9.3, CDH 5.10.2, CDH 5.11.2, CDH 5.12.0
- Cloudera's Distribution of Apache Spark: 2.1 Release 2, 2.2 Release 1
Apache ZooKeeper
This section lists the security bulletins that have been released for Apache ZooKeeper.
ZooKeeper JMX did not support TLS when managed by Cloudera Manager
Products affected: ZooKeeper, Cloudera Manager
Releases affected: Cloudera Manager 6.1.0 and lower, Cloudera Manager 5.16 and lower
Users affected: All
Date/time of detection: June 7, 2018
Severity (Low/Medium/High): High
Impact: The ZooKeeper service optionally exposes a JMX port used for reporting and metrics. By default, Cloudera Manager enables this port, but prior to Cloudera Manager 6.1.0, it did not support mutual TLS authentication on this connection. While JMX has a password-based authentication mechanism that Cloudera Manager enables by default, weaknesses have been found in the authentication mechanism, and Oracle now advises JMX connections to enable mutual TLS authentication in addition to password-based authentication. A successful attack may leak data, cause denial of service, or even allow arbitrary code execution on the Java process that exposes a JMX port. Starting in Cloudera Manager 6.1.0, it is possible to configure mutual TLS authentication on ZooKeeper’s JMX port.
CVE: CVE-2018-11744
Immediate action required: Upgrade to Cloudera Manager 6.1.0 and enable TLS for the ZooKeeper JMX port by turning on the configuration settings “Enable TLS/SSL for ZooKeeper JMX” and “Enable TLS client authentication for JMX port” on the ZooKeeper service and configuring the appropriate TLS settings. Alternatively, disable the ZooKeeper JMX port via the configuration setting “Enable JMX Agent” on the ZooKeeper service.
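For reference, mutual-TLS JMX corresponds to the standard JVM flags sketched below; in CDH these are managed by the Cloudera Manager settings named above, and the keystore paths and password are placeholders:
# JVM flags underlying password-authenticated, mutually-authenticated TLS JMX
-Dcom.sun.management.jmxremote.authenticate=true
-Dcom.sun.management.jmxremote.ssl=true
-Dcom.sun.management.jmxremote.ssl.need.client.auth=true
-Djavax.net.ssl.keyStore=/path/to/zookeeper-keystore.jks
-Djavax.net.ssl.keyStorePassword=changeit
-Djavax.net.ssl.trustStore=/path/to/truststore.jks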
Addressed in release/refresh/patch: Cloudera Manager 6.1.0
Buffer Overflow Vulnerability in ZooKeeper C Command-Line Interface (CLI)
Products affected: ZooKeeper
Releases affected: All CDH 5.x versions lower than CDH 5.9.
Users affected: ZooKeeper users using the C CLI
Date/time of detection: September 21, 2016
Severity (Low/Medium/High): Low
Impact: The ZooKeeper C client shells cli_st and cli_mt have a buffer overflow vulnerability associated with parsing of the input command when using the cmd:<cmd> batch mode syntax. If the command string exceeds 1024 characters, a buffer overflow occurs. There is no known compromise that takes advantage of this vulnerability, and if security is enabled, the attacker is limited by client-level security constraints.
CVE: CVE-2016-5017
Immediate action required: Use the fully featured/supported Java CLI rather than the C CLI. This can be accomplished by executing the zookeeper-client command on hosts running the ZooKeeper server role.
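A short example of the supported CLI (host and port are illustrative):
# Connect with the Java CLI instead of the vulnerable cli_st/cli_mt shells
zookeeper-client -server zk-host.example.com:2181
# Then run commands interactively, for example:
#   ls /
#   stat /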
Addressed in release/refresh/patch: CDH 5.9.0