Minimum Required Role: Cluster Administrator (also provided by Full Administrator)
This feature is not available when using Cloudera Manager to manage Data Hub clusters.
Note the following before upgrading your clusters:
- Cruise Control might fail during an upgrade
- Cruise Control does not function properly during the upgrade
- Data replication with Streams Replication Manager might stop during a rolling upgrade
- Prometheus instances used by Streams Messaging Manager require reconfiguration following an upgrade
- The integration between Streams Messaging Manager and Streams Replication Manager is changed
- HDFS
- Ozone notes
Oozie ShareLib update and authorization
Updating the Oozie ShareLib is considered an admin operation. If you have configured
authorization in Oozie, only admin users can trigger a ShareLib update.
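An admin user can trigger the update from the Oozie CLI with the sharelibupdate admin command; a minimal sketch, assuming the Oozie server listens on the default port at a hypothetical hostname (adjust the URL, and authenticate first if Kerberos is enabled):

```shell
# Trigger an Oozie ShareLib update as an admin user.
# oozie-host.example.com is a placeholder for your Oozie server.
oozie admin -oozie http://oozie-host.example.com:11000/oozie -sharelibupdate
```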
Hive metastore
If you have created any materialized views or MSSQL indexed views on Hive backend schemas,
such as the SYS or INFORMATION_SCHEMA tables, your upgrade process can fail when the upgrade SQL
statements try to drop these tables.
You must drop the materialized views before performing an upgrade and then recreate the
views after the upgrade process is complete.
Ensure that you have backed up the Hive metastore (HMS) backend database before
dropping materialized views.
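The backup is taken with the backend database's native dump tool; a minimal sketch, assuming a MySQL/MariaDB backend with a database named metastore (host, user, and database name are hypothetical; use the equivalent tool for PostgreSQL or Oracle backends):

```shell
# Dump the HMS backend database to a dated SQL file before dropping views.
# db-host.example.com, user "hive", and database "metastore" are placeholders.
mysqldump -h db-host.example.com -u hive -p \
  --databases metastore > hms_backup_$(date +%Y%m%d).sql
```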
Start a Hive Beeline session and run the following query to identify materialized
views that are created on top of the SYS and INFORMATION_SCHEMA
tables:
SELECT DISTINCT d.DB_LOCATION_URI, d.NAME, t.TBL_NAME, t.TBL_TYPE, t.OWNER, t.VIEW_EXPANDED_TEXT
FROM sys.TBLS t
INNER JOIN sys.DBS d ON t.DB_ID = d.DB_ID
INNER JOIN sys.MV_CREATION_METADATA mv ON mv.TBL_NAME = t.TBL_NAME
INNER JOIN sys.MV_TABLES_USED tu ON mv.MV_CREATION_METADATA_ID = tu.MV_CREATION_METADATA_ID
WHERE tu.TBL_ID IN (SELECT distinct t.TBL_ID
FROM sys.MV_CREATION_METADATA mv
INNER JOIN sys.MV_TABLES_USED tu ON mv.MV_CREATION_METADATA_ID = tu.MV_CREATION_METADATA_ID
INNER JOIN sys.TBLS t ON tu.TBL_ID = t.TBL_ID
INNER JOIN sys.DBS d ON t.DB_ID = d.DB_ID
WHERE lower(d.NAME) IN ('sys', 'information_schema'))
AND upper(t.TBL_TYPE) = 'MATERIALIZED_VIEW';
If the query returns any materialized views, drop each view using the DROP
statement.
DROP MATERIALIZED VIEW [db_name.]materialized_view_name;
Upgrade the cluster and recreate the views after the upgrade process is complete.
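Recreating a view reuses the definition captured before the upgrade (for example, from the VIEW_EXPANDED_TEXT column returned by the identification query); the database, view, and column names below are hypothetical:

```sql
-- Hypothetical example: recreate a materialized view over sys.TBLS
-- using the definition saved before the upgrade.
CREATE MATERIALIZED VIEW db1.mv_tables_per_db AS
SELECT d.NAME, COUNT(*) AS table_count
FROM sys.TBLS t
INNER JOIN sys.DBS d ON t.DB_ID = d.DB_ID
GROUP BY d.NAME;
```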
Known issue during 7.1.7 SP2 to 7.1.9 upgrade
OPSAPS-68279: When upgrading from CDP 7.1.7 SP2 to CDP 7.1.9, the
DeployClientConfig command step sometimes fails with the following
error:
Error Message: Client configuration generation requires the following additional parcels to be activated: [cdh]
This can occur because activation of the 7.1.9 parcels failed. To verify:
1. Navigate to the Parcels page.
2. Check whether the following error is displayed: Error when distributing to
<hostname>: Src file
/opt/cloudera/parcels/.flood/CDH-7.1.9-1.cdh7.1.9.p0.43968053-el7.parcel/CDH-7.1.9-1.cdh7.1.9.p0.43968053-el7.parcel does not exist.
3. Using the error above, identify the affected host and SSH into it by running the
command ssh <hostname>.
4. Navigate to the agent log directory by running the command cd
/var/log/cloudera-scm-agent.
5. Search the agent log files for the following pattern: Exception: Untar failed
with return code: 2, with tar output: stdout: [b''], stderr: [b'\ngzip: stdin:
invalid compressed data--format violated\ntar: Unexpected EOF in archive\ntar:
Unexpected EOF in archive\ntar: Error is not recoverable: exiting
now\n'].
6. If the exception appears, restart the agent on that host by running the command systemctl restart cloudera-scm-agent.
7. After the agent restarts, click Resume to continue with the upgrade.
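The agent-log search described above can be done in one command with grep, run on the affected host (the path is the standard Cloudera Manager agent log directory; the pattern is a shortened form of the exception text):

```shell
# Recursively search the agent log directory for the untar failure
# reported during parcel distribution; -n prints matching line numbers.
grep -rn "Untar failed with return code" /var/log/cloudera-scm-agent/
```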
Phoenix
CDPD-67700/CDPQE-30593: While upgrading from CDH 6 to CDP 7.1.7 SP3
using a Phoenix parcel, Phoenix drops Tephra support, which causes a classpath
issue
This issue occurs because an old Phoenix parcel is used.
To resolve this issue, deactivate the old Phoenix parcel.