Troubleshooting replication failure in the DAS Event Processor
DAS uses replication to copy database and table metadata from Hive to the DAS PostgreSQL database. If replication fails, you may not be able to see database or table information in DAS. Replication happens in the following two phases:
- Bootstrap dump
- Incremental dump
When the DAS Event Processor starts for the first time, the metadata of all Hive databases and tables is copied into DAS. This is known as the bootstrap dump. After this phase, only the changes made since the last successful dump are copied into DAS at one-minute intervals. This is known as an incremental dump.
If the bootstrap dump never succeeded, you may not see any database or table information in DAS. When the bootstrap dump fails, the details of the failure are captured in the most recent Event Processor log.
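For example, you can scan the latest Event Processor log for exceptions from the command line. The log path below is an assumed example; substitute the actual location on your cluster:
# Show the most recent errors in the Event Processor log (log path is an assumed example)
grep -iE 'exception|error' /var/log/das/event_processor.log | tail -n 50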
If an incremental dump fails, you may not see any new changes to the databases and tables in DAS. The incremental dump relies on events stored in the Hive metastore. Because these events take up a lot of space and are used only for replicating data, they are removed from the Hive metastore daily; if the events needed by the next incremental dump have already been removed, DAS replication is affected.
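To check whether the required events are still available, you can inspect the metastore's notification log directly. The following is a minimal sketch assuming a MySQL-backed Hive metastore whose database is named hive (the client, credentials, and database name are assumptions; adjust them for your deployment):
# Show the range of notification events still present in the Hive metastore
# (prompts for the password of the assumed 'hive' user)
mysql -u hive -p hive -e "SELECT MIN(EVENT_ID) AS oldest_event, MAX(EVENT_ID) AS newest_event, FROM_UNIXTIME(MIN(EVENT_TIME)) AS oldest_event_time FROM NOTIFICATION_LOG;"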
Fixing incremental dump failures
Required permission/role: You must be an admin user to complete this task.
If the Event Processor log contains the exception "Notification events are missing in the meta store", then reset the DAS PostgreSQL database using the following command:
curl -H 'X-Requested-By: das' -H 'Cookie: JSESSIONID=<session id cookie>' http(s)://<hostname>:<port>/api/replicationDump/reset
Where:
- <session id cookie> is the cookie value that you obtain from the browser for an admin user signed in to the DAS UI
- <hostname> is the DAS Webapp hostname
- <port> is the DAS Webapp port
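For example, with the placeholders filled in (the hostname, port, and session ID below are hypothetical), the call looks like this:
# Hypothetical example: the session ID is copied from the browser's developer tools
# after signing in to the DAS UI as an admin user
curl -H 'X-Requested-By: das' -H 'Cookie: JSESSIONID=1A2B3C4D5E6F7A8B' http://dashost.example.com:30800/api/replicationDump/reset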
If the first exception that you hit is an SQLException, then it is a Hive-side failure. Save the HiveServer and the Hive Metastore logs for the time when the exception occurred.
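A minimal sketch for gathering those logs, assuming default log locations (the paths below are assumptions; substitute the directories used on your cluster):
# Bundle the HiveServer2 and Hive Metastore logs covering the failure window
# (log paths are assumed examples)
tar czf hive-replication-failure-logs.tar.gz /var/log/hive/hiveserver2.log /var/log/hive/hivemetastore.log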
File a bug with Cloudera Support along with the above-mentioned logs.