What's New from Workload XM
- Cluster Report Emails
- Data Warehouse Tables Widget
- Cluster Analytics Page Updates
- Data Engineering Jobs Layout Changes
- Data Warehouse Summary Layout Changes
- File Size Reporting
- Compare a Job with the Previous Run
- Spark RDD Health Check
- Quickly Analyze Workloads with Auto-Generated Workload Views
- Workload Classification for Deep-dive Analysis
- Troubleshoot Issues with the Job Comparison Feature
- Download SQL Commands to Address "Corrupt Table Statistics" and "Missing Table Statistics" Query Health Checks
- New Log and Query Redaction Configuration Properties for Telemetry Publisher
- Proxy Server Support for Telemetry Publisher
- Multiple Usability Improvements
Cluster Report Emails
Enable Cluster Report emails to get daily updates on cluster analytics, which you can use to keep an eye on queries, jobs, and the users who are running queries. These analytics are sent to your email address so you can check yesterday's statistics first thing in the morning without having to log in to your cluster.
See Cluster Report Emails for detailed information about what the reports contain and how to enable them.
Data Warehouse Tables Widget
There is a new Tables widget in the Data Warehouse Summary page. The Tables widget gives you a quick overview of the tables that are accessed most often in queries. The Data Read Distribution section of the widget shows the distribution of the amount of data that was read by the queries. If two or more tables are in a single row in the widget, it is because they were joined together. In that case, the Data Read Distribution statistics represent the total data read across all the tables in that row.
For example, in the image below, the parrot.employees table was accessed in 1% of the total queries. 75% of those queries read 209.4 MiB of data or less, 95% read 395.2 MiB or less, and the query that read the most data read 861.5 MiB.
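The figures in the example above are simple percentiles over per-query read sizes. The sketch below is illustrative only, using a nearest-rank percentile and made-up sample values (not Workload XM code or data), to show how a p75/p95/max summary like the widget's can be computed:

```python
import math

def percentile(values, pct):
    """Nearest-rank percentile: the smallest value at or below which
    pct percent of the queries fall."""
    ordered = sorted(values)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

# Hypothetical per-query data-read sizes in MiB (sample data, not from Workload XM).
mib_read = [5, 8, 12, 15, 20, 25, 30, 40, 55, 60,
            75, 90, 110, 150, 210, 260, 320, 360, 395, 860]

print(percentile(mib_read, 75))  # 75% of queries read this much or less: 210
print(percentile(mib_read, 95))  # 95% of queries read this much or less: 395
print(max(mib_read))             # the largest single read: 860
```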
Cluster Analytics Page Updates
The landing page of Workload XM is now the Cluster Analytics page. The previous landing page, Welcome to Cloudera Workload XM, has been removed.
The Cluster Analytics page now sorts clusters by the date they were last updated, by default. The Last Updated column shows the date and time that the cluster was last updated. You can also sort by cluster name. The Email Report column indicates whether you are subscribed to Cluster Reports for that cluster, and the Actions menu contains a new Enable Cluster Report Emails option. The image below shows the new Cluster Analytics page:
Data Engineering Jobs Layout Changes
The way that baseline information is presented on the Data Engineering Jobs page has been updated to match the style of the Job Comparison page.
Metrics are grouped under headers and listed in alphabetical order. The new layout is shown in the screenshot below:
Data Warehouse Summary Layout Changes
The Outliers widget has been removed from the Data Warehouse Summary page, and its contents have been split into the following separate widgets:
- Usage Analysis
Previously, the information in these widgets was contained in separate tabs within the Outliers widget. The screenshot below shows the new layout of the Data Warehouse Summary page:
File Size Reporting
File size reporting helps you identify databases and tables in which data is stored inefficiently, in small files or partitions. When data is stored inefficiently, you may experience performance issues.
For information about how to enable file size reporting, see Enabling File Size Reporting.
For information about viewing file size metadata, see File Size Reporting.
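Workload XM derives these metrics from your cluster's file metadata. Purely as an illustration of the small-files idea, the sketch below counts the files under a local directory that fall below a hypothetical 1 MiB threshold; the threshold and the helper function are assumptions for illustration, not part of Workload XM:

```python
import os

SMALL_FILE_THRESHOLD = 1 * 1024 * 1024  # 1 MiB; illustrative cutoff only

def small_file_ratio(root):
    """Walk a directory tree and return (small_count, total_count),
    where small_count is the number of files under the threshold."""
    small = total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            total += 1
            if os.path.getsize(os.path.join(dirpath, name)) < SMALL_FILE_THRESHOLD:
                small += 1
    return small, total
```

A high ratio of small files to total files for a table's directory is the kind of inefficiency that file size reporting surfaces.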
Compare a Job with the Previous Run
When a job is flagged as slow, a Compare with Previous Run link appears on the job page. The link opens the Job Comparison tool and compares the current run of the job with its previous run.
The image below shows the location of the link:
For more information about the Job Comparison tool, see Troubleshooting with the Job Comparison Feature.
Spark RDD Health Check
The Spark RDD health check lets you know if you have a redundant RDD cache. Workload XM tells you the location of the cache so that you can remove it to save executor memory.
For more information about health checks, see Data Engineering (Apache Hive, Spark, MapReduce) Health Checks.
Quickly Analyze Workloads with Auto-Generated Workload Views
Workload XM now recommends workload views that you can use immediately to analyze workloads on your cluster. Recommendations are based on the criteria that occur most frequently across your queries, including:
- tables accessed
- resource pools used
- users who initiated the queries
To use auto-generated workload recommendations, select Workloads in the left menu under Data Warehouse, and click Define New:
Then, in the Define New drop-down menu, select Select recommended views.
Using auto-generated workload views saves you time because you do not need to perform the initial analysis to determine which criteria to use to create a workload view. For details about how you can use workload views to perform analysis, see Classifying Workloads for Analysis with Workload Views.
Workload Classification for Deep-dive Analysis
Break down workloads by specific criteria to perform deep-dive analysis on the queries. For example, you can use the Workload Classification feature to determine which users are executing workloads that do not adhere to SLAs. You can also examine how queries being sent to specific databases or that use specific pools are performing against SLAs. For details about how to use this feature, see Classifying Workloads for Analysis with Workload Views.
To access this feature, select Workloads under Data Warehouse in the left menu:
Troubleshoot Issues with the Job Comparison Feature
The Job Comparison feature makes it easy to compare two different runs of the same Data Engineering job, which is especially useful when something changes unexpectedly. For example, if a job that consistently completes within a specific amount of time starts taking longer, you want to know why. The Job Comparison feature lets you quickly see the differences between two runs of the same job so you can troubleshoot the cause. For details about how to use this feature, see Troubleshooting with the Job Comparison Feature.
To access this feature, select Jobs under Data Engineering in the left menu:
Download SQL Commands to Address "Corrupt Table Statistics" and "Missing Table Statistics" Query Health Checks
If your queries trigger the Corrupt Table Statistics or the Missing Table Statistics health checks, Workload XM generates the SQL code you can copy and run on your cluster to address these issues.
To download SQL code for creating or repairing table statistics:
- Under Data Warehouse, select Queries.
- On the Queries page, in the Range field, select the time period you want to investigate.
- In the Health Check column, select either Corrupt Table Statistics or Missing Table Statistics. This filters out queries that do not trigger these health checks.
- Click the query to view its details.
- In the Performance Issues region of the query details page, click the Health Check Violations tab, which lists the health checks that were triggered for this query. Here you can view the SQL code that you can copy and run to repair the table statistics issues.
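The generated SQL typically consists of standard Hive and Impala statistics statements. As an illustration only (the code Workload XM generates for your tables may differ), repairing statistics for the parrot.employees table from the earlier example might look like:

```sql
-- Hive: compute table-level and column-level statistics.
ANALYZE TABLE parrot.employees COMPUTE STATISTICS;
ANALYZE TABLE parrot.employees COMPUTE STATISTICS FOR COLUMNS;

-- Impala: compute (or recompute) statistics for the same table.
COMPUTE STATS parrot.employees;
```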
New Log and Query Redaction Configuration Properties for Telemetry Publisher
Starting in Cloudera Manager 5.16, you can configure log and query redaction for the Telemetry Publisher service. This configuration is enabled by default. For more information, see Log and Query Redaction for the Telemetry Publisher Service.
Proxy Server Support for Telemetry Publisher
Starting in Cloudera Manager 5.16, you can configure the Telemetry Publisher service to send metrics, configuration files, and log files to Workload XM through a proxy server for database and Altus metrics uploads. For more information, see Configuring Telemetry Publisher to Use a Proxy Server.
Multiple Usability Improvements
The Workload XM team is constantly improving usability. Here are some of our recent upgrades to the user experience:
- Support for parsing Spark 2.3 application history logs.
- Job history files and Spark event logs are now available to download from the Execution Detail tab on the Job detail page: Download Job History Files and Download Spark Event Logs.
- Additions to the Query Detail page: you can now download the query profile for Impala queries and view the total number of joins performed for a specific query.
- A new Concurrency chart on the Data Warehouse Summary page. This chart shows query concurrency in the cluster during a selected time range, which can help you identify potential resource contention or the busiest time of day on your cluster.