Table and Column Statistics
When statistics are available, Impala can better optimize complex or multi-table queries: the statistics describe the volume of data and how the values are distributed, and Impala uses that information to parallelize and distribute the work for a query. The following sections describe the categories of statistics Impala can work with, and how to produce them and keep them up to date.
Originally, Impala relied on the Hive mechanism for collecting statistics, through the Hive ANALYZE TABLE statement which initiates a MapReduce job. For better user-friendliness and reliability, Impala implements its own COMPUTE STATS statement in Impala 1.2.2 and higher, along with the SHOW TABLE STATS and SHOW COLUMN STATS statements.
Overview of Table Statistics
The Impala query planner can make use of statistics about entire tables and partitions. This information includes physical characteristics such as the number of rows, number of data files, the total size of the data files, and the file format. For partitioned tables, the numbers are calculated per partition, and as totals for the whole table. This metadata is stored in the metastore database, and can be updated by either Impala or Hive. If a number is not available, the value -1 is used as a placeholder. Some numbers, such as number and total sizes of data files, are always kept up to date because they can be calculated cheaply, as part of gathering HDFS block metadata.
The following example shows table stats for an unpartitioned Parquet table. The values for the number and sizes of files are always available. Initially, the number of rows is not known, because it requires a potentially expensive scan through the entire table, and so that value is displayed as -1. The COMPUTE STATS statement fills in any unknown table stats values.
show table stats parquet_snappy;
+-------+--------+---------+--------------+-------------------+---------+...
| #Rows | #Files | Size    | Bytes Cached | Cache Replication | Format  |...
+-------+--------+---------+--------------+-------------------+---------+...
| -1    | 96     | 23.35GB | NOT CACHED   | NOT CACHED        | PARQUET |...
+-------+--------+---------+--------------+-------------------+---------+...

compute stats parquet_snappy;
+-----------------------------------------+
| summary                                 |
+-----------------------------------------+
| Updated 1 partition(s) and 6 column(s). |
+-----------------------------------------+

show table stats parquet_snappy;
+------------+--------+---------+--------------+-------------------+---------+...
| #Rows      | #Files | Size    | Bytes Cached | Cache Replication | Format  |...
+------------+--------+---------+--------------+-------------------+---------+...
| 1000000000 | 96     | 23.35GB | NOT CACHED   | NOT CACHED        | PARQUET |...
+------------+--------+---------+--------------+-------------------+---------+...
Impala performs some optimizations using this metadata on its own, and other optimizations by using a combination of table and column statistics.
To gather table statistics after loading data into a table or partition, use one of the following techniques:
- Issue the statement COMPUTE STATS in Impala. This statement, available in Impala 1.2.2 and higher, is the preferred method because:
- It gathers table statistics and statistics for all partitions and columns in a single operation.
- It does not rely on any special Hive settings, metastore configuration, or separate database to hold the statistics.
- If you need to adjust statistics incrementally for an existing table, such as after adding a partition or inserting new data, you can use an ALTER TABLE statement such as:
alter table analysis_data set tblproperties('numRows'='new_value', 'STATS_GENERATED_VIA_STATS_TASK' = 'true');
to update that one numeric property rather than re-processing the whole table. (The requirement to include the STATS_GENERATED_VIA_STATS_TASK property is relatively new, a result of the issue HIVE-8648 for the Hive metastore.)
- Load the data through the INSERT OVERWRITE statement in Hive, while the Hive setting hive.stats.autogather is enabled.
- Issue an ANALYZE TABLE statement in Hive, for the entire table or a specific partition.
ANALYZE TABLE tablename [PARTITION(partcol1[=val1], partcol2[=val2], ...)] COMPUTE STATISTICS [NOSCAN];
For example, to gather statistics for a non-partitioned table:

ANALYZE TABLE customer COMPUTE STATISTICS;
To gather statistics for the store table, which is partitioned by state and county, across all of its partitions:

ANALYZE TABLE store PARTITION(s_state, s_county) COMPUTE STATISTICS;
To gather statistics for the store table, but only for the partitions for California:

ANALYZE TABLE store PARTITION(s_state='CA', s_county) COMPUTE STATISTICS;
To check that table statistics are available for a table, and see the details of those statistics, use the statement SHOW TABLE STATS table_name. See SHOW Statement for details.
If you use the Hive-based methods of gathering statistics, see the Hive wiki for information about the required configuration on the Hive side. Cloudera recommends using the Impala COMPUTE STATS statement to avoid potential configuration and scalability issues with the statistics-gathering process.
If you run the Hive statement ANALYZE TABLE COMPUTE STATISTICS FOR COLUMNS, Impala can only use the resulting column statistics if the table is unpartitioned. Impala cannot use Hive-generated column statistics for a partitioned table.
Overview of Column Statistics
The Impala query planner can make use of statistics about individual columns when that metadata is available in the metastore database. This technique is most valuable for columns compared across tables in join queries, to help estimate how many rows the query will retrieve from each table. These statistics are also important for correlated subqueries using the EXISTS() or IN() operators, which are processed internally the same way as join queries.
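For instance, a correlated subquery such as the following is planned internally as a join, so the planner relies on the #Distinct Values figures for the compared columns to estimate how many rows flow through each side. (This is an illustrative sketch; the customer and orders tables and their columns are hypothetical.)

```sql
-- Hypothetical schema: Impala processes the EXISTS subquery like a join,
-- so column stats for o.cust_id and c.id drive the join-order decision.
select c.name
from customer c
where exists (select 1 from orders o where o.cust_id = c.id);
```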
The following example shows column stats for an unpartitioned Parquet table. The values for the maximum and average sizes of some types are always available, because those figures are constant for numeric and other fixed-size types. Initially, the number of distinct values is not known, because it requires a potentially expensive scan through the entire table, and so that value is displayed as -1. The same applies to maximum and average sizes of variable-sized types, such as STRING. The COMPUTE STATS statement fills in most unknown column stats values. (It does not record the number of NULL values, because currently Impala does not use that figure for query optimization.)
show column stats parquet_snappy;
+-------------+----------+------------------+--------+----------+----------+
| Column      | Type     | #Distinct Values | #Nulls | Max Size | Avg Size |
+-------------+----------+------------------+--------+----------+----------+
| id          | BIGINT   | -1               | -1     | 8        | 8        |
| val         | INT      | -1               | -1     | 4        | 4        |
| zerofill    | STRING   | -1               | -1     | -1       | -1       |
| name        | STRING   | -1               | -1     | -1       | -1       |
| assertion   | BOOLEAN  | -1               | -1     | 1        | 1        |
| location_id | SMALLINT | -1               | -1     | 2        | 2        |
+-------------+----------+------------------+--------+----------+----------+

compute stats parquet_snappy;
+-----------------------------------------+
| summary                                 |
+-----------------------------------------+
| Updated 1 partition(s) and 6 column(s). |
+-----------------------------------------+

show column stats parquet_snappy;
+-------------+----------+------------------+--------+----------+-------------------+
| Column      | Type     | #Distinct Values | #Nulls | Max Size | Avg Size          |
+-------------+----------+------------------+--------+----------+-------------------+
| id          | BIGINT   | 183861280        | -1     | 8        | 8                 |
| val         | INT      | 139017           | -1     | 4        | 4                 |
| zerofill    | STRING   | 101761           | -1     | 6        | 6                 |
| name        | STRING   | 145636240        | -1     | 22       | 13.00020027160645 |
| assertion   | BOOLEAN  | 2                | -1     | 1        | 1                 |
| location_id | SMALLINT | 339              | -1     | 2        | 2                 |
+-------------+----------+------------------+--------+----------+-------------------+
To check whether column statistics are available for a particular set of columns, use the SHOW COLUMN STATS table_name statement, or check the extended EXPLAIN output for a query against that table that refers to those columns. See SHOW Statement and EXPLAIN Statement for details.
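As a quick sketch of the EXPLAIN approach, you can raise the plan detail with the EXPLAIN_LEVEL query option before examining the plan (the table name here is illustrative):

```sql
set explain_level=extended;
explain select count(distinct val) from parquet_snappy;
-- With column stats in place, the extended plan shows cardinality and
-- size estimates at each plan node rather than 'unavailable'.
```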
How Table and Column Statistics Work for Partitioned Tables
When you use Impala for "big data", you are highly likely to use partitioning for your biggest tables, the ones representing data that can be logically divided based on dates, geographic regions, or similar criteria. The table and column statistics are especially useful for optimizing queries on such tables. For example, a query involving one year might involve substantially more or less data than a query involving a different year, or a range of several years. Each query might be optimized differently as a result.
The following examples show how table and column stats work with a partitioned table. The table for this example is partitioned by year, month, and day. For simplicity, the sample data consists of 5 partitions, all from the same year and month. Table stats are collected independently for each partition. (In fact, the SHOW PARTITIONS statement displays exactly the same information as SHOW TABLE STATS for a partitioned table.) Column stats apply to the entire table, not to individual partitions. Because the partition key column values are represented as HDFS directories, their characteristics are typically known in advance, even when the values for non-key columns are shown as -1.
show partitions year_month_day;
+-------+-------+-----+-------+--------+---------+--------------+-------------------+---------+...
| year  | month | day | #Rows | #Files | Size    | Bytes Cached | Cache Replication | Format  |...
+-------+-------+-----+-------+--------+---------+--------------+-------------------+---------+...
| 2013  | 12    | 1   | -1    | 1      | 2.51MB  | NOT CACHED   | NOT CACHED        | PARQUET |...
| 2013  | 12    | 2   | -1    | 1      | 2.53MB  | NOT CACHED   | NOT CACHED        | PARQUET |...
| 2013  | 12    | 3   | -1    | 1      | 2.52MB  | NOT CACHED   | NOT CACHED        | PARQUET |...
| 2013  | 12    | 4   | -1    | 1      | 2.51MB  | NOT CACHED   | NOT CACHED        | PARQUET |...
| 2013  | 12    | 5   | -1    | 1      | 2.52MB  | NOT CACHED   | NOT CACHED        | PARQUET |...
| Total |       |     | -1    | 5      | 12.58MB | 0B           |                   |         |...
+-------+-------+-----+-------+--------+---------+--------------+-------------------+---------+...

show table stats year_month_day;
+-------+-------+-----+-------+--------+---------+--------------+-------------------+---------+...
| year  | month | day | #Rows | #Files | Size    | Bytes Cached | Cache Replication | Format  |...
+-------+-------+-----+-------+--------+---------+--------------+-------------------+---------+...
| 2013  | 12    | 1   | -1    | 1      | 2.51MB  | NOT CACHED   | NOT CACHED        | PARQUET |...
| 2013  | 12    | 2   | -1    | 1      | 2.53MB  | NOT CACHED   | NOT CACHED        | PARQUET |...
| 2013  | 12    | 3   | -1    | 1      | 2.52MB  | NOT CACHED   | NOT CACHED        | PARQUET |...
| 2013  | 12    | 4   | -1    | 1      | 2.51MB  | NOT CACHED   | NOT CACHED        | PARQUET |...
| 2013  | 12    | 5   | -1    | 1      | 2.52MB  | NOT CACHED   | NOT CACHED        | PARQUET |...
| Total |       |     | -1    | 5      | 12.58MB | 0B           |                   |         |...
+-------+-------+-----+-------+--------+---------+--------------+-------------------+---------+...
show column stats year_month_day;
+-----------+---------+------------------+--------+----------+----------+
| Column    | Type    | #Distinct Values | #Nulls | Max Size | Avg Size |
+-----------+---------+------------------+--------+----------+----------+
| id        | INT     | -1               | -1     | 4        | 4        |
| val       | INT     | -1               | -1     | 4        | 4        |
| zfill     | STRING  | -1               | -1     | -1       | -1       |
| name      | STRING  | -1               | -1     | -1       | -1       |
| assertion | BOOLEAN | -1               | -1     | 1        | 1        |
| year      | INT     | 1                | 0      | 4        | 4        |
| month     | INT     | 1                | 0      | 4        | 4        |
| day       | INT     | 5                | 0      | 4        | 4        |
+-----------+---------+------------------+--------+----------+----------+

compute stats year_month_day;
+-----------------------------------------+
| summary                                 |
+-----------------------------------------+
| Updated 5 partition(s) and 5 column(s). |
+-----------------------------------------+

show table stats year_month_day;
+-------+-------+-----+--------+--------+---------+--------------+-------------------+---------+...
| year  | month | day | #Rows  | #Files | Size    | Bytes Cached | Cache Replication | Format  |...
+-------+-------+-----+--------+--------+---------+--------------+-------------------+---------+...
| 2013  | 12    | 1   | 93606  | 1      | 2.51MB  | NOT CACHED   | NOT CACHED        | PARQUET |...
| 2013  | 12    | 2   | 94158  | 1      | 2.53MB  | NOT CACHED   | NOT CACHED        | PARQUET |...
| 2013  | 12    | 3   | 94122  | 1      | 2.52MB  | NOT CACHED   | NOT CACHED        | PARQUET |...
| 2013  | 12    | 4   | 93559  | 1      | 2.51MB  | NOT CACHED   | NOT CACHED        | PARQUET |...
| 2013  | 12    | 5   | 93845  | 1      | 2.52MB  | NOT CACHED   | NOT CACHED        | PARQUET |...
| Total |       |     | 469290 | 5      | 12.58MB | 0B           |                   |         |...
+-------+-------+-----+--------+--------+---------+--------------+-------------------+---------+...
show column stats year_month_day;
+-----------+---------+------------------+--------+----------+-------------------+
| Column    | Type    | #Distinct Values | #Nulls | Max Size | Avg Size          |
+-----------+---------+------------------+--------+----------+-------------------+
| id        | INT     | 511129           | -1     | 4        | 4                 |
| val       | INT     | 364853           | -1     | 4        | 4                 |
| zfill     | STRING  | 311430           | -1     | 6        | 6                 |
| name      | STRING  | 471975           | -1     | 22       | 13.00160026550293 |
| assertion | BOOLEAN | 2                | -1     | 1        | 1                 |
| year      | INT     | 1                | 0      | 4        | 4                 |
| month     | INT     | 1                | 0      | 4        | 4                 |
| day       | INT     | 5                | 0      | 4        | 4                 |
+-----------+---------+------------------+--------+----------+-------------------+
Keeping Statistics Up to Date
When the contents of a table or partition change significantly, recompute the stats for the relevant table or partition. The degree of change that qualifies as "significant" varies, depending on the absolute and relative sizes of the tables. Typically, if you add more than 30% more data to a table, it is worthwhile to recompute stats, because the differences in number of rows and number of distinct values might cause Impala to choose a different join order when that table is used in join queries. This guideline is most important for the largest tables. For example, adding 30% new data to a table containing 1 TB has a greater effect on join order than adding 30% to a table containing only a few megabytes, and the larger table has a greater effect on query performance if Impala chooses a suboptimal join order as a result of outdated statistics.
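One informal way to judge whether stats have gone stale is to compare the recorded #Rows figure against a fresh count. This is a sketch with an illustrative table name:

```sql
show table stats sales_data;      -- note the recorded #Rows value
select count(*) from sales_data;  -- current actual row count
-- If the two differ by roughly 30% or more, recompute:
compute stats sales_data;
```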
If you reload a complete new set of data for a table, but the number of rows and number of distinct values for each column is relatively unchanged from before, you do not need to recompute stats for the table.
If the statistics for a table are out of date, and the table's large size makes it impractical to recompute new stats immediately, you can use the DROP STATS statement to remove the obsolete statistics, making it easier to identify tables that need a new COMPUTE STATS operation.
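For example, after dropping the obsolete stats, the -1 placeholder values reappear in the SHOW TABLE STATS output, flagging the table for a later COMPUTE STATS run (the table name is illustrative):

```sql
drop stats huge_table;
show table stats huge_table;  -- #Rows shows -1 again until stats are recomputed
```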
Setting Statistics Manually through ALTER TABLE
The most crucial piece of data in all the statistics is the number of rows in the table (for an unpartitioned table) or for each partition (for a partitioned table). The COMPUTE STATS statement always gathers statistics about all columns, as well as overall table statistics. If it is not practical to do an entire COMPUTE STATS operation after adding a partition or inserting data, or if you can see that Impala would produce a more efficient plan if the number of rows was different, you can manually set the number of rows through an ALTER TABLE statement:
create table analysis_data stored as parquet as select * from raw_data;
Inserted 1000000000 rows in 181.98s

compute stats analysis_data;

insert into analysis_data select * from smaller_table_we_forgot_before;
Inserted 1000000 rows in 15.32s

-- Now there are 1001000000 rows. We can update this single data point in the stats.
alter table analysis_data set tblproperties('numRows'='1001000000');
For a partitioned table, update both the per-partition number of rows and the number of rows for the whole table:
-- If the table originally contained 1000000 rows, and we add another partition,
-- change the numRows property for the partition and the overall table.
alter table partitioned_data partition(year=2009, month=4)
  set tblproperties ('numRows'='30000');
alter table partitioned_data
  set tblproperties ('numRows'='1030000');
In practice, the COMPUTE STATS statement should be fast enough that this technique is not needed. It is most useful as a workaround in case of performance issues, where you might adjust the numRows value higher or lower to produce the ideal join order.
Examples of Using Table and Column Statistics with Impala
The following examples walk through a sequence of SHOW TABLE STATS, SHOW COLUMN STATS, ALTER TABLE, and SELECT and INSERT statements to illustrate various aspects of how Impala uses statistics to help optimize queries.
This example shows table and column statistics for the STORE table used in the TPC-DS benchmarks for decision support systems. It is a tiny table holding data for 12 stores. Initially, before any statistics are gathered by a COMPUTE STATS statement, most of the numeric fields show placeholder values of -1, indicating that the figures are unknown. The figures that are filled in are values that are easily countable or deducible at the physical level, such as the number of files, total data size of the files, and the maximum and average sizes for data types that have a constant size such as INT, FLOAT, and TIMESTAMP.
[localhost:21000] > show table stats store;
+-------+--------+--------+--------+
| #Rows | #Files | Size   | Format |
+-------+--------+--------+--------+
| -1    | 1      | 3.08KB | TEXT   |
+-------+--------+--------+--------+
Returned 1 row(s) in 0.03s
[localhost:21000] > show column stats store;
+--------------------+-----------+------------------+--------+----------+----------+
| Column             | Type      | #Distinct Values | #Nulls | Max Size | Avg Size |
+--------------------+-----------+------------------+--------+----------+----------+
| s_store_sk         | INT       | -1               | -1     | 4        | 4        |
| s_store_id         | STRING    | -1               | -1     | -1       | -1       |
| s_rec_start_date   | TIMESTAMP | -1               | -1     | 16       | 16       |
| s_rec_end_date     | TIMESTAMP | -1               | -1     | 16       | 16       |
| s_closed_date_sk   | INT       | -1               | -1     | 4        | 4        |
| s_store_name       | STRING    | -1               | -1     | -1       | -1       |
| s_number_employees | INT       | -1               | -1     | 4        | 4        |
| s_floor_space      | INT       | -1               | -1     | 4        | 4        |
| s_hours            | STRING    | -1               | -1     | -1       | -1       |
| s_manager          | STRING    | -1               | -1     | -1       | -1       |
| s_market_id        | INT       | -1               | -1     | 4        | 4        |
| s_geography_class  | STRING    | -1               | -1     | -1       | -1       |
| s_market_desc      | STRING    | -1               | -1     | -1       | -1       |
| s_market_manager   | STRING    | -1               | -1     | -1       | -1       |
| s_division_id      | INT       | -1               | -1     | 4        | 4        |
| s_division_name    | STRING    | -1               | -1     | -1       | -1       |
| s_company_id       | INT       | -1               | -1     | 4        | 4        |
| s_company_name     | STRING    | -1               | -1     | -1       | -1       |
| s_street_number    | STRING    | -1               | -1     | -1       | -1       |
| s_street_name      | STRING    | -1               | -1     | -1       | -1       |
| s_street_type      | STRING    | -1               | -1     | -1       | -1       |
| s_suite_number     | STRING    | -1               | -1     | -1       | -1       |
| s_city             | STRING    | -1               | -1     | -1       | -1       |
| s_county           | STRING    | -1               | -1     | -1       | -1       |
| s_state            | STRING    | -1               | -1     | -1       | -1       |
| s_zip              | STRING    | -1               | -1     | -1       | -1       |
| s_country          | STRING    | -1               | -1     | -1       | -1       |
| s_gmt_offset       | FLOAT     | -1               | -1     | 4        | 4        |
| s_tax_percentage   | FLOAT     | -1               | -1     | 4        | 4        |
+--------------------+-----------+------------------+--------+----------+----------+
Returned 29 row(s) in 0.04s
With the Hive ANALYZE TABLE statement for column statistics, you had to specify each column for which to gather statistics. The Impala COMPUTE STATS statement automatically gathers statistics for all columns, because it reads through the entire table relatively quickly and can efficiently compute the values for all the columns. This example shows how after running the COMPUTE STATS statement, statistics are filled in for both the table and all its columns:
[localhost:21000] > compute stats store;
+------------------------------------------+
| summary                                  |
+------------------------------------------+
| Updated 1 partition(s) and 29 column(s). |
+------------------------------------------+
Returned 1 row(s) in 1.88s
[localhost:21000] > show table stats store;
+-------+--------+--------+--------+
| #Rows | #Files | Size   | Format |
+-------+--------+--------+--------+
| 12    | 1      | 3.08KB | TEXT   |
+-------+--------+--------+--------+
Returned 1 row(s) in 0.02s
[localhost:21000] > show column stats store;
+--------------------+-----------+------------------+--------+----------+-------------------+
| Column             | Type      | #Distinct Values | #Nulls | Max Size | Avg Size          |
+--------------------+-----------+------------------+--------+----------+-------------------+
| s_store_sk         | INT       | 12               | -1     | 4        | 4                 |
| s_store_id         | STRING    | 6                | -1     | 16       | 16                |
| s_rec_start_date   | TIMESTAMP | 4                | -1     | 16       | 16                |
| s_rec_end_date     | TIMESTAMP | 3                | -1     | 16       | 16                |
| s_closed_date_sk   | INT       | 3                | -1     | 4        | 4                 |
| s_store_name       | STRING    | 8                | -1     | 5        | 4.25              |
| s_number_employees | INT       | 9                | -1     | 4        | 4                 |
| s_floor_space      | INT       | 10               | -1     | 4        | 4                 |
| s_hours            | STRING    | 2                | -1     | 8        | 7.083300113677979 |
| s_manager          | STRING    | 7                | -1     | 15       | 12                |
| s_market_id        | INT       | 7                | -1     | 4        | 4                 |
| s_geography_class  | STRING    | 1                | -1     | 7        | 7                 |
| s_market_desc      | STRING    | 10               | -1     | 94       | 55.5              |
| s_market_manager   | STRING    | 7                | -1     | 16       | 14                |
| s_division_id      | INT       | 1                | -1     | 4        | 4                 |
| s_division_name    | STRING    | 1                | -1     | 7        | 7                 |
| s_company_id       | INT       | 1                | -1     | 4        | 4                 |
| s_company_name     | STRING    | 1                | -1     | 7        | 7                 |
| s_street_number    | STRING    | 9                | -1     | 3        | 2.833300113677979 |
| s_street_name      | STRING    | 12               | -1     | 11       | 6.583300113677979 |
| s_street_type      | STRING    | 8                | -1     | 9        | 4.833300113677979 |
| s_suite_number     | STRING    | 11               | -1     | 9        | 8.25              |
| s_city             | STRING    | 2                | -1     | 8        | 6.5               |
| s_county           | STRING    | 1                | -1     | 17       | 17                |
| s_state            | STRING    | 1                | -1     | 2        | 2                 |
| s_zip              | STRING    | 2                | -1     | 5        | 5                 |
| s_country          | STRING    | 1                | -1     | 13       | 13                |
| s_gmt_offset       | FLOAT     | 1                | -1     | 4        | 4                 |
| s_tax_percentage   | FLOAT     | 5                | -1     | 4        | 4                 |
+--------------------+-----------+------------------+--------+----------+-------------------+
Returned 29 row(s) in 0.04s
The following example shows how statistics are represented for a partitioned table. In this case, we have set up a table to hold the world's most trivial census data, a single STRING field, partitioned by a YEAR column. The table statistics include a separate entry for each partition, plus final totals for the numeric fields. The column statistics include some easily deducible facts for the partitioning column, such as the number of distinct values (the number of partition subdirectories).
[localhost:21000] > describe census;
+------+----------+---------+
| name | type     | comment |
+------+----------+---------+
| name | string   |         |
| year | smallint |         |
+------+----------+---------+
Returned 2 row(s) in 0.02s
[localhost:21000] > show table stats census;
+-------+-------+--------+------+---------+
| year  | #Rows | #Files | Size | Format  |
+-------+-------+--------+------+---------+
| 2000  | -1    | 0      | 0B   | TEXT    |
| 2004  | -1    | 0      | 0B   | TEXT    |
| 2008  | -1    | 0      | 0B   | TEXT    |
| 2010  | -1    | 0      | 0B   | TEXT    |
| 2011  | 0     | 1      | 22B  | TEXT    |
| 2012  | -1    | 1      | 22B  | TEXT    |
| 2013  | -1    | 1      | 231B | PARQUET |
| Total | 0     | 3      | 275B |         |
+-------+-------+--------+------+---------+
Returned 8 row(s) in 0.02s
[localhost:21000] > show column stats census;
+--------+----------+------------------+--------+----------+----------+
| Column | Type     | #Distinct Values | #Nulls | Max Size | Avg Size |
+--------+----------+------------------+--------+----------+----------+
| name   | STRING   | -1               | -1     | -1       | -1       |
| year   | SMALLINT | 7                | -1     | 2        | 2        |
+--------+----------+------------------+--------+----------+----------+
Returned 2 row(s) in 0.02s
The following example shows how the statistics are filled in by a COMPUTE STATS statement in Impala.
[localhost:21000] > compute stats census;
+-----------------------------------------+
| summary                                 |
+-----------------------------------------+
| Updated 3 partition(s) and 1 column(s). |
+-----------------------------------------+
Returned 1 row(s) in 2.16s
[localhost:21000] > show table stats census;
+-------+-------+--------+------+---------+
| year  | #Rows | #Files | Size | Format  |
+-------+-------+--------+------+---------+
| 2000  | -1    | 0      | 0B   | TEXT    |
| 2004  | -1    | 0      | 0B   | TEXT    |
| 2008  | -1    | 0      | 0B   | TEXT    |
| 2010  | -1    | 0      | 0B   | TEXT    |
| 2011  | 4     | 1      | 22B  | TEXT    |
| 2012  | 4     | 1      | 22B  | TEXT    |
| 2013  | 1     | 1      | 231B | PARQUET |
| Total | 9     | 3      | 275B |         |
+-------+-------+--------+------+---------+
Returned 8 row(s) in 0.02s
[localhost:21000] > show column stats census;
+--------+----------+------------------+--------+----------+----------+
| Column | Type     | #Distinct Values | #Nulls | Max Size | Avg Size |
+--------+----------+------------------+--------+----------+----------+
| name   | STRING   | 4                | -1     | 5        | 4.5      |
| year   | SMALLINT | 7                | -1     | 2        | 2        |
+--------+----------+------------------+--------+----------+----------+
Returned 2 row(s) in 0.02s
For examples showing how some queries work differently when statistics are available, see Examples of Join Order Optimization. You can see how Impala executes a query differently in each case by observing the EXPLAIN output before and after collecting statistics. Measure the before and after query times, and examine the throughput numbers in before and after SUMMARY or PROFILE output, to verify how much the improved plan speeds up performance.