Understanding Impala Query Performance - EXPLAIN Plans and Query Profiles
To understand the high-level performance considerations for Impala queries, read the output of the EXPLAIN statement for the query. You can get the EXPLAIN plan without actually running the query itself.
To understand the detailed performance characteristics for a query, issue the PROFILE statement in impala-shell immediately after executing a query. This low-level information includes physical details about memory, CPU, I/O, and network usage, and thus is only available after the query is actually run.
Also, see Performance Considerations for the Impala-HBase Integration for examples of interpreting EXPLAIN plans for queries against HBase tables.
Using the EXPLAIN Plan for Performance Tuning
The EXPLAIN statement gives you an outline of the logical steps that a query will perform, such as how the work will be distributed among the nodes and how intermediate results will be combined to produce the final result set. You can see these details before actually running the query. You can use this information to check that the query will not operate in some very unexpected or inefficient way.
[impalad-host:21000] > explain select count(*) from customer_address;
+----------------------------------------------------------+
| Explain String                                           |
+----------------------------------------------------------+
| Estimated Per-Host Requirements: Memory=42.00MB VCores=1 |
|                                                          |
| 03:AGGREGATE [MERGE FINALIZE]                            |
| |  output: sum(count(*))                                 |
| |                                                        |
| 02:EXCHANGE [PARTITION=UNPARTITIONED]                    |
| |                                                        |
| 01:AGGREGATE                                             |
| |  output: count(*)                                      |
| |                                                        |
| 00:SCAN HDFS [default.customer_address]                  |
|    partitions=1/1 size=5.25MB                            |
+----------------------------------------------------------+
- The last part of the plan shows low-level details such as the expected amount of data that will be read. From this you can judge the effectiveness of your partitioning strategy and estimate how long it will take to scan a table, based on the total data size and the size of the cluster.
- Working your way up, you next see the operations that are parallelized and performed on each Impala node.
- At the higher levels, you see how data flows when intermediate result sets are combined and transmitted from one node to another.
- See EXPLAIN_LEVEL for details about the EXPLAIN_LEVEL query option, which lets you customize how much detail to show in the EXPLAIN plan, depending on whether you are doing high-level or low-level tuning and whether you are concerned with the logical or physical aspects of the query.
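As a rough illustration of the bottom-line estimate mentioned above, you can turn the size reported in the scan step into a ballpark scan time. The following sketch is purely back-of-the-envelope arithmetic; the per-node throughput and node count are made-up assumptions, not values produced by Impala.

```python
# Back-of-the-envelope scan-time estimate based on the "size=..."
# figure reported by EXPLAIN. All inputs are hypothetical
# illustrations, not Impala output.

def estimated_scan_seconds(table_size_mb, nodes, mb_per_sec_per_node):
    """Assume the scan parallelizes evenly across all nodes."""
    aggregate_throughput = nodes * mb_per_sec_per_node
    return table_size_mb / aggregate_throughput

# 5.25 MB table (as in the EXPLAIN example), with an assumed
# 4-node cluster and 100 MB/sec of scan throughput per node:
print(estimated_scan_seconds(5.25, 4, 100.0))
```

In practice, a table this small scans almost instantly; the arithmetic only becomes interesting for multi-gigabyte tables, where partition pruning (visible in the `partitions=` line) shrinks `table_size_mb` dramatically.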
The EXPLAIN plan is also printed at the beginning of the query profile report described in Using the Query Profile for Performance Tuning, for convenience in examining both the logical and physical aspects of the query side-by-side.
The amount of detail displayed in the EXPLAIN output is controlled by the EXPLAIN_LEVEL query option. You typically increase this setting from normal to verbose (or from 0 to 1) when double-checking the presence of table and column statistics during performance tuning, or when estimating query resource usage in conjunction with the resource management features in CDH 5.
Using the Query Profile for Performance Tuning
The PROFILE statement, available in the impala-shell interpreter, produces a detailed low-level report showing how the most recent query was executed. Unlike the EXPLAIN plan described in Using the EXPLAIN Plan for Performance Tuning, this information is only available after the query has finished. It shows physical details such as the number of bytes read, maximum memory usage, and so on for each node. You can use this information to determine if the query is I/O-bound or CPU-bound, whether some network condition is imposing a bottleneck, whether a slowdown is affecting some nodes but not others, and to check that recommended configuration settings such as short-circuit local reads are in effect.
The EXPLAIN plan is also printed at the beginning of the query profile report, for convenience in examining both the logical and physical aspects of the query side-by-side. The EXPLAIN_LEVEL query option also controls the verbosity of the EXPLAIN output printed by the PROFILE command.
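One common tuning check is to scan the plan portion of the report for the lines that signal missing statistics. The following helper is a hypothetical illustration (not part of Impala or impala-shell); the sample text is excerpted from a typical plan with unavailable statistics.

```python
# Flag plan lines that indicate missing table or column statistics.
# This function is an illustrative sketch, not an Impala API; the
# sample plan_text is a trimmed excerpt of typical EXPLAIN output.

def missing_stats_warnings(plan_text):
    """Return the plan lines that report unavailable statistics."""
    markers = ("table stats: unavailable", "column stats: unavailable")
    return [line.strip() for line in plan_text.splitlines()
            if line.strip() in markers]

plan_text = """\
0:SCAN HDFS
   table=stats_testing.t1 #partitions=1/1 size=33B
   table stats: unavailable
   column stats: unavailable
   cardinality: unavailable
"""
print(missing_stats_warnings(plan_text))
# -> ['table stats: unavailable', 'column stats: unavailable']
```

When such lines appear, running COMPUTE STATS on the affected tables lets the planner make much better cardinality and memory estimates.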
Here is an example of a query profile, from a relatively straightforward query on a single-node pseudo-distributed cluster, which keeps the output relatively brief.
[localhost:21000] > profile;
Query Runtime Profile:
Query (id=6540a03d4bee0691:4963d6269b210ebd):
  Summary:
    Session ID: ea4a197f1c7bf858:c74e66f72e3a33ba
    Session Type: BEESWAX
    Start Time: 2013-12-02 17:10:30.263067000
    End Time: 2013-12-02 17:10:50.932044000
    Query Type: QUERY
    Query State: FINISHED
    Query Status: OK
    Impala Version: impalad version 1.2.1 RELEASE (build edb5af1bcad63d410bc5d47cc203df3a880e9324)
    User: cloudera
    Network Address: 127.0.0.1:49161
    Default Db: stats_testing
    Sql Statement: select t1.s, t2.s from t1 join t2 on (t1.id = t2.parent)
    Plan:
----------------
Estimated Per-Host Requirements: Memory=2.09GB VCores=2

PLAN FRAGMENT 0
  PARTITION: UNPARTITIONED

  4:EXCHANGE
     cardinality: unavailable
     per-host memory: unavailable
     tuple ids: 0 1

PLAN FRAGMENT 1
  PARTITION: RANDOM

  STREAM DATA SINK
    EXCHANGE ID: 4
    UNPARTITIONED

  2:HASH JOIN
  |  join op: INNER JOIN (BROADCAST)
  |  hash predicates:
  |    t1.id = t2.parent
  |  cardinality: unavailable
  |  per-host memory: 2.00GB
  |  tuple ids: 0 1
  |
  |----3:EXCHANGE
  |       cardinality: unavailable
  |       per-host memory: 0B
  |       tuple ids: 1
  |
  0:SCAN HDFS
     table=stats_testing.t1 #partitions=1/1 size=33B
     table stats: unavailable
     column stats: unavailable
     cardinality: unavailable
     per-host memory: 32.00MB
     tuple ids: 0

PLAN FRAGMENT 2
  PARTITION: RANDOM

  STREAM DATA SINK
    EXCHANGE ID: 3
    UNPARTITIONED

  1:SCAN HDFS
     table=stats_testing.t2 #partitions=1/1 size=960.00KB
     table stats: unavailable
     column stats: unavailable
     cardinality: unavailable
     per-host memory: 96.00MB
     tuple ids: 1
----------------
    Query Timeline: 20s670ms
       - Start execution: 2.559ms (2.559ms)
       - Planning finished: 23.587ms (21.27ms)
       - Rows available: 666.199ms (642.612ms)
       - First row fetched: 668.919ms (2.719ms)
       - Unregister query: 20s668ms (20s000ms)
  ImpalaServer:
     - ClientFetchWaitTimer: 19s637ms
     - RowMaterializationTimer: 167.121ms
  Execution Profile 6540a03d4bee0691:4963d6269b210ebd:(Active: 837.815ms, % non-child: 0.00%)
    Per Node Peak Memory Usage: impala-1.example.com:22000(7.42 MB)
     - FinalizationTimer: 0ns
    Coordinator Fragment:(Active: 195.198ms, % non-child: 0.00%)
      MemoryUsage(500.0ms): 16.00 KB, 7.42 MB, 7.33 MB, 7.10 MB, 6.94 MB, 6.71 MB, 6.56 MB, 6.40 MB, 6.17 MB, 6.02 MB, 5.79 MB, 5.63 MB, 5.48 MB, 5.25 MB, 5.09 MB, 4.86 MB, 4.71 MB, 4.47 MB, 4.32 MB, 4.09 MB, 3.93 MB, 3.78 MB, 3.55 MB, 3.39 MB, 3.16 MB, 3.01 MB, 2.78 MB, 2.62 MB, 2.39 MB, 2.24 MB, 2.08 MB, 1.85 MB, 1.70 MB, 1.54 MB, 1.31 MB, 1.16 MB, 948.00 KB, 790.00 KB, 553.00 KB, 395.00 KB, 237.00 KB
      ThreadUsage(500.0ms): 1
       - AverageThreadTokens: 1.00
       - PeakMemoryUsage: 7.42 MB
       - PrepareTime: 36.144us
       - RowsProduced: 98.30K (98304)
       - TotalCpuTime: 20s449ms
       - TotalNetworkWaitTime: 191.630ms
       - TotalStorageWaitTime: 0ns
      CodeGen:(Active: 150.679ms, % non-child: 77.19%)
         - CodegenTime: 0ns
         - CompileTime: 139.503ms
         - LoadTime: 10.7ms
         - ModuleFileSize: 95.27 KB
      EXCHANGE_NODE (id=4):(Active: 194.858ms, % non-child: 99.83%)
         - BytesReceived: 2.33 MB
         - ConvertRowBatchTime: 2.732ms
         - DataArrivalWaitTime: 191.118ms
         - DeserializeRowBatchTimer: 14.943ms
         - FirstBatchArrivalWaitTime: 191.117ms
         - PeakMemoryUsage: 7.41 MB
         - RowsReturned: 98.30K (98304)
         - RowsReturnedRate: 504.49 K/sec
         - SendersBlockedTimer: 0ns
         - SendersBlockedTotalTimer(*): 0ns
    Averaged Fragment 1:(Active: 442.360ms, % non-child: 0.00%)
      split sizes:  min: 33.00 B, max: 33.00 B, avg: 33.00 B, stddev: 0.00
      completion times: min:443.720ms  max:443.720ms  mean: 443.720ms  stddev:0ns
      execution rates: min:74.00 B/sec  max:74.00 B/sec  mean:74.00 B/sec  stddev:0.00 /sec
      num instances: 1
       - AverageThreadTokens: 1.00
       - PeakMemoryUsage: 6.06 MB
       - PrepareTime: 7.291ms
       - RowsProduced: 98.30K (98304)
       - TotalCpuTime: 784.259ms
       - TotalNetworkWaitTime: 388.818ms
       - TotalStorageWaitTime: 3.934ms
      CodeGen:(Active: 312.862ms, % non-child: 70.73%)
         - CodegenTime: 2.669ms
         - CompileTime: 302.467ms
         - LoadTime: 9.231ms
         - ModuleFileSize: 95.27 KB
      DataStreamSender (dst_id=4):(Active: 80.63ms, % non-child: 18.10%)
         - BytesSent: 2.33 MB
         - NetworkThroughput(*): 35.89 MB/sec
         - OverallThroughput: 29.06 MB/sec
         - PeakMemoryUsage: 5.33 KB
         - SerializeBatchTime: 26.487ms
         - ThriftTransmitTime(*): 64.814ms
         - UncompressedRowBatchSize: 6.66 MB
      HASH_JOIN_NODE (id=2):(Active: 362.25ms, % non-child: 3.92%)
         - BuildBuckets: 1.02K (1024)
         - BuildRows: 98.30K (98304)
         - BuildTime: 12.622ms
         - LoadFactor: 0.00
         - PeakMemoryUsage: 6.02 MB
         - ProbeRows: 3
         - ProbeTime: 3.579ms
         - RowsReturned: 98.30K (98304)
         - RowsReturnedRate: 271.54 K/sec
      EXCHANGE_NODE (id=3):(Active: 344.680ms, % non-child: 77.92%)
         - BytesReceived: 1.15 MB
         - ConvertRowBatchTime: 2.792ms
         - DataArrivalWaitTime: 339.936ms
         - DeserializeRowBatchTimer: 9.910ms
         - FirstBatchArrivalWaitTime: 199.474ms
         - PeakMemoryUsage: 156.00 KB
         - RowsReturned: 98.30K (98304)
         - RowsReturnedRate: 285.20 K/sec
         - SendersBlockedTimer: 0ns
         - SendersBlockedTotalTimer(*): 0ns
      HDFS_SCAN_NODE (id=0):(Active: 13.616us, % non-child: 0.00%)
         - AverageHdfsReadThreadConcurrency: 0.00
         - AverageScannerThreadConcurrency: 0.00
         - BytesRead: 33.00 B
         - BytesReadLocal: 33.00 B
         - BytesReadShortCircuit: 33.00 B
         - NumDisksAccessed: 1
         - NumScannerThreadsStarted: 1
         - PeakMemoryUsage: 46.00 KB
         - PerReadThreadRawHdfsThroughput: 287.52 KB/sec
         - RowsRead: 3
         - RowsReturned: 3
         - RowsReturnedRate: 220.33 K/sec
         - ScanRangesComplete: 1
         - ScannerThreadsInvoluntaryContextSwitches: 26
         - ScannerThreadsTotalWallClockTime: 55.199ms
           - DelimiterParseTime: 2.463us
           - MaterializeTupleTime(*): 1.226us
           - ScannerThreadsSysTime: 0ns
           - ScannerThreadsUserTime: 42.993ms
         - ScannerThreadsVoluntaryContextSwitches: 1
         - TotalRawHdfsReadTime(*): 112.86us
         - TotalReadThroughput: 0.00 /sec
    Averaged Fragment 2:(Active: 190.120ms, % non-child: 0.00%)
      split sizes:  min: 960.00 KB, max: 960.00 KB, avg: 960.00 KB, stddev: 0.00
      completion times: min:191.736ms  max:191.736ms  mean: 191.736ms  stddev:0ns
      execution rates: min:4.89 MB/sec  max:4.89 MB/sec  mean:4.89 MB/sec  stddev:0.00 /sec
      num instances: 1
       - AverageThreadTokens: 0.00
       - PeakMemoryUsage: 906.33 KB
       - PrepareTime: 3.67ms
       - RowsProduced: 98.30K (98304)
       - TotalCpuTime: 403.351ms
       - TotalNetworkWaitTime: 34.999ms
       - TotalStorageWaitTime: 108.675ms
      CodeGen:(Active: 162.57ms, % non-child: 85.24%)
         - CodegenTime: 3.133ms
         - CompileTime: 148.316ms
         - LoadTime: 12.317ms
         - ModuleFileSize: 95.27 KB
      DataStreamSender (dst_id=3):(Active: 70.620ms, % non-child: 37.14%)
         - BytesSent: 1.15 MB
         - NetworkThroughput(*): 23.30 MB/sec
         - OverallThroughput: 16.23 MB/sec
         - PeakMemoryUsage: 5.33 KB
         - SerializeBatchTime: 22.69ms
         - ThriftTransmitTime(*): 49.178ms
         - UncompressedRowBatchSize: 3.28 MB
      HDFS_SCAN_NODE (id=1):(Active: 118.839ms, % non-child: 62.51%)
         - AverageHdfsReadThreadConcurrency: 0.00
         - AverageScannerThreadConcurrency: 0.00
         - BytesRead: 960.00 KB
         - BytesReadLocal: 960.00 KB
         - BytesReadShortCircuit: 960.00 KB
         - NumDisksAccessed: 1
         - NumScannerThreadsStarted: 1
         - PeakMemoryUsage: 869.00 KB
         - PerReadThreadRawHdfsThroughput: 130.21 MB/sec
         - RowsRead: 98.30K (98304)
         - RowsReturned: 98.30K (98304)
         - RowsReturnedRate: 827.20 K/sec
         - ScanRangesComplete: 15
         - ScannerThreadsInvoluntaryContextSwitches: 34
         - ScannerThreadsTotalWallClockTime: 189.774ms
           - DelimiterParseTime: 15.703ms
           - MaterializeTupleTime(*): 3.419ms
           - ScannerThreadsSysTime: 1.999ms
           - ScannerThreadsUserTime: 44.993ms
         - ScannerThreadsVoluntaryContextSwitches: 118
         - TotalRawHdfsReadTime(*): 7.199ms
         - TotalReadThroughput: 0.00 /sec
    Fragment 1:
      Instance 6540a03d4bee0691:4963d6269b210ebf (host=impala-1.example.com:22000):(Active: 442.360ms, % non-child: 0.00%)
        Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:1/33.00 B
        MemoryUsage(500.0ms): 69.33 KB
        ThreadUsage(500.0ms): 1
         - AverageThreadTokens: 1.00
         - PeakMemoryUsage: 6.06 MB
         - PrepareTime: 7.291ms
         - RowsProduced: 98.30K (98304)
         - TotalCpuTime: 784.259ms
         - TotalNetworkWaitTime: 388.818ms
         - TotalStorageWaitTime: 3.934ms
        CodeGen:(Active: 312.862ms, % non-child: 70.73%)
           - CodegenTime: 2.669ms
           - CompileTime: 302.467ms
           - LoadTime: 9.231ms
           - ModuleFileSize: 95.27 KB
        DataStreamSender (dst_id=4):(Active: 80.63ms, % non-child: 18.10%)
           - BytesSent: 2.33 MB
           - NetworkThroughput(*): 35.89 MB/sec
           - OverallThroughput: 29.06 MB/sec
           - PeakMemoryUsage: 5.33 KB
           - SerializeBatchTime: 26.487ms
           - ThriftTransmitTime(*): 64.814ms
           - UncompressedRowBatchSize: 6.66 MB
        HASH_JOIN_NODE (id=2):(Active: 362.25ms, % non-child: 3.92%)
          ExecOption: Build Side Codegen Enabled, Probe Side Codegen Enabled, Hash Table Built Asynchronously
           - BuildBuckets: 1.02K (1024)
           - BuildRows: 98.30K (98304)
           - BuildTime: 12.622ms
           - LoadFactor: 0.00
           - PeakMemoryUsage: 6.02 MB
           - ProbeRows: 3
           - ProbeTime: 3.579ms
           - RowsReturned: 98.30K (98304)
           - RowsReturnedRate: 271.54 K/sec
        EXCHANGE_NODE (id=3):(Active: 344.680ms, % non-child: 77.92%)
           - BytesReceived: 1.15 MB
           - ConvertRowBatchTime: 2.792ms
           - DataArrivalWaitTime: 339.936ms
           - DeserializeRowBatchTimer: 9.910ms
           - FirstBatchArrivalWaitTime: 199.474ms
           - PeakMemoryUsage: 156.00 KB
           - RowsReturned: 98.30K (98304)
           - RowsReturnedRate: 285.20 K/sec
           - SendersBlockedTimer: 0ns
           - SendersBlockedTotalTimer(*): 0ns
        HDFS_SCAN_NODE (id=0):(Active: 13.616us, % non-child: 0.00%)
          Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:1/33.00 B
          Hdfs Read Thread Concurrency Bucket: 0:0% 1:0%
          File Formats: TEXT/NONE:1
          ExecOption: Codegen enabled: 1 out of 1
           - AverageHdfsReadThreadConcurrency: 0.00
           - AverageScannerThreadConcurrency: 0.00
           - BytesRead: 33.00 B
           - BytesReadLocal: 33.00 B
           - BytesReadShortCircuit: 33.00 B
           - NumDisksAccessed: 1
           - NumScannerThreadsStarted: 1
           - PeakMemoryUsage: 46.00 KB
           - PerReadThreadRawHdfsThroughput: 287.52 KB/sec
           - RowsRead: 3
           - RowsReturned: 3
           - RowsReturnedRate: 220.33 K/sec
           - ScanRangesComplete: 1
           - ScannerThreadsInvoluntaryContextSwitches: 26
           - ScannerThreadsTotalWallClockTime: 55.199ms
             - DelimiterParseTime: 2.463us
             - MaterializeTupleTime(*): 1.226us
             - ScannerThreadsSysTime: 0ns
             - ScannerThreadsUserTime: 42.993ms
           - ScannerThreadsVoluntaryContextSwitches: 1
           - TotalRawHdfsReadTime(*): 112.86us
           - TotalReadThroughput: 0.00 /sec
    Fragment 2:
      Instance 6540a03d4bee0691:4963d6269b210ec0 (host=impala-1.example.com:22000):(Active: 190.120ms, % non-child: 0.00%)
        Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:15/960.00 KB
         - AverageThreadTokens: 0.00
         - PeakMemoryUsage: 906.33 KB
         - PrepareTime: 3.67ms
         - RowsProduced: 98.30K (98304)
         - TotalCpuTime: 403.351ms
         - TotalNetworkWaitTime: 34.999ms
         - TotalStorageWaitTime: 108.675ms
        CodeGen:(Active: 162.57ms, % non-child: 85.24%)
           - CodegenTime: 3.133ms
           - CompileTime: 148.316ms
           - LoadTime: 12.317ms
           - ModuleFileSize: 95.27 KB
        DataStreamSender (dst_id=3):(Active: 70.620ms, % non-child: 37.14%)
           - BytesSent: 1.15 MB
           - NetworkThroughput(*): 23.30 MB/sec
           - OverallThroughput: 16.23 MB/sec
           - PeakMemoryUsage: 5.33 KB
           - SerializeBatchTime: 22.69ms
           - ThriftTransmitTime(*): 49.178ms
           - UncompressedRowBatchSize: 3.28 MB
        HDFS_SCAN_NODE (id=1):(Active: 118.839ms, % non-child: 62.51%)
          Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:15/960.00 KB
          Hdfs Read Thread Concurrency Bucket: 0:0% 1:0%
          File Formats: TEXT/NONE:15
          ExecOption: Codegen enabled: 15 out of 15
           - AverageHdfsReadThreadConcurrency: 0.00
           - AverageScannerThreadConcurrency: 0.00
           - BytesRead: 960.00 KB
           - BytesReadLocal: 960.00 KB
           - BytesReadShortCircuit: 960.00 KB
           - NumDisksAccessed: 1
           - NumScannerThreadsStarted: 1
           - PeakMemoryUsage: 869.00 KB
           - PerReadThreadRawHdfsThroughput: 130.21 MB/sec
           - RowsRead: 98.30K (98304)
           - RowsReturned: 98.30K (98304)
           - RowsReturnedRate: 827.20 K/sec
           - ScanRangesComplete: 15
           - ScannerThreadsInvoluntaryContextSwitches: 34
           - ScannerThreadsTotalWallClockTime: 189.774ms
             - DelimiterParseTime: 15.703ms
             - MaterializeTupleTime(*): 3.419ms
             - ScannerThreadsSysTime: 1.999ms
             - ScannerThreadsUserTime: 44.993ms
           - ScannerThreadsVoluntaryContextSwitches: 118
           - TotalRawHdfsReadTime(*): 7.199ms
           - TotalReadThroughput: 0.00 /sec
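One concrete use of the counters above: comparing BytesRead with BytesReadLocal and BytesReadShortCircuit shows whether short-circuit local reads were actually in effect for a scan. The helper below is an illustrative sketch, not an Impala API; the dictionary copies values from the HDFS_SCAN_NODE (id=1) section of the sample profile.

```python
# Decide whether short-circuit local reads were in effect for a scan,
# given counter values like those in the HDFS_SCAN_NODE sections of a
# profile. Hypothetical helper for illustration, not an Impala API.

def short_circuit_in_effect(counters):
    """True when every byte was read locally via the short-circuit path."""
    return (counters["BytesRead"] > 0
            and counters["BytesReadShortCircuit"] == counters["BytesRead"]
            and counters["BytesReadLocal"] == counters["BytesRead"])

scan_node_1 = {  # values from HDFS_SCAN_NODE (id=1) above, in KB
    "BytesRead": 960.0,
    "BytesReadLocal": 960.0,
    "BytesReadShortCircuit": 960.0,
}
print(short_circuit_in_effect(scan_node_1))  # -> True
```

If BytesReadShortCircuit were much lower than BytesRead, that would point to a misconfigured DataNode or to remote reads, both worth investigating before tuning the query itself.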