Impala Tutorial
This section includes tutorial scenarios that demonstrate how to begin using Impala once the software is installed. It focuses on techniques for loading data, because once you have some data in tables and can query that data, you can quickly progress to more advanced Impala features.
Where practical, the tutorials take you from "ground zero" to having the desired Impala tables and data.
Before trying these tutorial lessons, install Impala:
- If you already have a CDH environment set up and just need to add Impala to it, follow the installation process described in Impala Installation. Make sure to also install Hive and its associated metastore database if you do not already have Hive configured.
- To set up Impala and all its prerequisites at once, in a minimal configuration that you can use for experiments and then discard, set up the Cloudera QuickStart VM, which includes CDH and Impala on CentOS 6.3 (64-bit). For more information, see the Cloudera QuickStart VM.
Tutorials for Getting Started
These tutorials demonstrate the basics of using Impala. They are intended for first-time users, and for trying out Impala on any new cluster to make sure the major components are working correctly.
Set Up Some Basic .csv Tables
This scenario illustrates how to create some very small tables, suitable for first-time users to experiment with Impala SQL features. TAB1 and TAB2 are loaded with data from files in HDFS. A subset of data is copied from TAB1 into TAB3.
Populate HDFS with the data you want to query. To begin this process, create one or more new subdirectories underneath your user directory in HDFS. The data for each table resides in a separate subdirectory. Substitute your own user name for cloudera where appropriate. This example uses the -p option with the mkdir operation to create any necessary parent directories if they do not already exist.
$ whoami
cloudera
$ hdfs dfs -ls /user
Found 3 items
drwxr-xr-x   - cloudera cloudera            0 2013-04-22 18:54 /user/cloudera
drwxrwx---   - mapred   mapred              0 2013-03-15 20:11 /user/history
drwxr-xr-x   - hue      supergroup          0 2013-03-15 20:10 /user/hive
$ hdfs dfs -mkdir -p /user/cloudera/sample_data/tab1 /user/cloudera/sample_data/tab2
Here is some sample data, for two tables named TAB1 and TAB2.
Copy the following content to .csv files in your local filesystem:
tab1.csv:
1,true,123.123,2012-10-24 08:55:00
2,false,1243.5,2012-10-25 13:40:00
3,false,24453.325,2008-08-22 09:33:21.123
4,false,243423.325,2007-05-12 22:32:21.33454
5,true,243.325,1953-04-22 09:11:33
tab2.csv:
1,true,12789.123
2,false,1243.5
3,false,24453.325
4,false,2423.3254
5,true,243.325
60,false,243565423.325
70,true,243.325
80,false,243423.325
90,true,243.325
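Before loading files like these, it can be worth a quick sanity check that every row has the expected number of fields, since Impala will treat every file in the table directory as table data. This is an illustrative sketch (not part of the original tutorial) using Python's csv module, with the tab1 sample embedded as a string so it is self-contained:

```python
import csv
import io

# Sample data from tab1.csv above, embedded for a self-contained check.
tab1 = """1,true,123.123,2012-10-24 08:55:00
2,false,1243.5,2012-10-25 13:40:00
3,false,24453.325,2008-08-22 09:33:21.123
4,false,243423.325,2007-05-12 22:32:21.33454
5,true,243.325,1953-04-22 09:11:33
"""

def column_counts(text):
    """Return the set of distinct field counts seen across all rows."""
    return {len(row) for row in csv.reader(io.StringIO(text))}

# Every row of tab1 should have exactly 4 fields.
print(column_counts(tab1))
```

The same function applied to a real file's contents flags ragged rows before they silently become NULLs in the table.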
Put each .csv file into a separate HDFS directory using commands like the following, which use paths available in the Impala Demo VM:
$ hdfs dfs -put tab1.csv /user/cloudera/sample_data/tab1
$ hdfs dfs -ls /user/cloudera/sample_data/tab1
Found 1 items
-rw-r--r--   1 cloudera cloudera          192 2013-04-02 20:08 /user/cloudera/sample_data/tab1/tab1.csv
$ hdfs dfs -put tab2.csv /user/cloudera/sample_data/tab2
$ hdfs dfs -ls /user/cloudera/sample_data/tab2
Found 1 items
-rw-r--r--   1 cloudera cloudera          158 2013-04-02 20:09 /user/cloudera/sample_data/tab2/tab2.csv
The name of each data file is not significant. In fact, when Impala examines the contents of the data directory for the first time, it considers all files in the directory to make up the data of the table, regardless of how many files there are or what the files are named.
To understand what paths are available within your own HDFS filesystem and what the permissions are for the various directories and files, issue hdfs dfs -ls / and work your way down the tree doing -ls operations for the various directories.
Use the impala-shell command to create tables, either interactively or through a SQL script.
The following example shows creating three tables. For each table, the example shows creating columns with various attributes such as Boolean or integer types. The example also includes commands that provide information about how the data is formatted, such as rows terminating with commas, which makes sense in the case of importing data from a .csv file. Where we already have .csv files containing data in the HDFS directory tree, we specify the location of the directory containing the appropriate .csv file. Impala considers all the data from all the files in that directory to represent the data for the table.
DROP TABLE IF EXISTS tab1;

-- The EXTERNAL clause means the data is located outside the central location
-- for Impala data files and is preserved when the associated Impala table is dropped.
-- We expect the data to already exist in the directory specified by the LOCATION clause.
CREATE EXTERNAL TABLE tab1
(
   id INT,
   col_1 BOOLEAN,
   col_2 DOUBLE,
   col_3 TIMESTAMP
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/cloudera/sample_data/tab1';

DROP TABLE IF EXISTS tab2;

-- TAB2 is an external table, similar to TAB1.
CREATE EXTERNAL TABLE tab2
(
   id INT,
   col_1 BOOLEAN,
   col_2 DOUBLE
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/cloudera/sample_data/tab2';

DROP TABLE IF EXISTS tab3;

-- Leaving out the EXTERNAL clause means the data will be managed
-- in the central Impala data directory tree. Rather than reading
-- existing data files when the table is created, we load the
-- data after creating the table.
CREATE TABLE tab3
(
   id INT,
   col_1 BOOLEAN,
   col_2 DOUBLE,
   month INT,
   day INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
If you have trouble creating the tables or querying the data, check the following:
- Make sure you followed the installation instructions closely, in Impala Installation.
- Make sure the hive.metastore.warehouse.dir property points to a directory that Impala can write to. The ownership should be hive:hive, and the impala user should also be a member of the hive group.
- If the value of hive.metastore.warehouse.dir is different in the Cloudera Manager dialogs and in the Hive shell, you might need to designate the hosts running impalad with the "gateway" role for Hive, and deploy the client configuration files to those hosts.
Point an Impala Table at Existing Data Files
A convenient way to set up data for Impala to access is to use an external table, where the data already exists in a set of HDFS files and you just point the Impala table at the directory containing those files. For example, you might run a *.sql file in impala-shell with contents similar to the following, to create an Impala table that accesses an existing data file used by Hive.
In early beta Impala releases, the examples in this tutorial relied on the Hive CREATE TABLE command. The CREATE TABLE statement is available in Impala 0.7 and higher, so now the tutorial uses the native Impala CREATE TABLE.
The following examples set up two tables, referencing the paths and sample data supplied with the Impala Demo VM. For historical reasons, the data physically resides in an HDFS directory tree under /user/hive, although this particular data is entirely managed by Impala rather than Hive. When we create an external table, we specify the directory containing one or more data files, and Impala queries the combined content of all the files inside that directory. Here is how we examine the directories and files within the HDFS filesystem:
$ hdfs dfs -ls /user/hive/tpcds/customer
Found 1 items
-rw-r--r--   1 cloudera supergroup   13209372 2013-03-22 18:09 /user/hive/tpcds/customer/customer.dat
$ hdfs dfs -cat /user/hive/tpcds/customer/customer.dat | more
1|AAAAAAAABAAAAAAA|980124|7135|32946|2452238|2452208|Mr.|Javier|Lewis|Y|9|12|1936|CHILE||Javier.Lewis@VFAxlnZEvOx.org|2452508|
2|AAAAAAAACAAAAAAA|819667|1461|31655|2452318|2452288|Dr.|Amy|Moses|Y|9|4|1966|TOGO||Amy.Moses@Ovk9KjHH.com|2452318|
3|AAAAAAAADAAAAAAA|1473522|6247|48572|2449130|2449100|Miss|Latisha|Hamilton|N|18|9|1979|NIUE||Latisha.Hamilton@V.com|2452313|
4|AAAAAAAAEAAAAAAA|1703214|3986|39558|2450030|2450000|Dr.|Michael|White|N|7|6|1983|MEXICO||Michael.White@i.org|2452361|
5|AAAAAAAAFAAAAAAA|953372|4470|36368|2449438|2449408|Sir|Robert|Moran|N|8|5|1956|FIJI||Robert.Moran@Hh.edu|2452469|
...
Here is the SQL script we might save as customer_setup.sql:
--
-- store_sales fact table and surrounding dimension tables only
--
create database tpcds;
use tpcds;

drop table if exists customer;
create external table customer
(
    c_customer_sk             int,
    c_customer_id             string,
    c_current_cdemo_sk        int,
    c_current_hdemo_sk        int,
    c_current_addr_sk         int,
    c_first_shipto_date_sk    int,
    c_first_sales_date_sk     int,
    c_salutation              string,
    c_first_name              string,
    c_last_name               string,
    c_preferred_cust_flag     string,
    c_birth_day               int,
    c_birth_month             int,
    c_birth_year              int,
    c_birth_country           string,
    c_login                   string,
    c_email_address           string,
    c_last_review_date        string
)
row format delimited fields terminated by '|'
location '/user/hive/tpcds/customer';

drop table if exists customer_address;
create external table customer_address
(
    ca_address_sk             int,
    ca_address_id             string,
    ca_street_number          string,
    ca_street_name            string,
    ca_street_type            string,
    ca_suite_number           string,
    ca_city                   string,
    ca_county                 string,
    ca_state                  string,
    ca_zip                    string,
    ca_country                string,
    ca_gmt_offset             float,
    ca_location_type          string
)
row format delimited fields terminated by '|'
location '/user/hive/tpcds/customer_address';
We run the script with the -f option of impala-shell:

$ impala-shell -i localhost -f customer_setup.sql
Currently, the impala-shell interpreter requires that any command entered interactively be a single line, so if you experiment with these commands yourself, either save to a .sql file and use the -f option to run the script, or wrap each command onto one line before pasting into the shell.
Describe the Impala Table
Now that you have updated the database metadata that Impala caches, you can confirm that the expected tables are accessible by Impala and examine the attributes of one of the tables. We created these tables in the database named default. If the tables were in a database other than the default, we would issue a command use db_name to switch to that database before examining or querying its tables. We could also qualify the name of a table by prepending the database name, for example default.customer and default.customer_address.
[impala-host:21000] > show databases;
Query finished, fetching results ...
default
Returned 1 row(s) in 0.00s
[impala-host:21000] > show tables;
Query finished, fetching results ...
customer
customer_address
Returned 2 row(s) in 0.00s
[impala-host:21000] > describe customer_address;
+------------------+--------+---------+
| name             | type   | comment |
+------------------+--------+---------+
| ca_address_sk    | int    |         |
| ca_address_id    | string |         |
| ca_street_number | string |         |
| ca_street_name   | string |         |
| ca_street_type   | string |         |
| ca_suite_number  | string |         |
| ca_city          | string |         |
| ca_county        | string |         |
| ca_state         | string |         |
| ca_zip            | string |         |
| ca_country       | string |         |
| ca_gmt_offset    | float  |         |
| ca_location_type | string |         |
+------------------+--------+---------+
Returned 13 row(s) in 0.01s
Query the Impala Table
You can query data contained in the tables. Impala coordinates the query execution across a single node or multiple nodes depending on your configuration, without the overhead of running MapReduce jobs to perform the intermediate processing.
There are a variety of ways to execute queries on Impala:
- Using the impala-shell command in interactive mode:
$ impala-shell -i impala-host
Connected to impala-host:21000
[impala-host:21000] > select count(*) from customer_address;
50000
Returned 1 row(s) in 0.37s
- Passing a set of commands contained in a file:
$ impala-shell -i impala-host -f myquery.sql
Connected to impala-host:21000
50000
Returned 1 row(s) in 0.19s
- Passing a single command to the impala-shell command. The query is executed, the results are returned, and the shell exits. Make sure to quote the command, preferably with single quotation marks to avoid shell expansion of characters such as *.
$ impala-shell -i impala-host -q 'select count(*) from customer_address'
Connected to impala-host:21000
50000
Returned 1 row(s) in 0.29s
Data Loading and Querying Examples
This section describes how to create some sample tables and load data into them. These tables can then be queried using the Impala shell.
Loading Data
Loading data involves:
- Establishing a data set. The example below uses .csv files.
- Creating tables to which to load data.
- Loading the data into the tables you created.
Sample Queries
To run these sample queries, create a SQL query file query.sql, copy and paste each query into the query file, and then run the query file using the shell. For example, to run query.sql on impala-host, you might use the command:
impala-shell -i impala-host -f query.sql
The examples and results below assume you have loaded the sample data into the tables as described above.
Example: Examining Contents of Tables
Let's start by verifying that the tables do contain the data we expect. Because Impala often deals with tables containing millions or billions of rows, when examining tables of unknown size, include the LIMIT clause to avoid huge amounts of unnecessary output, as in the final query. (If your interactive query starts displaying an unexpected volume of data, press Ctrl-C in impala-shell to cancel the query.)
SELECT * FROM tab1;
SELECT * FROM tab2;
SELECT * FROM tab2 LIMIT 5;
Results:
+----+-------+------------+-------------------------------+
| id | col_1 | col_2      | col_3                         |
+----+-------+------------+-------------------------------+
| 1  | true  | 123.123    | 2012-10-24 08:55:00           |
| 2  | false | 1243.5     | 2012-10-25 13:40:00           |
| 3  | false | 24453.325  | 2008-08-22 09:33:21.123000000 |
| 4  | false | 243423.325 | 2007-05-12 22:32:21.334540000 |
| 5  | true  | 243.325    | 1953-04-22 09:11:33           |
+----+-------+------------+-------------------------------+

+----+-------+---------------+
| id | col_1 | col_2         |
+----+-------+---------------+
| 1  | true  | 12789.123     |
| 2  | false | 1243.5        |
| 3  | false | 24453.325     |
| 4  | false | 2423.3254     |
| 5  | true  | 243.325       |
| 60 | false | 243565423.325 |
| 70 | true  | 243.325       |
| 80 | false | 243423.325    |
| 90 | true  | 243.325       |
+----+-------+---------------+

+----+-------+-----------+
| id | col_1 | col_2     |
+----+-------+-----------+
| 1  | true  | 12789.123 |
| 2  | false | 1243.5    |
| 3  | false | 24453.325 |
| 4  | false | 2423.3254 |
| 5  | true  | 243.325   |
+----+-------+-----------+
Example: Aggregate and Join
SELECT tab1.col_1, MAX(tab2.col_2), MIN(tab2.col_2)
FROM tab2 JOIN tab1 USING (id)
GROUP BY col_1
ORDER BY 1
LIMIT 5;
Results:
+-------+-----------------+-----------------+
| col_1 | max(tab2.col_2) | min(tab2.col_2) |
+-------+-----------------+-----------------+
| false | 24453.325       | 1243.5          |
| true  | 12789.123       | 243.325         |
+-------+-----------------+-----------------+
Example: Subquery, Aggregate and Joins
SELECT tab2.*
FROM tab2,
  (SELECT tab1.col_1, MAX(tab2.col_2) AS max_col2
   FROM tab2, tab1
   WHERE tab1.id = tab2.id
   GROUP BY col_1) subquery1
WHERE subquery1.max_col2 = tab2.col_2;
Results:
+----+-------+-----------+
| id | col_1 | col_2     |
+----+-------+-----------+
| 1  | true  | 12789.123 |
| 3  | false | 24453.325 |
+----+-------+-----------+
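The inline-view pattern (aggregate in a subquery, then join back against the detail rows) also runs in other SQL engines. Here is a self-contained sketch using Python's sqlite3 (not Impala; GROUP BY is qualified as tab1.col_1 because SQLite rejects the bare ambiguous name, and the result is sorted for a deterministic order):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tab1 (id INT, col_1 BOOLEAN, col_2 DOUBLE);
CREATE TABLE tab2 (id INT, col_1 BOOLEAN, col_2 DOUBLE);
INSERT INTO tab1 VALUES (1,'true',123.123),(2,'false',1243.5),
  (3,'false',24453.325),(4,'false',243423.325),(5,'true',243.325);
INSERT INTO tab2 VALUES (1,'true',12789.123),(2,'false',1243.5),
  (3,'false',24453.325),(4,'false',2423.3254),(5,'true',243.325),
  (60,'false',243565423.325),(70,'true',243.325),(80,'false',243423.325),
  (90,'true',243.325);
""")
# The subquery finds the max col_2 per Boolean group among matched ids;
# the outer query returns the tab2 rows holding those max values.
rows = sorted(con.execute("""
  SELECT tab2.* FROM tab2,
    (SELECT tab1.col_1, MAX(tab2.col_2) AS max_col2
     FROM tab2, tab1 WHERE tab1.id = tab2.id
     GROUP BY tab1.col_1) subquery1
  WHERE subquery1.max_col2 = tab2.col_2
""").fetchall())
print(rows)
```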
Example: INSERT Query
INSERT OVERWRITE TABLE tab3
SELECT id, col_1, col_2, MONTH(col_3), DAYOFMONTH(col_3)
FROM tab1
WHERE YEAR(col_3) = 2012;
Query TAB3 to check the result:
SELECT * FROM tab3;
Results:
+----+-------+---------+-------+-----+
| id | col_1 | col_2   | month | day |
+----+-------+---------+-------+-----+
| 1  | true  | 123.123 | 10    | 24  |
| 2  | false | 1243.5  | 10    | 25  |
+----+-------+---------+-------+-----+
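MONTH, DAYOFMONTH, and YEAR are Impala built-ins. As a rough portable sketch of the same INSERT ... SELECT transformation, SQLite spells the date extraction with strftime instead (this is an assumption-laden analogy, not the Impala implementation; timestamps are stored as text):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tab1 (id INT, col_1 BOOLEAN, col_2 DOUBLE, col_3 TIMESTAMP);
CREATE TABLE tab3 (id INT, col_1 BOOLEAN, col_2 DOUBLE, month INT, day INT);
INSERT INTO tab1 VALUES
  (1,'true',123.123,'2012-10-24 08:55:00'),
  (2,'false',1243.5,'2012-10-25 13:40:00'),
  (3,'false',24453.325,'2008-08-22 09:33:21.123'),
  (4,'false',243423.325,'2007-05-12 22:32:21.33454'),
  (5,'true',243.325,'1953-04-22 09:11:33');
-- strftime('%m'/'%d'/'%Y') plays the role of MONTH()/DAYOFMONTH()/YEAR().
INSERT INTO tab3
  SELECT id, col_1, col_2,
         CAST(strftime('%m', col_3) AS INT),
         CAST(strftime('%d', col_3) AS INT)
  FROM tab1
  WHERE CAST(strftime('%Y', col_3) AS INT) = 2012;
""")
result = con.execute("SELECT * FROM tab3").fetchall()
print(result)
```

Only the two rows with 2012 timestamps land in tab3, mirroring the Impala result above.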
Advanced Tutorials
These tutorials walk you through advanced scenarios or specialized features.
Attaching an External Partitioned Table to an HDFS Directory Structure
This tutorial shows how you might set up a directory tree in HDFS, put data files into the lowest-level subdirectories, and then use an Impala external table to query the data files from their original locations.
The tutorial uses a table with web log data, with separate subdirectories for the year, month, day, and host. For simplicity, we use a tiny amount of CSV data, loading the same data into each partition.
First, we make an Impala partitioned table for CSV data, and look at the underlying HDFS directory structure to understand the directory structure to re-create elsewhere in HDFS. The columns field1, field2, and field3 correspond to the contents of the CSV data files. The year, month, day, and host columns are all represented as subdirectories within the table structure, and are not part of the CSV files. We use STRING for each of these columns so that we can produce consistent subdirectory names, with leading zeros for a consistent length.
create database external_partitions;
use external_partitions;
create table logs (field1 string, field2 string, field3 string)
  partitioned by (year string, month string, day string, host string)
  row format delimited fields terminated by ',';
insert into logs partition (year="2013", month="07", day="28", host="host1") values ("foo","foo","foo");
insert into logs partition (year="2013", month="07", day="28", host="host2") values ("foo","foo","foo");
insert into logs partition (year="2013", month="07", day="29", host="host1") values ("foo","foo","foo");
insert into logs partition (year="2013", month="07", day="29", host="host2") values ("foo","foo","foo");
insert into logs partition (year="2013", month="08", day="01", host="host1") values ("foo","foo","foo");
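The zero-padded STRING values map directly onto the col=value subdirectory names that Impala creates for each partition. As a small illustration (hypothetical helper, not part of the tutorial), the same names can be generated programmatically:

```python
# Build the "col=value" partition path used by the logs table,
# zero-padding month and day so directory names have a consistent length.
def partition_path(base, year, month, day, host):
    return f"{base}/year={year}/month={month:02d}/day={day:02d}/host={host}"

path = partition_path("/user/impala/data/logs", 2013, 7, 28, "host1")
print(path)
```

Generating the names this way guarantees that month=07 sorts before month=08, which a bare 7 and 8 would not if mixed with two-digit values.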
Back in the Linux shell, we examine the HDFS directory structure. (Your Impala data directory might be in a different location; for historical reasons, it is sometimes under the HDFS path /user/hive/warehouse.) We use the hdfs dfs -ls command to examine the nested subdirectories corresponding to each partitioning column, with separate subdirectories at each level (with = in their names) representing the different values for each partitioning column. When we get to the lowest level of subdirectory, we use the hdfs dfs -cat command to examine the data file and see CSV-formatted data produced by the INSERT statement in Impala.
$ hdfs dfs -ls /user/impala/warehouse/external_partitions.db
Found 1 items
drwxrwxrwt   - impala hive          0 2013-08-07 12:24 /user/impala/warehouse/external_partitions.db/logs
$ hdfs dfs -ls /user/impala/warehouse/external_partitions.db/logs
Found 1 items
drwxr-xr-x   - impala hive          0 2013-08-07 12:24 /user/impala/warehouse/external_partitions.db/logs/year=2013
$ hdfs dfs -ls /user/impala/warehouse/external_partitions.db/logs/year=2013
Found 2 items
drwxr-xr-x   - impala hive          0 2013-08-07 12:23 /user/impala/warehouse/external_partitions.db/logs/year=2013/month=07
drwxr-xr-x   - impala hive          0 2013-08-07 12:24 /user/impala/warehouse/external_partitions.db/logs/year=2013/month=08
$ hdfs dfs -ls /user/impala/warehouse/external_partitions.db/logs/year=2013/month=07
Found 2 items
drwxr-xr-x   - impala hive          0 2013-08-07 12:22 /user/impala/warehouse/external_partitions.db/logs/year=2013/month=07/day=28
drwxr-xr-x   - impala hive          0 2013-08-07 12:23 /user/impala/warehouse/external_partitions.db/logs/year=2013/month=07/day=29
$ hdfs dfs -ls /user/impala/warehouse/external_partitions.db/logs/year=2013/month=07/day=28
Found 2 items
drwxr-xr-x   - impala hive          0 2013-08-07 12:21 /user/impala/warehouse/external_partitions.db/logs/year=2013/month=07/day=28/host=host1
drwxr-xr-x   - impala hive          0 2013-08-07 12:22 /user/impala/warehouse/external_partitions.db/logs/year=2013/month=07/day=28/host=host2
$ hdfs dfs -ls /user/impala/warehouse/external_partitions.db/logs/year=2013/month=07/day=28/host=host1
Found 1 items
-rw-r--r--   3 impala hive         12 2013-08-07 12:21 /user/impala/warehouse/external_partitions.db/logs/year=2013/month=07/day=28/host=host1/3981726974111751120--8907184999369517436_822630111_data.0
$ hdfs dfs -cat /user/impala/warehouse/external_partitions.db/logs/year=2013/month=07/day=28/\
host=host1/3981726974111751120--8907184999369517436_822630111_data.0
foo,foo,foo
Still in the Linux shell, we use hdfs dfs -mkdir to create several data directories outside the HDFS directory tree that Impala controls (/user/impala/warehouse in this example, maybe different in your case). Depending on your configuration, you might need to log in as a user with permission to write into this HDFS directory tree; for example, the commands shown here were run while logged in as the hdfs user.
$ hdfs dfs -mkdir -p /user/impala/data/logs/year=2013/month=07/day=28/host=host1
$ hdfs dfs -mkdir -p /user/impala/data/logs/year=2013/month=07/day=28/host=host2
$ hdfs dfs -mkdir -p /user/impala/data/logs/year=2013/month=07/day=29/host=host1
$ hdfs dfs -mkdir -p /user/impala/data/logs/year=2013/month=08/day=01/host=host1
We make a tiny CSV file, with values different from those in the INSERT statements used earlier, and put a copy within each subdirectory that we will use as an Impala partition.
$ cat >dummy_log_data
bar,baz,bletch
$ hdfs dfs -put dummy_log_data /user/impala/data/logs/year=2013/month=07/day=28/host=host1
$ hdfs dfs -put dummy_log_data /user/impala/data/logs/year=2013/month=07/day=28/host=host2
$ hdfs dfs -put dummy_log_data /user/impala/data/logs/year=2013/month=07/day=29/host=host1
$ hdfs dfs -put dummy_log_data /user/impala/data/logs/year=2013/month=08/day=01/host=host1
Back in the impala-shell interpreter, we move the original Impala-managed table aside, and create a new external table with a LOCATION clause pointing to the directory under which we have set up all the partition subdirectories and data files.
use external_partitions;
alter table logs rename to logs_original;
create external table logs (field1 string, field2 string, field3 string)
  partitioned by (year string, month string, day string, host string)
  row format delimited fields terminated by ','
  location '/user/impala/data/logs';
Because partition subdirectories and data files come and go during the data lifecycle, you must identify each of the partitions through an ALTER TABLE statement before Impala recognizes the data files they contain.
alter table logs add partition (year="2013",month="07",day="28",host="host1");
alter table logs add partition (year="2013",month="07",day="28",host="host2");
alter table logs add partition (year="2013",month="07",day="29",host="host1");
alter table logs add partition (year="2013",month="08",day="01",host="host1");
We issue a REFRESH statement for the table, always a safe practice when data files have been manually added, removed, or changed. Then the data is ready to be queried. The SELECT * statement illustrates that the data from our trivial CSV file was recognized in each of the partitions where we copied it. Although in this case there are only a few rows, we include a LIMIT clause on this test query just in case there is more data than we expect.
refresh logs;
select * from logs limit 100;
+--------+--------+--------+------+-------+-----+-------+
| field1 | field2 | field3 | year | month | day | host  |
+--------+--------+--------+------+-------+-----+-------+
| bar    | baz    | bletch | 2013 | 07    | 28  | host1 |
| bar    | baz    | bletch | 2013 | 08    | 01  | host1 |
| bar    | baz    | bletch | 2013 | 07    | 29  | host1 |
| bar    | baz    | bletch | 2013 | 07    | 28  | host2 |
+--------+--------+--------+------+-------+-----+-------+
Switching Back and Forth Between Impala and Hive
Sometimes, you might find it convenient to switch to the Hive shell to perform some data loading or transformation operation, particularly on file formats such as RCFile, SequenceFile, and Avro that Impala currently can query but not write to.
Whenever you create, drop, or alter a table or other kind of object through Hive, the next time you switch back to the impala-shell interpreter, issue a one-time INVALIDATE METADATA statement so that Impala recognizes the new or changed object.
Whenever you load, insert, or change data in an existing table through Hive (or even through manual HDFS operations such as the hdfs command), the next time you switch back to the impala-shell interpreter, issue a one-time REFRESH table_name statement so that Impala recognizes the new or changed data.
For examples showing how this process works for the REFRESH statement, look at the examples of creating RCFile and SequenceFile tables in Impala, loading data through Hive, and then querying the data through Impala. See Using the RCFile File Format with Impala Tables and Using the SequenceFile File Format with Impala Tables for those examples.
For examples showing how this process works for the INVALIDATE METADATA statement, look at the example of creating and loading an Avro table in Hive, and then querying the data through Impala. See Using the Avro File Format with Impala Tables for that example.
Originally, Impala did not support UDFs, but this feature is available starting in Impala 1.2. Some INSERT ... SELECT transformations that you originally did through Hive can now be done through Impala. See User-Defined Functions (UDFs) for details.
Prior to Impala 1.2, the REFRESH and INVALIDATE METADATA statements needed to be issued on each Impala node to which you connected and issued queries. In Impala 1.2 and higher, when you issue either of those statements on any Impala node, the results are broadcast to all the Impala nodes in the cluster, making it truly a one-step operation after each round of DDL or ETL operations in Hive.
Cross Joins and Cartesian Products with the CROSS JOIN Operator
Originally, Impala restricted join queries so that they had to include at least one equality comparison between the columns of the tables on each side of the join operator. With the huge tables typically processed by Impala, any miscoded query that produced a full Cartesian product as a result set could consume a huge amount of cluster resources.
In Impala 1.2.2 and higher, this restriction is lifted when you use the CROSS JOIN operator in the query. You still cannot remove all WHERE clauses from a query like SELECT * FROM t1 JOIN t2 to produce all combinations of rows from both tables. But you can use the CROSS JOIN operator to explicitly request such a Cartesian product. Typically, this operation is applicable for smaller tables, where the result set still fits within the memory of a single Impala node.
The following example sets up data for use in a series of comic books where characters battle each other. At first, we use an equijoin query, which only allows characters from the same time period and the same planet to meet.
[localhost:21000] > create table heroes (name string, era string, planet string);
[localhost:21000] > create table villains (name string, era string, planet string);
[localhost:21000] > insert into heroes values
                  > ('Tesla','20th century','Earth'),
                  > ('Pythagoras','Antiquity','Earth'),
                  > ('Zopzar','Far Future','Mars');
Inserted 3 rows in 2.28s
[localhost:21000] > insert into villains values
                  > ('Caligula','Antiquity','Earth'),
                  > ('John Dillinger','20th century','Earth'),
                  > ('Xibulor','Far Future','Venus');
Inserted 3 rows in 1.93s
[localhost:21000] > select concat(heroes.name,' vs. ',villains.name) as battle
                  > from heroes join villains
                  > where heroes.era = villains.era and heroes.planet = villains.planet;
+--------------------------+
| battle                   |
+--------------------------+
| Tesla vs. John Dillinger |
| Pythagoras vs. Caligula  |
+--------------------------+
Returned 2 row(s) in 0.47s
Readers demanded more action, so we added elements of time travel and space travel so that any hero could face any villain. Prior to Impala 1.2.2, this type of query was impossible because all joins had to reference matching values between the two tables:
[localhost:21000] > -- Cartesian product not possible in Impala 1.1.
                  > select concat(heroes.name,' vs. ',villains.name) as battle from heroes join villains;
ERROR: NotImplementedException: Join between 'heroes' and 'villains' requires at least one conjunctive equality predicate between the two tables
With Impala 1.2.2, we rewrite the query slightly to use CROSS JOIN rather than JOIN, and now the result set includes all combinations:
[localhost:21000] > -- Cartesian product available in Impala 1.2.2 with the CROSS JOIN syntax.
                  > select concat(heroes.name,' vs. ',villains.name) as battle from heroes cross join villains;
+-------------------------------+
| battle                        |
+-------------------------------+
| Tesla vs. Caligula            |
| Tesla vs. John Dillinger      |
| Tesla vs. Xibulor             |
| Pythagoras vs. Caligula       |
| Pythagoras vs. John Dillinger |
| Pythagoras vs. Xibulor        |
| Zopzar vs. Caligula           |
| Zopzar vs. John Dillinger     |
| Zopzar vs. Xibulor            |
+-------------------------------+
Returned 9 row(s) in 0.33s
The full combination of rows from both tables is known as the Cartesian product. This type of result set is often used for creating grid data structures. You can also filter the result set by including WHERE clauses that do not explicitly compare columns between the two tables. The following example shows how you might produce a list of combinations of year and quarter for use in a chart, and then a shorter list with only selected quarters.
[localhost:21000] > create table x_axis (x int);
[localhost:21000] > create table y_axis (y int);
[localhost:21000] > insert into x_axis values (1),(2),(3),(4);
Inserted 4 rows in 2.14s
[localhost:21000] > insert into y_axis values (2010),(2011),(2012),(2013),(2014);
Inserted 5 rows in 1.32s
[localhost:21000] > select y as year, x as quarter from x_axis cross join y_axis;
+------+---------+
| year | quarter |
+------+---------+
| 2010 | 1       |
| 2011 | 1       |
| 2012 | 1       |
| 2013 | 1       |
| 2014 | 1       |
| 2010 | 2       |
| 2011 | 2       |
| 2012 | 2       |
| 2013 | 2       |
| 2014 | 2       |
| 2010 | 3       |
| 2011 | 3       |
| 2012 | 3       |
| 2013 | 3       |
| 2014 | 3       |
| 2010 | 4       |
| 2011 | 4       |
| 2012 | 4       |
| 2013 | 4       |
| 2014 | 4       |
+------+---------+
Returned 20 row(s) in 0.38s
[localhost:21000] > select y as year, x as quarter from x_axis cross join y_axis where x in (1,3);
+------+---------+
| year | quarter |
+------+---------+
| 2010 | 1       |
| 2011 | 1       |
| 2012 | 1       |
| 2013 | 1       |
| 2014 | 1       |
| 2010 | 3       |
| 2011 | 3       |
| 2012 | 3       |
| 2013 | 3       |
| 2014 | 3       |
+------+---------+
Returned 10 row(s) in 0.39s
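A cross join is just the Cartesian product of the two row sets, so the same grid can be built outside SQL as well. Here is a sketch in Python (not part of the tutorial) using itertools.product, with a filter mirroring the WHERE x IN (1,3) clause:

```python
from itertools import product

quarters = [1, 2, 3, 4]          # the x_axis values
years = [2010, 2011, 2012, 2013, 2014]  # the y_axis values

# Every (year, quarter) combination, like x_axis CROSS JOIN y_axis.
grid = [(y, q) for q, y in product(quarters, years)]
print(len(grid))      # 20 combinations, as in the query above

# Filtering afterwards mirrors the WHERE x IN (1,3) clause.
selected = [(y, q) for (y, q) in grid if q in (1, 3)]
print(len(selected))  # 10 combinations
```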