INSERT Statement
Impala supports inserting into tables and partitions that you create with the Impala CREATE TABLE statement, or pre-defined tables and partitions created through Hive.
Syntax:
[with_clause]
INSERT { INTO | OVERWRITE } [TABLE] table_name
  [(column_list)]
  [PARTITION (partition_clause)]
{
    [hint_clause] select_statement
  | VALUES (value [, value ...]) [, (value [, value ...]) ...]
}

partition_clause ::= col_name [= constant] [, col_name [= constant] ...]

hint_clause ::= [SHUFFLE] | [NOSHUFFLE]
(Note: the square brackets are part of the syntax.)
Impala currently supports:
- INSERT INTO to append data to a table.
- INSERT OVERWRITE to replace the data in a table.
- Copying data from another table using a SELECT query. In Impala 1.2.1 and higher, you can combine CREATE TABLE and INSERT operations into a single step with the CREATE TABLE AS SELECT syntax, which bypasses the actual INSERT keyword; a sketch of this appears after this list.
- An optional WITH clause before the INSERT keyword, to define a subquery referenced in the SELECT portion.
- Creating one or more new rows using constant expressions through the VALUES clause. (The VALUES clause was added in Impala 1.0.1.)
- Specifying the names or order of columns to be inserted, which can differ from the column order of the destination table. (This feature was added in Impala 1.1.)
- An optional hint clause immediately before the SELECT keyword, to fine-tune the behavior when doing an INSERT ... SELECT operation into partitioned Parquet tables. The hint keywords are [SHUFFLE] and [NOSHUFFLE], including the square brackets; a brief example appears after this list. Inserting into partitioned Parquet tables can be a resource-intensive operation, because it potentially involves many files being written to HDFS simultaneously, and separate 1 GB memory buffers being allocated to buffer the data for each partition. For usage details, see Loading Data into Parquet Tables.
- INSERT statements that create new partitions or add files result in changes to Hive metadata. Because Impala uses Hive metadata, such changes may necessitate a Hive metadata refresh. For more information, see the REFRESH statement.
- Currently, Impala can only insert data into tables that use the TEXT and Parquet formats. For other file formats, insert the data using Hive and use Impala to query it.
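For illustration, here is a minimal sketch of the CREATE TABLE AS SELECT and hint features mentioned above (the table and column names are hypothetical). The first statement combines table creation and data loading with no INSERT keyword; the second places the [SHUFFLE] hint, square brackets included, immediately before the SELECT keyword:

-- Create a Parquet table and populate it in one step.
create table sales_summary stored as parquet as
  select year, month, sum(amount) as total from staging_sales group by year, month;

-- Hint an INSERT ... SELECT into a partitioned Parquet table.
-- The partition key columns come last in the SELECT list (dynamic partitioning).
insert into sales_by_month partition (year, month)
  [SHUFFLE]
  select amount, year, month from staging_sales;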
Statement type: DML (but still affected by SYNC_DDL query option)
Usage notes:
When you insert the results of an expression, particularly of a built-in function call, into a small numeric column such as INT, SMALLINT, TINYINT, or FLOAT, you might need to use a CAST() expression to coerce values into the appropriate type. Impala does not automatically convert from a larger type to a smaller one. For example, to insert cosine values into a FLOAT column, write CAST(COS(angle) AS FLOAT) in the INSERT statement to make the conversion explicit.
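For example, a minimal sketch (the table and column names here are hypothetical):

-- The FLOAT column requires an explicit narrowing conversion;
-- COS() returns a DOUBLE, which Impala does not implicitly convert to FLOAT.
insert into trig_results
  select cast(cos(angle) as float) from angle_measurements;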
Any INSERT statement for a Parquet table requires enough free space in the HDFS filesystem to write one block. Because Parquet data files use a block size of 1 GB by default, an INSERT might fail (even for a very small amount of data) if your HDFS is running low on space.
If you connect to different Impala nodes within an impala-shell session for load-balancing purposes, you can enable the SYNC_DDL query option to make each DDL statement wait before returning, until the new or changed metadata has been received by all the Impala nodes. See SYNC_DDL for details.
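For example, from impala-shell you might enable the option for the session before issuing the statement (a sketch, using the SET command for query options):

[localhost:21000] > set sync_ddl=1;
[localhost:21000] > insert into table text_table select * from default.tab1;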
Examples:
The following example sets up new tables with the same definition as the TAB1 table from the Tutorial section, using different file formats, and demonstrates inserting data into the tables created with the STORED AS TEXTFILE and STORED AS PARQUET clauses:
CREATE DATABASE IF NOT EXISTS file_formats;
USE file_formats;

DROP TABLE IF EXISTS text_table;
CREATE TABLE text_table
( id INT, col_1 BOOLEAN, col_2 DOUBLE, col_3 TIMESTAMP )
STORED AS TEXTFILE;

DROP TABLE IF EXISTS parquet_table;
CREATE TABLE parquet_table
( id INT, col_1 BOOLEAN, col_2 DOUBLE, col_3 TIMESTAMP )
STORED AS PARQUET;
With the INSERT INTO TABLE syntax, each new set of inserted rows is appended to any existing data in the table. This is how you would record small amounts of data that arrive continuously, or ingest new batches of data alongside the existing data. For example, after running 2 INSERT INTO TABLE statements with 5 rows each, the table contains 10 rows total:
[localhost:21000] > insert into table text_table select * from default.tab1;
Inserted 5 rows in 0.41s
[localhost:21000] > insert into table text_table select * from default.tab1;
Inserted 5 rows in 0.46s
[localhost:21000] > select count(*) from text_table;
+----------+
| count(*) |
+----------+
| 10       |
+----------+
Returned 1 row(s) in 0.26s
With the INSERT OVERWRITE TABLE syntax, each new set of inserted rows replaces any existing data in the table. This is how you load data to query in a data warehousing scenario where you analyze just the data for a particular day, quarter, and so on, discarding the previous data each time. You might keep the entire set of data in one raw table, and transfer and transform certain rows into a more compact and efficient form to perform intensive analysis on that subset.
For example, here we insert 5 rows into a table using the INSERT INTO clause, then replace the data by inserting 3 rows with the INSERT OVERWRITE clause. Afterward, the table only contains the 3 rows from the final INSERT statement.
[localhost:21000] > insert into table parquet_table select * from default.tab1;
Inserted 5 rows in 0.35s
[localhost:21000] > insert overwrite table parquet_table select * from default.tab1 limit 3;
Inserted 3 rows in 0.43s
[localhost:21000] > select count(*) from parquet_table;
+----------+
| count(*) |
+----------+
| 3        |
+----------+
Returned 1 row(s) in 0.43s
The VALUES clause lets you insert one or more rows by specifying constant values for all the columns. The number, types, and order of the expressions must match the table definition.
The following example shows how to insert one row or multiple rows, with columns of different types, using literal values, expressions, and function return values:
create table val_test_1 (c1 int, c2 float, c3 string, c4 boolean, c5 timestamp);
insert into val_test_1 values (100, 99.9/10, 'abc', true, now());

create table val_test_2 (id int, token string);
insert overwrite val_test_2 values (1, 'a'), (2, 'b'), (-1, 'xyzzy');
These examples show the type of "not implemented" error that you see when attempting to insert data into a table with a file format that Impala currently does not write to:
DROP TABLE IF EXISTS sequence_table;
CREATE TABLE sequence_table
( id INT, col_1 BOOLEAN, col_2 DOUBLE, col_3 TIMESTAMP )
STORED AS SEQUENCEFILE;

DROP TABLE IF EXISTS rc_table;
CREATE TABLE rc_table
( id INT, col_1 BOOLEAN, col_2 DOUBLE, col_3 TIMESTAMP )
STORED AS RCFILE;

[localhost:21000] > insert into table rc_table select * from default.tab1;
Remote error
Backend 0:RC_FILE not implemented.
[localhost:21000] > insert into table sequence_table select * from default.tab1;
Remote error
Backend 0:SEQUENCE_FILE not implemented.
Inserting data into partitioned tables requires slightly different syntax that divides the partitioning columns from the others:
create table t1 (i int) partitioned by (x int, y string);

-- Select an INT column from another table.
-- All inserted rows will have the same x and y values, as specified in the INSERT statement.
-- This technique of specifying all the partition key values is known as static partitioning.
insert into t1 partition(x=10, y='a') select c1 from some_other_table;

-- Select two INT columns from another table.
-- All inserted rows will have the same y value, as specified in the INSERT statement.
-- Values from c2 go into t1.x.
-- Any partitioning columns whose value is not specified are filled in
-- from the columns specified last in the SELECT list.
-- This technique of omitting some partition key values is known as dynamic partitioning.
insert into t1 partition(x, y='b') select c1, c2 from some_other_table;

-- Select an INT and a STRING column from another table.
-- All inserted rows will have the same x value, as specified in the INSERT statement.
-- Values from c3 go into t1.y.
insert into t1 partition(x=20, y) select c1, c3 from some_other_table;
The following example shows how you can copy all the columns from one table to another, copy a subset of columns, or reorder the columns in the destination table (the "column permutation" feature):

-- Start with 2 identical tables.
create table t1 (c1 int, c2 int);
create table t2 like t1;

-- If there is no () part after the destination table name,
-- all columns must be specified, either as * or by name.
insert into t2 select * from t1;
insert into t2 select c1, c2 from t1;

-- With the () notation following the destination table name,
-- you can omit columns (all values for that column are NULL
-- in the destination table), and/or reorder the values
-- selected from the source table. This is the "column permutation" feature.
insert into t2 (c1) select c1 from t1;
insert into t2 (c2, c1) select c1, c2 from t1;

-- The column names can be entirely different in the source and destination tables.
-- You can copy any columns, not just the corresponding ones, from the source table.
-- But the number and type of selected columns must match the columns mentioned in the () part.
alter table t2 replace columns (x int, y int);
insert into t2 (y) select c1 from t1;

-- For partitioned tables, all the partitioning columns must be mentioned in the () column list
-- or a PARTITION clause; these columns cannot be defaulted to NULL.
create table pt1 (x int, y int) partitioned by (z int);

-- The values from c1 are copied into the column x in the new table,
-- all in the same partition based on a constant value for z.
-- The values of y in the new table are all NULL.
insert into pt1 (x) partition (z=5) select c1 from t1;

-- Again we omit the values for column y so they are all NULL.
-- The inserted x values can go into different partitions, based on
-- the different values inserted into the partitioning column z.
insert into pt1 (x,z) select x, z from t2;
Sorting considerations: Although you can specify an ORDER BY clause in an INSERT ... SELECT statement, any ORDER BY clause is ignored and the results are not necessarily sorted. An INSERT ... SELECT operation potentially creates many different data files, prepared on different data nodes, and therefore the notion of the data being stored in sorted order is impractical.
Concurrency considerations: Each INSERT operation creates new data files with unique names, so you can run multiple INSERT INTO statements simultaneously without filename conflicts. While data is being inserted into an Impala table, the data is staged temporarily in a subdirectory inside the data directory; during this period, you cannot issue queries against that table in Hive. If an INSERT operation fails, the temporary data file and the subdirectory could be left behind in the data directory. If so, remove the relevant subdirectory and any data files it contains manually, by issuing an hdfs dfs -rm -r command, specifying the full path of the work subdirectory, whose name ends in _dir.
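For example, a cleanup command might look like the following (the path is a placeholder; substitute the actual data directory of your table and the name of the leftover work subdirectory):

$ hdfs dfs -rm -r /user/hive/warehouse/file_formats.db/text_table/<work_subdirectory>_dir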
VALUES Clause
The VALUES clause is a general-purpose way to specify the columns of one or more rows, typically within an INSERT statement.
The following examples illustrate:
- How to insert a single row using a VALUES clause.
- How to insert multiple rows using a VALUES clause.
- How the row or rows from a VALUES clause can be appended to a table through INSERT INTO, or replace the contents of the table through INSERT OVERWRITE.
- How the entries in a VALUES clause can be literals, function results, or any other kind of expression. See Literals for the notation to use for literal values, especially String Literals for quoting and escaping conventions for strings. See SQL Operators and Built-in Functions for other things you can include in expressions with the VALUES clause.
[localhost:21000] > describe val_example;
Query: describe val_example
Query finished, fetching results ...
+-------+---------+---------+
| name  | type    | comment |
+-------+---------+---------+
| id    | int     |         |
| col_1 | boolean |         |
| col_2 | double  |         |
+-------+---------+---------+
[localhost:21000] > insert into val_example values (1,true,100.0);
Inserted 1 rows in 0.30s
[localhost:21000] > select * from val_example;
+----+-------+-------+
| id | col_1 | col_2 |
+----+-------+-------+
| 1  | true  | 100   |
+----+-------+-------+
[localhost:21000] > insert overwrite val_example values (10,false,pow(2,5)), (50,true,10/3);
Inserted 2 rows in 0.16s
[localhost:21000] > select * from val_example;
+----+-------+-------------------+
| id | col_1 | col_2             |
+----+-------+-------------------+
| 10 | false | 32                |
| 50 | true  | 3.333333333333333 |
+----+-------+-------------------+
When used in an INSERT statement, the Impala VALUES clause can specify some or all of the columns in the destination table, and the columns can be specified in a different order than they actually appear in the table. To specify a different set or order of columns than in the table, use the syntax:
INSERT INTO destination (col_x, col_y, col_z) VALUES (val_x, val_y, val_z);
Any columns in the table that are not listed in the INSERT statement are set to NULL.
To use a VALUES clause like a table in other statements, wrap it in parentheses and use AS clauses to specify aliases for the entire object and any columns you need to refer to:
[localhost:21000] > select * from (values(4,5,6),(7,8,9)) as t;
+---+---+---+
| 4 | 5 | 6 |
+---+---+---+
| 4 | 5 | 6 |
| 7 | 8 | 9 |
+---+---+---+
[localhost:21000] > select * from (values(1 as c1, true as c2, 'abc' as c3),(100,false,'xyz')) as t;
+-----+-------+-----+
| c1  | c2    | c3  |
+-----+-------+-----+
| 1   | true  | abc |
| 100 | false | xyz |
+-----+-------+-----+
For example, you might use a tiny table constructed like this from constant literals or function return values as part of a longer statement involving joins or UNION ALL.
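As an illustration, a small inline lookup table might be joined against a regular table like this (a sketch; t1 and its status column are hypothetical):

select t1.id, codes.label
  from t1
  join (values (0 as code, 'inactive' as label), (1, 'active')) as codes
    on t1.status = codes.code;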
HBase considerations:
You can use the INSERT statement with HBase tables as follows:
- You can insert a single row or a small set of rows into an HBase table with the INSERT ... VALUES syntax. This is a good use case for HBase tables with Impala, because HBase tables are not subject to the same kind of fragmentation from many small insert operations as HDFS tables are.
- You can insert any number of rows at once into an HBase table using the INSERT ... SELECT syntax.
- If more than one inserted row has the same value for the HBase key column, only the last inserted row with that value is visible to Impala queries. You can take advantage of this fact with INSERT ... VALUES statements to effectively update rows one at a time, by inserting new rows with the same key values as existing rows; see the sketch after this list. Be aware that after an INSERT ... SELECT operation copying from an HDFS table, the HBase table might contain fewer rows than were inserted, if the key column in the source table contained duplicate values.
- You cannot INSERT OVERWRITE into an HBase table. New rows are always appended.
- When you create an Impala or Hive table that maps to an HBase table, the column order you specify with the INSERT statement might be different than the order you declare with the CREATE TABLE statement. Behind the scenes, HBase arranges the columns based on how they are divided into column families. This might cause a mismatch during insert operations, especially if you use the syntax INSERT INTO hbase_table SELECT * FROM hdfs_table. Before inserting data, verify the column order by issuing a DESCRIBE statement for the table, and adjust the order of the select list in the INSERT statement.
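For example, a single-row "update" by key might look like the following sketch (hbase_users, its columns, and the values are hypothetical; the first column is assumed to map to the HBase row key):

insert into hbase_users values ('id001', 'alice@example.com');
-- A later insert with the same key value supersedes the earlier row:
insert into hbase_users values ('id001', 'alice@newdomain.example.com');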
See Using Impala to Query HBase Tables for more details about using Impala with HBase.
Cancellation: Can be cancelled. To cancel this statement, use Ctrl-C from the impala-shell interpreter, the Cancel button from the Watch page in Hue, Actions > Cancel from the Queries list in Cloudera Manager, or Cancel from the list of in-flight queries (for a particular node) on the Queries tab in the Impala web UI (port 25000).
By default, if an INSERT statement creates any new subdirectories underneath a partitioned table, those subdirectories are assigned default HDFS permissions for the impala user. To make each subdirectory have the same permissions as its parent directory in HDFS, specify the --insert_inherit_permissions startup option for the impalad daemon.
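For example, the flag might be supplied among the impalad startup arguments (a sketch; exactly where you place startup flags depends on how you manage the impalad service):

--insert_inherit_permissions=true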