Create, use, and drop an external table

You use an external table, which is a table that Hive does not manage, to import data from a file on a file system into Hive. In contrast to a Hive managed table, an external table keeps its data outside the Hive warehouse; the Hive metastore stores only the schema metadata of the external table. Hive does not manage, or restrict access to, the actual external data.

CDP Public Cloud:
  • Obtain access to S3 to store a comma-separated values (CSV) file that will serve as the data source for the external table.
  • Set up S3Guard to prevent data inconsistency caused by the S3 eventual consistency model.
  • Set up Hive policies in Ranger to include S3 URLs.

CDP Private Cloud Base:

You need to set up access to external tables in HDFS using one of the following methods.

  • Set up Hive HDFS policy in Ranger (recommended) to include the paths to external table data.
  • Put an HDFS ACL in place (see link below).

Store a comma-separated values (CSV) file in HDFS that will serve as the data source for the external table.

In this task, you create an external table from CSV (comma-separated values) data stored on the file system, depicted in the diagram below. Next, because you want Hive to manage and store the actual data in the Hive warehouse, you create a managed table.

You insert the external table data into the managed table.

This task demonstrates the following Hive principles:

  • The LOCATION clause in the CREATE TABLE statement specifies the location of the external table data.
  • A major difference between an external and a managed (internal) table: the persistence of table data on the file system after a DROP TABLE statement.
    • External table drop: Hive drops only the metadata, consisting mainly of the schema.
    • Managed table drop: Hive deletes the data and the metadata stored in the Hive warehouse.

After dropping an external table, the data is not gone. To retrieve it, you issue another CREATE EXTERNAL TABLE statement to load the data from the file system.
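For example, the drop-and-recreate cycle for the names_text external table used in this task looks like the following sketch (the column list and location match the steps below):

```sql
-- Dropping the external table removes only its schema from the metastore;
-- the CSV data in s3a://andrena is untouched.
DROP TABLE names_text;

-- Re-issuing the definition makes the same data queryable again.
CREATE EXTERNAL TABLE IF NOT EXISTS names_text(
  student_ID INT, FirstName STRING, LastName STRING,
  year STRING, Major STRING)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  STORED AS TEXTFILE
  LOCATION 's3a://andrena';
```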

  1. Create a text file named students.csv that contains the following lines.
    1,jane,doe,senior,mathematics
    2,john,smith,junior,engineering
  2. Move the file to HDFS or S3: create a directory (HDFS) or bucket (S3) named andrena, and put students.csv in it.
  3. Start the Hive shell.
    For example, substituting the URI of your HiveServer: beeline -u jdbc:hive2:// -n hive -p
  4. Create an external table schema definition that specifies the text format and loads data from students.csv in s3a://andrena.
    CREATE EXTERNAL TABLE IF NOT EXISTS names_text(
      student_ID INT,
      FirstName STRING,
      LastName STRING,
      year STRING,
      Major STRING)
    ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ','
    STORED AS TEXTFILE
    LOCATION 's3a://andrena';
  5. Verify that Hive reads the student names from the external table.
    SELECT * FROM names_text;
  6. Create the schema for a managed table.
    CREATE TABLE IF NOT EXISTS Names(
      student_ID INT,
      FirstName STRING,
      LastName STRING,
      year STRING,
      Major STRING)
    COMMENT 'Student Names';
  7. Move the external table data to the managed table.
    INSERT OVERWRITE TABLE Names SELECT * FROM names_text;
  8. Verify that the data from the external table resides in the managed table, drop the external table, and then verify that the data still resides in the managed table.
    SELECT * FROM Names;
    DROP TABLE names_text;
    SELECT * FROM Names;
    The results from the managed table Names appear.
  9. Verify that the external table schema definition is lost.
    SELECT * FROM names_text;
    The query fails because Hive dropped the external table schema from the metastore; only the underlying data on the file system remains.
  10. Check that the students.csv file on HDFS or S3 remains intact.
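Step 10 can be performed from the command line; the following is a sketch using the andrena location from the steps above. The HDFS parent path is an assumption; substitute the path or bucket you actually used.

```shell
# CDP Private Cloud Base: list the HDFS directory
# (/user/hive/andrena is a hypothetical path; use your own).
hdfs dfs -ls /user/hive/andrena

# CDP Public Cloud: list the S3 bucket contents.
aws s3 ls s3://andrena/
```

If students.csv still appears in the listing after the DROP TABLE in step 8, the external data survived the drop, as expected.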