Data compaction
As an administrator, you need to manage compaction of the delta files that accumulate during data ingestion. Compaction is a housekeeping process that consolidates these files.
Hive creates a set of delta files for each transaction that alters a table or partition, and stores them in a separate delta directory. By default, Hive automatically compacts delta and base files at regular intervals to keep the number of files per table or partition manageable. You can configure these automatic compactions, and you can also perform manual compactions of base and delta files. To submit compaction jobs, Hive uses the Tez execution engine and MapReduce algorithms in the stack. Compactions occur in the background without affecting concurrent reads and writes.
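For example, assuming a transactional (ACID) table with the hypothetical name sales, you can control and trigger compaction with standard Hive SQL statements; this is a minimal sketch, not a complete tuning guide:

```sql
-- Disable automatic compaction for a single table, if needed
-- (property name casing may vary by Hive version):
ALTER TABLE sales SET TBLPROPERTIES ('NO_AUTO_COMPACTION' = 'true');

-- Request a manual minor compaction; the request is queued and
-- processed in the background by a compaction worker:
ALTER TABLE sales COMPACT 'minor';

-- Monitor queued, running, and completed compactions:
SHOW COMPACTIONS;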
The compactor initiator should run on only one Hive Metastore (HMS) instance at a given time.
There are two types of compaction (see the example following this list):
- Minor
Rewrites a set of delta files to a single delta file for a bucket.
- Major
Rewrites one or more delta files and the base file as a new base file for a bucket.
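As a minimal illustration, again using the hypothetical sales table, here partitioned by a hypothetical ds column, the statements below request each compaction type explicitly; Hive queues each request and a worker processes it in the background:

```sql
-- Minor compaction: merges the delta files for the partition
-- into a single delta file per bucket.
ALTER TABLE sales PARTITION (ds = '2023-01-01') COMPACT 'minor';

-- Major compaction: rewrites the delta files and the base file
-- into a new base file per bucket.
ALTER TABLE sales PARTITION (ds = '2023-01-01') COMPACT 'major';
```

Because a major compaction rewrites the base file as well as the deltas, it reads and writes more data than a minor compaction and is typically run less frequently.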