An open source, column-oriented binary file format for Hadoop that supports very efficient compression and encoding schemes. Parquet allows the compression scheme to be specified per column, and allows more encodings to be added as they are invented and implemented. Because encoding and compression are kept separate, Parquet consumers can implement operators that work directly on encoded data, avoiding the decompression and decoding penalty where possible.
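As an illustration of per-column compression, the sketch below shows a writer choosing a different codec for each column. It assumes the pyarrow library; the file name and column names are made up for this example.

```python
# Minimal sketch: writing a Parquet file with a different compression
# codec per column, assuming the pyarrow library is installed.
# Column names and file name are illustrative only.
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({
    "user_id": [1, 2, 3],
    "payload": ["a" * 100, "b" * 100, "c" * 100],
})

# The compression argument accepts a dict mapping column names to codecs,
# so each column can be compressed independently.
pq.write_table(
    table,
    "events.parquet",
    compression={"user_id": "snappy", "payload": "gzip"},
)
```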