The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than relying on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, thereby delivering a highly available service on top of a cluster of computers, each of which may be prone to failure.
The project includes these modules:
Hadoop Common: The set of common utilities that support other Hadoop modules
Hadoop Distributed File System (HDFS): A distributed file system that provides high-throughput access to application data
Hadoop YARN: A framework for job scheduling and cluster resource management
Hadoop MapReduce: A YARN-based system for parallel processing of large data sets
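To make the MapReduce model concrete, the sketch below is a minimal word-count job written against the org.apache.hadoop.mapreduce Java API. The mapper emits a (word, 1) pair for each token it sees, and the reducer sums the counts for each word; Hadoop handles splitting the input across the cluster and shuffling intermediate pairs between the two phases. The class name, job name, and the input/output paths passed as command-line arguments are placeholders for illustration.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every token in the input split.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: sum the counts emitted for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation before the shuffle
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. an HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not already exist
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Packaged into a JAR, a job like this would typically be submitted with something along the lines of `hadoop jar wordcount.jar WordCount /user/me/input /user/me/output`, where the paths are illustrative HDFS locations; YARN schedules the map and reduce tasks across the cluster.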
To learn more, see the Apache Hadoop project site at https://hadoop.apache.org/.
In this section: