Chapter 1. Before You Begin

The Hortonworks Data Platform consists of three layers:

  • Core Hadoop 2: The basic components of Apache Hadoop version 2.x.

    • Hadoop Distributed File System (HDFS): A special-purpose file system designed to provide high-throughput access to data in a highly distributed environment.

    • YARN (Yet Another Resource Negotiator): A framework for resource management and job scheduling in high-volume distributed data processing. This function was previously built into the first version of MapReduce.

    • MapReduce 2 (MR2): A set of client libraries for computation using the MapReduce programming paradigm, plus a History Server that logs job and task information. Previously part of the first version of MapReduce. (A minimal word-count sketch follows this list.)

  • Essential Hadoop: A set of Apache components designed to ease working with Core Hadoop.

    • Apache Pig: A platform for creating higher-level data-flow programs, written in Pig Latin, the platform’s native language, that compile into sequences of MapReduce programs.

    • Apache Hive: A tool for writing higher-level SQL-like queries in HiveQL, the tool’s native language, which are compiled into sequences of MapReduce programs. (A JDBC sketch follows this list.)

    • Apache HCatalog: A metadata abstraction layer that insulates users and scripts from how and where data is physically stored.

    • WebHCat (Templeton): A component that provides a set of REST-like APIs for HCatalog and related Hadoop components.

    • Apache HBase: A distributed, column-oriented database that provides random, real-time access to individual records, a pattern that the large sequential blocks making up HDFS do not support on their own. (A client sketch follows this list.)

    • Apache ZooKeeper: A centralized coordination service that provides configuration, naming, and synchronization facilities to highly distributed systems. ZooKeeper is required for HBase installations.

  • Supporting Components: A set of components that allow you to monitor your Hadoop installation and to connect Hadoop with your larger compute environment.

    • Apache Oozie: A server-based workflow engine optimized for running workflows that execute Hadoop jobs.

    • Apache Sqoop: A component that provides a mechanism for moving data between HDFS and external structured datastores, such as relational databases. It can be integrated with Oozie workflows.

    • Apache Flume: A distributed service for collecting and aggregating log data and moving it into HDFS. This component must be installed manually.

    • Apache Mahout: A scalable machine learning library that implements several different approaches to machine learning, such as clustering, classification, and collaborative filtering.

    • Apache Knox: A REST API gateway for interacting with Apache Hadoop clusters. The gateway provides a single access point for all REST interactions with Hadoop clusters.
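
To make the MapReduce programming paradigm mentioned above concrete, the following is a minimal word-count sketch written against the org.apache.hadoop.mapreduce (MR2) client API. It is illustrative rather than HDP-specific: the map phase emits a count of 1 for every word, a combiner pre-aggregates locally, and the reduce phase sums the counts; the input and output HDFS paths are taken from the command line.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Map phase: emit (word, 1) for every token in the input split.
      public static class TokenizerMapper
          extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);
          }
        }
      }

      // Reduce phase: sum the counts emitted for each word.
      public static class IntSumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // local aggregation before the shuffle
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }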
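
Similarly, a HiveQL query can be submitted to HiveServer2 over its JDBC interface. The sketch below is a minimal example, not a definitive configuration: the host, the default HiveServer2 port 10000, the user name, and the pageviews table are illustrative assumptions, not values from this guide.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveQuery {
      public static void main(String[] args) throws Exception {
        // The HiveServer2 JDBC driver, shipped with Hive.
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        // Host, port, user, and the "pageviews" table are illustrative assumptions.
        try (Connection con = DriverManager.getConnection(
                 "jdbc:hive2://localhost:10000/default", "hive", "");
             Statement stmt = con.createStatement();
             // Hive compiles this SQL-like query into MapReduce jobs behind the scenes.
             ResultSet rs = stmt.executeQuery(
                 "SELECT url, COUNT(*) AS hits FROM pageviews GROUP BY url")) {
          while (rs.next()) {
            System.out.println(rs.getString("url") + "\t" + rs.getLong("hits"));
          }
        }
      }
    }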
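
Finally, the random access that HBase adds on top of HDFS looks like the following from a Java client. This sketch uses the HBase 1.x client API; the table name metrics, the column family d, and the row key host42 are illustrative assumptions, and the table is assumed to already exist.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseRandomAccess {
      public static void main(String[] args) throws Exception {
        // Reads hbase-site.xml from the classpath for cluster connection details.
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("metrics"))) {

          // Write a single cell: row key "host42", column d:cpu (names are hypothetical).
          Put put = new Put(Bytes.toBytes("host42"));
          put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("cpu"), Bytes.toBytes("0.73"));
          table.put(put);

          // Read the same cell back by row key: a random read, no sequential scan required.
          Result result = table.get(new Get(Bytes.toBytes("host42")));
          byte[] value = result.getValue(Bytes.toBytes("d"), Bytes.toBytes("cpu"));
          System.out.println("cpu = " + Bytes.toString(value));
        }
      }
    }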

For more information on the structure of the HDP, see "Understanding the Hadoop Ecosystem" in the Getting Started Guide for Windows.

While it is possible to deploy all of HDP on a single host, a single-node installation is appropriate only for initial evaluation (see the Cluster Planning Guide for Windows). In general, you should use at least three hosts: one master host and two slave hosts.
