Command Line Installation

Abstract

The Hortonworks Data Platform, powered by Apache Hadoop, is a massively scalable and 100% open source platform for storing, processing, and analyzing large volumes of data. It is designed to handle data from many sources and formats quickly, easily, and cost-effectively. The Hortonworks Data Platform consists of the essential set of Apache Software Foundation projects that focus on the storage and processing of Big Data, along with operations, security, and governance for the resulting system. These projects include Apache Hadoop -- comprising MapReduce, Hadoop Distributed File System (HDFS), and Yet Another Resource Negotiator (YARN) -- along with Ambari, Falcon, Flume, HBase, Hive, Kafka, Knox, Oozie, Phoenix, Pig, Ranger, Slider, Spark, Sqoop, Storm, Tez, and ZooKeeper. Hortonworks is the major contributor of code and patches to many of these projects. These projects have been integrated and tested as part of the Hortonworks Data Platform release process, and installation and configuration tools are included.

Unlike other providers of platforms built using Apache Hadoop, Hortonworks contributes 100% of our code back to the Apache Software Foundation. The Hortonworks Data Platform is Apache-licensed and completely open source. We sell only expert technical support, training, and partner-enablement services. All of our technology is, and will remain, free and open source.

For more information on Hortonworks technology, please visit the Hortonworks Data Platform page. For more information on Hortonworks services, visit the Support or Training page. Feel free to contact us directly to discuss your specific needs.


Contents

1. Preparing to Manually Install HDP
Meeting Minimum System Requirements
Hardware Recommendations
Operating System Requirements
Software Requirements
JDK Requirements
Metastore Database Requirements
Virtualization and Cloud Platforms
Configuring Remote Repositories
Deciding on a Deployment Type
Collect Information
Prepare the Environment
Enable NTP on Your Cluster
Disable SELinux
Disable IPTables
Download Companion Files
Define Environment Parameters
Creating System Users and Groups
Determining HDP Memory Configuration Settings
Running the YARN Utility Script
Calculating YARN and MapReduce Memory Requirements
Configuring NameNode Heap Size
Allocating Adequate Log Space for HDP
Downloading the HDP Maven Artifacts
2. Installing Apache ZooKeeper
Install the ZooKeeper Package
Securing ZooKeeper with Kerberos (optional)
Securing ZooKeeper Access
ZooKeeper Configuration
YARN Configuration
HDFS Configuration
Set Directories and Permissions
Set Up the Configuration Files
Start ZooKeeper
3. Installing HDFS, YARN, and MapReduce
Set Default File and Directory Permissions
Install the Hadoop Packages
Install Compression Libraries
Install Snappy
Install LZO
Create Directories
Create the NameNode Directories
Create the SecondaryNameNode Directories
Create DataNode and YARN NodeManager Local Directories
Create the Log and PID Directories
Symlink Directories with hdp-select
4. Setting Up the Hadoop Configuration
5. Validating the Core Hadoop Installation
Format and Start HDFS
Smoke Test HDFS
Configure YARN and MapReduce
Start YARN
Start MapReduce JobHistory Server
Smoke Test MapReduce
6. Deploying HDP In Production Data Centers With Firewalls
Terminology
Mirroring or Proxying
Considerations for Choosing a Mirror or Proxy Solution
Recommendations for Deploying HDP
Detailed Instructions for Creating Mirrors and Proxies
Option I - Mirror Server Has No Access to the Internet
Option II - Mirror Server Has Temporary or Continuous Access to the Internet
Set Up a Trusted Proxy Server
7. Installing Apache HBase
Install the HBase Package
Set Directories and Permissions
Set Up the Configuration Files
Add Configuration Parameters for Bulk Load Support
Validate the Installation
Starting the HBase Thrift and REST Servers
8. Installing Apache Phoenix
Installing the Phoenix Package
Configuring HBase for Phoenix
Configuring Phoenix to Run in a Secure Cluster
Validating the Phoenix Installation
Troubleshooting Phoenix
9. Installing and Configuring Apache Tez
Prerequisites
Installing the Tez Package
Configuring Tez
Setting Up Tez for the Tez UI
Deploying the Tez UI
Additional Steps for the Application Timeline Server
Creating a New Tez View Instance
Validating the Tez Installation
Troubleshooting
10. Installing Apache Hive and Apache HCatalog
Installing the Hive-HCatalog Package
Setting Up the Hive/HCatalog Configuration Files
HDP-Utility Script
Configure Hive and HiveServer2 for Tez
Setting Up the Database for the Hive Metastore
Setting Up RDBMS for Use with Hive Metastore
Enabling Tez for Hive Queries
Disabling Tez for Hive Queries
Configuring Tez with the Capacity Scheduler
Validating Hive-on-Tez Installation
Installing Apache Hive LLAP
LLAP Prerequisites
Preparing to Install LLAP
Installing LLAP on an Unsecured Cluster
Installing LLAP on a Secured Cluster
Prerequisites
Validating the Installation on a Secured Cluster
Stopping the LLAP Service
Tuning LLAP for Performance
11. Installing Apache Pig
Install the Pig Package
Validate the Installation
12. Installing Apache WebHCat
Install the WebHCat Package
Upload the Pig, Hive, and Sqoop Tarballs to HDFS
Set Directories and Permissions
Modify WebHCat Configuration Files
Set Up HDFS User and Prepare WebHCat Directories
Validate the Installation
13. Installing Apache Oozie
Install the Oozie Package
Set Directories and Permissions
Set Up the Oozie Configuration Files
For Derby
For MySQL
For PostgreSQL
For Oracle
Configure Your Database for Oozie
Set Up the Sharelib
Validate the Installation
Stop and Start Oozie
14. Installing Apache Ranger
Installation Prerequisites
Installing Policy Manager
Install the Ranger Policy Manager
Install the Ranger Policy Administration Service
Start the Ranger Policy Administration Service
Configuring the Ranger Policy Administration Authentication Mode
Configuring Ranger Policy Administration High Availability
Installing UserSync
Using the LDAP Connection Check Tool
Install UserSync and Start the Service
Installing Ranger Plug-ins
Installing the Ranger HDFS Plug-in
Installing the Ranger YARN Plug-in
Installing the Ranger Kafka Plug-in
Installing the Ranger HBase Plug-in
Installing the Ranger Hive Plug-in
Installing the Ranger Knox Plug-in
Installing the Ranger Storm Plug-in
Installing Ranger in a Kerberized Environment
Creating Keytab and Principals
Installing Ranger Services
Manually Installing and Enabling the Ranger Plug-ins
Verifying the Installation
15. Installing Hue
Before You Begin
Configure HDP to Support Hue
Install the Hue Packages
Configure Hue to Communicate with the Hadoop Components
Configure the Web Server
Configure Hadoop
Configure Hue for Databases
Using Hue with Oracle
Using Hue with MySQL
Using Hue with PostgreSQL
Start, Stop, and Restart Hue
Validate the Hue Installation
16. Installing Apache Sqoop
Install the Sqoop Package
Set Up the Sqoop Configuration
Validate the Sqoop Installation
17. Installing Apache Mahout
Install Mahout
Validate Mahout
18. Installing and Configuring Apache Flume
Installing Flume
Configuring Flume
Starting Flume
19. Installing and Configuring Apache Storm
Install the Storm Package
Configure Storm
Configure a Process Controller
(Optional) Configure Kerberos Authentication for Storm
(Optional) Configuring Authorization for Storm
Validate the Installation
20. Installing and Configuring Apache Spark
Spark Prerequisites
Installing Spark
Configuring Spark
(Optional) Starting the Spark Thrift Server
(Optional) Configuring Dynamic Resource Allocation
(Optional) Installing and Configuring Livy
Installing Livy
Configuring Livy
Starting, Stopping, and Restarting Livy
Granting Livy the Ability to Impersonate
(Optional) Configuring Zeppelin to Interact with Livy
Validating Spark
21. Installing and Configuring Apache Spark 2
Spark 2 Prerequisites
Installing Spark 2
Configuring Spark 2
(Optional) Starting the Spark 2 Thrift Server
(Optional) Configuring Dynamic Resource Allocation
(Optional) Installing and Configuring Livy
Installing Livy
Configuring Livy
Starting, Stopping, and Restarting Livy
Granting Livy the Ability to Impersonate
(Optional) Configuring Zeppelin to Interact with Livy
Validating Spark 2
22. Installing and Configuring Apache Kafka
Install Kafka
Configure Kafka
Validate Kafka
23. Installing and Configuring Zeppelin
Installation Prerequisites
Installing the Zeppelin Package
Configuring Zeppelin
Starting, Stopping, and Restarting Zeppelin
Validating Zeppelin
Accessing the Zeppelin UI
24. Installing Apache Accumulo
Installing the Accumulo Package
Configuring Accumulo
Configuring the "Hosts" Files
Validating Accumulo
Smoke Testing Accumulo
25. Installing Apache Falcon
Installing the Falcon Package
Setting Directories and Permissions
Configuring Proxy Settings
Configuring Falcon Entities
Configuring Oozie for Falcon
Configuring Hive for Falcon
Configuring for Secure Clusters
Validate Falcon
26. Installing Apache Knox
Install the Knox Package on the Knox Server
Set Up and Validate the Knox Gateway Installation
Configuring Knox Single Sign-on (SSO)
27. Installing Apache Slider
28. Setting Up Kerberos Security for Manual Installs
29. Uninstalling HDP

List of Tables

1.1. Directories Needed to Install Core Hadoop
1.2. Directories Needed to Install Ecosystem Components
1.3. Define Users and Groups for Systems
1.4. Typical System Users and Groups
1.5. yarn-utils.py Options
1.6. Reserved Memory Recommendations
1.7. Recommended Container Size Values
1.8. YARN and MapReduce Configuration Values
1.9. Example Value Calculations Without HBase
1.10. Example Value Calculations with HBase
1.11. Recommended NameNode Heap Size Settings
6.1. Terminology
6.2. Hortonworks Yum Repositories
6.3. HDP Component Options
6.4. Yum Client Options
6.5. Yum Client Configuration Commands
6.6. $OS Parameter Values
9.1. Tez Configuration Parameters
10.1. Hive Configuration Parameters
10.2.
10.3.
10.4. LLAP Properties to Set in hive-site.xml
10.5. HiveServer2 Properties to Set in hive-site.xml to Enable Concurrent Queries with LLAP
10.6. Properties to Set in hive-site.xml for Secured Clusters
10.7. Properties to Set in ssl-server.xml for LLAP on Secured Clusters
10.8. LLAP Package Parameters
12.1. Hadoop core-site.xml File Properties
14.1. install.properties Entries
14.2. Properties to Update in the install.properties File
14.3. Properties to Edit in the install.properties File
14.4. Properties to Edit in the install.properties File
14.5. Properties to Edit in the install.properties File
14.6. HBase Properties to Edit in the install.properties File
14.7. Hive-Related Properties to Edit in the install.properties File
14.8. Knox-Related Properties to Edit in the install.properties File
14.9. Storm-Related Properties to Edit in the install.properties File
14.10. install.properties Property Values
14.11. install.properties Property Values
14.12. install.properties Property Values
14.13. install.properties Property Values
14.14. install.properties Property Values
14.15. install.properties Property Values
14.16. install.properties Property Values
14.17. install.properties Property Values
14.18. install.properties Property Values
14.19. install.properties Property Values
14.20. install.properties Property Values
18.1. Flume 1.5.2 Dependencies
19.1. Required jaas.conf Sections for Cluster Nodes
19.2. Supported Authorizers
19.3. storm.yaml Configuration File Properties
19.4. worker-launcher.cfg File Configuration Properties
19.5. multitenant-scheduler.yaml Configuration File Properties
20.1. Prerequisites for Running Spark 1.6
21.1. Prerequisites for Running Spark 2
22.1. Kafka Configuration Properties
23.1. Installation Prerequisites