7. References
Hortonworks Data Platform
High Availability for Hadoop Using VMware
Contents
1. High Availability for Hadoop
2. High Availability for HDFS NameNode Using VMware
1. High Availability for Hadoop Using VMware
2. Use Cases and Failover Scenarios
3. Supported Operating Systems
4. Configuration For Physical Servers
5. Software Configuration
5.1. Configure a vSphere HA cluster
5.2. Install HDP
5.3. Configure NameNode for automatic failover
5.4. Validate autostart configuration
5.5. Enable vSphere for HA
5.6. Validate NameNode High Availability
5.6.1. Install and configure HAM
5.6.2. Invoke HAM application
5.6.3. Validate the failover behavior
6. Administration Guide for Highly Available NameNode
6.1. NameNode shutdown for planned maintenance
6.2. Starting the NameNode
6.3. Reconfiguring HDFS
6.4. Tuning parameters for your environment
7. References
3. High Availability for Hadoop Using Red Hat
1. High Availability for Hadoop Using Red Hat
2. Use Cases and Failover Scenarios
2.1. Supported use cases
2.2. Supported failover scenarios
3. Typical HDP HA Cluster
4. Prerequisites
4.1. Hardware prerequisites
4.1.1. Shared Storage
4.1.2. Power Fencing Device
4.1.3. IP Failover
4.1.4. Hardware Requirements for RHEL HA Cluster
4.2. Software prerequisites
4.2.1. Configure RHEL HA Cluster
4.2.2. Validate Configurations for RHEL HA Cluster
5. Install HDP Hadoop Core Packages
6. Deploy HDP HA Configurations
7. Configure NameNode HA for RHEL Cluster
7.1. Install NameNode monitoring component
7.2. Configure NameNode service in clustering configuration
8. Distribute Cluster Configuration
9. Validate Cluster Failover
9.1. Validate NameNode restart on primary machine
9.2. Validate NameNode failover during soft reboot
9.3. Validate NameNode failover during hard reboot
4. High Availability for Hive Metastore
1. Use Cases and Failover Scenarios
2. Software Configuration
2.1. Install HDP
2.2. Validate configuration
5. Upgrade HDP Manually
1. Getting Ready to Upgrade
2. Upgrade Hadoop
3. Upgrade ZooKeeper and HBase
4. Upgrade Hive and HCatalog
5. Upgrade Oozie
6. Upgrade WebHCat (Templeton)
7. Upgrade Pig
8. Upgrade Sqoop
9. Upgrade Flume
6. Manually Add Slave Nodes to HDP Cluster
1. Prerequisites
2. Add DataNodes or TaskTrackers
3. Add HBase RegionServer
4. Optional - Configure Monitoring Using Ganglia
5. Optional - Configure Cluster Alerting Using Nagios
7. Decommission Slave Nodes
1. Prerequisites
2. Decommission DataNodes or TaskTrackers
2.1. Decommission DataNodes
2.2. Decommission TaskTrackers
3. Decommission HBase RegionServers