6. References
Hortonworks Data Platform: High Availability for Hadoop using VMware
Contents
1. Configuring Rack Awareness on HDP
   1. Create a Rack Topology Script
   2. Add Properties to core-site.xml
   3. Restart HDFS and MapReduce
   4. Verify Rack Awareness
2. High Availability for Hadoop
3. High Availability for HDP Master Services Using VMware
   1. Use Cases and Fail Over Scenarios
   2. Supported Operating Systems
   3. Configuration For Physical Servers
   4. Software Configuration
      4.1. Configure a vSphere HA cluster
      4.2. Install HDP
      4.3. Configure NameNode for automatic fail over
      4.4. Validate autostart configuration
      4.5. Configure JobTracker for automatic fail over
      4.6. Validate JobTracker autostart configuration
      4.7. Enable vSphere for HA
      4.8. Validate NameNode High Availability
         4.8.1. Install and configure HAM
         4.8.2. Invoke HAM application
         4.8.3. Validate the fail over behavior
   5. Administration Guide for Highly Available NameNode
      5.1. NameNode shutdown for planned maintenance
      5.2. Starting the NameNode
      5.3. Reconfiguring HDFS
      5.4. Tuning parameters for your environment
   6. References
4. High Availability for HDP Master Services Using Red Hat
   1. Use Cases and Fail Over Scenarios
      1.1. Supported use cases
      1.2. Supported fail over scenarios
   2. Typical HDP HA Cluster
   3. Prerequisites
      3.1. Hardware Prerequisites
      3.2. Software Prerequisites
         3.2.1. Configure RHEL HA Cluster
         3.2.2. Validate Configurations for RHEL HA Cluster
   4. Install HDP Hadoop Core Packages
   5. Deploy HDP HA Configurations
   6. Configure HA for NameNode Service
      6.1. Install NameNode monitoring component
      6.2. Configure NameNode service in clustering configuration
   7. Configure JobTracker HA for RHEL Cluster
      7.1. Install JobTracker monitoring component
      7.2. Configure HDP JobTracker in clustering configuration
   8. Distribute Cluster Configuration
   9. Validate Cluster Fail Over
      9.1. Validate NameNode restart on primary machine
      9.2. Validate NameNode fail over during soft reboot
      9.3. Validate NameNode fail over during hard reboot
5. High Availability for Hive Metastore
   1. Use Cases and Fail Over Scenarios
   2. Software Configuration
      2.1. Install HDP
      2.2. Validate configuration
6. Upgrade HDP Manually
   1. Getting Ready to Upgrade
   2. Upgrade Hadoop
   3. Upgrade ZooKeeper and HBase
   4. Upgrade Hive and HCatalog
   5. Upgrade Oozie
   6. Upgrade WebHCat (Templeton)
   7. Upgrade Pig
   8. Upgrade Sqoop
   9. Upgrade Flume
      9.1. Validate Flume
   10. Upgrade Mahout
      10.1. Mahout Validation
7. Manually Add Slave Nodes to HDP Cluster
   1. Prerequisites
   2. Add DataNodes or TaskTrackers
   3. Add HBase RegionServer
   4. Optional - Configure Monitoring Using Ganglia
   5. Optional - Configure Cluster Alerting Using Nagios
8. Decommission Slave Nodes
   1. Prerequisites
   2. Decommission DataNodes or TaskTrackers
      2.1. Decommission DataNodes
      2.2. Decommission TaskTrackers
   3. Decommission HBase RegionServers
9. HDP Logfiles
   1. HDP Log Locations
   2. HDP Log Format
   3. HDP Backups
10. WebHDFS Administrator Guide