Hortonworks Docs » Hortonworks Data Platform 3.1.5 » Administering HDFS

Administering HDFS
Cluster Maintenance
Decommissioning slave nodes
Prerequisites to decommission slave nodes
Decommission DataNodes or NodeManagers
Decommission DataNodes
Decommission NodeManagers
Decommission HBase RegionServers
Manually add slave nodes to an HDP cluster
Prerequisites to manually add slave nodes
Add slave nodes
Add HBase RegionServer
Using DistCp to Copy Files
Using DistCp
Command Line Options
Update and Overwrite
DistCp and Security Settings
Secure-to-Secure: Kerberos Principal Name
Secure-to-Secure: ResourceManager mapping rules
DistCp between HA clusters
DistCp and HDP version
DistCp data copy matrix
Copying Data from HDP-2.x to HDP-1.x Clusters
DistCp Architecture
DistCp Driver
Copy-listing Generator
InputFormats and MapReduce Components
DistCp Frequently Asked Questions
DistCp additional considerations
Ports and Services Reference
Configuring ports
Accumulo service ports
Atlas service ports
Druid service ports
Flume service ports
HBase service ports
HDFS service ports
Hive service ports
Hue service port
Kafka service ports
Kerberos service ports
Knox service ports
MapReduce service ports
MySQL service ports
Oozie service ports
Ranger service ports
Sqoop service ports
Storm service ports
Tez ports
YARN service ports
Zeppelin service port
ZooKeeper service ports
Controlling HDP services manually
Starting HDP services
Stopping HDP services
Controlling HDP services manually
When starting and stopping HDP services manually, you must follow the prescribed order.
Starting HDP services
Make sure to start the Hadoop services in the prescribed order.
Stopping HDP services
Before performing any upgrades or uninstalling software, stop all of the Hadoop services in the prescribed order.
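The start and stop sequences mirror each other: services are started in a fixed order and stopped in the reverse of that order. The sketch below illustrates this pattern; the service names, the exact ordering, and the echoed placeholder commands are assumptions for illustration only, so consult the prescribed order in this guide for the authoritative sequence.

```shell
#!/bin/sh
# Illustrative sketch only: the service list and order below are assumptions,
# and the echo statements stand in for the real start/stop commands.
START_ORDER="zookeeper hdfs yarn hbase hive oozie"

start_services() {
  # Start each service in the prescribed order.
  for svc in $START_ORDER; do
    echo "starting $svc"    # placeholder for the real start command
  done
}

stop_services() {
  # Build the reverse of the start order, then stop services in that order.
  reversed=""
  for svc in $START_ORDER; do
    reversed="$svc $reversed"
  done
  for svc in $reversed; do
    echo "stopping $svc"    # placeholder for the real stop command
  done
}

start_services
stop_services
```

Keeping a single ordered list and deriving the stop sequence by reversal avoids the two lists drifting apart as services are added or removed.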
Parent topic: Ports and Services Reference
© 2012–2019, Hortonworks, Inc. Document licensed under the Creative Commons Attribution ShareAlike 4.0 License.