Physical component list

Review the minimum configuration of physical components recommended for deploying a CDP Private Cloud Base cluster.

Component: Physical servers
Configuration: Two-socket, 8-16 cores per socket, > 2 GHz; minimum 128 GB RAM.
Description: Hosts the various cluster components.
Quantity: Based on cluster design.

Component: NICs
Configuration: 10 Gbps Ethernet NICs (minimum required).
Description: Provides the data network services for the cluster.
Quantity: At least one per server, although two NICs can be bonded for additional throughput.

Component: Internal HDDs
Configuration: 500 GB HDD or SSD/NVMe recommended for the operating system and logs; HDDs for data disks (size varies with data volume requirements). SSD/NVMe can be used for YARN local directories.
Description: Ensures continuity of service on server resets and holds the cluster data.
Quantity: 10-24 disks per physical server. The largest storage density currently supported is 100 TB per DataNode (see the sizing sketch below).

Component: Ethernet ToR/leaf switches
Configuration: Minimum 10 Gbps switches with sufficient port density to accommodate the cluster. These require enough ports to create a realistic spine-leaf topology providing ISL bandwidth above a 1:4 oversubscription ratio, preferably 1:1 (see the oversubscription sketch below).
Description: Although most enterprises have mature data network practices, consider building a dedicated data network for the Hadoop cluster.
Quantity: At least two per rack.

Component: Ethernet spine switches
Configuration: Minimum 40 Gbps switches with sufficient port density to accommodate incoming ISL links and ensure the required throughput over the spine (for inter-rack traffic).
Description: Same considerations as for ToR switches.
Quantity: Depends on the number of racks.
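The disk counts and the 100 TB per-DataNode ceiling above imply a simple sizing check: multiply the number of data disks per server by the disk size and compare against the supported density. The sketch below illustrates that arithmetic; the disk counts and sizes in it are hypothetical examples, not recommendations from this guide.

```python
# Sizing sketch: raw capacity per DataNode versus the 100 TB density limit.
# Disk counts and sizes below are illustrative assumptions only.

MAX_DENSITY_TB = 100  # largest storage density currently supported per DataNode


def raw_capacity_tb(disk_count: int, disk_size_tb: float) -> float:
    """Raw (pre-replication) capacity of one DataNode."""
    return disk_count * disk_size_tb


for disks, size_tb in [(12, 4), (24, 4), (24, 8)]:
    capacity = raw_capacity_tb(disks, size_tb)
    status = "OK" if capacity <= MAX_DENSITY_TB else "exceeds supported density"
    print(f"{disks} x {size_tb} TB = {capacity} TB per DataNode -> {status}")
```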
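The 1:4 oversubscription guidance for leaf switches can be checked with equally simple arithmetic: oversubscription is the ratio of host-facing (downlink) bandwidth to uplink (ISL) bandwidth. The sketch below uses hypothetical port counts and speeds to show the calculation; it is not a tool provided by this guide.

```python
# Oversubscription sketch for a leaf switch.
# Oversubscription = total downlink bandwidth / total uplink (ISL) bandwidth.
# Port counts and speeds below are illustrative assumptions only.

def oversubscription(downlinks: int, downlink_gbps: int,
                     uplinks: int, uplink_gbps: int) -> float:
    """Ratio of host-facing bandwidth to ISL bandwidth (1.0 means 1:1)."""
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)


# Example: 40 hosts at 10 Gbps per leaf, 4 x 40 Gbps uplinks to the spine.
ratio = oversubscription(downlinks=40, downlink_gbps=10, uplinks=4, uplink_gbps=40)
print(f"Oversubscription ratio: {ratio:.2f}:1")  # 2.50:1
print("Within the 1:4 guideline" if ratio <= 4 else "Exceeds the 1:4 guideline")
```

Keeping the ratio at or below 1:4 (and as close to 1:1 as the budget allows) limits how much inter-rack traffic contends for ISL bandwidth during shuffle-heavy workloads.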