DSSD D5 Installation Path C - Manual Installation Using Cloudera Manager Tarballs
This topic describes how to install Cloudera Manager and CDH on a cluster that uses the EMC® DSSD™ D5™ storage appliance as the storage for Hadoop DataNodes. To install clusters that do not use the DSSD D5, see Installing Cloudera Manager and CDH.
In this procedure, you install the Oracle JDK, Cloudera Manager Server, and Cloudera Manager Agent software using tarballs and then you use Cloudera Manager to automate installation of CDH and managed service software using parcels. For a full discussion of deployment options, see Installing Cloudera Manager and CDH.
The general steps in the procedure for Installation Path C follow.
- DSSD D5 Pre-Installation Tasks
- Before You Begin
- Install the Cloudera Manager Server and Agents
- Create Parcel Directories
- Start the Cloudera Manager Server
- Start the Cloudera Manager Agents
- Install Package Dependencies
- Start and Log into the Cloudera Manager Admin Console
- Enable DSSD Mode and Configure Cloudera Manager for the DSSD D5
- Choose Cloudera Manager Edition
- Choose Cloudera Manager Hosts
- Install CDH and Managed Service Software
- Add Services
- Configure Database Settings
- Review and Finish the DSSD D5 Configuration
- (Optional) Disable Short Circuit Reads for HBase and Impala
- (Optional) Change the Cloudera Manager User
- Change the Default Administrator Password
- Configure Oozie Data Purge Settings
- (Optional) Install Multiple DSSD D5 Appliances in a Cluster
- Test the Installation
DSSD D5 Pre-Installation Tasks
The following tasks must be completed before you install Cloudera Manager:
- Installing and racking the DSSD D5 Storage Appliance.
- Installing the DSSD D5 PCI cards in the DataNode hosts.
- Connecting the DataNode hosts to the DSSD D5.
- Installing and configuring the DSSD D5 drivers.
- Installing and configuring the DSSD D5 client software.
- Creating a volume on the DSSD D5 for the DataNodes.
- Identifying CPUs and NUMA nodes. See the EMC document DSSD Hadoop Plugin Installation Guide for more information. You use the information from this task in a later step to configure the Libflood CPU ID parameter during the initial configuration of Cloudera Manager.
See the EMC DSSD D5 document DSSD D5 Installation and Service Guide for more information about these tasks.
In addition, collect the following information, which you need during Cloudera Manager installation and configuration:
- Host names of all the hosts in your cluster.
- The DSSD D5 volume name for the DataNodes.
- If you are not using the entire capacity of the DSSD D5 for this cluster, the value to use for the DSSD Amount of Usable Capacity property, as assigned in the DSSD D5. For most deployments, the default value (100 TB) is correct. See the DSSD Hadoop Plugin Installation Guide for more information on setting this property.
- The value for the Libflood CPU ID. See “Identify CPUs and NUMA Nodes” in the DSSD Hadoop Plugin Installation Guide for more information.
Before You Begin
Perform Configuration Required by Single User Mode
If you are creating a Cloudera Manager deployment that employs single user mode, perform the configuration steps described in Single User Mode Requirements.
Install and Configure External Databases
Read Cloudera Manager and Managed Service Datastores. Install and configure an external database for services or Cloudera Management Service roles using the instructions in External Databases for Oozie Server, Sqoop Server, Activity Monitor, Reports Manager, Hive Metastore Server, Sentry Server, Cloudera Navigator Audit Server, and Cloudera Navigator Metadata Server.
Cloudera Manager also requires a database. Prepare the Cloudera Manager Server database as described in Preparing a Cloudera Manager Server External Database.
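For example, if you use MySQL for the Cloudera Manager Server database, the preparation might look like the following sketch. The database name scm, user scm, and password scm_password are assumptions; substitute the values you actually configure:
$ mysql -u root -p -e "CREATE DATABASE scm DEFAULT CHARACTER SET utf8;"
$ mysql -u root -p -e "GRANT ALL ON scm.* TO 'scm'@'%' IDENTIFIED BY 'scm_password';"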
Install the Cloudera Manager Server and Agents
On each host in the cluster, copy the Cloudera Manager tarball, create the installation directory, and extract the tarball into it:
$ sudo mkdir /opt/cloudera-manager
$ sudo tar xzf cloudera-manager*.tar.gz -C /opt/cloudera-manager
The files are extracted to a subdirectory named according to the Cloudera Manager version being extracted. For example, files could be extracted to /opt/cloudera-manager/cm-5.0/. This full path is needed in later steps and is referred to as the tarball_root directory.
Perform Configuration Required by Single User Mode
If you are creating a Cloudera Manager deployment that employs single user mode, perform the configuration steps described in Single User Mode Requirements.
Create Users
The Cloudera Manager Server and managed services require a user account to complete tasks. When installing Cloudera Manager from tarballs, you must create this user account manually on all hosts. Because Cloudera Manager Server and managed services are configured to use the user account cloudera-scm by default, creating a user with this name is the simplest approach. The created user is used automatically after installation is complete.
$ sudo useradd --system --home=/opt/cloudera-manager/cm-5.6.0/run/cloudera-scm-server --no-create-home --shell=/bin/false --comment "Cloudera SCM User" cloudera-scm
Ensure the --home argument path matches your environment. This argument varies according to where you place the tarball, and the version number varies among releases. For example, the --home location could be /opt/cm-5.6.0/run/cloudera-scm-server.
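If you are installing on many hosts, you can create the user on all of them from one shell loop; a minimal sketch, assuming a file hosts.txt that lists one host name per line and SSH access with passwordless sudo:
$ for host in $(cat hosts.txt); do ssh "$host" 'sudo useradd --system --home=/opt/cloudera-manager/cm-5.6.0/run/cloudera-scm-server --no-create-home --shell=/bin/false --comment "Cloudera SCM User" cloudera-scm'; done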
Create the Cloudera Manager Server Local Data Storage Directory
- Create the following directory: /var/lib/cloudera-scm-server.
- Change the owner of the directory so that the cloudera-scm user and group have ownership of the directory. For example:
$ sudo mkdir /var/lib/cloudera-scm-server
$ sudo chown cloudera-scm:cloudera-scm /var/lib/cloudera-scm-server
Configure Cloudera Manager Agents
- On every Cloudera Manager Agent host, configure the Cloudera Manager Agent to point to the Cloudera Manager Server by setting the following properties in the tarball_root/etc/cloudera-scm-agent/config.ini configuration file:
- server_host - Name of the host where Cloudera Manager Server is running.
- server_port - Port on the host where Cloudera Manager Server is running.
- By default, a tarball installation has a var subdirectory where state is stored. In a non-tarball installation, state is stored in /var. Cloudera recommends that you reconfigure the tarball installation to use an external directory as the /var equivalent (/var or any other directory outside the tarball) so that when you upgrade Cloudera Manager, the new tarball installation can access this state. Configure the installation to use an external directory for storing state by editing tarball_root/etc/default/cloudera-scm-agent and setting the CMF_VAR variable to the location of the /var equivalent. If you do not reuse the state directory between different tarball installations, duplicate Cloudera Manager Agent entries can occur in the Cloudera Manager database.
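For example, on an Agent host you might set the Server location and relocate the state directory as follows; cm-server.example.com and /var/lib/cloudera-scm-agent-state are assumptions, 7182 is the default Agent-to-Server port, and tarball_root must be replaced with your actual extraction path:
$ sudo sed -i 's/^server_host=.*/server_host=cm-server.example.com/' tarball_root/etc/cloudera-scm-agent/config.ini
$ sudo sed -i 's/^server_port=.*/server_port=7182/' tarball_root/etc/cloudera-scm-agent/config.ini
$ sudo mkdir -p /var/lib/cloudera-scm-agent-state
$ echo 'CMF_VAR=/var/lib/cloudera-scm-agent-state' | sudo tee -a tarball_root/etc/default/cloudera-scm-agent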
Configuring for a Custom Cloudera Manager User and Custom Directories
If you are using a custom Cloudera Manager user or custom directories, the following default directories are affected:
- /var/log/cloudera-scm-headlamp
- /var/log/cloudera-scm-firehose
- /var/log/cloudera-scm-alertpublisher
- /var/log/cloudera-scm-eventserver
- /var/lib/cloudera-scm-headlamp
- /var/lib/cloudera-scm-firehose
- /var/lib/cloudera-scm-alertpublisher
- /var/lib/cloudera-scm-eventserver
- /var/lib/cloudera-scm-server
- Change ownership of existing directories:
- Use the chown command to change ownership of all existing directories to the Cloudera Manager user. If the Cloudera Manager username and group are cloudera-scm, to change the ownership of the headlamp log directory, you issue a command similar to the following:
$ sudo chown -R cloudera-scm:cloudera-scm /var/log/cloudera-scm-headlamp
- Use alternate directories:
- If the directories you plan to use do not exist, create them. For example, to create /var/cm_logs/cloudera-scm-headlamp for use by the cloudera-scm user, you can use the following commands:
mkdir /var/cm_logs/cloudera-scm-headlamp
chown cloudera-scm /var/cm_logs/cloudera-scm-headlamp
- Connect to the Cloudera Manager Admin Console.
- Select the Cloudera Management Service.
- Click the Configuration tab.
- Enter a term in the Search field to find the settings to be changed. For example, you might enter /var or directory.
- Update each value with the new locations for Cloudera Manager to use.
- Click Save Changes to commit the changes.
Create Parcel Directories
- On the Cloudera Manager Server host, create a parcel repository directory:
$ sudo mkdir -p /opt/cloudera/parcel-repo
- Change the directory ownership to be the username you are using to run Cloudera Manager:
$ sudo chown username:groupname /opt/cloudera/parcel-repo
where username and groupname are the user and group names (respectively) you are using to run Cloudera Manager. For example, if you use the default username cloudera-scm, you would run the command:
$ sudo chown cloudera-scm:cloudera-scm /opt/cloudera/parcel-repo
- On each cluster host, create a parcels directory:
$ sudo mkdir -p /opt/cloudera/parcels
- Change the directory ownership to be the username you are using to run Cloudera Manager:
$ sudo chown username:groupname /opt/cloudera/parcels
where username and groupname are the user and group names (respectively) you are using to run Cloudera Manager. For example, if you use the default username cloudera-scm, you would run the command:
$ sudo chown cloudera-scm:cloudera-scm /opt/cloudera/parcels
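Because the parcels directory must be created on every cluster host, you may want to script the last two steps; a sketch, assuming a hosts.txt file listing the cluster hosts and the default cloudera-scm user:
$ for host in $(cat hosts.txt); do ssh "$host" 'sudo mkdir -p /opt/cloudera/parcels && sudo chown cloudera-scm:cloudera-scm /opt/cloudera/parcels'; done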
Start the Cloudera Manager Server
- As root:
$ sudo tarball_root/etc/init.d/cloudera-scm-server start
- As another user. If you run as another user, ensure the user you created for Cloudera Manager owns the location to which you extracted the tarball, including the newly created database files. If you followed the earlier examples and created the directory /opt/cloudera-manager and the user cloudera-scm, you could use the following command to change ownership of the directory:
$ sudo chown -R cloudera-scm:cloudera-scm /opt/cloudera-manager
Once you have established ownership of directory locations, you can start Cloudera Manager Server using the user account you chose. For example, you might run the Cloudera Manager Server as cloudera-service. In this case, you have the following options:
- Run the following command:
$ sudo -u cloudera-service tarball_root/etc/init.d/cloudera-scm-server start
- Edit the configuration files so the script internally changes the user. Then run the script as root:
- Remove the following line from tarball_root/etc/default/cloudera-scm-server:
export CMF_SUDO_CMD=" "
- Change the user and group in tarball_root/etc/init.d/cloudera-scm-server to the user you want the server to run as. For example, to run as cloudera-service, change the user and group as follows:
USER=cloudera-service
GROUP=cloudera-service
- Run the server script as root:
$ sudo tarball_root/etc/init.d/cloudera-scm-server start
- To start the Cloudera Manager Server automatically after a reboot:
- Run the following commands on the Cloudera Manager Server host:
- RHEL-compatible and SLES (only RHEL is supported for DSSD D5 DataNodes)
$ cp tarball_root/etc/init.d/cloudera-scm-server /etc/init.d/cloudera-scm-server
$ chkconfig cloudera-scm-server on
- Debian/Ubuntu (not supported for DSSD D5 DataNodes)
$ cp tarball_root/etc/init.d/cloudera-scm-server /etc/init.d/cloudera-scm-server
$ update-rc.d cloudera-scm-server defaults
- On the Cloudera Manager Server host, open the /etc/init.d/cloudera-scm-server file and change the value of CMF_DEFAULTS from ${CMF_DEFAULTS:-/etc/default} to tarball_root/etc/default.
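The CMF_DEFAULTS change in the previous step can also be made non-interactively; a sketch, with tarball_root standing in for your actual extraction path:
$ sudo sed -i 's|${CMF_DEFAULTS:-/etc/default}|tarball_root/etc/default|' /etc/init.d/cloudera-scm-server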
If the Cloudera Manager Server does not start, see Troubleshooting Installation and Upgrade Problems.
Start the Cloudera Manager Agents
- To start the Cloudera Manager Agent, run this command on each Agent host:
$ sudo tarball_root/etc/init.d/cloudera-scm-agent start
When the Agent starts, it contacts the Cloudera Manager Server.
- If you are running single user mode, start the Cloudera Manager Agent using the user account you chose. For example, to run the Cloudera Manager Agent as cloudera-scm, you have the following options:
- Run the following command:
$ sudo -u cloudera-scm tarball_root/etc/init.d/cloudera-scm-agent start
- Edit the configuration files so the script internally changes the user, and then run the script as root:
- Remove the following line from tarball_root/etc/default/cloudera-scm-agent:
export CMF_SUDO_CMD=" "
- Change the user and group in tarball_root/etc/init.d/cloudera-scm-agent to the user you want the Agent to run as. For example, to run as cloudera-scm, change the user and group as follows:
USER=cloudera-scm
GROUP=cloudera-scm
- Run the Agent script as root:
$ sudo tarball_root/etc/init.d/cloudera-scm-agent start
- To start the Cloudera Manager Agents automatically after a reboot:
- Run the following commands on each Agent host:
- RHEL-compatible and SLES (only RHEL is supported for DSSD D5 DataNodes)
$ cp tarball_root/etc/init.d/cloudera-scm-agent /etc/init.d/cloudera-scm-agent
$ chkconfig cloudera-scm-agent on
- Debian/Ubuntu (not supported for DSSD D5 DataNodes)
$ cp tarball_root/etc/init.d/cloudera-scm-agent /etc/init.d/cloudera-scm-agent
$ update-rc.d cloudera-scm-agent defaults
- On each Agent, open the tarball_root/etc/init.d/cloudera-scm-agent file and change the value of CMF_DEFAULTS from ${CMF_DEFAULTS:-/etc/default} to tarball_root/etc/default.
Install Package Dependencies
When you install with tarballs and parcels, some services may require additional dependencies that are not provided by Cloudera. On each host, install the required packages for its operating system.
RHEL-compatible systems:
- bind-utils
- chkconfig
- cyrus-sasl-gssapi
- cyrus-sasl-plain
- fuse
- fuse-libs
- gcc
- httpd
- init-functions
- libxslt
- mod_ssl
- MySQL-python
- openssl
- openssl-devel
- perl
- portmap
- postgresql-server >= 8.4
- psmisc
- python >= 2.4.3-43
- python-devel >= 2.4.3-43
- python-psycopg2
- python-setuptools
- sed
- service
- sqlite
- swig
- useradd
- zlib
SLES systems:
- apache2
- bind-utils
- chkconfig
- cyrus-sasl-gssapi
- cyrus-sasl-plain
- fuse
- gcc
- libfuse2
- libxslt
- openssl
- openssl-devel
- perl
- portmap
- postgresql-server >= 8.4
- psmisc
- python >= 2.4.3-43
- python-devel >= 2.4.3-43
- python-mysql
- python-setuptools
- python-xml
- sed
- service
- sqlite
- swig
- useradd
- zlib
Debian/Ubuntu systems:
- ant
- apache2
- bash
- chkconfig
- debhelper (>= 7)
- fuse-utils | fuse
- gcc
- libfuse2
- libsasl2-modules
- libsasl2-modules-gssapi-mit
- libsqlite3-0
- libssl-dev
- libxslt1.1
- lsb-base
- make
- openssl
- perl
- postgresql-client@@PG_PKG_VERSION@@
- postgresql@@PG_PKG_VERSION@@
- psmisc
- python-dev (>=2.4)
- python-mysqldb
- python-psycopg2
- python-setuptools
- rpcbind
- sed
- swig
- useradd
- zlib1g
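For example, on a RHEL-compatible host (the only operating system supported for DSSD D5 DataNodes), most of the dependencies listed above can be installed in one pass. This is a sketch: exact package names vary by RHEL release (for example, portmap is replaced by rpcbind on newer releases), and items such as init-functions, service, and useradd are provided by base system packages rather than by packages of those names:
$ sudo yum install -y bind-utils chkconfig cyrus-sasl-gssapi cyrus-sasl-plain fuse fuse-libs gcc httpd libxslt mod_ssl MySQL-python openssl openssl-devel perl portmap postgresql-server psmisc python python-devel python-psycopg2 python-setuptools sed sqlite swig zlib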
Start and Log into the Cloudera Manager Admin Console
- Wait several minutes for the Cloudera Manager Server to start. To observe the startup process, run tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log on the Cloudera Manager Server host. If the Cloudera Manager Server does not start, see Troubleshooting Installation and Upgrade Problems.
- In a web browser, enter http://Server host:7180, where Server host is the fully qualified domain name or IP address of the host where the Cloudera Manager Server is running.
The login screen for Cloudera Manager Admin Console displays.
- Log into the Cloudera Manager Admin Console. The default credentials are Username: admin and Password: admin. Cloudera Manager does not support changing the admin username for the installed account. You can change the password using Cloudera Manager after you run the installation wizard. Although you cannot change the admin username, you can add a new user, assign administrative privileges to the new user, and then delete the default admin account.
- After logging in, the Cloudera Manager End User License Terms and Conditions page displays. Read the terms and conditions and then select Yes to accept them.
- Click Continue.
The Welcome to Cloudera Manager page displays.
Enable DSSD Mode and Configure Cloudera Manager for the DSSD D5
- Click the Cloudera Manager logo to open the Home page.
- Click Administration > Settings.
- Type DSSD in the Search box.
- Select the DSSD Mode property.
- Click Save Changes to commit the changes.
Cloudera Manager reconfigures the system for DSSD mode, which may take several minutes.
- Click the Cloudera Manager logo to open the Home page.
- Click Add Cluster to continue with the installation.
- The Cloudera Manager End User License Terms and Conditions page displays. Read the terms and conditions and then select Yes to accept them.
- Click Continue.
- The EMC Software License Agreement page displays. Read the terms and conditions and then select Yes to accept them.
- Click Continue.
The Welcome to Cloudera Manager page displays.
Choose Cloudera Manager Edition
From the Welcome to Cloudera Manager page, you can select the edition of Cloudera Manager to install and, optionally, install a license:
- Choose which edition to install:
- Cloudera Express, which does not require a license, but provides a limited set of features.
- Cloudera Enterprise Enterprise Data Hub Edition Trial, which does not require a license, but expires after 60 days and cannot be renewed.
- Cloudera Enterprise with one of the following license types:
- Basic Edition
- Flex Edition
- Enterprise Data Hub Edition
- If you elect Cloudera Enterprise, install a license:
- Click Upload License.
- Click the document icon to the left of the Select a License File text field.
- Go to the location of your license file, click the file, and click Open.
- Click Upload.
- Information is displayed indicating what the CDH installation includes. At this point, you can click the Support drop-down menu to access online Help or the Support Portal.
- Click Continue to proceed with the installation.
Choose Cloudera Manager Hosts
- Click the Currently Managed Hosts tab.
- Choose the hosts to add to the cluster.
- Click Continue.
The Cluster Installation Select Repository screen displays.
Install CDH and Managed Service Software
- Choose the CDH and managed service version:
- Choose the parcels to install. The choices depend on the repositories you have chosen; a repository can contain multiple parcels. Only the parcels for the latest supported service versions are configured by default. Select the following parcels:
- CDH 5
- DSSD version 1.2
- DSSD_SCR version 1.2 - This parcel enables short-circuit reads for HBase and Impala. Select this parcel even if you intend to disable short-circuit reads. (See DSSD D5 and Short-Circuit Reads.)
- Any additional parcels required for your deployment (for example, Accumulo, Spark, or Key Trustee).
You can add additional parcels for previous versions by specifying custom repositories. For example, you can find the locations of the previous CDH 5 parcels at https://archive.cloudera.com/cdh5/parcels/.
- To specify the parcel directory, specify the local parcel repository, add a parcel repository, or specify the properties of a proxy server through which parcels are downloaded, click the More Options button and do one or more of the following:
- Parcel Directory and Local Parcel Repository Path - Specify the location of parcels on cluster hosts and the Cloudera Manager Server host. If you change the default value for Parcel Directory and have already installed and started Cloudera Manager Agents, restart the Agents:
sudo service cloudera-scm-agent restart
- Parcel Repository - In the Remote Parcel Repository URLs field, click the button and enter the URL of the repository. The URL you specify is added to the list of repositories listed in the Configuring Cloudera Manager Server Parcel Settings page and a parcel is added to the list of parcels on the Select Repository page. If you have multiple repositories configured, you see all the unique parcels contained in all your repositories.
- Proxy Server - Specify the properties of a proxy server.
- Click OK.
- If you are using Cloudera Manager to install software, select the release of Cloudera Manager Agent. You can choose either the version that matches the Cloudera Manager Server you are currently using or specify a version in a custom repository. If you opted to use custom repositories for installation files, you can provide a GPG key URL that applies for all repositories.
- Click Continue. Cloudera Manager installs the CDH and managed service parcels. During parcel installation, progress is indicated for the phases of the parcel installation process in separate progress bars. If you are installing multiple parcels, you see progress bars for each parcel. When the Continue button at the bottom of the screen turns blue, the installation process is completed. Click Continue.
- Click Continue.
The Host Inspector runs to validate the installation and provides a summary of what it finds, including all the versions of the installed components. If the validation is successful, click Finish.
Add Services
- In the first page of the Add Services wizard, choose the combination of services to install and whether to install Cloudera Navigator:
- Select the combination of services to install:
- Core Hadoop - HDFS, YARN (includes MapReduce 2), ZooKeeper, Oozie, Hive, and Hue
- Core with HBase
- Core with Impala
- Core with Search
- Core with Spark
- All Services - HDFS, YARN (includes MapReduce 2), ZooKeeper, Oozie, Hive, Hue, HBase, Impala, Solr, Spark, and Key-Value Store Indexer
- Custom Services - Any combination of services.
- Some services depend on other services; for example, HBase requires HDFS and ZooKeeper. Cloudera Manager tracks dependencies and installs the correct combination of services.
- In a Cloudera Manager deployment of a CDH 4 cluster, the MapReduce service is the default MapReduce computation framework. Choose Custom Services to install YARN, or use the Add Service functionality to add YARN after installation completes.
- In a Cloudera Manager deployment of a CDH 5 cluster, the YARN service is the default MapReduce computation framework. Choose Custom Services to install MapReduce, or use the Add Service functionality to add MapReduce after installation completes.
- The Flume service can be added only after your cluster has been set up.
- If you have chosen Enterprise Data Hub Edition Trial or Cloudera Enterprise, optionally select the Include Cloudera Navigator checkbox to enable Cloudera Navigator. See Cloudera Navigator 2 Overview.
- Click Continue.
- Customize the assignment of role instances to hosts. The wizard evaluates the hardware configurations of the hosts to determine the best hosts for each role. The DataNode role is only assigned to hosts that are connected to the DSSD D5. The wizard assigns all worker roles to the same set of hosts to which the HDFS DataNode role is assigned. You can reassign role instances if necessary.
Click a field below a role to display a dialog box containing a list of hosts. If you click a field containing multiple hosts, you can also select All Hosts to assign the role to all hosts, or Custom to display the pageable hosts dialog box.
The following shortcuts for specifying hostname patterns are supported:
- Range of hostnames (without the domain portion). For example:
10.1.1.[1-4] matches 10.1.1.1, 10.1.1.2, 10.1.1.3, and 10.1.1.4
host[1-3].company.com matches host1.company.com, host2.company.com, and host3.company.com
host[07-10].company.com matches host07.company.com, host08.company.com, host09.company.com, and host10.company.com
- Rack name
Click the View By Host button for an overview of the role assignment by hostname ranges.
- When you are satisfied with the assignments, click Continue.
Configure Database Settings
- Enter the database host, database type, database name, username, and password for the database that you created when you set up the database.
- Click Test Connection to confirm that Cloudera Manager can communicate with the database using the information you have supplied. If the test succeeds in all cases, click Continue; otherwise, check and correct the information you have provided for the database and then try the test again. (For some services, if you are using the embedded database, you will see a message saying the database will be created at a later step in the installation process.)
The Review Changes screen displays.
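If Test Connection fails, it can help to verify connectivity from the Cloudera Manager Server host directly; a sketch, assuming a MySQL database host named db-host.example.com and a database user hive (substitute the host, user, and client appropriate for your databases):
$ mysql -h db-host.example.com -u hive -p -e "SELECT 1"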
Review and Finish the DSSD D5 Configuration
From the Cluster Setup Review Changes page:
- Review the configuration changes to be applied. Confirm the settings entered for file system paths. The file paths required vary based on the services to be installed. If you chose to add the Sqoop service, indicate whether to use the default Derby database or the embedded PostgreSQL database. If the latter, type the database name, host, and user credentials that you specified when you created the database.
The configuration properties that display on this page are somewhat different from those that display when configuring non-DSSD D5 DataNodes. Some properties, such as the DataNode directory, have been removed because they do not apply to a cluster that uses DSSD D5 DataNodes. Other properties, such as the Flood Volume Name, are specific to the DSSD D5 DataNode role.
- (Required) In the Flood Volume Name field, enter the name of the Flood Volume as configured in the DSSD D5 appliance. If you are deploying multiple DSSD D5 appliances, note that you must specify this property for each appliance using a Role Group.
- (Optional) If you are not using the entire capacity of the DSSD D5 for this cluster, set the Usable Capacity property. For most deployments, the default value (100 TB) is correct. See the EMC document DSSD Hadoop Plugin Installation Guide for more information on setting this property.
- (Optional) Set the value of the HDFS Block Size parameter. The default value for this parameter is 512 MB when in DSSD Mode. You may want to change this for some types of workloads. See Tuning the HDFS Block Size for DSSD Mode.
- Click Continue.
The wizard starts the services.
- When all of the services are started, click Continue.
You see a success message indicating that your cluster has been successfully started.
- Click Finish to proceed to the Cloudera Manager Admin Console Home Page.
- If you see a message indicating that you need to restart the Cloudera Management Service, restart it:
- Do one of the following:
- Select Clusters > Cloudera Management Service > Cloudera Management Service, and then select Actions > Restart.
- On the Home > Status tab, click the menu to the right of Cloudera Management Service and select Restart.
- Click Restart to confirm. The Command Details window shows the progress of stopping and then starting the roles.
- When Command completed with n/n successful subcommands appears, the task is complete. Click Close.
- Choose the HDFS service and click the Configuration tab, and then in the filter section select the DataNode scope to view the DSSD D5 DataNode-specific properties.
See the Cloudera Manager 5.8 Configuration Properties configuration reference for descriptions of these properties. See the EMC document DSSD Hadoop Plugin Installation Guide for information about setting these properties.
- (Recommended for best performance) Set the Libflood CPU ID property.
The value to use for this parameter should have been determined during the setup of the DSSD D5 appliance. See “Identify CPUs and NUMA Nodes” in the EMC document DSSD Hadoop Plugin Installation Guide. The value you set for this parameter can affect the performance of your cluster.
- (Optional) Set the following properties to tune the performance of your cluster:
- Libflood Command Queues
- Libflood Command Queue Depth
- (Optional) Set the Java heap size for the NameNode.
- Choose the HDFS service and click the Configuration tab.
- Type Java heap in the search box.
- Set the Java Heap Size of NameNode in Bytes parameter:
Cloudera Manager automatically sets the value of this parameter to 4 GB. (If there are not adequate resources in the cluster, Cloudera Manager may set a smaller value.) Cloudera recommends that you manually set the value of this parameter by calculating the number of HDFS blocks in the cluster and allocating 1 GB of Java heap for each 1 million HDFS blocks; see the sketch after these steps. For more information on HDFS block size and the DSSD D5, see Tuning the HDFS Block Size for DSSD Mode.
- Set the Java Heap Size of Secondary NameNode in Bytes parameter to the same value as the Java Heap Size of NameNode in Bytes parameter.
- Restart the NameNode:
- Go to the HDFS service and click the Instances tab.
- In the table of roles, select the NameNode (Active) and SecondaryNameNode role types.
- Select Actions for Selected > Restart.
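To apply the heap-sizing guideline above (1 GB of NameNode heap for each 1 million HDFS blocks), you can count the blocks currently stored in HDFS; a sketch using standard HDFS commands:
$ sudo -u hdfs hdfs fsck / | grep "Total blocks"
# Example: if fsck reports about 25,000,000 blocks, set the NameNode and
# Secondary NameNode heap to at least 25 GB each.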
(Optional) Disable Short Circuit Reads for HBase and Impala
Short-circuit reads are enabled for HBase and Impala by default. To disable short-circuit reads for use with DSSD D5 DataNodes:
- In the Cloudera Manager Admin Console, go to the HBase service and click the Configuration tab.
- Type “short” in the Search box.
A set of short-circuit read parameters for HBase display.
- Clear the Enable DSSD Short-Circuit Read property.
- Click Save Changes to commit the changes.
The Admin console indicates that there is a stale configuration.
- Restart the stale services as indicated. See Stale Configurations.
- In the Cloudera Manager Admin Console, go to the Impala service and click the Configuration tab.
- Type “short” in the Search box.
A set of short-circuit read parameters for Impala display.
- Clear the Enable DSSD Short-Circuit Read property.
- Click Save Changes to commit the changes.
The Admin console now indicates that there is a stale configuration.
- Restart the stale services as indicated. See Stale Configurations.
(Optional) Change the Cloudera Manager User
- Connect to the Cloudera Manager Admin Console.
- Do one of the following:
- Select Clusters > Cloudera Management Service > Cloudera Management Service.
- On the Home > Status tab, in the Cloudera Management Service table, click the Cloudera Management Service link.
- Click the Configuration tab.
- Use the search box to find the property to change. For example, you might enter "system" to find the System User and System Group properties.
- Make any changes required to the System User and System Group to ensure Cloudera Manager uses the proper user accounts.
- Click Save Changes.
- Start the Cloudera Management Service roles.
Change the Default Administrator Password
- Click the logged-in username at the far right of the top navigation bar and select Change Password.
- Enter the current password and a new password twice, and then click OK.
Configure Oozie Data Purge Settings
If you added an Oozie service, you can change your Oozie configuration to control when data is purged to improve performance, cut down on database disk usage, or to keep the history for a longer period of time. Limiting the size of the Oozie database can also improve performance during upgrades. See Configuring Oozie Data Purge Settings Using Cloudera Manager.
(Optional) Install Multiple DSSD D5 Appliances in a Cluster
To increase capacity and performance, you can configure a cluster that uses multiple DSSD D5 storage appliances. You configure the cluster by assigning all hosts connected to a DSSD D5 appliance to a single "rack" and selecting one of three modes that provide the policies the NameNode uses to satisfy the configured replication factor. If you are configuring only a single DSSD D5 appliance, skip this section.
You can also move hosts between appliances. See Moving Existing Hosts to a New DSSD D5.
- Stop the HDFS service. Go to the HDFS service and select Actions > Stop.
- Assign the hosts attached to each DSSD D5 to a single rack ID. All hosts attached to a D5 should have the same rack assignment and each DSSD D5 should have a unique rack ID. See Specifying Racks for Hosts.
- Go to the HDFS service, select the Configuration tab, and search for the Block Replica Placement Policy property.
- Set the value of the Block Replica Placement Policy property to one of the following values:
- HDFS Default
- Places the first replica on the node where the client process writing the block resides, the second replica on a randomly-chosen remote rack, and a third on a randomly-chosen host in the same remote rack (assuming a replication factor of 3). This ordering is fixed.
- Maximize Capacity
- Places all replicas on the same rack and uses all the capacity of the DSSD D5 for HDFS. If there are fewer DataNode hosts than the configured replication factor, blocks are under-replicated. To avoid under-replication, make sure that there are more DataNodes than the replication factor.
- Maximize Availability
- Places replicas in as many racks as needed to meet the configured replication factor. After replicas have been placed on all available racks, additional replicas are placed randomly across the available racks. If there are fewer DataNode hosts than the configured replication factor, blocks are under-replicated. To avoid under-replication, make sure that there are more DataNodes than the replication factor.
- Perform a Rolling Restart on the cluster: from the cluster Actions menu, select Rolling Restart.
Test the Installation
You can test the installation following the instructions in Testing the Installation.
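In addition to those instructions, a quick command-line smoke test might look like the following sketch. It assumes the standard CDH parcel location and a user with permission to write to HDFS; the block size printed by the first command should match the DSSD Mode default (536870912 bytes = 512 MB) unless you changed it:
$ hdfs getconf -confKey dfs.blocksize
$ hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 100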