2. Resolving Ambari Install and Setup Problems

Try the recommended solution for each of the following problems.

 2.1. Problem: Browser crashed before Install Wizard completes

Your browser crashes or you accidentally close your browser before the Install Wizard completes.

 2.1.1. Solution

The response to a browser closure depends on where you are in the process:

  • The browser closes before you press the Deploy button.

    Re-launch the same browser and continue the install process. Using a different browser forces you to re-start the entire process.

  • The browser closes after you press Deploy, while or after the Install, Start, and Test screen opens.

    Re-launch the same browser and continue the process, or log in again using a different browser. When the Install, Start, and Test screen displays, proceed.

 2.2. Problem: Install Wizard reports that the cluster install has failed

The Install, Start, and Test screen reports that the cluster install has failed.

 2.2.1. Solution

The response to a report of install failure depends on the cause of the failure:

  • The failure is due to intermittent network connection errors during software package installs.

    Use the Retry button on the Install, Start, and Test screen.

  • The failure is due to misconfiguration or other setup errors.

    1. Use the left navigation bar to go back to the appropriate screen. For example, Customize Services.

    2. Make your changes.

    3. Continue in the normal way.

  • The failure occurs during the start/test sequence.

    1. Click Next and Complete, then proceed to the Monitoring Dashboard.

    2. Use the Services View to make your changes.

    3. Re-start the service using Service Actions.

  • The failure is due to something else.

    1. Open an SSH connection to the Ambari Server host.

    2. Clear the database. At the command line, type:

      ambari-server reset

    3. Clear your browser cache.

    4. Re-run the Install Wizard.

 2.3. Problem: Ambari Agents May Fail to Register with Ambari Server

When deploying HDP using Ambari 1.4.x or later on RHEL/CentOS 6.5, Ambari Agents may fail to register with the Ambari Server. Click the “Failed” link on the Confirm Hosts page in the Cluster Install wizard to display the Agent logs. The following log entry indicates that the SSL connection between the Agent and the Server failed during registration:

INFO 2014-04-02 04:25:22,669 NetUtil.py:55 - Failed to connect to https://{ambari-server}:8440/cert/ca due to [Errno 1] _ssl.c:492: error:100AE081:elliptic curve routines:EC_GROUP_new_by_curve_name:unknown group

For more detailed information about this OpenSSL issue, see https://bugzilla.redhat.com/show_bug.cgi?id=1025598

 2.3.1. Solution

This error occurs when the OpenSSL library installed on the Agent host is outdated (1.0.1 build 15). Upgrade the OpenSSL library on each affected host:

  1. Check the OpenSSL library version installed on your host(s):

    rpm -qa | grep openssl
    openssl-1.0.1e-15.el6.x86_64

  2. If the output reads openssl-1.0.1e-15.el6.x86_64 (1.0.1 build 15), you must upgrade the OpenSSL library. To upgrade the OpenSSL library, run the following command:

    yum upgrade openssl

  3. Verify you have the newer version of OpenSSL (1.0.1 build 16):

    rpm -qa | grep openssl
    openssl-1.0.1e-16.el6.x86_64

  4. Restart Ambari Agent(s) and click Retry -> Failed in the wizard user interface.
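The version check in steps 1 through 3 can be sketched as a small shell helper. The package strings come from the example output above; the helper name `needs_openssl_upgrade` is hypothetical, not an Ambari command:

```shell
#!/bin/sh
# Hypothetical helper: given an installed openssl package string (as printed
# by `rpm -qa | grep openssl`), decide whether it is older than the fixed
# build 16 and therefore needs `yum upgrade openssl`.
needs_openssl_upgrade() {
  # Extract the release build number, e.g. "15" from openssl-1.0.1e-15.el6.x86_64
  build=$(printf '%s\n' "$1" | sed -n 's/^openssl-1\.0\.1e-\([0-9][0-9]*\)\..*$/\1/p')
  [ -n "$build" ] && [ "$build" -lt 16 ]
}

if needs_openssl_upgrade "openssl-1.0.1e-15.el6.x86_64"; then
  echo "upgrade required"    # in practice: yum upgrade openssl, then restart the Agents
else
  echo "openssl is current"
fi
```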

 2.4. Problem: The “yum install ambari-server” Command Fails

You are unable to get the initial install command to run.

 2.4.1. Solution

You may have incompatible versions of some software components in your environment. See Meet Minimum System Requirements in Installing HDP Using Ambari for more information, then make any necessary changes.
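One quick way to spot an incompatible component is to compare its installed version against the documented minimum. A minimal sketch, assuming GNU `sort -V` is available on the host; the `version_ge` helper and the version strings are illustrative:

```shell
#!/bin/sh
# Hypothetical helper: true when version $1 >= version $2, comparing dotted
# version strings field by field via GNU sort -V.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example: check an installed version string against a documented minimum.
if version_ge "2.6.8" "2.6"; then
  echo "meets the minimum"
else
  echo "upgrade this component before re-running yum install ambari-server"
fi
```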

 2.5. Problem: HDFS Smoke Test Fails

If your DataNodes are incorrectly configured, the smoke tests fail and you get this error message in the DataNode logs:

org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException

 2.5.1. Solution

  1. Make sure that reverse DNS look-up is properly configured for all nodes in your cluster.

  2. Make sure you have the correct FQDNs when specifying the hosts for your cluster. Do not use IP addresses; they are not supported.

  3. Restart the installation process.
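A small sanity check in the spirit of steps 1 and 2: flag host entries that look like IP addresses before handing them to the wizard. The host names and the `is_fqdn` helper are illustrative, not part of Ambari; reverse DNS itself can be spot-checked per node with `getent hosts <name>`:

```shell
#!/bin/sh
# Illustrative check: Ambari requires FQDNs, so reject anything that looks
# like a bare IPv4 address (only digits and dots).
is_fqdn() {
  case "$1" in
    *[!0-9.]*) return 0 ;;   # contains letters or other chars: treat as a name
    *)         return 1 ;;   # only digits and dots: looks like an IPv4 address
  esac
}

for h in node1.example.com 10.0.0.5; do
  if is_fqdn "$h"; then
    echo "$h: ok"
  else
    echo "$h: use the FQDN, not an IP address"
  fi
done
```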

 2.6. Problem: yum Fails on Free Disk Space Check

If you boot your Hadoop DataNodes with/as a ramdisk, you must disable the free space check for yum before doing the install. If you do not disable the free space check, yum will fail with the following error:

Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install unzip' returned 1.
Error Downloading Packages:
  unzip-6.0-1.el6.x86_64: Insufficient space in download directory /var/cache/yum/x86_64/6/base/packages
    * free 0
    * needed 149 k

 2.6.1. Solution

To disable the free space check, update the DataNode image with a directive in /etc/yum.conf:

diskspacecheck=0
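As a sketch, the directive can be added idempotently when building the DataNode image. The snippet below demonstrates the edit on a temporary copy rather than the real /etc/yum.conf:

```shell
#!/bin/sh
# Sketch: append diskspacecheck=0 under the [main] section of yum.conf if it
# is not already present. A temp copy stands in for /etc/yum.conf here.
conf=$(mktemp)
printf '[main]\ncachedir=/var/cache/yum\n' > "$conf"

# Only append when the directive is missing, so re-running is harmless.
grep -q '^diskspacecheck=0' "$conf" || echo 'diskspacecheck=0' >> "$conf"

grep '^diskspacecheck' "$conf"   # prints: diskspacecheck=0
```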

