3. Resolving Cluster Deployment Problems

Try the recommended solution for each of the following problems:

 3.1. Problem: YARN ATS Component May Fail to Start When Installing HDP 2.1.3 with Ambari

If you install HDP 2.1.3 using Ambari, you must change the following YARN configuration property.

 3.1.1. Solution:

Browse to Ambari Web > Services > YARN > Configs and modify the following property:

yarn.timeline-service.store-class=org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore
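
If you prefer to make the change from the command line, Ambari includes a configs.sh helper script on the Ambari Server host. The sketch below is only an illustration: it assumes the script is located at /var/lib/ambari-server/resources/scripts/configs.sh, that the default admin/admin credentials are in use, and that the cluster is named MyCluster.

# Sketch: set the property in the yarn-site configuration through Ambari
# (adjust the host, credentials, and cluster name for your environment)
cd /var/lib/ambari-server/resources/scripts
./configs.sh -u admin -p admin set localhost MyCluster yarn-site \
  "yarn.timeline-service.store-class" \
  "org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore"

Restart YARN from Ambari Web after changing the property so that the new value takes effect.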

 3.2. Problem: Trouble Starting Ambari on System Reboot

If you reboot your cluster, you must restart the Ambari Server and all the Ambari Agents manually.

 3.2.1. Solution:

Log in to each machine in your cluster separately:

  1. On the Ambari Server host machine:

    ambari-server start
  2. On each host in your cluster:

    ambari-agent start
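
To confirm that both processes are running again, or to avoid the manual restart after future reboots, you can use the init scripts installed by the Ambari packages. The chkconfig commands below are a sketch that assumes a RHEL/CentOS 6-style init system; adjust them for your distribution.

# Verify that the daemons came back up
ambari-server status
ambari-agent status

# Optionally register the init scripts so they start automatically at boot
# (RHEL/CentOS 6 syntax)
chkconfig ambari-server on    # on the Ambari Server host
chkconfig ambari-agent on     # on every cluster host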

 3.3. Problem: Metrics and Host information display incorrectly in Ambari Web

Charts appear incorrectly or not at all, even though the data is available in the native Ganglia interface, or host health status displays incorrectly.

 3.3.1. Solution:

The clocks on all hosts in your cluster, and on the machine from which you browse to Ambari Web, must be synchronized with each other. The easiest way to ensure this is to enable NTP.
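
For example, on a RHEL/CentOS 6 host you might enable and verify NTP as shown below; the commands are a sketch and differ on other operating systems.

# Enable and start the NTP daemon so host clocks stay synchronized (RHEL/CentOS 6 syntax)
chkconfig ntpd on
service ntpd start

# Confirm that the host is synchronizing against an NTP server
ntpstat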

 3.4. Problem: On SUSE 11 Ambari Agent crashes within the first 24 hours

SUSE 11 ships with Python version 2.6.0-8.12.2, which contains a known defect that causes this crash.

 3.4.1. Solution:

Upgrade to Python version 2.6.8-0.15.1.
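
The sketch below shows one way to check and upgrade the package with zypper on SUSE 11; the package version that zypper offers depends on your configured update repositories.

# Check the currently installed Python package version
rpm -q python

# Upgrade Python from the configured SUSE update repositories,
# then restart the Agent so it runs on the new interpreter
zypper update python
ambari-agent restart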

 3.5. Problem: Attempting to Start HBase REST server causes either REST server or Ambari Web to fail

Optionally, you can start the HBase REST server manually after the install process is complete. It can be started on any host that has the HBase Master or a Region Server installed. If you install the REST server on the same host as the Ambari Server, the HTTP ports will conflict.

 3.5.1. Solution

When you start the REST server, use the -p option to set a custom port. Use the following command to start the REST server:

/usr/lib/hbase/bin/hbase-daemon.sh start rest -p <custom_port_number>
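
For example, to run the REST server on port 8085 (an arbitrary example port chosen to avoid the conflict) and confirm that it responds:

# Start the REST server on a non-conflicting port (8085 is only an example)
/usr/lib/hbase/bin/hbase-daemon.sh start rest -p 8085

# Verify that the REST server answers; this request lists the HBase tables
curl http://localhost:8085/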

 3.6. Problem: Multiple Ambari Agent processes are running, causing re-register

On a cluster host, ps aux | grep ambari-agent shows more than one Agent process running. This causes Ambari Server to receive incorrect IDs from the host and forces the Agent to restart and re-register.

 3.6.1. Solution

On the affected host, kill the extra processes and restart the Agent; a command sketch follows these steps.

  1. Kill the Agent processes and remove the Agent PID file found at /var/run/ambari-agent/ambari-agent.pid.

  2. Restart the Agent process:

    ambari-agent start
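
The following command sequence is a sketch of those two steps on an affected host; it assumes the default PID file location shown above.

# Stop the Agent through the init script, then find and kill any leftover processes
ambari-agent stop
ps aux | grep [a]mbari-agent
kill -9 <leftover_pid>

# Remove the stale PID file and start a single Agent process
rm -f /var/run/ambari-agent/ambari-agent.pid
ambari-agent start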

 3.7. Problem: Some graphs do not show a complete hour of data until the cluster has been running for an hour

When a cluster is first started, some graphs, such as Services View -> HDFS and Services View -> MapReduce, do not plot a complete hour of data. Instead, they show data only for the length of time the service has been running. Other graphs display a complete hour of data.

 3.7.1. Solution

Let the cluster run. After an hour all graphs will show a complete hour of data.

 3.8. Problem: Ambari stops MySQL database during deployment, causing Ambari Server to crash

The Hive service uses MySQL Server by default. If you choose MySQL Server on the Ambari Server host as the managed database for Hive, Ambari stops this database during deployment and crashes.

 3.8.1. Solution

If you plan to use the default MySQL Server setup for Hive and also use MySQL Server for Ambari, make sure that the two MySQL Server instances are different.

If you plan to use the same MySQL Server for Hive and Ambari, make sure to choose the existing database option for Hive.

 3.9. Problem: Service Fails with Unknown Host Exception

The JVM networkaddress.cache.negative.ttl setting, which defaults to 10, may result in DNS lookup failures. Long-running or multiple queries on the JVM may fail. This occurs in Java 6, 7, and 8.

 3.9.1. Solution

Appropriate values for networkaddress.cache.negative.ttl depend on various system factors, including network traffic, cluster size, and resource availability. You can set Java VM options in an Ambari-installed cluster using the following procedure:

  1. Edit the template for the hadoop-env.sh file. Ambari deploys the template file on your cluster in the following location:

    /var/lib/ambari-server/resources/stacks/{stackName}/{stackVersion}/hooks/before-START/templates/hadoop-env.sh.j2

    where {stackName} and {stackVersion} refer to your specific stack name and version.

  2. Change the following line in the template to add options to all Hadoop processes, then save the file. A hedged example of the modified line appears after this procedure.

    export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true ${HADOOP_OPTS}"

  3. Restart Ambari server.

    ambari-server restart
  4. Restart the affected services using the Ambari Web UI.
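
As an illustration of step 2, the modified export line might look like the example below. The sun.net.inetaddr.ttl and sun.net.inetaddr.negative.ttl system properties control positive and negative DNS caching in Oracle/OpenJDK; the TTL values shown are assumptions that you should tune for your environment.

# Hedged example of the edited line in hadoop-env.sh.j2: cache successful DNS lookups
# for 60 seconds and failed lookups for 10 seconds (example values only)
export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true -Dsun.net.inetaddr.ttl=60 -Dsun.net.inetaddr.negative.ttl=10 ${HADOOP_OPTS}"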

