4. Known Issues

In this section:

  • Ambari does not support running or installing stacks on Ubuntu.

  • The component version information displayed by Ambari is based on the Ambari Stack definition. If you have applied patches to the Stack and to your software repository, the component version displayed by Ambari might differ from the version actually installed. A mismatch between patch versions has no functional impact on Ambari. If you have any questions about component versions, refer to the rpm version installed on the actual host.
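
    For example, to confirm the version actually installed on a host, you can query the package manager directly. This is only an illustrative check; adjust the grep pattern to the component in question:

    # List installed packages and filter for a component name (pattern is illustrative).
    rpm -qa | grep hadoop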

  • BUG-24234: Unable to start/stop services when using Oracle database for Ambari.

    Problem: If you are using Oracle for the Ambari database, performing a Start All/Stop All can lead to a scenario in which Ambari becomes unresponsive and the following ORA error is printed to the ambari-server log:

    08:54:51,320 ERROR [qtp1280560314-2070] ReadHandler:84 - Caught a runtime exception executing a query
    Local Exception Stack: 
    Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.4.0.v20120608-r11652): org.eclipse.persistence.exceptions.DatabaseException
    Internal Exception: java.sql.SQLSyntaxErrorException: ORA-01795: maximum number of expressions in a list is 1000 

    Workaround: Please upgrade to Ambari 1.6.1 and contact Hortonworks Support for a patch to apply.

  • BUG-18118: YARN alert related to Application Timeline Server displays after enabling security.

    Problem: On a SLES cluster, alert messages may appear, disappear, reappear, and repeat.

    Workaround: Stop Nagios using the Ambari UI. If this does not stop the behavior, check for a running Nagios process:

    ps aux | grep nagios

    If the process continues to run, kill the Nagios process using kill -9.
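
    As a one-line alternative, something like the following could locate and kill the process. This is a sketch only; the match pattern is an assumption, so confirm what it matches before killing:

    # Kill any process whose command line matches "nagios"; confirm the match with the ps command above first.
    pkill -9 -f nagios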

  • BUG-18094: AMBARI-5847: NodeManager processes are running on hosts that do not have the NodeManager component installed.

    Problem: After installing the HDP 2.x Stack with YARN and then rebooting a host, NodeManager processes appear to be running on hosts that do not have the NodeManager component installed. The init.d scripts included in the hadoop-yarn-nodemanager packages set chkconfig on by default, so these services auto-start on machine reboot. Rebooting the host therefore starts NodeManager processes.

    Workaround: Turn chkconfig off by executing the following commands on all hosts in the cluster:

    chkconfig --del hadoop-yarn-nodemanager
    chkconfig --del hadoop-yarn-proxyserver
    chkconfig --del hadoop-yarn-resourcemanager 
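
    To confirm that the change took effect, you can list the remaining chkconfig entries on each host (a minimal check; the grep pattern is illustrative):

    # Verify that the hadoop-yarn services are no longer registered for auto-start.
    chkconfig --list | grep hadoop-yarn
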
  • BUG-18073: AMBARI-5877: On CentOS 5, after upgrading Ambari Server from 1.5.1 to 1.6.0, an alert displays for the NameNode checkpoint item.

    Problem: The check_checkpoint_time.py script, referenced in the Nagios hadoop-commands.cfg configuration, is not compatible with Python 2.4, the default Python version on CentOS 5.

    Workaround: Complete the following steps:

    1. Log in to the Ambari Server host.

    2. In /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/NAGIOS/package/templates/hadoop-commands.cfg.j2, edit the following command (a scripted alternative appears after these steps):

      define command{
       command_name check_checkpoint_time
       command_line python $USER1$/check_checkpoint_time.py -H "$ARG1$" -p $ARG2$ -w $ARG3$ -c $ARG4$ -t $ARG5$ -x $ARG6$
      }

      to

      define command{
       command_name check_checkpoint_time
       command_line python2.6 $USER1$/check_checkpoint_time.py -H "$ARG1$" -p $ARG2$ -w $ARG3$ -c $ARG4$ -t $ARG5$ -x $ARG6$
      }
    3. Restart the Ambari Server:

      ambari-server restart
    4. Restart Nagios using the Ambari Web UI.
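
    As a scripted alternative to editing the template by hand in step 2, a sed command along these lines could apply the same change (a sketch only; back up the file and verify the edit before restarting):

    # Switch the check_checkpoint_time command from "python" to "python2.6"; -i.bak keeps a backup of the original template.
    sed -i.bak 's|python \$USER1\$/check_checkpoint_time.py|python2.6 $USER1$/check_checkpoint_time.py|' \
      /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/NAGIOS/package/templates/hadoop-commands.cfg.j2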

  • BUG-18061: AMBARI-5834: Nagios will not start after upgrading from Ambari 1.4.1 to Ambari 1.6.0.

    Problem: When using the HDP 2.x Stack, after upgrading from Ambari 1.4.1 to Ambari 1.6.0, the Nagios server does not start and displays the following error message:

    File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py", line 75, in _getattr_
    raise Fail( "Configuration parameter '"self.name "' was not found in configurations dictionary!")
    Fail: Configuration parameter 'dfs.namenode.checkpoint.txns' was not found in configurations dictionary!

    Ambari 1.4.1 did not have dfs.namenode.checkpoint.txns as a configuration property for the HDP 2.x stack.

    Workaround: Add the dfs.namenode.checkpoint.txns configuration property using Ambari Web:

    1. Browse to Services > HDFS > Config > Custom hdfs-site.xml.

    2. Add dfs.namenode.checkpoint.txns with the value 1000000 (or another appropriate value; 1000000 is the default that Ambari uses). A reference snippet appears after these steps.

    3. Save the configuration.

    4. Restart HDFS.

    5. Start Nagios server.
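
    For reference, the property added in step 2 corresponds to the following hdfs-site.xml entry (shown only to illustrate the name/value pair; add it through Ambari Web as described above):

    <property>
      <name>dfs.namenode.checkpoint.txns</name>
      <value>1000000</value>
    </property>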

  • BUG-10845: AMBARI-5878: When using Ambari 1.6.0 to deploy the HDP 1.3 Stack using Blueprints, JobTracker writes job summary information to an invalid location.

    Problem: When starting, the JobTracker prints a FileNotFoundException.

    Workaround: Using Blueprints, add the mapred_local_dir property to your global config.
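
    A minimal sketch of the relevant Blueprint fragment, assuming a hypothetical local directory of /hadoop/mapred (both the path and the surrounding structure are illustrative, not prescriptive):

    "configurations" : [
      {
        "global" : {
          "mapred_local_dir" : "/hadoop/mapred"
        }
      }
    ]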

  • BUG-18035: AMBARI-5879: Hive/Tez Jobs tab does not show job information.

    Problem: After deploying a cluster via Ambari Blueprints, jobs in the Jobs tab display "No Tez Information". Tez information is not available because the default yarn.timeline-service.webapp.address, yarn.timeline-service.webapp.https.address, and yarn.timeline-service.address settings include "0.0.0.0:port".

    Workaround: Using the Ambari Web UI, complete the following steps:

    1. Browse to Services > YARN > Configs.

    2. Replace 0.0.0.0 with the Application Timeline Server hostname in each of the following properties (see the example after this list):

      • yarn.timeline-service.webapp.address

      • yarn.timeline-service.webapp.https.address

      • yarn.timeline-service.address
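
      For example, assuming a hypothetical Application Timeline Server host of ats.example.com and keeping the existing port values, one of the updated entries would look similar to the following:

      <property>
        <name>yarn.timeline-service.webapp.address</name>
        <!-- replace only the 0.0.0.0 host portion; keep the existing port -->
        <value>ats.example.com:port</value>
      </property>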

  • BUG-18006: AMBARI-5880: Decommission RegionServer warning message is hidden behind Background Operations pop-up window.

    Problem: For a cluster with HBase installed, when decommissioning an HBase RegionServer, the warning dialog is hidden behind the Background Operations window.

    Workaround: Close or move the Background Operations window.

  • BUG-17985: AMBARI-5881: Port configurability for Oozie fails on a secure cluster.

    Problem: The Oozie service check fails after enabling security on a cluster having customized port settings.

    Workaround: In Ambari Web, browse to Services > Oozie > Configs and modify the Oozie HTTP and Admin ports to 11000 and 11001, respectively.

  • BUG-17803: AMBARI-5882: After stopping Application Timeline Server, the Jobs page shows "Loading" message indefinitely.

    Problem: With a job running on a default, 2-node cluster, shut down the Application Timeline Server. Expected behavior: the linked job should not appear, and the message "YARN ATS Not Running" should display. Actual behavior: the Jobs page displays a "Loading" message indefinitely.

    Workaround: Reload (refresh) the page. The "YARN ATS Not Running" message appears, as appropriate.

  • BUG-17558: AMBARI-5700: Hive installation fails when deploying the HDP 1.3 Stack, with the following error:

    Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.ClassNotFoundException: org.apache.hcatalog.security.HdfsAuthorizationProvider
    	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:280)
    	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:670)
    	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:601)
    	at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.ClassNotFoundException: org.apache.hcatalog.security.HdfsAuthorizationProvider
    	at org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthorizeProviderManager(HiveUtils.java:342)
    	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:274)
    	... 7 more
    Caused by: java.lang.ClassNotFoundException: org.apache.hcatalog.security.HdfsAuthorizationProvider
    	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    	at java.security.AccessController.doPrivileged(Native Method)
    	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    	at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
    	at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
    	at java.lang.Class.forName0(Native Method)
    	at java.lang.Class.forName(Class.java:266)
    	at org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthorizeProviderManager(HiveUtils.java:335)
    	... 8 more

    Problem: The HIVE_AUX_JARS_PATH is:

    if [ "${HIVE_AUX_JARS_PATH}" != "" ]; then
      export HIVE_AUX_JARS_PATH=${HIVE_AUX_JARS_PATH}
    else
      export HIVE_AUX_JARS_PATH=/usr/lib/hcatalog/share/hcatalog/hcatalog-core.jar
    fi

    The HIVE_AUX_JARS_PATH should be:

    if [ "${HIVE_AUX_JARS_PATH}" != "" ]; then
      export HIVE_AUX_JARS_PATH=/usr/lib/hcatalog/share/hcatalog/hcatalog-core.jar:${HIVE_AUX_JARS_PATH}
    else
      export HIVE_AUX_JARS_PATH=/usr/lib/hcatalog/share/hcatalog/hcatalog-core.jar
    fi

    Workaround: Implement one of the following solutions (a shell sketch of the second option follows this list):

    • Create the directory $HIVE_HOME/auxlib and copy into it all the jars you would have specified in HIVE_AUX_JARS_PATH, or

    • Create a directory containing all the jars you would have specified in HIVE_AUX_JARS_PATH, including hcatalog-core.jar, and set HIVE_AUX_JARS_PATH to that directory location.
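
    A minimal sketch of the second option, assuming a hypothetical directory of /usr/lib/hive/auxjars (add any other auxiliary jars you need alongside hcatalog-core.jar):

    # Create a directory for auxiliary jars, copy hcatalog-core.jar into it, and point HIVE_AUX_JARS_PATH at it.
    mkdir -p /usr/lib/hive/auxjars
    cp /usr/lib/hcatalog/share/hcatalog/hcatalog-core.jar /usr/lib/hive/auxjars/
    export HIVE_AUX_JARS_PATH=/usr/lib/hive/auxjars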

  • BUG-17511: AMBARI-5883: Ambari installs, but does not deploy, the additional .jar files in oozie.war needed to support HDP-1 Oozie-Hive workflows.

    Problem: Manual configuration is required to deploy the additional .jar files post-install.

    Workaround: After installing or upgrading to Ambari 1.6.0, use Ambari Web > Services > Config to add the following property to the oozie-site.xml configuration:

    <property>
      <name>oozie.credentials.credentialclasses</name>
      <value>hcat=org.apache.oozie.action.hadoop.HCatCredentials</value>
    </property>

  • BUG-17280: AMBARI-5884: Slave components in decommissioned state restart during Service restarts.

    Problem: Decommission a DataNode and then restart the HDFS service. The decommissioned DataNode will restart.

    Workaround: None. This is expected behavior.

  • BUG-16556: AMBARI-5435: "Connection refused" errors appear in the YARN application logs when the Timeline service is not started but yarn-site.xml has the timeline-related configuration enabled.

    Problem: The Application Timeline Server (ATS) is turned off in secure clusters installed by Ambari, but in yarn-site.xml the ATS configuration is set to true. As a result, "Connection refused" errors appear in the YARN application logs.

    Workaround: In Ambari Web, browse to Services > YARN > Configs. In the yarn-site.xml section, set the following property to false:

    <property>
      <name>yarn.timeline-service.enabled</name>
      <value>false</value>
    </property>

  • BUG-16534: Quick links to the Oozie Web UI and Falcon Web UI do not work after reconfiguring the port for oozie.base.url.

    Description: This occurs because the Oozie HTTP port (11000) and Admin port (11001) cannot be changed via Ambari. Oozie uses 11001 as the default Admin port.

    Workaround: Reset the Oozie HTTP port and Admin port to 11000 and 11001, respectively.