3. Known Issues

Ambari 2.1.1 has the following known issues, scheduled for resolution in a future release. Also, refer to the Ambari Troubleshooting Guide for additional information.


Table 1.2. Ambari 2.1.1 Known Issues

SNMPv3 is not supported.

The SNMPv3 notification method is not supported.

  BUG-41331 Links in the 2.1.1 PDF documentation do not work. Use the working links from the 2.1.1 HTML documentation.


DataNode Fails to Install on RHEL/CentOS 7.

During cluster install, DataNode fails to install with the following error:

Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install snappy-devel' returned 1.
Error: Package: snappy-devel-1.0.5-1.el6.x86_64 (HDP-UTILS-
           Requires: snappy(x86-64) = 1.0.5-1.el6
           Installed: snappy-1.1.0-3.el7.x86_64 (@anaconda/7.1)
               snappy(x86-64) = 1.1.0-3.el7
           Available: snappy-1.0.5-1.el6.x86_64 (HDP-UTILS-
               snappy(x86-64) = 1.0.5-1.el6        

Hadoop requires a snappy-devel package version that is lower than the version already installed on the machine. Run the following commands on the host, then retry the install:

yum remove snappy
yum install snappy-devel            


Static view instances throw a NullPointerException when attempting to access the view.

If you use a static <instance> in your view, a NullPointerException is logged in ambari-server.log when you attempt to use the view. You must instead use a dynamic view instance. Remove the static <instance> from your view.xml.
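For illustration, a minimal view descriptor sketch. The view name, label, and version here are hypothetical placeholders; the point is that no static <instance> element is declared, so instances must be created dynamically through Ambari Admin:

```xml
<!-- Hypothetical view.xml sketch: name/label/version are placeholders. -->
<view>
  <name>MY_VIEW</name>
  <label>My View</label>
  <version>1.0.0</version>
  <!-- A static instance declaration like the following triggers the
       NullPointerException; remove it and create the instance in
       Ambari Admin instead:
  <instance>
    <name>INSTANCE_1</name>
  </instance>
  -->
</view>
```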


After upgrading from HDP 2.1 and restarting Ambari, the Admin > Stack and Versions > Versions tab does not show in Ambari Web.

After performing an upgrade from HDP 2.1 and restarting Ambari Server and the Agents, if you browse to Admin > Stack and Versions in Ambari Web, the Versions tab does not display. Give all Agent hosts in the cluster a chance to connect to Ambari Server by waiting for Ambari to show the Agent heartbeats as green, then refresh your browser.

AMBARI-12389 BUG-41040

After adding Falcon to your cluster, the Oozie configuration is not properly updated.

After adding Falcon to your cluster using the Add Service wizard, the Oozie configuration is not properly updated. After the wizard completes, add the required properties on Services > Oozie > Configs > Advanced > Custom oozie-site. The list of properties can be found here: https://github.com/apache/ambari/blob/branch-2.1/ambari-server/src/main/resources/common-services/FALCON/ Once the properties are added, restart Oozie and execute this command on the Oozie Server host:

su oozie -c '/usr/hdp/current/oozie-server/bin/oozie-setup.sh prepare-war'

Start Oozie.


Accumulo Master Start fails on secure cluster if pid and log directories for YARN and MapReduce2 are customized.

If you plan to use Accumulo and a Kerberos-enabled cluster, do not customize the pid and log directories for YARN or MapReduce2.

AMBARI-12412 BUG-41016

Storm has no metrics if service is installed via a Blueprint.

To restore metrics, browse to Services > Storm > Configs, add the following to storm-site, and then restart the Storm service.

[{'class': 'org.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsSink', 'parallelism.hint': 1}]

  BUG-40775 If Kerberos is disabled, Accumulo tracer fails to start.

If your cluster includes Accumulo and you disable Kerberos, the Accumulo tracer fails to start with the following error:

resource_management.core.exceptions.Fail: Execution of 'cat /var/lib/ambari
-agent/data/tmp/pass | ACCUMULO_CONF_DIR=/usr/hdp/current/accumulo
-tracer/conf/server /usr/hdp/current/accumulo-client/bin/accumulo shell -u root -f
/var/lib/ambari-agent/data/tmp/cmds' returned 1. Password: *****
2015-07-04 05:08:44,823 [trace.DistributedTrace] INFO : SpanReceiver
org.apache.accumulo.tracer.ZooTraceClient was loaded successfully.
2015-07-04 05:08:44,897 [shell.Shell] ERROR:
org.apache.accumulo.core.client.AccumuloSecurityException: Error BAD_CREDENTIALS 
for user root - Username or Password is Invalid

To correct this situation, run the following command on one of the hosts that has an Accumulo process:

ACCUMULO_CONF_DIR=/etc/accumulo/conf/server accumulo init --reset-security 
--user root

The command prompts for the root user's password. The password you enter must match the Accumulo root password in Ambari's configuration for Accumulo. Accumulo can then be started normally.

  BUG-40773 Kafka broker fails to start after disabling Kerberos security.

When enabling Kerberos, Kafka security configuration is set and ACLs are applied to all of Kafka's ZooKeeper nodes so that only Kafka brokers can modify entries in ZooKeeper. Before disabling Kerberos for Kafka, you must set all the Kafka ZooKeeper entries to world readable/writable. Log in as user "kafka" on one of the Kafka nodes:

kinit -k -t /etc/security/keytabs/kafka.service.keytab kafka/_HOST

where _HOST should be replaced by the hostname of that node. Run the following command to open the ZooKeeper shell:

/usr/hdp/current/kafka-broker/bin/zookeeper-shell.sh hostname:2181

where hostname should be replaced by one of the ZooKeeper nodes. Then run the following commands in the shell:

setAcl /brokers world:anyone:crdwa 
setAcl /config world:anyone:crdwa 
setAcl /controller world:anyone:crdwa 
setAcl /admin world:anyone:crdwa

If the above commands are not run prior to disabling Kerberos, the only option is to point the "zookeeper.connect" property at a new ZooKeeper root. This can be done by appending "/newroot" to the "zookeeper.connect" string. For example: "host1:port1,host2:port2,host3:port3/newroot"
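The four setAcl commands above can be scripted. A dry-run sketch, assuming the standard HDP kafka-broker layout (ZK_SHELL and ZK_HOST are placeholders for your installation):

```shell
# Dry run: print one zookeeper-shell invocation per Kafka znode.
# Drop the leading "echo" to execute for real. ZK_SHELL and ZK_HOST
# are assumptions about your installation; adjust as needed.
ZK_SHELL=/usr/hdp/current/kafka-broker/bin/zookeeper-shell.sh
ZK_HOST=zk-host:2181
for znode in /brokers /config /controller /admin; do
  echo "$ZK_SHELL $ZK_HOST setAcl $znode world:anyone:crdwa"
done
```

Run this before disabling Kerberos; afterward the ACLs cannot be changed without moving to a new ZooKeeper root.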

  BUG-40694 The Slider view is not supported on a cluster with SSL (wire encryption) enabled. Only use the Slider view on clusters without wire encryption enabled. If you must run Slider on a cluster with wire encryption enabled, contact Hortonworks support for further help.
AMBARI-12282 BUG-40615 When changing the JDK to Oracle JDK 1.8 during setup, the JDK is not automatically installed on all hosts. If you attempt to change the Ambari JDK setup (after having already configured the JDK) and choose Option 1 to automatically download and install Oracle JDK 1.8, the JDK is not automatically installed on all hosts. You must manually download and configure Oracle JDK 1.8 on all hosts to match that of the Ambari Server.
  BUG-40541 If there is a trailing slash in the Ranger External URL, the NameNode will fail to start. Remove the trailing slash from the External URL and start the NameNode.
AMBARI-12436 BUG-40481

Falcon Service Check may fail when performing Rolling Upgrade, with the following error:

2015-06-25 18:09:07,235 ERROR - [main:]
 ~ Failed to start ActiveMQ JMS Message Broker.
 Reason: java.io.IOException: Invalid location: 1:6763311, :
 java.lang.NegativeArraySizeException (BrokerService:528) 
 java.io.IOException: Invalid location: 1:6763311, :

This condition is rare.

When performing a Rolling Upgrade from HDP 2.2 to HDP 2.3 and Falcon Service Check fails with the above error, browse to the Falcon ActiveMQ data directory (specified in the Falcon properties file), remove the corrupted queues, and then stop and start the Falcon Server:

rm -rf ./localhost
cd /usr/hdp/current/falcon-server 
su -l <FALCON_USER> 

After switching RegionServer ports, Ambari will report RegionServers are live and dead.

HBase maintains the list of dead and live servers according to its own semantics. Normally, a new server coming up on the same port causes the old server to be removed from the dead server list. But because of the port change, it will stay in that list for ~2 hours. Even if the server does not come up at all, it is removed from the list after 2 hours. Ambari alerts based on that list until the RegionServers are removed from it by HBase.

AMBARI-12283 BUG-40300

After adding or deleting ZooKeeper Servers to an existing cluster, Service Check fails.

After adding or deleting ZooKeeper Servers in an existing cluster, Service Check fails due to conflicting ZooKeeper ids. Restart the ZooKeeper service to clear the ids.

AMBARI-12179 BUG-39646

When Wire Encryption is enabled, the Tez View cannot connect to ATS with a Local Cluster configuration.

If you configure the Tez View using the Local Cluster configuration, the view reads the "yarn.timeline-service.webapp.address" property to determine the ATS URL. When Wire Encryption is enabled, the view should read "yarn.timeline-service.webapp.https.address" instead. Because of this, the Tez View will not load the Tez job information and displays a "Connection Refused" error. If Wire Encryption is enabled, you cannot use the Local Cluster configuration option; you must manually enter the ATS Cluster configuration information when creating the view. Be sure to use the value of "yarn.timeline-service.webapp.https.address" for the YARN Timeline Server URL.
  BUG-38640 When running Ambari Server as non-root, kadmin cannot open its log file.

When running Ambari Server as non-root and enabling Kerberos, if kadmin fails to authenticate, you will see the following error in ambari-server.log because Ambari cannot access kadmind.log:

STDERR: Couldn't open log file /var/log/kadmind.log: Permission denied 
kadmin: GSS-API (or Kerberos) error while initializing kadmin interface

To avoid this error, be sure the kadmind.log file has 644 permissions.
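A minimal sketch of that permission fix, assuming the default MIT Kerberos log location:

```shell
# Assumed default kadmind log path; adjust if krb5 logs elsewhere.
KADMIND_LOG=/var/log/kadmind.log
# Guard so the command is a no-op on hosts without the file.
if [ -e "$KADMIND_LOG" ]; then
  chmod 644 "$KADMIND_LOG"
fi
```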


User Views (Files, Hive and Pig) fail to load when accessing a Kerberos-enabled cluster.

See the Ambari Views Guide for more information on configuring your cluster and the User Views to work with a Kerberos-enabled cluster, including setting-up Ambari Server for Kerberos and having Ambari Server as a Hadoop proxy user.
HADOOP-11764 BUG-33763 The YARN ATS server will not start if /tmp is mounted with the noexec option.

You must specify a different directory for LevelDB (which is embedded in the ATS component) to use for its temporary data. This can be done by adding the following to the hadoop-env.sh configuration:

export _JAVA_OPTIONS="${_JAVA_OPTIONS} -Djava.io.tmpdir=/tktest"

With a Kerberos-enabled cluster that includes Storm, in Ambari Web > Services > Storm, the Summary values for Slots, Tasks, Executors and Topologies show as "n/a". Ambari Server log also includes the following ERROR:

24 Mar 2015 13:32:41,288 ERROR [pool-2-thread-362] AppCookieManager:122 - SPNego authentication failed, cannot get hadoop.auth cookie for URL: http://c6402.ambari.apache.org:8744/api/

When Kerberos is enabled, the Storm API requires SPNEGO authentication. Refer to the Ambari Security Guide section on setting up Ambari for Kerberos to enable Ambari to authenticate against the Storm API via SPNEGO.

  BUG-33516 Ranger service cannot be installed in a cluster via Blueprints API.

You must first create your cluster (via Install Wizard or via Blueprints) and then add Ranger service to the cluster.


When using an Agent non-root configuration, if you attempt to register hosts automatically using SSH, the Agent registration will fail.

The option to automatically register hosts with SSH is not supported when using an Agent non-root configuration. You must manually register the Agents.

  BUG-32284 Adding client-only services does not automatically install component dependencies.

When adding client-only services to a cluster (using Add Service), Ambari does not automatically install dependent client components along with the newly added clients. On hosts where the client components need to be installed, browse to Hosts and open the Host Details page. Click + Add and select the client components to install on that host.

BUG-28245 Attempting to create a Slider app reusing a previous name throws an uncaught JS error. After creating (and deleting) a Slider app, attempting to create a Slider app again with the same name results in an uncaught error. The application does not show up in the Slider app list. Refresh your browser and the application will appear in the list.
AMBARI-12005 BUG-24902 Setting a long cluster name hangs Ambari.

If you attempt to rename a cluster to a string longer than 100 characters, Ambari Server will hang. Restart Ambari Server to clear the hang.