Known Issues
Ambari 2.1.2.1 has the following known issues, scheduled for resolution in a future release. Also, refer to the Ambari Troubleshooting Guide for additional information.
Table 1.4. Ambari 2.1.2.1 Known Issues
| Apache Jira | HWX Jira | Problem | Solution |
|---|---|---|---|
| | BUG-47666 | SNMPv3 is not supported. | The SNMPv3 notification method is not supported. |
| | BUG-47104 | Tez service check fails after enabling ResourceManager HA, following an upgrade to Ambari 2.1.2.1. The service check may fail with the errors "No tez jars found in configured locations. Ignoring for now. Errors may occur" and/or "Could not connect to AM, killing session via YARN". This happens because the Tez libraries tarball is not copied to the filesystem. | Install the Tez Client on the HistoryServer host. Important: ensure that the Tez Client component is installed on the host that has the MR HistoryServer. Restart the HistoryServer and it will upload the Tez libraries tarball. |
| | BUG-46885 | After moving the Oozie Server via Ambari, Oozie may not function properly if the service account name for the Oozie Server is not the default user "oozie". | If you are using a non-default Oozie service account name, go to Services > HDFS > Configs > Custom core-site and set hadoop.proxyuser.<OOZIE_USER>.hosts (where <OOZIE_USER> is the custom Oozie service account name) to the FQDN of the new host where the Oozie Server is running. Save the HDFS configuration and restart services as normal (see the core-site sketch after this table). |
| | BUG-45324 | When Ambari is using an Oracle database and you perform a manual HDP 2.2 -> 2.3 upgrade, the Set Current step of the upgrade fails. | You must copy the Oracle JDBC driver JAR into the Ambari Server library directory. Otherwise, you will see an error in ambari-server.log when running upgradestack: 29 Sep 2015 10:51:29,225 ERROR main DBAccessorImpl:102 - Error while creating database accessor java.lang.ClassNotFoundException: oracle.jdbc.driver.OracleDriver. After copying the Oracle JDBC driver JAR into place, re-run the upgradestack command (see the command sketch after this table). |
| | BUG-41331 | Links in the 2.1.2.1 PDF documentation do not work. | Use the working links in the 2.1.2.1 HTML documentation. |
| | BUG-41044 | After upgrading from HDP 2.1 and restarting Ambari, the Admin > Stack and Versions > Versions tab does not show in Ambari Web. | After performing an upgrade from HDP 2.1 and restarting Ambari Server and the Agents, if you browse to Admin > Stack and Versions in Ambari Web, the Versions tab does not display. Give all the Agent hosts in the cluster a chance to connect to Ambari Server by waiting for Ambari to show the Agent heartbeats as green, then refresh your browser. |
| AMBARI-12389 | BUG-41040 | After adding Falcon to your cluster using "Add Service", the Oozie configuration is not properly updated. | After completing the Add Service wizard, add the required properties on Services > Oozie > Configs > Advanced > Custom oozie-site. The list of properties can be found here: https://github.com/apache/ambari/blob/branch-2.1/ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/configuration/oozie-site.xml. Once the properties are added, restart Oozie, run the prepare-war command on the Oozie Server host (see the command sketch after this table), and then start Oozie. |
| AMBARI-12412 | BUG-41016 | Storm has no metrics if the service is installed via a Blueprint. | Browse to Services > Storm > Configs and add the topology.metrics.consumer.register and metrics.reporter.register properties to storm-site (see the storm-site sketch after this table for the exact values), then restart the Storm service. |
| | BUG-40773 | Kafka broker fails to start after disabling Kerberos security. | When Kerberos is enabled, the Kafka security configuration is set and all of the Kafka ZooKeeper nodes have ACLs that allow only Kafka brokers to modify entries in ZooKeeper. Before disabling Kerberos, you must set all of the Kafka ZooKeeper entries to be world readable/writable. To do so, log in as user "kafka" on one of the Kafka nodes using kinit with the Kafka service keytab (/etc/security/keytabs/kafka.service.keytab), open a ZooKeeper shell with /usr/hdp/current/kafka-broker/bin/zookeeper-shell.sh against one of the ZooKeeper nodes, and run setAcl world:anyone:crdwa on /brokers, /config, /controller, and /admin (see the command sketch after this table). If these commands are not run prior to disabling Kerberos, the only option is to set the zookeeper.connect property to a new ZooKeeper root by appending "/newroot" to the zookeeper.connect string, for example "host1:port1,host2:port2,host3:port3/newroot". |
| | BUG-40694 | The Slider view is not supported on a cluster with SSL (wire encryption) enabled. | Only use the Slider view on clusters without wire encryption enabled. If it is required to run Slider on a cluster with wire encryption enabled, please contact Hortonworks support for further help. |
| | BUG-40541 | If there is a trailing slash in the Ranger External URL, the NameNode will fail to start up. | Remove the trailing slash from the External URL and start up the NameNode. |
| AMBARI-12436 | BUG-40481 | Falcon Service Check may fail when performing a Rolling Upgrade, with the following error: 2015-06-25 18:09:07,235 ERROR - [main:] ~ Failed to start ActiveMQ JMS Message Broker. Reason: java.io.IOException: Invalid location: 1:6763311, : java.lang.NegativeArraySizeException (BrokerService:528) java.io.IOException: Invalid location: 1:6763311, : java.lang.NegativeArraySizeException at org.apache.kahadb.journal.DataFileAccessor.readRecord(DataFileAccessor.java:94). This condition is rare. | When performing a Rolling Upgrade from HDP 2.2 to HDP 2.3 and Falcon Service Check fails with the above error, browse to the Falcon ActiveMQ data directory (specified in the falcon properties file), remove the corrupted queues, and then stop and start the Falcon Server (see the command sketch after this table). |
| | BUG-40323 | After switching RegionServer ports, Ambari will report RegionServers as both live and dead. | HBase maintains the list of dead servers and live servers according to its own semantics. Normally, a new server coming up again on the same port causes the old server to be removed from the dead server list. Because of the port change, however, the old entry stays in that list for approximately 2 hours; if the server does not come back at all, it is still removed from the list after 2 hours. Ambari alerts based on that list until the RegionServers are removed from it by HBase. |
| AMBARI-12283 | BUG-40300 | After adding ZooKeeper Servers to, or deleting them from, an existing cluster, Service Check fails. | Service Check fails due to conflicting ZooKeeper ids. Restart the ZooKeeper service to clear the ids. |
| | BUG-28245 | Attempting to create a Slider app using the same name throws an uncaught JS error. | After creating (and deleting) a Slider app, attempting to create a Slider app again with the same name results in an uncaught error, and the application does not show up in the Slider app list. Refresh your browser and the application will be shown in the list table. |
| AMBARI-12005 | BUG-24902 | Setting a long cluster name hangs Ambari. | If you attempt to rename a cluster to a string longer than 100 characters, Ambari Server will hang. Restart Ambari Server to clear the hang. |
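
The sketches below collect the commands and configuration snippets referenced in the Solution column above. First, a minimal core-site view of the BUG-46885 proxy-user setting, assuming a custom Oozie service account named myoozie and a new Oozie Server host oozie-server.example.com (both placeholders); in practice the property is added through the Ambari UI (Services > HDFS > Configs > Custom core-site) rather than by editing the file directly:

```xml
<!-- Placeholder values: replace "myoozie" with your custom Oozie service
     account name and the value with the FQDN of the new Oozie Server host. -->
<property>
  <name>hadoop.proxyuser.myoozie.hosts</name>
  <value>oozie-server.example.com</value>
</property>
```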
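
For BUG-45324, a shell sketch of the driver copy and the upgradestack re-run. The /usr/lib/ambari-server directory and the ojdbc6.jar file name are assumptions about a typical installation, and the exact upgradestack arguments depend on your target stack; adjust all of these to your environment:

```bash
# Copy the Oracle JDBC driver JAR into the Ambari Server library directory
# (path and JAR name are typical values -- adjust to your installation).
cp /tmp/ojdbc6.jar /usr/lib/ambari-server/

# Re-run the step that previously failed with
# java.lang.ClassNotFoundException: oracle.jdbc.driver.OracleDriver
# (stack argument shown for an HDP 2.3 target; verify it for your upgrade).
ambari-server upgradestack HDP-2.3
```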
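
The prepare-war step for BUG-41040, run on the Oozie Server host once the custom oozie-site properties are in place; the command is the one given in the Solution column, and the Oozie restart and start are assumed to be done through Ambari:

```bash
# Regenerate the Oozie WAR after adding the Falcon-related oozie-site
# properties and restarting Oozie; run this on the Oozie Server host.
su oozie -c '/usr/hdp/current/oozie-server/bin/oozie-setup.sh prepare-war'
```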
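
The two storm-site entries for BUG-41016, written out as key=value pairs; in practice they are added through the Ambari UI (Services > Storm > Configs) rather than by editing a file:

```
topology.metrics.consumer.register=[{'class': 'org.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsSink', 'parallelism.hint': 1}]
metrics.reporter.register=org.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsReporter
```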
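
The ZooKeeper ACL reset for BUG-40773, run on one of the Kafka nodes before disabling Kerberos. The keytab path and commands are the ones from the Solution column; kafka-node1.example.com and zk-node1.example.com are placeholder host names for the local Kafka node and one of the ZooKeeper nodes:

```bash
# Authenticate as the kafka service user; the principal's host part must be
# the FQDN of the Kafka node you are logged in to.
kinit -k -t /etc/security/keytabs/kafka.service.keytab kafka/kafka-node1.example.com

# Open a ZooKeeper shell against one of the ZooKeeper nodes ...
/usr/hdp/current/kafka-broker/bin/zookeeper-shell.sh zk-node1.example.com:2181

# ... then, inside the shell, make the Kafka znodes world readable/writable.
setAcl /brokers world:anyone:crdwa
setAcl /config world:anyone:crdwa
setAcl /controller world:anyone:crdwa
setAcl /admin world:anyone:crdwa
```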
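
The cleanup sequence for BUG-40481, with <ACTIVEMQ_DATA_DIR> and <FALCON_USER> as placeholders for the ActiveMQ data directory from the falcon properties file and the Falcon service user:

```bash
# Remove the corrupted ActiveMQ queues.
cd <ACTIVEMQ_DATA_DIR>
rm -rf ./localhost

# Switch to the Falcon service user, then stop and start the Falcon Server.
cd /usr/hdp/current/falcon-server
su -l <FALCON_USER>
./bin/falcon-stop
./bin/falcon-start
```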