Known Issues
Ambari 2.4 has the following known issues, scheduled for resolution in a future release. Also, refer to the Ambari Troubleshooting Guide for additional information.
Table 1.9. Ambari 2.4 Known Issues
Apache Jira | HWX Jira | Problem | Solution |
---|---|---|---|
AMBARI-20119 | BUG-74775 | The Use RedHat Satellite/Spacewalk button does not work. | To use a Satellite or Spacewalk server for HDP repository management: The second command disables external repositories on the nodes that have already been provisioned. |
N/A | BUG-66998 | After changing hadoop.proxyuser.knox.groups or hadoop.proxyuser.knox.hosts to access hive2 through Knox, the following error message displays: Error: Failed to validate proxy privilege of knox_dv for v999003 (state=08S01,code=0) | Restart the Hive service so that it picks up the updated core-site configuration. |
N/A | BUG-64959 | If you have Storm in your cluster with Kerberos enabled, then after upgrading to Ambari 2.4 the Storm summary page in Ambari Web shows no information, and exceptions such as the following are logged: 24 Aug 2016 14:19:38,107 ERROR [ambari-metrics-retrieval-service-thread-2738] MetricsRetrievalService:421 - Unable to retrieve metrics from http://hcube1-1n02.eng.hortonworks.com:8744/api/v1/cluster/summary. Subsequent failures will be suppressed from the log for 20 minutes. java.io.IOException: Server returned HTTP response code: 500 for URL: http://hcube1-1n02.eng.hortonworks.com:8744/api/v1/cluster/summary at sun.reflect.GeneratedConstructorAccessor228.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) | Add the Ambari Server principal name to the Storm nimbus.admins property in Ambari Web > Services > Storm > Configs. |
N/A | BUG-64947 | Filtering by Role on the Users page in the Ambari Admin Interface is slow with 1000+ users. | If you have 1000+ users synchronized from LDAP into Ambari and you filter by Role on the Users page in the Ambari Admin Interface, results can take roughly 20 seconds to return. Reducing the number of users synchronized into Ambari improves this performance. |
N/A | BUG-64912 | Apache Oozie requires a restart after an Atlas configuration update, but may not be included in the services marked as requiring restart in Ambari. | Select Oozie > Service Actions > Restart All to restart Oozie along with the other services. |
AMBARI-18177 | BUG-64326 | If you add or remove ZooKeeper servers and you have Atlas in your cluster, you must update several Atlas properties. | If Atlas is running in your cluster and you add or remove ZooKeeper server components, update the following properties, from Ambari Web > Services > Atlas > Configs, so they reflect the current list of ZooKeeper servers in the cluster: atlas.audit.hbase.zookeeper.quorum (sample: node-test0.docker.nxa.io:2181,node-test2.docker.nxa.io:2181,node-test1.docker.nxa.io:2181); atlas.graph.index.search.solr.zookeeper-url (sample: node-test1.docker.nxa.io:2181/infra-solr,node-test0.docker.nxa.io:2181/infra-solr,node-test2.docker.nxa.io:2181/infra-solr); atlas.graph.storage.hostname (sample: node-test0.docker.nxa.io,node-test2.docker.nxa.io,node-test1.docker.nxa.io); atlas.kafka.zookeeper.connect (sample: node-test0.docker.nxa.io,node-test2.docker.nxa.io,node-test1.docker.nxa.io). |
N/A | BUG-57093 | When toggling and saving the Hive service's "Enable Interactive Query" configuration, wait for the background operations associated with that change to complete before toggling it again. Re-enabling the feature while background operations are still running can cause some of those operations to fail, resulting in configuration inconsistencies. | If such inconsistencies are observed, toggle and save the "Enable Interactive Query" configuration again and wait for all background operations to complete. |
AMBARI-12436 | BUG-40481 | Falcon Service Check may fail during a Rolling Upgrade or downgrade with the following error: 2015-06-25 18:09:07,235 ERROR - [main:] ~ Failed to start ActiveMQ JMS Message Broker. Reason: java.io.IOException: Invalid location: 1:6763311, : java.lang.NegativeArraySizeException (BrokerService:528) java.io.IOException: Invalid location: 1:6763311, : java.lang.NegativeArraySizeException at org.apache.kahadb.journal.DataFileAccessor.readRecord(DataFileAccessor.java:94) This condition is rare. | When performing a Rolling Upgrade from HDP 2.2 to HDP 2.3 (or upgrade > downgrade > upgrade) and the Falcon Service Check fails with the above error, browse to the Falcon ActiveMQ data directory (specified in the Falcon properties file), remove the corrupted queues, and stop and start the Falcon Server: cd <ACTIVEMQ_DATA_DIR> rm -rf ./localhost cd /usr/hdp/current/falcon-server su -l <FALCON_USER> ./bin/falcon-stop ./bin/falcon-start |
AMBARI-12283 | BUG-40300 | After adding or deleting ZooKeeper Servers in an existing cluster, Service Check fails. | The Service Check fails due to conflicting ZooKeeper ids. Restart the ZooKeeper service to clear the ids. |
N/A | BUG-40694 | The Slider view is not supported on a cluster with SSL (wire encryption) enabled. | Only use the Slider view on clusters without wire encryption enabled. If it is required to run Slider on a cluster with wire encryption enabled, please contact Hortonworks support for further help. |
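The four Atlas property values in the BUG-64326 workaround are all derived from the same ZooKeeper host list. A minimal sketch of building them consistently, assuming the default client port 2181 and the default /infra-solr znode (both assumptions; the host names are the samples from the table and should be replaced with your cluster's ZooKeeper servers):

```shell
# Hypothetical host list; substitute the ZooKeeper servers in your cluster.
ZK_HOSTS="node-test0.docker.nxa.io node-test1.docker.nxa.io node-test2.docker.nxa.io"

quorum=""; solr_url=""; hostnames=""
for h in $ZK_HOSTS; do
  quorum="${quorum:+$quorum,}$h:2181"                   # atlas.audit.hbase.zookeeper.quorum
  solr_url="${solr_url:+$solr_url,}$h:2181/infra-solr"  # atlas.graph.index.search.solr.zookeeper-url
  hostnames="${hostnames:+$hostnames,}$h"               # atlas.graph.storage.hostname, atlas.kafka.zookeeper.connect
done

echo "$quorum"
echo "$solr_url"
echo "$hostnames"
```

Paste each resulting value into the matching property in Ambari Web > Services > Atlas > Configs.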
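The Falcon ActiveMQ recovery for BUG-40481 can be scripted. A sketch, using a placeholder data directory (the real path must be read from the Falcon properties file) and leaving the Falcon restart commands as comments, since they only apply on a live cluster:

```shell
# ACTIVEMQ_DATA_DIR is a placeholder; read the real value from the Falcon
# properties file before running this against a cluster.
ACTIVEMQ_DATA_DIR="${ACTIVEMQ_DATA_DIR:-/tmp/falcon-activemq-demo}"

# Simulate the corrupted KahaDB queue directory for this sketch.
mkdir -p "${ACTIVEMQ_DATA_DIR}/localhost"

# Remove the corrupted queues; ActiveMQ recreates its store on the next start.
rm -rf "${ACTIVEMQ_DATA_DIR:?}/localhost"

# On a real cluster, then restart the Falcon Server as the Falcon user:
#   cd /usr/hdp/current/falcon-server
#   su -l <FALCON_USER>
#   ./bin/falcon-stop
#   ./bin/falcon-start
```

The `:?` expansion guards against ACTIVEMQ_DATA_DIR being empty, so the `rm -rf` cannot accidentally target `/localhost`.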
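For the ZooKeeper Service Check failure (BUG-40300), the conflicting ids live in each ZooKeeper server's myid file under its dataDir. A sketch that flags duplicates once you have collected the ids from each host (the values below are hypothetical examples of the conflicting state):

```shell
# myid values collected from each ZooKeeper server (e.g. cat <dataDir>/myid on
# each host); the duplicate "2" here is a hypothetical example of the conflict.
MYIDS="1 2 2"

# Any id reported by uniq -d appears on more than one server.
dups=$(printf '%s\n' $MYIDS | sort | uniq -d)
if [ -n "$dups" ]; then
  echo "conflicting ZooKeeper id(s): $dups"
fi
```

If duplicates are reported, restart the ZooKeeper service from Ambari as the table advises.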