5. Known Issues

Ambari 2.2.2.0 has the following known issues, scheduled for resolution in a future release. Also, refer to the Ambari Troubleshooting Guide for additional information.

Table 1.6. Ambari 2.2.2 Known Issues

Apache Jira | HWX Jira | Problem | Solution

  BUG-57058

After Downgrading during an HDP 2.2 to HDP 2.3 (or later) upgrade and then enabling Kerberos, Hive Metastore shows alerts.

Perform a cluster upgrade from HDP 2.2 but do not Finalize; instead, perform a Downgrade. The Downgrade will succeed, but if you then attempt to enable Kerberos in the cluster, alerts may appear for each Hive Metastore. The cluster is fully functional and the alerts are incorrect. To clear the alerts, remove the /usr/hdp/current/hive-metastore/conf/conf.server directory on each host that runs a Hive Metastore.
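
For example, a minimal cleanup on each Hive Metastore host might look like the following (a sketch, using the path listed above; run as root):

# run on every host that has a Hive Metastore installed
rm -rf /usr/hdp/current/hive-metastore/conf/conf.server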

  BUG-55954

Unable to change ATS heapsize after Ambari 2.2.1.1 (or later) upgrade.

If you have previously installed HDP 2.3 or HDP 2.4 with Ambari Server 2.2.1.0 or earlier, the yarn-env template may have an incorrect value that prevents YARN App Timeline Server (ATS) heapsize changes from taking effect. To correct this issue, go to Services > YARN > Configs > Advanced > Advanced yarn-env and examine the content of the yarn-env template. If the template has this entry:

export YARN_HISTORYSERVER_HEAPSIZE={{apptimelineserver_heapsize}}

The entry needs to be changed to:

export YARN_TIMELINESERVER_HEAPSIZE={{apptimelineserver_heapsize}}

  BUG-55863

HiveServer2 and Hive Metastore can fail to start with the below error after a Downgrade.

If you perform an HDP Downgrade, HiveServer2 and the Hive Metastore can fail to start after the downgrade completes. This happens because the Hive config hive.metastore.schema.verification is set to true.

Error in HiveServer2 and metastore logs :
MetaException(message:Hive Schema version 1.2.0 does not match metastore's
schema version 1.2.1000 Metastore is not upgraded or corrupt)

You can correct this situation and get HiveServer2 and the Hive Metastore to start by changing the Hive config to:

hive.metastore.schema.verification=false

  BUG-49728

When adding a ZooKeeper Server, the Kafka zookeeper.connect property is not updated.

If you are running Kafka and add an additional ZooKeeper server to your cluster, the zookeeper.connect property is not automatically updated to include the newly added ZooKeeper server.

You must manually add the ZooKeeper server to the zookeeper.connect property.
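
For example, if a third ZooKeeper server is added (the hostnames and port below are illustrative), the zookeeper.connect value in Services > Kafka > Configs should list all ZooKeeper servers:

zookeeper.connect=zk-host1:2181,zk-host2:2181,zk-host3:2181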

  BUG-40120

After adding Falcon to your cluster, the Oozie configuration is not properly updated.

After adding Falcon to your cluster using "Add Service", the Oozie configuration is not properly updated. After completing the Add Service wizard, add the required properties under Services > Oozie > Configs > Advanced > Custom oozie-site. The list of properties can be found here: https://github.com/apache/ambari/blob/branch-2.1/ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/configuration/oozie-site.xml. Once added, restart Oozie and execute this command on the Oozie Server host:

su oozie -c '/usr/hdp/current/oozie-server/bin/oozie-setup.sh prepare-war'

Start Oozie.

AMBARI-12412 BUG-41016

Storm has no metrics if the service is installed via a Blueprint.

The following properties need to be added to storm-site: browse to Services > Storm > Configs, add the properties, and restart the Storm service.

topology.metrics.consumer.register=[{'class': 'org.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsSink', 'parallelism.hint': 1}]
metrics.reporter.register=org.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsReporter

  BUG-40773

Kafka broker fails to start after disabling Kerberos security.

When Kerberos is enabled, Kafka security configuration is set and ACLs are applied to all of Kafka's ZooKeeper nodes so that only Kafka brokers can modify the entries in ZooKeeper. Before disabling Kerberos, you must set all the Kafka ZooKeeper entries to world readable/writable. To do this, log in as user "kafka" on one of the Kafka nodes:

kinit -k -t /etc/security/keytabs/kafka.service.keytab kafka/_HOST

where _HOST should be replaced by the hostname of that node. Run the following command to open the ZooKeeper shell:

/usr/hdp/current/kafka-broker/bin/zookeeper-shell.sh hostname:2181

where hostname should be replaced by one of the ZooKeeper nodes. Then, run the following commands in the ZooKeeper shell:

setAcl /brokers world:anyone:crdwa 
setAcl /config world:anyone:crdwa 
setAcl /controller world:anyone:crdwa 
setAcl /admin world:anyone:crdwa

If the above commands are not run prior to disabling Kerberos, the only option is to point the zookeeper.connect property at a new ZooKeeper root. This can be done by appending "/newroot" to the zookeeper.connect string, for example "host1:port1,host2:port2,host3:port3/newroot".

  BUG-40694

The Slider view is not supported on a cluster with SSL (wire encryption) enabled.

Use the Slider view only on clusters that do not have wire encryption enabled. If you must run Slider on a cluster with wire encryption enabled, contact Hortonworks support for further help.

  BUG-40541

If there is a trailing slash in the Ranger External URL, the NameNode will fail to start up.

Remove the trailing slash from the External URL and start the NameNode.
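
For example (the hostname and port below are illustrative), the External URL value should not end with a slash:

Incorrect: http://ranger-host.example.com:6080/
Correct:   http://ranger-host.example.com:6080
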
AMBARI-12436 BUG-40481

Falcon Service Check may fail when performing a Rolling Upgrade, with the following error:

2015-06-25 18:09:07,235 ERROR - [main:]
 ~ Failed to start ActiveMQ JMS Message Broker.
 Reason: java.io.IOException: Invalid location: 1:6763311, :
 java.lang.NegativeArraySizeException (BrokerService:528) 
 java.io.IOException: Invalid location: 1:6763311, :
 java.lang.NegativeArraySizeException
 at
 org.apache.kahadb.journal.DataFileAccessor.readRecord(DataFileAccessor.java:94)

This condition is rare.

If Falcon Service Check fails with the above error when performing a Rolling Upgrade from HDP 2.2 to HDP 2.3, browse to the Falcon ActiveMQ data directory (specified in the Falcon properties file), remove the corrupted queues, and then stop and start the Falcon Server.

cd <ACTIVEMQ_DATA_DIR>
rm -rf ./localhost
cd /usr/hdp/current/falcon-server 
su -l <FALCON_USER> 
./bin/falcon-stop
./bin/falcon-start

AMBARI-12283 BUG-40300

After adding ZooKeeper Servers to, or deleting them from, an existing cluster, Service Check fails.

After adding ZooKeeper Servers to, or deleting them from, an existing cluster, Service Check fails due to conflicting ZooKeeper IDs. Restart the ZooKeeper service to clear the IDs.

AMBARI-12005 BUG-24902

Setting a cluster name longer than 100 characters hangs Ambari.

If you attempt to rename a cluster to a string longer than 100 characters, Ambari Server will hang. Restart Ambari Server to clear the hang.
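
For example, to restart Ambari Server from the command line on the Ambari Server host:

ambari-server restart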