Ambari 2.0.1 has the following known issues, scheduled for resolution in a future release. Use the workarounds described below until the issues are resolved:
Table 1.1. Ambari 2.0.1 Known Issues
| Apache Jira | HWX Jira | Problem | Solution |
|---|---|---|---|
| | BUG-38333 | After upgrading to Ambari 2.0.1, the database is checked for consistency on Ambari Server start. Ambari Server fails to start and ambari-server.log shows an ERROR related to componentName=STORM_REST_API:<br>`30 May 2015 10:32:36,738 ERROR [main] AmbariServer:667 - Failed to run the Ambari Server org.apache.ambari.server.StackAccessException: Stack data, stackName=HDP, stackVersion=2.2, serviceName=STORM, componentName=STORM_REST_API`<br>This occurs if you upgraded from HDP 2.0 or HDP 2.1 to HDP 2.2, where the Storm REST API component has been removed. As part of that HDP upgrade, you were required to remove the Storm REST API component from Ambari; if you did not perform this step, Ambari Server fails to start. | Contact Hortonworks support and reference BUG-38333. |
| | BUG-38016 | When creating an EMAIL alert notification, the Save button is not enabled. | On the Create Alert Notification form, change the Method to SNMP and fill in the Hosts field with the SMTP host, then switch back to the EMAIL method. The Save button will then be enabled. |
| | BUG-35925 | When using non-root Ambari Agents, enabling the Ranger plugins causes their associated services to fail to start. | When using Ranger with Ambari, Ambari Agents must be run as root. |
| | BUG-34437 | The HDP 2.2 Stack incorrectly shows the Spark service version as 1.2.0. | The Spark version in HDP 2.2 is actually 1.2.1. Although the Ambari Web UI shows version 1.2.0, version 1.2.1 is installed when the Spark service is installed. |
| | BUG-34195 | After performing an Ambari Agent upgrade, custom host name configuration is not preserved. | If you have configured your Ambari Agents for a custom host name, the configuration in ambari-agent.ini is not preserved after upgrading the Agent. After the agent upgrade, reapply your configuration. |
| | BUG-33767 | After upgrading to Ambari 2.0, HiveServer2 fails to start if you have databases with a LOCATION outside of hive.metastore.warehouse.dir. The HiveServer2 start operation fails as it attempts to run the Hive metatool service. | Contact Hortonworks support and reference BUG-33767. |
| | BUG-33642 | When attempting to create an alert notification for SNMP and adding the recipients property, the UI reports that the property is already defined and does not enable the Save button. | Contact Hortonworks support and reference BUG-33642. |
| AMBARI-10461 | BUG-33568 | Spark History Server does not start with a non-root Agent. When using non-root Ambari Agents, the Spark History Server fails to start with the message "Unable to create file: /etc/spark/conf/spark-defaults.conf". | Update permissions on hosts with the Spark Client and the Spark History Server:<br>`sudo chown {{ spark username }}:{{ non-root user's primary group }} /etc/spark/conf`<br>`sudo chmod 775 /etc/spark/conf`<br>`sudo chown {{ spark username }} java-opts`<br>`sudo chown {{ spark username }} log4j.properties`<br>`sudo chown {{ spark username }} metrics.properties`<br>`sudo chown {{ spark username }} spark-env.sh`<br>`sudo chown {{ non-root username }} fairscheduler.xml.template`<br>`sudo chown {{ non-root username }} log4j.properties.template`<br>`sudo chown {{ non-root username }} metrics.properties.template`<br>`sudo chown {{ non-root username }} slaves.template`<br>`sudo chown {{ non-root username }} spark-defaults.conf`<br>`sudo chown {{ non-root username }} spark-defaults.conf.template`<br>`sudo chown {{ non-root username }} spark-env.sh.template` |
| | BUG-33557 | With a Kerberos-enabled cluster that includes Storm, in Ambari Web > Services > Storm, the Summary values for Slots, Tasks, Executors and Topologies show as "n/a". The Ambari Server log also includes the following ERROR:<br>`24 Mar 2015 13:32:41,288 ERROR [pool-2-thread-362] AppCookieManager:122 - SPNego authentication failed, can not get hadoop.auth cookie for URL: http://c6402.ambari.apache.org:8744/api/v1/topology/summary?field=topologies` | When Kerberos is enabled, the Storm API requires SPNEGO authentication. Refer to the Ambari Security Guide (Set Up Ambari for Kerberos) to enable Ambari to authenticate against the Storm API via SPNEGO. |
| | BUG-33516 | The Ranger service cannot be installed in a cluster via the Blueprints API. | First create your cluster (via the Install Wizard or via Blueprints), then add the Ranger service to the cluster. |
| | BUG-33474 | The Oozie service check fails because the host from which the service check runs is not authorized to connect to the Oozie Server. | Edit Hadoop's core-site property hadoop.proxyuser.oozie.hosts, setting the value to either "*" or the hostname that now contains the Oozie Server. Then restart all hosts with stale configs and re-run the Oozie service check. |
| | BUG-33449 | When running Ambari in an environment with umask 027 and running Ambari Server as a non-root sudoer user, the server fails to start with the following message:<br>`ERROR: Exiting with exit code -1. REASON: Ambari Server java process died with exitcode 255. Check /var/log/ambari-server/ambari-server.out for more information.`<br>In ambari-server.log, you will see the following exception:<br>`20 Mar 2015 13:09:09,296 ERROR [main] AmbariServer:690 - Failed to run the Ambari Server java.lang.NoClassDefFoundError: Could not initialize class javax.crypto.JceSecurity` | When running ambari-server setup, choose the Custom JDK option (#3) and be sure to install and set up the JDK + JCE so that Ambari Server has access to the JCE. |
| | BUG-33416 | On Ubuntu, when attempting automated host registration via SSH with target hosts set to umask 027, the registration fails with an error:<br>`scp: /var/lib/ambari-agent/data/tmp: Permission denied` | Use manual agent registration, not automated registration via SSH. |
| | BUG-33219 | After enabling Kerberos on a cluster that includes the Hive service, if you add additional Hive Metastore instances, the templeton.hive.properties configuration in webhcat-site is set incorrectly. | After adding the Hive Metastore component, review the templeton.hive.properties property in the webhcat-site configuration and confirm that the correct list of Hive Metastore hosts is listed. Adjust manually as appropriate. |
| | BUG-33208 | After disabling Kerberos, the Storm service check fails with the following error:<br>`604 [main] WARN org.apache.storm.curator.retry.ExponentialBackoffRetry - maxRetries too large (60000). Pinning to 29 604 [main] INFO backtype.storm.utils.StormBoundedExponentialBackoffRetry - The baseSleepTimeMs [2000] the maxSleepTimeMs [5] the maxRetries [60000] 604 [main] WARN backtype.storm.utils.StormBoundedExponentialBackoffRetry - Misconfiguration: the baseSleepTimeMs [2000] can't be greater than the maxSleepTimeMs [5]. Exception in thread "main" java.lang.RuntimeException: org.apache.thrift7.transport.TTransportException at backtype.storm.StormSubmitter.topologyNameExists(StormSubmitter.java:308) at backtype.storm.StormSubmitter.submitTopology(StormSubmitter.java:212) at backtype.storm.StormSubmitter.submitTopology(StormSubmitter.java:157)` | Remove the following properties from the storm-site configuration: drpc.authorizer, nimbus.authorizer, storm.principal.tolocal, ui.filter |
| AMBARI-10094 | BUG-33203 | After enabling Kerberos, the Ambari principal and keytab configurations are shown in Ambari Web and are editable. These properties include the Smoke User, HDFS, and HBase principals and keytabs for the "headless" Ambari accounts. | These properties are generated and managed by Ambari. Do not edit them. |
| | BUG-33111 | After enabling Kerberos, the HDFS service fails to start with the error:<br>`Fail: Execution of ' -kt /etc/security/keytabs/hdfs.headless.keytab hdfs' returned 127. -bash: -kt: command not found` | Check where the kinit Kerberos utility is on the system: `find / -name kinit -type f 2>/dev/null` Ambari expects the kinit utility to be found at /usr/bin/kinit. If that is not the case, create a symlink from its location: `ln -s /usr/share/some/location/kerberos/bin/kinit /usr/bin/kinit` |
| AMBARI-10080 | BUG-32855 | After adding an additional HiveServer2 component instance, if you already have multiple Hive Metastore instances configured, the hive.metastore.uris property in the Hive configuration is modified. The modification changes only the order of the Metastore instances, which does not effectively change the property; but because the order changes, the config is considered changed and you are prompted to restart Hive. | Restart the Hive service to pick up the change. |
| AMBARI-9978 | BUG-32851 | After hosts register, the Next button is not enabled to continue with the wizard. | Refresh the browser. |
| | BUG-32779 | After upgrading to Ambari 2.0 but before removing Ganglia (and adding Ambari Metrics), if your cluster does not include HBase, an alert is shown for the Ganglia Process Monitor. | In Ambari Web > Alerts, browse to the Ganglia HBase Master Process Monitor alert definition and disable the alert. Once you remove Ganglia and add Ambari Metrics, this alert (along with all other Ganglia alerts) will be removed. |
| | BUG-32748 | After upgrading to HDP 2.2, if your cluster includes the Storm service, an alert displays for the Storm REST API Server. This component (which existed with HDP 2.1) is no longer used with HDP 2.2. | In Ambari Web > Alerts, browse to the Storm REST API alert definition and disable the alert. |
| | BUG-32733 | After upgrading an HDP 2.1 cluster to HDP 2.2, if your cluster includes Hive and uses Oracle for the Metastore database, you might see an error when running the schematool:<br>`Upgrade script upgrade-0.13.0-to-0.14.0.oracle.sql Error: ORA-00955: name is already used by an existing object (state=42000,code=955) Warning in pre-upgrade script pre-0-upgrade-0.13.0-to-0.14.0.oracle.sql: Schema script failed, errorcode 2 Completed upgrade-0.13.0-to-0.14.0.oracle.sql` | This error message is benign and the upgrade completes successfully. |
| | BUG-32554 | If you use a Base URL that contains a ":" but no port number, validation passes, but during install yum fails when attempting to install packages, because a Base URL with ":" and no port is technically not valid. After returning to the Select Stack page to correct the Base URL, Agent registration fails due to the invalid Base URL. | Perform manual agent registration to avoid Ambari attempting to install the Agent against the invalid Base URL. |
| | BUG-32381 | If you use a non-root Agent configuration and attempt to register hosts automatically using SSH, the Agent registration fails. | The option to automatically register hosts with SSH is not supported with a non-root Agent configuration. You must manually register the Agents. |
| | BUG-32310 | If your cluster includes the Spark service, the Download Client Configs option from the Service Actions menu fails with the error "No configuration files defined for the component". | Spark does not define any client configurations, so this is expected behavior. The Download Client Configs option will not produce a configuration file package. |
| | BUG-32284 | When adding client-only services to a cluster (using Add Service), Ambari does not automatically install dependent client components with the newly added clients. | On hosts where client components need to be installed, browse to Hosts and then to the Host Details page. Click + Add and select the client components to install on that host. |
| | BUG-32265 | When using HDP 2.1 and Ambari 2.0, if you customize the Storm service log directory during cluster install, the directory is not used; instead, logs are stored in the default location /var/log/storm. | Go to Services > Storm > Configs and add the following line to "Advanced storm-env": `export STORM_LOG_DIR={{log_dir}}` |
| | BUG-32173 | If a cluster install fails and you attempt to install again on the same host, and you included Hive and chose to have Ambari install a MySQL instance, you might receive an "Unable to delete cluster" error. In ambari-server.log, you will see a message:<br>`Found non removable hostcomponent when trying to delete service component, clusterName=c, serviceName=HIVE, componentName=MYSQL_SERVER, hostname=c6402.ambari.apache.org` | Browse to the host with the MySQL instance and perform `service mysqld stop`. Then retry the cluster install. |
| | BUG-32071 | If you enable Kerberos on a cluster that includes Ranger with the HBase plugin enabled, during the enable-Kerberos process the hbase.coprocessor.master.classes and hbase.coprocessor.region.classes HBase configuration properties are overwritten. | After the Kerberos process is complete, disable and re-enable the Ranger HBase plugin to return these properties to their proper values. |
| | BUG-31780 | After enabling Kerberos, services fail to start because kinit is not in a default path (such as /usr/bin/) on the hosts. | Create a symlink to kinit in /usr/bin/. |
| | BUG-31217 | If your directory contains more than 1000 users, attempts to sync-ldap users and groups to Ambari fail. There is a limit of 1000 on the number of entities Ambari can process. | Perform the sync-ldap using the --users and --groups options to keep the number of entities under 1000, and perform the sync in batches. |
| | BUG-31144 | Storm secure configuration must be configured manually; the following configurations are not editable in Ambari: nimbus.admins, nimbus.supervisor.users, and ui.filter.params. | Manually edit /var/lib/ambari-server/resources/common-services/STORM/0.9.1.2.1/package/templates/storm.yaml.j2 to add the principal names, then restart Ambari Server. |
| AMBARI-9934 | BUG-30948 | After registering a version in Ambari and attempting to install that version on cluster hosts, the registered version cannot be deleted. | The version will continue to display. |
| | BUG-30921 | The Ganglia service is no longer available with Ambari for metrics collection. However, if you use a blueprint that contains Ganglia components, the cluster is created without warning and will contain the Ganglia service. | Do not include Ganglia components in your cluster blueprints; migrate to the Ambari Metrics service instead. |
| | BUG-30779 | After deploying a cluster that includes Hive using a blueprint, the hive_dbroot property appears in the Hive configs. | This property is benign and can be ignored. |
| | BUG-30480 | After upgrading to Ambari 2.0 and switching from Ganglia to Ambari Metrics for metrics collection, Storm metrics are no longer available from Ambari. | The Storm interfaces that work with Ambari Metrics were not added until HDP 2.2.3. You need to upgrade to HDP 2.2.4 (or later) to obtain Storm metrics from Ambari. |
| | BUG-30278 | After configuring alert notifications (EMAIL or SNMP), or when mapping a notification to an alert group, notifications are not sent when alerts change state. | Restart Ambari Server for the notification and notification-to-group mapping to take effect. |
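
The core-site edit described for the Oozie service check failure (BUG-33474) can be sketched as an XML fragment. The property name comes from the issue above; the hostname value shown is a hypothetical example, and you would substitute your own Oozie Server host or "*":

```xml
<!-- core-site.xml: allow the host now running the Oozie Server
     (or "*" for any host) to act as a proxy for the oozie user -->
<property>
  <name>hadoop.proxyuser.oozie.hosts</name>
  <value>oozie-host2.example.com</value>
</property>
```

After saving the change, restart all hosts with stale configs before re-running the service check, as noted in the table.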
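
For the sync-ldap batching workaround (BUG-31217), one way to prepare sub-1000-entry batches is to split a newline-delimited export of user names and join each chunk into the comma-separated form passed via --users. This is a minimal sketch: the file names are hypothetical, and you should verify the exact sync-ldap flags against your Ambari version.

```shell
# all-users.txt: one user name per line, exported from your directory.
# The printf line is a stand-in for a real directory export.
printf 'alice\nbob\ncarol\n' > all-users.txt

# Split into chunks of at most 900 lines (safely under the 1000 limit),
# then join each chunk's lines with commas for the --users option.
split -l 900 all-users.txt batch_
for f in batch_*; do
  paste -sd, "$f" > "$f.csv"
done

# Each batch can then be synced separately, for example:
#   ambari-server sync-ldap --users batch_aa.csv
```

The same approach applies to group names with the --groups option.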