Ubuntu hosts are not supported at this time.
The component version information displayed by Ambari is based on the Ambari Stack definition. If you have applied patches to the Stack and to your software repository, the displayed component version might differ from the actual version installed. There is no functional impact on Ambari if these versions differ. If you have any questions about component versions, refer to the RPM version installed on the actual host.
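For example, to check the actual installed component versions on a host, you can query the package database directly (a minimal sketch; the package name filter is illustrative and depends on which components are installed):
# List installed Hadoop-related packages and their versions on this host
rpm -qa | grep -E 'hadoop|hive|oozie|zookeeper'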
BUG-11634: A single-node cluster upgraded to HDP-2.0.6 may be missing YARN job summary entries.
Problem: After upgrading a cluster to HDP 2 using Ambari, you may notice that the YARN job summary entries are missing. This typically happens when the YARN ResourceManager host also runs MapReduce2 components.
Workaround: To fix this issue, modify the log4j.properties file at /etc/hadoop/conf on the ResourceManager host by adding the following lines.
Note: Modify the value of the log4j.appender.RMSUMMARY.File property to contain the actual values of yarn_log_dir_prefix and yarn_user. You can get these values from the latest global config type. Use the configs.sh tool to read the global config type.
#
# Job Summary Appender
#
# Use following logger to send summary to separate file defined by
# hadoop.mapreduce.jobsummary.log.file rolled daily:
# hadoop.mapreduce.jobsummary.logger=INFO,JSA
#
hadoop.mapreduce.jobsummary.logger=${hadoop.root.logger}
hadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log
log4j.appender.JSA=org.apache.log4j.DailyRollingFileAppender

# Set the ResourceManager summary log filename
yarn.server.resourcemanager.appsummary.log.file=hadoop-mapreduce.jobsummary.log

# Set the ResourceManager summary log level and appender
yarn.server.resourcemanager.appsummary.logger=${hadoop.root.logger}
#yarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY

# To enable AppSummaryLogging for the RM,
# set yarn.server.resourcemanager.appsummary.logger to
# <LEVEL>,RMSUMMARY in hadoop-env.sh

# Appender for ResourceManager Application Summary Log
# Requires the following properties to be set
#    - hadoop.log.dir (Hadoop Log directory)
#    - yarn.server.resourcemanager.appsummary.log.file (resource manager app summary log filename)
#    - yarn.server.resourcemanager.appsummary.logger (resource manager app summary log level and appender)
log4j.appender.RMSUMMARY=org.apache.log4j.RollingFileAppender
log4j.appender.RMSUMMARY.File=[yarn_log_dir_prefix]/[yarn_user]/${yarn.server.resourcemanager.appsummary.log.file}
log4j.appender.RMSUMMARY.MaxFileSize=256MB
log4j.appender.RMSUMMARY.MaxBackupIndex=20
log4j.appender.RMSUMMARY.layout=org.apache.log4j.PatternLayout
log4j.appender.RMSUMMARY.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.JSA.layout=org.apache.log4j.PatternLayout
log4j.appender.JSA.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
log4j.appender.JSA.DatePattern=.yyyy-MM-dd
log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=${yarn.server.resourcemanager.appsummary.logger}
log4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=false
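For example, to read the global config type with the configs.sh script shipped with Ambari Server, you can run something along the following lines (a sketch; the admin credentials, Ambari Server host, and cluster name are placeholders, and the script path may vary by Ambari version):
# Print the current "global" configuration, including yarn_log_dir_prefix and yarn_user
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin get ambari.server.host MyCluster global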
BUG-11607: Add the dfs.journalnode.edits.dir property when upgrading to the HDP 2.0.6.0 stack.
Problem: If you are upgrading from a stack version prior to HDP 2.0.6.0, the Enable NameNode HA Wizard fails due to a missing property expected in hdfs-site.xml.
Workaround: From the HDFS service page, add the following property in the Custom hdfs-site.xml section:
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/grid/0/hdfs/journal</value>
</property>
BUG-11600: Hive service check fails after upgrading from BWGA with Oracle.
Problem: When Ambari is upgraded to 1.4.2 and security is enabled, the Hive service check can fail due to a conflicting combination of authorization properties.
Workaround: Disable authorization: using the Ambari UI, set hive.security.authorization.enabled to false. Or, verify that the correct combination of authorization properties is used. For example:
hive.security.authorization.manager : org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider
hive.security.metastore.authorization.manager : org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider
hive.security.authenticator.manager : org.apache.hadoop.hive.ql.security.ProxyUserAuthenticator
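Alternatively, hive.security.authorization.enabled can be set from the command line with the configs.sh script (a sketch; the admin credentials, Ambari Server host, and cluster name are placeholders):
# Disable Hive authorization in hive-site through the Ambari API
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin set ambari.server.host MyCluster hive-site hive.security.authorization.enabled false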
BUG-11571: HA wizard freezes after adding host to upgraded cluster.
Problem: Deployed a 3-node cluster using Ambari 1.4.2 with the HDP 1.3.3 stack on SUSE, with a MySQL database for Hive and Oozie.
Upgraded Ambari to 1.4.2.
Upgraded HDP from 1.3.3 to 2.0.6.
Configured hdfs-site.xml on all hosts.
Added 4th host.
The HA wizard hangs on the second step and outputs the following JavaScript errors:
Uncaught TypeError: Cannot set property 'addNNHost' of undefined db.js:428
App.db.setRollBackHighAvailabilityWizardAddNNHost db.js:428
module.exports.Em.Route.extend.step2.Em.Route.extend.next high_availability_routes.js:145
Ember.StateManager.Ember.State.extend.sendRecursively ember-latest.js:15579
Ember.StateManager.Ember.State.extend.send ember-latest.js:15564
App.WizardStep5Controller.Em.Controller.extend.submit step5_controller.js:646
ActionHelper.registeredActions.(anonymous function).handler ember-latest.js:19458
(anonymous function) ember-latest.js:11250
f.event.dispatch jquery-1.7.2.min.js:3
h.handle.i jquery-1.7.2.min.js:3
Workaround: Close the other open browser windows and log in again from the current window.
BUG-11553: Unable to start the gmond process after upgrading to the HDP 2.0.6 stack from the HDP 1.3.2 stack.
Problem: The gmond process fails to start on a host during an upgrade.
Workaround: Use the following steps to work around the issue (a combined command example follows the steps):
Log onto the host where gmond fails to start.
For the gmond process that fails, go to the corresponding directory. For example, for HDPSlaves go to:
/var/run/ganglia/hdp/HDPSlaves/
Remove the PID file in that directory.
Stop gmond.
service hdp-gmond stop
Start gmond.
service hdp-gmond start
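Taken together, the steps above amount to something like the following on the affected host (a sketch assuming the HDPSlaves example shown; the PID file name pattern is illustrative, and the directory differs for other gmond clusters):
# Remove the stale PID file left behind by the upgrade
rm -f /var/run/ganglia/hdp/HDPSlaves/*.pid
# Restart gmond
service hdp-gmond stop
service hdp-gmond start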
BUG-11374: After performing a cluster install using local repositories, the UI incorrectly says "No".
Problem: If you have entered your own repositories during Install Wizard > Advanced Repository Options and you then add hosts to the cluster, the Review page of the Add Hosts Wizard shows Local Repository = No.
BUG-11105: During upgrade, fs.checkpoint.size needs to be in the proper units.
Problem: Previous versions of Ambari assumed this setting was in GB; the setting is actually in bytes.
Workaround: In the HDFS service Configs, under General, enter an appropriate integer value in bytes to set the HDFS maximum edit log size for checkpointing. For example, 500000000.
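For reference, 500000000 bytes is roughly 0.5 GB, so a value that was intended as 0.5 under the old GB assumption corresponds to approximately 500000000 when expressed in bytes.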
BUG-9597: Log4j property file is overwritten during HDFS/ZooKeeper/Oozie service start.
Problem: The Log4j property file is overwritten during HDFS/ZooKeeper/Oozie service start, when the client state becomes installed_and_configured after Service Start:
{'hdp-hadoop::client': stage => 2, service_state => installed_and_configured}
BUG-8898: Ambari no longer stops iptables on Ambari Server or Ambari Agent start.
Problem: Prior to HDP 2.0, the Ambari server and agents automatically stopped iptables if it was running. With the release of HDP 2.0, Ambari does not stop iptables.
Workaround: Disable iptables manually or configure your network for the necessary ports (see Configuring Ports for Hadoop).
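For example, on RHEL/CentOS hosts iptables can be disabled manually with commands along these lines (a sketch; use the equivalent mechanism for your platform, such as SuSEfirewall2 on SLES):
# Stop the firewall now and prevent it from starting on boot
service iptables stop
chkconfig iptables off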