
Tips and Guidelines

Selecting Appropriate JAR files for your MRv1 and YARN Jobs

Each implementation of the CDH 5 MapReduce framework (MRv1 and YARN) consists of the artifacts (JAR files) that provide MapReduce functionality, as well as auxiliary utility artifacts used during the course of a MapReduce job. Whether you submit a job explicitly (using the Hadoop launcher script) or implicitly (through the Java API), make sure the utility artifacts you reference come from the same version of the MapReduce implementation that is running on your cluster. The following list summarizes the names and locations of these artifacts:

streaming
  • MRv1: /usr/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming-2.0.0-mr1-cdh<version>.jar
  • YARN: /usr/lib/hadoop-mapreduce/hadoop-streaming.jar

rumen
  • MRv1: N/A
  • YARN: /usr/lib/hadoop-mapreduce/hadoop-rumen.jar

hadoop examples
  • MRv1: /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar
  • YARN: /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar

distcp v1
  • MRv1: /usr/lib/hadoop-0.20-mapreduce/hadoop-tools.jar
  • YARN: /usr/lib/hadoop-mapreduce/hadoop-extras.jar

distcp v2
  • MRv1: N/A
  • YARN: /usr/lib/hadoop-mapreduce/hadoop-distcp.jar

hadoop archives
  • MRv1: /usr/lib/hadoop-0.20-mapreduce/hadoop-tools.jar
  • YARN: /usr/lib/hadoop-mapreduce/hadoop-archives.jar
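
For example, to run a streaming job on a YARN cluster, reference the YARN streaming JAR from the list above. In the following sketch, the input and output paths and the mapper and reducer commands are placeholders:

$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar \
    -input /user/joe/input \
    -output /user/joe/output \
    -mapper /bin/cat \
    -reducer /usr/bin/wc

On an MRv1 cluster, you would instead reference /usr/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming-2.0.0-mr1-cdh<version>.jar.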

Improving Performance

This section provides solutions to some performance problems, and describes configuration best practices.

  Important:

If you are running CDH over 10 Gbps Ethernet, improperly configured network settings or improperly applied NIC firmware or drivers can noticeably degrade performance. Work with your network engineers and hardware vendors to make sure that you have the proper NIC firmware, drivers, and configurations in place and that your network performs properly. Cloudera recognizes that network setup and upgrade are challenging problems, and will make best efforts to share any helpful experiences.

Disabling Transparent Hugepage Compaction

Most Linux platforms supported by CDH 5 include a feature called transparent hugepage compaction which interacts poorly with Hadoop workloads and can seriously degrade performance.

Symptom: top and other system monitoring tools show a large percentage of the CPU usage classified as "system CPU". If system CPU usage is 30% or more of the total CPU usage, your system may be experiencing this issue.

What to do:
  Note: In the following instructions, defrag_file_pathname depends on your operating system:
  • Red Hat/CentOS: /sys/kernel/mm/redhat_transparent_hugepage/defrag
  • Ubuntu/Debian, OEL, SLES: /sys/kernel/mm/transparent_hugepage/defrag
  1. To see whether transparent hugepage compaction is enabled, run the following command and check the output:
    $ cat defrag_file_pathname
    • [always] never means that transparent hugepage compaction is enabled.
    • always [never] means that transparent hugepage compaction is disabled.
  2. To disable transparent hugepage compaction, add the following command to /etc/rc.local :
     echo never > defrag_file_pathname

You can also disable transparent hugepage compaction interactively (but remember this will not survive a reboot).

To disable transparent hugepage compaction temporarily as root:
# echo 'never' > defrag_file_pathname 
To disable transparent hugepage compaction temporarily using sudo:
$ sudo sh -c "echo 'never' > defrag_file_pathname" 
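
If you manage a mix of operating systems, the following shell sketch (an illustration, not part of the standard procedure) picks whichever defrag path exists on the local system and disables compaction; run it as root:

for f in /sys/kernel/mm/redhat_transparent_hugepage/defrag \
         /sys/kernel/mm/transparent_hugepage/defrag; do
  # Only one of the two paths exists on a given OS; write to that one
  [ -e "$f" ] && echo never > "$f"
done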

Setting the vm.swappiness Linux Kernel Parameter

vm.swappiness is a Linux kernel parameter that controls how aggressively memory pages are swapped to disk. It can be set to a value from 0 to 100; the higher the value, the more aggressively the kernel seeks out inactive memory pages and swaps them to disk.

You can see what value vm.swappiness is currently set to by reading /proc/sys/vm/swappiness; for example:

cat /proc/sys/vm/swappiness

On most systems, it is set to 60 by default. This is not suitable for Hadoop cluster nodes, because it can cause processes to be swapped out even when there is free memory available. This can affect stability and performance, and may cause problems such as lengthy garbage collection pauses for important system daemons. Cloudera recommends that you set this parameter to 0; for example:

# sysctl -w vm.swappiness=0 
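
The sysctl command takes effect immediately but does not persist across reboots. To make the setting permanent, also add it to /etc/sysctl.conf and reload; for example, as root:

# echo 'vm.swappiness=0' >> /etc/sysctl.conf
# sysctl -p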

Improving Performance in Shuffle Handler and IFile Reader

The MapReduce shuffle handler and IFile reader use native Linux calls (posix_fadvise(2) and sync_file_range(2)) on Linux systems with the Hadoop native libraries installed. The subsections that follow provide details.

Shuffle Handler

You can improve MapReduce shuffle handler performance by enabling shuffle readahead. This causes the TaskTracker (MRv1) or NodeManager (YARN) to prefetch map output before sending it over the socket to the reducer.

  • To enable this feature for YARN, set the mapreduce.shuffle.manage.os.cache property to true (default). To further tune performance, adjust the value of the mapreduce.shuffle.readahead.bytes property. The default value is 4MB.
  • To enable this feature for MRv1, set the mapred.tasktracker.shuffle.fadvise property to true (default). To further tune performance, adjust the value of the mapred.tasktracker.shuffle.readahead.bytes property. The default value is 4MB.

IFile Reader

Enabling IFile readahead increases the performance of merge operations. To enable this feature for either MRv1 or YARN, set the mapreduce.ifile.readahead property to true (default). To further tune the performance, adjust the value of the mapreduce.ifile.readahead.bytes property. The default value is 4MB.
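
As a sketch covering both features above on a YARN cluster, the following mapred-site.xml snippet makes the defaults explicit and raises both readahead sizes; the 8 MB values are illustrative assumptions, not tuning recommendations:

<property>
    <name>mapreduce.shuffle.manage.os.cache</name>
    <value>true</value>
</property>
<property>
    <name>mapreduce.shuffle.readahead.bytes</name>
    <value>8388608</value> <!-- 8 MB -->
</property>
<property>
    <name>mapreduce.ifile.readahead</name>
    <value>true</value>
</property>
<property>
    <name>mapreduce.ifile.readahead.bytes</name>
    <value>8388608</value> <!-- 8 MB -->
</property>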

Best Practices for MapReduce Configuration

The configuration settings described below can reduce inherent latencies in MapReduce execution. You set these values in mapred-site.xml.

Send a heartbeat as soon as a task finishes

Set the mapreduce.tasktracker.outofband.heartbeat property to true to let the TaskTracker send an out-of-band heartbeat on task completion to reduce latency; the default value is false:

<property>
    <name>mapreduce.tasktracker.outofband.heartbeat</name>
    <value>true</value>
</property>

Reduce the interval for JobClient status reports on single node systems

The jobclient.progress.monitor.poll.interval property defines the interval (in milliseconds) at which JobClient reports status to the console and checks for job completion. The default value is 1000 milliseconds; you may want to set this to a lower value to make tests run faster on a single-node cluster. Adjusting this value on a large production cluster may lead to unwanted client-server traffic.

<property>
    <name>jobclient.progress.monitor.poll.interval</name>
    <value>10</value>
</property>

Tune the JobTracker heartbeat interval

Tuning the minimum interval for the TaskTracker-to-JobTracker heartbeat to a smaller value may improve MapReduce performance on small clusters.

<property>
    <name>mapreduce.jobtracker.heartbeat.interval.min</name>
    <value>10</value>
</property>

Start MapReduce JVMs immediately

The mapred.reduce.slowstart.completed.maps property specifies the proportion of Map tasks in a job that must be completed before any Reduce tasks are scheduled. For small jobs that require fast turnaround, setting this value to 0 can improve performance; larger values (as high as 50%) may be appropriate for larger jobs.

<property>
    <name>mapred.reduce.slowstart.completed.maps</name>
    <value>0</value>
</property>

Best Practices for HDFS Configuration

This section indicates changes you may want to make in hdfs-site.xml.

Improve Performance for Local Reads

  Note:

Also known as short-circuit local reads, this capability is particularly useful for HBase and Cloudera Impala™. It improves the performance of node-local reads by allowing the client to read block files directly from the local filesystem, bypassing the DataNode. It requires libhadoop.so (the Hadoop Native Library) to be accessible to both the server and the client.

libhadoop.so is not available if you have installed from a tarball. You must install from an .rpm, .deb, or parcel in order to use short-circuit local reads.
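
To check whether the native library can be loaded on a given node, you can run the checknative utility that ships with Hadoop 2.x; its output lists each native library and whether it was found:

$ hadoop checknative -a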

Configure the following properties in hdfs-site.xml as shown:

<property>
    <name>dfs.client.read.shortcircuit</name>
    <value>true</value>
</property>

<property>
    <name>dfs.client.read.shortcircuit.streams.cache.size</name>
    <value>1000</value>
</property>


<property>
    <name>dfs.client.read.shortcircuit.streams.cache.expiry.ms</name>
    <value>10000</value>
</property>

<property>
    <name>dfs.domain.socket.path</name>
    <value>/var/run/hadoop-hdfs/dn._PORT</value>
</property>
  Note:

The text _PORT appears just as shown; you do not need to substitute a number.

If /var/run/hadoop-hdfs/ is group-writable, make sure its group is root.

Tips and Best Practices for Jobs

This section describes changes you can make at the job level.

Use the Distributed Cache to Transfer the Job JAR

Use the distributed cache to transfer the job JAR rather than using the JobConf(Class) constructor and the JobConf.setJar() and JobConf.setJarByClass() methods.

To add JARs to the classpath, use -libjars <jar1>,<jar2>, which will copy the local JAR files to HDFS and then use the distributed cache mechanism to make sure they are available on the task nodes and are added to the task classpath.

The advantage of this over JobConf.setJar is that if the JAR is on a task node it won't need to be copied again if a second task from the same job runs on that node, though it will still need to be copied from the launch machine to HDFS.

  Note:

-libjars works only if your MapReduce driver uses ToolRunner. If it doesn't, you would need to use the DistributedCache APIs (Cloudera does not recommend this).
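
A minimal sketch of a ToolRunner-based driver follows; the class name, job name, and argument layout are illustrative assumptions:

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyDriver extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        // getConf() already reflects generic options such as -libjars and -D,
        // which ToolRunner parses before calling run()
        Job job = Job.getInstance(getConf(), "my-job");
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new MyDriver(), args));
    }
}

You could then launch it with, for example:

$ hadoop jar myjob.jar MyDriver -libjars mylib1.jar,mylib2.jar /input /output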

For more information, see item 1 in the blog post How to Include Third-Party Libraries in Your MapReduce Job.

Changing the Logging Level on a Job (MRv1)

You can change the logging level for an individual job. You do this by setting the following properties in the job configuration (JobConf):

  • mapreduce.map.log.level
  • mapreduce.reduce.log.level

Valid values are NONE, INFO, WARN, DEBUG, TRACE, and ALL.

Example:

JobConf conf = new JobConf();
...

conf.set("mapreduce.map.log.level", "DEBUG");
conf.set("mapreduce.reduce.log.level", "TRACE");
...
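
If your driver uses ToolRunner, you can also set these properties for a single run from the command line, without changing code; the JAR and class names here are placeholders:

$ hadoop jar myjob.jar MyDriver \
    -Dmapreduce.map.log.level=DEBUG \
    -Dmapreduce.reduce.log.level=TRACE \
    /input /output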

Setting Quotas

As the system administrator, you can set quotas in HDFS for:
  • The number of file and directory names used; and
  • The amount of space used by given directories.
Points to note:
  • The quotas for names and the quotas for space are independent of each other.
  • File and directory creation fails if the creation would cause the quota to be exceeded.
  • Allocation fails if the quota would prevent a full block from being written; keep this in mind if you are using a large block size.
  • If you are using replication, remember that each replica of a block counts against the quota.

Commands

To set space quotas on a directory:
dfsadmin -setSpaceQuota n directory
where n is the quota in bytes and directory is the directory the quota applies to. You can specify multiple directories in a single command; n applies to each.
To remove space quotas from a directory:
dfsadmin -clrSpaceQuota directory
You can specify multiple directories in a single command.
To set name quotas on a directory:
dfsadmin -setQuota n directory
where n is the maximum number of file and directory names allowed in directory. You can specify multiple directories in a single command; n applies to each.
To remove name quotas from a directory:
dfsadmin -clrQuota directory
You can specify multiple directories in a single command.
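
For example, the following commands (using a hypothetical directory, invoked here through the hdfs command) limit /user/joe to 10,000 names and 10 GB of raw space. Note that with a replication factor of 3, a 10 GB space quota allows only about 3.3 GB of file data:

$ hdfs dfsadmin -setQuota 10000 /user/joe
$ hdfs dfsadmin -setSpaceQuota 10g /user/joe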

For More Information

For more information, see the HDFS Quotas Guide.
