Use a workstation with access to the Internet and download the tarball image of the appropriate Hortonworks yum repository.
Table 4.3. Deploying HDP - Option I: HDP Repository Tarballs by Cluster OS

RHEL/CentOS 5.x
HDP Repository:
wget http://public-repo-1.hortonworks.com/HDP/centos5/HDP-1.3.2.0-centos5-rpm.tar.gz
HDP-Utils Repository:
wget http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.16/repos/centos5/HDP-UTILS-1.1.0.16-centos5.tar.gz
Ambari Repository (Optional):
wget http://public-repo-1.hortonworks.com/ambari/centos5/ambari-1.2.5.17-centos5.tar.gz
RHEL/CentOS 6.x
HDP Repository:
wget http://public-repo-1.hortonworks.com/HDP/centos6/HDP-1.3.2.0-centos6-rpm.tar.gz
HDP-Utils Repository:
wget http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.16/repos/centos6/HDP-UTILS-1.1.0.16-centos6.tar.gz
Ambari Repository (Optional):
wget http://public-repo-1.hortonworks.com/ambari/centos6/ambari-1.2.5.17-centos6.tar.gz
SLES 11
HDP Repository:
wget http://public-repo-1.hortonworks.com/HDP/suse11/HDP-1.3.2.0-suse11-rpm.tar.gz
HDP-Utils Repository:
wget http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.16/repos/suse11/HDP-UTILS-1.1.0.16-suse11.tar.gz
Ambari Repository (Optional):
wget http://public-repo-1.hortonworks.com/ambari/suse11/ambari-1.2.5.17-suse11.tar.gz
Note The EPEL repository is not available as a tarball currently. Use one of Options II through IV to provide access to the EPEL repository.
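For example, it can be convenient to stage all three tarballs for your OS in a single directory on the workstation before transferring them to the mirror server. A minimal sketch for RHEL/CentOS 6.x (the /tmp/hdp-tarballs staging directory is an arbitrary choice):
mkdir -p /tmp/hdp-tarballs
cd /tmp/hdp-tarballs
wget http://public-repo-1.hortonworks.com/HDP/centos6/HDP-1.3.2.0-centos6-rpm.tar.gz
wget http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.16/repos/centos6/HDP-UTILS-1.1.0.16-centos6.tar.gz
wget http://public-repo-1.hortonworks.com/ambari/centos6/ambari-1.2.5.17-centos6.tar.gz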
Create an HTTP server.
On the mirror server, install an HTTP server (such as Apache httpd) following the standard installation instructions for your operating system (for example, see the sketch after this step).
Activate this web server.
Ensure that the firewall settings (if any) allow inbound HTTP access from your cluster nodes to your mirror server.
Note If you are using EC2, make sure that SELinux is disabled.
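As an illustration, the following commands are one way to install and activate Apache httpd and open port 80 on RHEL/CentOS 6.x. This is a sketch that assumes the stock httpd package and the iptables firewall; adapt it to your environment:
yum install httpd
service httpd start
chkconfig httpd on
# Allow inbound HTTP from the cluster nodes (tighten the source range as needed)
iptables -I INPUT -p tcp --dport 80 -j ACCEPT
service iptables save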
On your mirror server, create a directory for your web server.
For example, from a shell window, type:
For RHEL/CentOS:
mkdir -p /var/www/html/hdp/
For SLES:
mkdir -p /srv/www/htdocs/rpms
If you are using a symlink, enable the FollowSymLinks option on your web server.
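For example, if you prefer to keep the repository files on a separate volume, a sketch that serves them through a symlink (the /data/repos path is hypothetical) could look like this; remember to enable FollowSymLinks in the web server configuration:
mkdir -p /data/repos/hdp
# Expose the repository directory under the web server document root via a symlink
# (skip the mkdir of /var/www/html/hdp above if you use this layout)
ln -s /data/repos/hdp /var/www/html/hdp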
Copy the HDP Repository Tarballs to the directory created in step 3, and untar these tarballs.
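For example, on RHEL/CentOS, assuming the tarballs staged earlier in /tmp/hdp-tarballs have been transferred to the mirror server, the copy-and-extract step could look like this:
cp /tmp/hdp-tarballs/*.tar.gz /var/www/html/hdp/
cd /var/www/html/hdp/
# Extract each repository tarball in place so the web server can serve the RPMs
tar -xzf HDP-1.3.2.0-centos6-rpm.tar.gz
tar -xzf HDP-UTILS-1.1.0.16-centos6.tar.gz
tar -xzf ambari-1.2.5.17-centos6.tar.gz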
Verify the configuration.
The configuration is successful if you can access the directory you created above through your web browser.
To test this out, browse to the following URLs:
HDP Repository:
http://$yourwebserver/hdp/HDP/$os/1.x/updates/1.3.2.0/hdp.repo
HDP-Utils Repository (Optional):
http://$yourwebserver/hdp/HDP-UTILS-1.1.0.16/repos/$os
Ambari Repository (Optional):
http://$yourwebserver/hdp/ambari/$os/1.x/updates/1.2.5.17
where $os can be centos5, centos6, or suse11.
For each repository, you should see a directory listing for the HDP, HDP-UTILS, or Ambari components along with the RPMs.
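You can also check the same URLs from a shell on one of the cluster nodes, for example with curl (centos6 shown; replace $yourwebserver with your mirror server's hostname):
# An HTTP 200 response indicates the repository file is reachable from the cluster
curl -I http://$yourwebserver/hdp/HDP/centos6/1.x/updates/1.3.2.0/hdp.repo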
Configure the yum or zypper clients on all the nodes in your cluster.
Fetch the yum configuration file from your mirror server.
HDP Repository:
http://$yourwebserver/hdp/HDP/$os/1.x/updates/1.3.2.0/hdp.repo
HDP-Utils Repository (Optional):
http://$yourwebserver/hdp/HDP-UTILS-1.1.0.16/repos/$os/hdp-utils.repo
Ambari Repository (Optional):
http://$yourwebserver/hdp/ambari/$os/1.x/updates/1.2.5.17/ambari.repo
where $os can be centos5, centos6, or suse11, and the updates path element is GA for 1.x.0 releases and updates for 1.x.x releases.
Store the repo files in a temporary location.
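For example, one way to fetch the files into a temporary location on a node (centos6 shown) is:
wget -P /tmp http://$yourwebserver/hdp/HDP/centos6/1.x/updates/1.3.2.0/hdp.repo
wget -P /tmp http://$yourwebserver/hdp/HDP-UTILS-1.1.0.16/repos/centos6/hdp-utils.repo
wget -P /tmp http://$yourwebserver/hdp/ambari/centos6/1.x/updates/1.2.5.17/ambari.repo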
Edit the repo files, changing the value of the baseurl property to the local mirror URL.
HDP Repository: Edit the hdp.repo file, changing the baseurl property as shown below:
[HDP-1.x]
name=Hortonworks Data Platform Version - HDP-1.x
baseurl=http://$yourwebserver/HDP/$os/1.x/updates
gpgcheck=0
gpgkey=http://public-repo-1.hortonworks.com/HDP/$os/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

[HDP-UTILS-1.1.0.16]
name=Hortonworks Data Platform Utils Version - HDP-UTILS-1.1.0.16
baseurl=http://$yourwebserver/HDP-UTILS-1.1.0.16/repos/$os
gpgcheck=0
gpgkey=http://$yourwebserver/HDP/$os/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

[updates-HDP-1.x]
name=HDP-1.x - updates
baseurl=http://$yourwebserver/HDP/$os/1.x/updates
gpgcheck=0
gpgkey=http://$yourwebserver/HDP/$os/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
HDP Utils Repository: Edit the hdp-utils.repo file, changing the baseurl property as shown below:
[HDP-UTILS-1.1.0.16]
name=Hortonworks Data Platform Version - HDP-UTILS-1.1.0.16
baseurl=http://$yourwebserver/HDP-UTILS-1.1.0.16/repos/$os
gpgcheck=0
enabled=1
priority=1
Ambari Repository (Optional): Edit the ambari.repo file, changing the baseurl property as shown below:
[ambari-1.x]
name=Ambari 1.x
baseurl=http://$yourwebserver/hdp/ambari/$os/1.x/updates
gpgcheck=0
gpgkey=http://$yourwebserver/ambari/$os/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

[HDP-UTILS-1.1.0.16]
name=Hortonworks Data Platform Utils Version - HDP-UTILS-1.1.0.16
baseurl=http://$yourwebserver/HDP-UTILS-1.1.0.16/repos/$os
gpgcheck=0
gpgkey=http://$yourwebserver/ambari/$os/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

[updates-ambari-1.2.5.17]
name=ambari-1.2.5.17 - updates
baseurl=http://$yourwebserver/ambari/$os/1.x/updates/1.2.5.17
gpgcheck=0
gpgkey=http://$yourwebserver/ambari/$os/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
where $os can be centos5, centos6, or suse11.
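After editing, a quick sanity check is to confirm that every baseurl now points at your mirror rather than the public Hortonworks repository, for example:
# Every line printed should reference $yourwebserver
grep baseurl /tmp/*.repo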
Copy the yum/zypper client configuration file to all nodes in your cluster.
For RHEL and CentOS: Use scp or pdsh to copy the client yum configuration files to the /etc/yum.repos.d/ directory on every node in the cluster (for example, see the sketch after the SLES steps below).
For SLES:
Store the repo files back into the repository location on the web server.
HDP Repository:
cp /tmp/hdp.repo /srv/www/htdocs/rpms/hdp/HDP/suse11/1.x/updates/1.3.2.0
HDP-Utils Repository (Optional):
cp /tmp/hdp-utils.repo /srv/www/htdocs/rpms/hdp/HDP-UTILS-1.1.0.16/suse11/
Ambari Repository (Optional):
cp /tmp/ambari.repo /srv/www/htdocs/rpms/hdp/ambari/suse11/1.x/updates/1.2.5.17
On every node, invoke the following commands:
HDP Repository:
zypper addrepo -r http://$yourwebserver/hdp/HDP/suse11/1.x/updates/1.3.2.0/hdp.repo
HDP-Utils Repository (Optional):
zypper addrepo -r http://$yourwebserver/hdp/HDP-UTILS-1.1.0.16/suse11/hdp-utils.repo
Ambari Repository (Optional):
zypper addrepo -r http://$yourwebserver/hdp/ambari/suse11/1.x/updates/1.2.5.17/ambari.repo
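For RHEL and CentOS, a minimal sketch of the copy step referenced above (assuming passwordless SSH to the nodes and a hypothetical /tmp/hosts file listing one cluster node per line) is:
# Copy every edited repo file into /etc/yum.repos.d/ on each node
for host in $(cat /tmp/hosts); do
  scp /tmp/hdp.repo /tmp/hdp-utils.repo /tmp/ambari.repo ${host}:/etc/yum.repos.d/
done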
If your cluster runs CentOS or RHEL, and if you have multiple repositories configured in your environment, deploy the following plugin on all the nodes in your cluster.
Install the plugin.
For RHEL and CentOS v5.x
yum install yum-priorities
For RHEL and CentOS v6.x
yum install yum-plugin-priorities
Edit the /etc/yum/pluginconf.d/priorities.conf file to add the following:
[main]
enabled=1
gpgcheck=0
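To confirm the plugin is picked up, you can check yum's startup output on a node; the exact wording varies by version, but the priorities plugin should be listed, for example:
yum repolist 2>&1 | grep -i "loaded plugins"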