Chapter 5. Using Apache Sqoop
Contents
1. Using Data Integration Services Powered by Talend
    1. Using Data Integration Services Powered by Talend
    2. Prerequisites
    3. Instructions
        3.1. Deploying Talend Open Studio
            3.1.1. Step 1: Download and launch the application
            3.1.2. Step 2: Create a new project
        3.2. Writing first Talend job for data import
            3.2.1. Step 1: Create a new job
            3.2.2. Step 2: Create a sample input file
            3.2.3. Step 3: Build the job
            3.2.4. Step 4: Run the job
            3.2.5. Step 5: Verify the import operation
        3.3. Modifying the job to perform data analysis
            3.3.1. Step 1: Add Pig component from the Big Data Palette
            3.3.2. Step 2: Define basic settings for Pig component
            3.3.3. Step 3: Connect the Pig and HDFS components
            3.3.4. Step 4: Add and connect Pig aggregate component
            3.3.5. Step 5: Define basic settings for Pig Aggregate component
            3.3.6. Step 6: Define aggregation function for the data
            3.3.7. Step 7: Add and connect Pig data storage component
            3.3.8. Step 8: Define basic settings for data storage component
            3.3.9. Step 9: Run the modified Talend job
            3.3.10. Step 10: Verify the results
2. Using HDP for Metadata Services (HCatalog)
    1. Using WebHCat
3. Using Apache Hive
4. Using HDP for Workflow and Scheduling (Oozie)
5. Using Apache Sqoop
    1. Apache Sqoop
        1.1. Sqoop Connectors
        1.2. Sqoop Import Table Commands
        1.3. Netezza Connector
            1.3.1. Extra Arguments
            1.3.2. Direct Mode
            1.3.3. Null String Handling
6. Installing and Configuring Flume in HDP
    1. Installing and Configuring Flume in HDP
    2. Understand Flume
        2.1. Flume Components
            2.1.1. Events
            2.1.2. Sources
            2.1.3. Channels
            2.1.4. Sinks
            2.1.5. Agents
    3. Install Flume
        3.1. Prerequisites
        3.2. Installation
        3.3. Users
        3.4. Directories
    4. Configure Flume
    5. HDP and Flume
        5.1. Sources
        5.2. Channels
        5.3. Sinks
    6. A Simple Example