Building the project and uploading the JAR
First, you compile the UDF code into a JAR, and then you add the JAR to Cloudera Data Warehouse object storage.
- Build the IntelliJ project.

      ...
      [INFO] Building jar: /Users/max/IdeaProjects/hiveudf/target/TypeOf-1.0-SNAPSHOT.jar
      [INFO] ------------------------------------------------------------------------
      [INFO] BUILD SUCCESS
      [INFO] ------------------------------------------------------------------------
      [INFO] Total time: 14.820 s
      [INFO] Finished at: 2019-04-03T16:53:04-07:00
      [INFO] Final Memory: 26M/397M
      [INFO] ------------------------------------------------------------------------
      Process finished with exit code 0
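  If you prefer to build from a terminal instead of IntelliJ, the following is a minimal sketch of the equivalent Maven invocation. It assumes a standard Maven layout with pom.xml in the project root and produces the same JAR under target/.

  ```
  # Run from the project root (for example, ~/IdeaProjects/hiveudf).
  # 'clean package' compiles the UDF classes and builds target/TypeOf-1.0-SNAPSHOT.jar.
  mvn clean package
  ```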
- In IntelliJ, navigate to the JAR in the /target directory of the project.
- In Cloudera Data Warehouse, click Overview > Database Catalogs, select your Database Catalog, click Options, and then click Edit.
- Upload the JAR to the Hive warehouse on Cloudera Data Warehouse object storage.
  - AWS object storage: In AWS, upload the JAR file to an S3 bucket that you can access, for example s3a://my-bucket/path. Add an external AWS S3 bucket if necessary. (A command-line example follows this list.)
  - Azure object storage: In Azure, upload the JAR file to a default Azure Blob Storage (ABFS) location that you can access, for example abfs://my-storage/path. (An Azure CLI example follows this list.)
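For the AWS case, a minimal sketch of a command-line upload with the AWS CLI is shown below; the bucket name and path are placeholders carried over from the example above. The s3a:// scheme is the Hadoop filesystem URI you use to reference the object from Hive, while the AWS CLI itself uses s3://.

```
# Copy the built JAR to the example bucket and path (both placeholders).
aws s3 cp target/TypeOf-1.0-SNAPSHOT.jar s3://my-bucket/path/TypeOf-1.0-SNAPSHOT.jar
```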
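For the Azure case, a minimal sketch using the Azure CLI follows; the storage account name, container name, and destination path are placeholders, and the exact container and path layout behind your abfs:// location depends on how the storage was set up.

```
# Upload the built JAR to a container in the example storage account (all names are placeholders).
az storage blob upload \
  --account-name my-storage \
  --container-name udf-jars \
  --name path/TypeOf-1.0-SNAPSHOT.jar \
  --file target/TypeOf-1.0-SNAPSHOT.jar
```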
- In IntelliJ, click Save.
- Click Actions > Deploy Client Configuration.
- Restart the Hive Virtual Warehouse.