Building the project and uploading the JAR

First, you compile the UDF code into a JAR, and then you upload the JAR to Cloudera Data Warehouse (CDW) object storage.

You must have the EnvironmentAdmin role to upload the JAR to your object storage.
  1. Build the IntelliJ project.
    ...
    [INFO] Building jar: /Users/max/IdeaProjects/hiveudf/target/TypeOf-1.0-SNAPSHOT.jar
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESS
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 14.820 s
    [INFO] Finished at: 2019-04-03T16:53:04-07:00
    [INFO] Final Memory: 26M/397M
    [INFO] ------------------------------------------------------------------------
                        
    Process finished with exit code 0
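You can produce the same JAR from the command line with Maven instead of building in IntelliJ. A minimal sketch, assuming the project follows the standard Maven layout and uses the artifact coordinates shown in the log above:

```shell
# Build the UDF JAR from the project root directory
# (assumes Maven is installed and a pom.xml is present).
mvn clean package

# The JAR is written to the target directory, for example:
# target/TypeOf-1.0-SNAPSHOT.jar
ls target/*.jar
```

Running `mvn clean package` is equivalent to the IntelliJ build shown above; both invoke the Maven `package` lifecycle phase.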
  2. In IntelliJ, navigate to the JAR in the target directory of the project.
  3. In Cloudera Data Warehouse, click Overview > Database Catalog, click options for your Database Catalog, and then click Edit.
  4. Upload the JAR to the Hive warehouse on CDW object storage.
    • AWS object storage

      In AWS, upload the JAR file to a bucket on S3 that you can access, for example s3a://my-bucket/path. Add an external AWS S3 bucket if necessary.

    • Azure object storage

      In Azure, upload the JAR file to a default Azure Data Lake Storage Gen2 (ABFS) location that you can access, for example abfs://my-storage/path.

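The uploads in step 4 can also be performed from the command line. A hedged sketch using the AWS and Azure CLIs; the bucket name, storage account, filesystem, and paths below are placeholders for illustration, not values prescribed by this procedure:

```shell
# AWS: copy the JAR to an S3 bucket you can access.
# (my-bucket/path is a placeholder.)
aws s3 cp target/TypeOf-1.0-SNAPSHOT.jar s3://my-bucket/path/

# Azure: upload the JAR to an ADLS Gen2 filesystem, which the
# abfs:// scheme addresses. (my-storage and my-container are
# placeholders.)
az storage fs file upload \
  --account-name my-storage \
  --file-system my-container \
  --source target/TypeOf-1.0-SNAPshot.jar \
  --path path/TypeOf-1.0-SNAPSHOT.jar
```

Either way, note the final object-storage path of the JAR; you need it later when you register the UDF with Hive.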
  5. In IntelliJ, click Save.
  6. Click Actions > Deploy Client Configuration.
  7. Restart the Hive Virtual Warehouse.