Prerequisites for downloading and uploading Model artifacts in an air-gapped environment

Before downloading or uploading Models, ensure that the following tools and configurations are installed on the host that is connected to the air-gapped setup. This might be your bastion host.

  • Make sure you install the following (see the verification sketch after this list):

    • pip install -U "huggingface_hub[cli]"
    • pip install awscli==1.35.0 (required for on premises setups)
    • pip install pyyaml
    • Install the NVIDIA NGC CLI from https://org.ngc.nvidia.com/setup/installers/cli for NVIDIA NGC catalog Models.

      Make sure you configure the NVIDIA NGC client with the credentials provided by Cloudera.

    • Make sure the Python version is 3.10.12 or higher, but lower than 3.11.
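
    After installing these tools, you can quickly confirm that the host meets the requirements. This is a minimal verification sketch; the exact versions reported on your host may differ:

      python3 --version          # must report 3.10.12 or later, but below 3.11
      huggingface-cli --help     # confirms the Hugging Face CLI is on the PATH
      aws --version              # confirms the AWS CLI (on premises setups only)
      ngc --version              # confirms the NVIDIA NGC CLI is installed
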
  • Consider the following configuration details for NVIDIA NGC:
    echo 'export NGC_CLI_API_KEY=<key>' >> ~/.bashrc
    echo 'export NGC_API_KEY=<key>' >> ~/.bashrc
    echo 'export NGC_CLI_ORG=<org>' >> ~/.bashrc

    If the system has ~/.bash_profile, follow the above steps, but replace .bashrc with .bash_profile.
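
    For example, after appending the exports you can reload the shell configuration and confirm that the variables are set. The values shown above are placeholders; use the key and organization provided by Cloudera:

      source ~/.bashrc           # reload the shell configuration in the current session
      echo "$NGC_CLI_ORG"        # should print the organization you exported
      ngc config set             # optionally, run the interactive NGC CLI configuration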

  • Installing the NVIDIA Inference Microservice (NIM) CLI
    This procedure details how Cloudera organization accounts can request early access and install the NIM CLI.
    1. Obtaining Early Access to NIM CLI: Navigate to the NVIDIA developer portal and follow the on-screen instructions to request early access.
    2. Installing NIM CLI: Once early access is granted, use the following steps to download and install the NIM CLI:
      1. Download the installer using the NVIDIA GPU Cloud (NGC) CLI:
        ngc registry resource download-version nvidia/nim-tools/nimtools_installer:0.0.8
      2. Navigate to the installer directory:
        cd nimtools_installer_v0.0.8/
      3. Run the Python installation script. Make sure you provide your NGC service key and the --nimcli-only flag:
        python3 nimtools_installer.py --ngc-api-key [***your-ngc-service-key***] --nimcli-only
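
    After the installer completes, you can check that the CLI is available. This sketch assumes the installer places a nimcli binary on your PATH; verify the actual binary name against the installer output for your release:

      which nimcli               # assumed binary name
      nimcli --help              # lists the available commands if the installation succeeded
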
  • Download the following script to enable downloading Model repositories from Hugging Face or the NVIDIA NGC catalog and uploading Models to on premises storage providers.

    Download the script from here: https://raw.githubusercontent.com/cloudera/Model-Hub/refs/heads/main/airgap-scripts/pvc/1.5.5-sp1/import_to_airgap.py

    If you are still using Cloudera AI on premises 1.5.5 and have not upgraded to Cloudera AI on premises 1.5.5 SP1, use the following script: https://github.com/cloudera/Model-Hub/blob/main/airgap-scripts/pvc/1.5.5/import_to_airgap.py.
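
    For example, you can fetch the script with curl, saving it under its original file name (the URL is the one given above):

      curl -LO https://raw.githubusercontent.com/cloudera/Model-Hub/refs/heads/main/airgap-scripts/pvc/1.5.5-sp1/import_to_airgap.py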

    The script has the following parameters:

    Table 1.
    Parameter | Value | Description
    -do |  | Activates download mode.
    -rt | hf | Repository type: hf for Hugging Face, ngc for the NVIDIA NGC catalog.
    -t | hf_hVQbUsafafafafadfadfsNAynASXJoTCWHAEkj | Hugging Face API token used for authentication. The token is required for accessing gated Models or Models that require authentication. For more information about tokens, see https://huggingface.co/docs/hub/en/security-tokens.
    -p | $PWD/models | Local destination path where Model files are downloaded (uses the current working directory).
    -ri | nvidia/Llama-3.1-Nemotron-70B-Instruct-HF | Repository ID of the model on Hugging Face, or the profileId of the optimization profile in the NGC specification for the model.
    -ns | ngc_spec.yaml | NGC specification file. Required when downloading NGC models.

    You can obtain the -ri argument for Hugging Face as follows:
    1. Open the Hugging Face page at https://huggingface.co/.
    2. Search for the required model.

       The model page is displayed.

    3. Click the icon next to the name of the model and copy the model ID. This is the value to use for the -ri argument.
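
    Putting the parameters together, a download invocation might look like the following. This is a minimal sketch based on the parameters in Table 1; substitute your own token, repository ID, and destination path:

      # Download a gated Hugging Face Model to ./models (the token and repository ID are placeholders)
      python3 import_to_airgap.py -do -rt hf \
          -t hf_xxxxxxxxxxxxxxxxxxxxxxxx \
          -p $PWD/models \
          -ri nvidia/Llama-3.1-Nemotron-70B-Instruct-HF

      # For NVIDIA NGC catalog downloads, use -rt ngc together with the -ns specification file
      # and pass the profileId of the optimization profile as the -ri value.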