Configuring Cloudera AI Inference service and Amazon Bedrock to set up Cloudera Copilot

To use Cloudera Copilot, a Site Administrator can configure credentials in either Cloudera AI Inference service or Amazon Bedrock, depending on where you want to deploy your custom model.

To configure Cloudera Copilot to use Cloudera AI Inference service:

  1. Set up Cloudera AI Inference Service. Recommended models are:
    • Llama 2 70B Chat
    • CodeLlama 34B Instruct
    • Mistral 7B
  2. Deploy a model to Cloudera AI Inference Service.
  3. Navigate to CDP > Management Console > User Profile and generate an API access key (a pair consisting of an Access Key ID and a Private Key). Download the credentials file for future reference.
  4. Navigate to CML > User Settings > Environment Variables and add the following:
    1. CDP_PRIVATE_KEY: Private key generated in the preceding step
    2. CDP_ACCESS_KEY_ID: Access key ID generated in the preceding step
    3. CDP_REGION: Region of your CDP deployment
    4. CDP_ENDPOINT_URL: Endpoint URL of your CDP deployment
    5. ENDPOINT_URL: Base URL for other legacy CDP API services. The CDP configuration process does not prompt for this value.
  5. Install the CDP CLI to authorize requests to Cloudera AI Inference service endpoints.
    1. Create a new project in your ML Workspace.
    2. Start a new JupyterLab session.
    3. Open Terminal Access. A terminal window appears.
    4. Run the pip install cdpcli command.
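The CLI installation and the environment variables above can be sketched as a terminal session. The key and URL values below are placeholders, not real credentials; in practice Copilot reads these variables from User Settings > Environment Variables rather than from the shell:

```shell
# Step 5.4: install the CDP CLI inside the JupyterLab session terminal
pip install cdpcli

# Step 4: the variable names Copilot expects.
# All values here are illustrative placeholders -- substitute your own.
export CDP_ACCESS_KEY_ID="your-access-key-id"
export CDP_PRIVATE_KEY="your-private-key"
export CDP_REGION="us-west-1"
export CDP_ENDPOINT_URL="https://api.us-west-1.cdp.cloudera.com"
export ENDPOINT_URL="https://api.us-west-1.cdp.cloudera.com"
```

After installation, `cdp --version` confirms the CLI is on your path.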

To configure Cloudera Copilot to use Amazon Bedrock:

  1. Generate an access key pair (an Access Key ID and a Secret Access Key) through AWS IAM.
  2. Navigate to CML > User Settings > Environment Variables and add the following:
    • AWS_SECRET_ACCESS_KEY
    • AWS_ACCESS_KEY_ID
    • AWS_DEFAULT_REGION
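The Bedrock setup can be sketched the same way. The values below are placeholders; Copilot reads these variables from User Settings > Environment Variables, and the commented AWS CLI calls are an optional sanity check if the AWS CLI is installed:

```shell
# The three variables Copilot expects for Amazon Bedrock.
# All values here are illustrative placeholders -- substitute your own.
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
export AWS_DEFAULT_REGION="us-east-1"

# Optional sanity checks (require the AWS CLI):
# aws sts get-caller-identity
# aws bedrock list-foundation-models --region "$AWS_DEFAULT_REGION"
```

Note that the IAM user behind the access key must also have permission to invoke the Bedrock models you intend to use.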