Configuring Cloudera AI Inference service and Amazon Bedrock to set up Cloudera Copilot

To set up Cloudera Copilot, configure either Cloudera AI Inference service or Amazon Bedrock.

To use Cloudera Copilot with Cloudera AI Inference service, complete the following steps.

  1. Set up Cloudera AI Inference service. Recommended models are:
    • Llama 2 70B Chat
    • CodeLlama 34B Instruct
    • Mistral 7B
  2. Deploy a model to Cloudera AI Inference Service.
  3. Navigate to CDP > Management Console > User Profile and generate an Access key and Private key pair. Download the credentials file for future reference.
  4. Navigate to CML > User Settings > Environment Variables and add the following:
    1. CDP_PRIVATE_KEY: Private key generated in the preceding step
    2. CDP_ACCESS_KEY_ID: Access key ID generated in the preceding step
    3. CDP_REGION: Region of your CDP deployment
    4. CDP_ENDPOINT_URL: Endpoint URL of your CDP deployment
    5. ENDPOINT_URL: Base URL for other legacy CDP API services. The CDP configuration process does not prompt for this value, so you must set it manually.
  5. Install the CDP CLI to authorize requests to Cloudera AI Inference service endpoints.
    1. Create a new project in your ML Workspace.
    2. Start a new JupyterLab session.
    3. Open Terminal Access. A terminal window appears.
    4. Run the pip install cdpcli command.
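The CLI installation and credential setup above can be sketched as a terminal session. This is illustrative only: it assumes pip is available in the JupyterLab session, and the interactive cdp configure prompts are where you paste the Access key ID and Private key generated in the Management Console User Profile.

```shell
# Install the CDP CLI in the JupyterLab session's terminal
pip install cdpcli

# Register the Access key ID and Private key generated in the
# Management Console User Profile; cdp configure prompts for both
# and writes them to ~/.cdp/credentials.
cdp configure
```

Once configured, requests made through the CDP CLI are signed with the key pair, which is what authorizes calls to the Cloudera AI Inference service endpoints.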

To use Cloudera Copilot with Amazon Bedrock, complete the following steps.

  1. Generate an Access key and Secret key pair in AWS IAM.
  2. Navigate to CML > User Settings > Environment Variables and add the following:
    • AWS_SECRET_ACCESS_KEY: Secret key generated in the preceding step
    • AWS_ACCESS_KEY_ID: Access key ID generated in the preceding step
    • AWS_DEFAULT_REGION: AWS Region in which Amazon Bedrock is enabled
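As a quick sanity check before setting these values in CML, the same three variables can be exported in a terminal and exercised with the AWS CLI. This is a sketch, not part of the product setup: it assumes the AWS CLI v2 is installed, the placeholder values are replaced with your real keys, and Bedrock is enabled in the chosen Region (us-east-1 here is only an example).

```shell
# Same values later set in CML > User Settings > Environment Variables,
# shown as shell exports for a quick local check (placeholders, not real keys)
export AWS_ACCESS_KEY_ID="<your-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
export AWS_DEFAULT_REGION="us-east-1"

# List the Bedrock foundation models visible to these credentials;
# a non-empty list of model IDs confirms the keys and Region are valid
aws bedrock list-foundation-models --query 'modelSummaries[].modelId'
```

If the call fails with an authorization error, verify that the IAM user behind the key pair has permission to invoke Amazon Bedrock in that Region.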