Configuring Cloudera AI Inference service and Amazon Bedrock to set up Cloudera Copilot
To use Cloudera Copilot, as a Site Administrator, either set up Cloudera AI Inference service model endpoints or configure Amazon Bedrock credentials, depending on where you want to deploy your custom model.
Cloudera AI Inference service is a production-grade serving environment for traditional machine learning models, generative AI models, and LLMs. It is designed to handle the challenges of production deployments, such as high availability, fault tolerance, and scalability. Follow these steps to configure Cloudera AI Inference service model endpoints for use with Cloudera Copilot:
- Configure authentication and authorization, and import a model from NGC.
- Create a Cloudera AI Inference service instance.
- Create a model endpoint using the UI.
No additional credentials need to be configured to use Cloudera AI Inference service model endpoints with Cloudera Copilot.
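Cloudera AI Inference service model endpoints typically expose an OpenAI-compatible API, so a short client call is a quick way to confirm an endpoint is reachable before pointing Cloudera Copilot at it. The following is a minimal sketch only: the base URL, endpoint name, model ID, and the CDP_TOKEN environment variable are placeholder assumptions, not values from this guide; substitute the values shown on your endpoint's details page.

```python
# Minimal sketch: verify a Cloudera AI Inference service model endpoint
# responds. The base URL, model ID, and token variable are placeholders.
import os
from openai import OpenAI

client = OpenAI(
    # Placeholder: copy the real URL from the endpoint's details page.
    base_url="https://<domain>/namespaces/serving-default/endpoints/<endpoint-name>/v1",
    # Placeholder: a CDP access token used as the bearer token.
    api_key=os.environ["CDP_TOKEN"],
)

response = client.chat.completions.create(
    model="<model-id>",  # placeholder: the model ID shown for the endpoint
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```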
To use Amazon Bedrock models with Cloudera Copilot, as a Site Administrator, configure the credentials as follows:
- Generate an access key and secret key pair through AWS IAM. For more information, see Manage access keys for IAM users.
- In the Cloudera Data Platform console, click the Cloudera Machine Learning tile.
The Cloudera Machine Learning Workspaces page displays.
- Click the workspace name.
The Workspaces Home page displays.
- Click Site Administration in the left navigation menu.
The Site Administration page displays.
- Add the following environment variables, obtained in Step 1:
  - AWS_SECRET_ACCESS_KEY
  - AWS_ACCESS_KEY_ID
  - AWS_DEFAULT_REGION
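Once the variables are saved, a quick way to confirm the keys can reach Amazon Bedrock is to list the foundation models visible to them. The following is a minimal sketch, assuming boto3 is available in a session; note that access to specific models must also be granted in the Amazon Bedrock console before Cloudera Copilot can use them.

```python
# Minimal sketch: confirm the configured AWS credentials can reach
# Amazon Bedrock. boto3 reads AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY,
# and AWS_DEFAULT_REGION from the environment variables set above.
import boto3

bedrock = boto3.client("bedrock")

# List the foundation models these credentials are allowed to see.
for summary in bedrock.list_foundation_models()["modelSummaries"]:
    print(summary["modelId"])
```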