
Making an inference call to a Model Endpoint with Open Inference Protocol

Cloudera AI Inference service serves predictive ONNX models using the NVIDIA Triton Inference Server. The deployed model endpoints are compliant with Open Inference Protocol version 2, so they can be invoked with any client that follows that protocol.
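The sketch below shows what such a call can look like in Python. It is a minimal example, not the only supported method: the base URL, model name, bearer token, and the input tensor's name, shape, and datatype are placeholders that you must replace with the values from your own model endpoint's details page and your model's actual input signature.

    # Minimal Open Inference Protocol v2 inference call.
    # All endpoint-specific values below are placeholders, not real defaults.
    import requests

    BASE_URL = "https://<your-endpoint-base-url>"  # copy from the endpoint details page
    MODEL_NAME = "<your-model-name>"               # model name served at the endpoint
    TOKEN = "<your-bearer-token>"                  # authentication token for the service

    # An Open Inference Protocol v2 request body lists named input tensors,
    # each with a shape, a datatype, and the data as a flat list.
    payload = {
        "inputs": [
            {
                "name": "input",       # must match the model's input tensor name
                "shape": [1, 4],       # illustrative shape for a 4-feature model
                "datatype": "FP32",
                "data": [0.1, 0.2, 0.3, 0.4],
            }
        ]
    }

    response = requests.post(
        f"{BASE_URL}/v2/models/{MODEL_NAME}/infer",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json=payload,
        timeout=30,
    )
    response.raise_for_status()

    # The response mirrors the request: a list of named output tensors.
    for output in response.json()["outputs"]:
        print(output["name"], output["shape"], output["data"])

The same request can be issued with curl or any other HTTP client, since the protocol defines only the URL layout (/v2/models/{model}/infer) and the JSON request and response bodies.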
