Limitations and restrictions for Cloudera AI Inference service

Consider the following limitations and restrictions when using Cloudera AI Inference service.

  • API Stability: Both the Cloudera AI Control Plane and Cloudera AI Inference service workload APIs and CLIs are under active development and are subject to change in a backward-incompatible way.
  • Cloud Platforms: Cloudera AI Inference service is available on the Cloudera Embedded Container Service and OpenShift platforms.
  • One Instance: Only one Cloudera AI Inference service instance per cluster is supported.
  • Configuration: The Cloudera control plane must be configured with LDAP authentication, which Knox uses to authenticate users accessing Cloudera AI Inference service.
  • Logging: All Kubernetes pod logs, including those of pods running model servers, are scraped by the platform log aggregator service (Fluentd). Model endpoint logs can be viewed in the Cloudera AI Inference service GUI. To view the logs of other pods, you must first obtain the kubeconfig of the cluster and then use the kubectl command. The Diagnostic bundle feature is not available for model endpoints.
  • Namespace: Model endpoints can be deployed only in the serving-default namespace.
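The Logging and Namespace restrictions above can be combined into a minimal sketch of inspecting pod logs with kubectl. This assumes you have already downloaded the cluster's kubeconfig from the control plane; the file path and pod name are placeholders, not values defined by the product.

```shell
# Point kubectl at the workload cluster's kubeconfig
# (placeholder path -- substitute the file you downloaded).
export KUBECONFIG=/path/to/cluster-kubeconfig.yaml

# Model endpoints are restricted to the serving-default namespace,
# so list the pods there and pick the one you want to inspect.
kubectl get pods --namespace serving-default

# Stream the logs of a specific pod (<pod-name> is a placeholder).
kubectl logs --namespace serving-default <pod-name> --follow
```

For model endpoint pods themselves, prefer the Cloudera AI Inference service GUI; the kubectl route is needed only for the other pods whose logs the GUI does not surface.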