Limitations and Restrictions

Lists the limitations and restrictions when using Cloudera AI Inference service.

  • API Stability: Both the Cloudera AI Control Plane and Cloudera AI Inference service workload APIs and CLIs are under active development and are subject to backward-incompatible changes.
  • Cloud Platforms: Cloudera AI Inference service is available only on AWS and Azure.
  • Supported Instance Types: Cloudera AI Inference service supports the same cloud instance types as Cloudera AI Workbenches, with a few exceptions; see Known Issues for the unsupported instance types. The type and size of the model you want to deploy determine which cloud compute instance type is required: some highly optimized versions of Large Language Models, for example, run only on specific GPU architectures.
  • CLI-Only Interface for Control Plane Operations: Control plane operations, such as managing the life cycle of a Cloudera AI Inference service instance, are available only through the Cloudera Data Platform (CDP) CLI (see the life cycle sketch after this list).
  • No Non-Transparent Proxy Support: Cloudera AI Inference service has not been tested with a non-transparent proxy setup in a private cluster. However, it works in a standard private cluster without such a proxy.
  • No UDR Support in Azure: Cloudera AI Inference service has not been tested with a User-Defined Route (UDR) setup on Azure clusters.
  • Public Load Balancer: By default, Cloudera AI Inference service uses a private load balancer for cluster ingress. To use a public load balancer instead, set the usePublicLoadBalancer parameter to true in the creation payload (see the example payload after this list).

    If you are on AWS and use a private load balancer for cluster ingress, you must have a VPN connection between your corporate network and the Virtual Private Cloud (VPC) in which Cloudera AI Inference service is deployed; the Cloudera AI Inference service UI requires this VPN connection.

  • Logging: All Kubernetes pod logs, including those of pods running model servers, are scraped by the platform log aggregator service (fluentd). Model endpoint logs can be viewed from the Cloudera AI Inference service GUI. To view the logs of other pods, you must first obtain the kubeconfig of the cluster and use kubectl (see the example after this list). Historical pod logs cannot be retrieved, so there is no Diagnostic Bundle feature at this time.
  • Namespace: Model endpoints can be deployed only in the serving-default namespace.
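
A minimal sketch of the CDP CLI life cycle workflow mentioned above. The command names under the ml command group (list-ml-serving-apps, describe-ml-serving-app, delete-ml-serving-app) are assumptions based on typical CDP CLI naming; consult the CDP CLI reference for the exact commands and arguments in your release.

    # List existing Cloudera AI Inference service instances in the control plane.
    cdp ml list-ml-serving-apps

    # Inspect a single instance by its CRN (placeholder shown).
    cdp ml describe-ml-serving-app --app-crn "crn:cdp:ml:..."

    # Tear down an instance when it is no longer needed.
    cdp ml delete-ml-serving-app --app-crn "crn:cdp:ml:..."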
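
An example of the creation payload described under Public Load Balancer. Only the usePublicLoadBalancer parameter comes from this documentation; the other field names, and the create-ml-serving-app command itself, are illustrative assumptions, so check the CDP CLI reference for the exact payload schema.

    # payload.json -- illustrative; only usePublicLoadBalancer is documented here.
    {
      "appName": "my-inference-service",
      "environmentCrn": "crn:cdp:environments:...",
      "usePublicLoadBalancer": true
    }

    # Pass the payload to the (assumed) creation command.
    cdp ml create-ml-serving-app --cli-input-json file://payload.json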
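
Once you have obtained the cluster's kubeconfig, current (not historical) pod logs can be viewed with standard kubectl commands, for example:

    # Point kubectl at the Cloudera AI Inference service workload cluster.
    export KUBECONFIG=/path/to/kubeconfig

    # Model endpoints run in the serving-default namespace.
    kubectl get pods --namespace serving-default

    # Stream the current logs of a pod; historical logs are not retained.
    kubectl logs --namespace serving-default <pod-name> --follow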